From schily at schily.net Sun Jan 1 02:52:45 2017 From: schily at schily.net (Joerg Schilling) Date: Sat, 31 Dec 2016 17:52:45 +0100 Subject: [TUHS] Historic Linux versions not on kernel.org In-Reply-To: References: Message-ID: <5867e25d.fFUOMfx6x+cso8oD%schily@schily.net> Warner Losh wrote: > I have a ImageMagic CD that I got back in 1994 that I found in my > garage. It has a bunch of versions of linux that aren't on kernel.org. > The 0.99 series, the 0.98 series and what looks like 1.0 alpha pl14 > and pl15. Isn't everything in the source code control system? I thought that everything had been integrated into BitKeeper with help from Larry McVoy. Or are you interested in parts that are not in the kernel? Jörg -- EMail:joerg at schily.net (home) Jörg Schilling D-13353 Berlin joerg.schilling at fokus.fraunhofer.de (work) Blog: http://schily.blogspot.com/ URL: http://cdrecord.org/private/ http://sourceforge.net/projects/schilytools/files/ From imp at bsdimp.com Sun Jan 1 03:17:37 2017 From: imp at bsdimp.com (Warner Losh) Date: Sat, 31 Dec 2016 10:17:37 -0700 Subject: [TUHS] 2.11BSD on a Z180 (was: merry christmas) In-Reply-To: References: <019c01d26200$3c30aa30$b491fe90$@ronnatalie.com> Message-ID: On Sat, Dec 31, 2016 at 3:27 AM, Nick Downing wrote: > I hadn't seen apout but it is a good idea, I was toying with doing the same > thing by cross compiling the 2.11BSD kernel to create something like User > Mode Linux (but then adding user mode PDP-11 CPU emulation since it does not > make sense to compile x86-64 2.11BSD userspace executables even though it is > theoretically possible). I don't know how stable the ABI is between Unix V7 > and BSD or between BSD versions. I suspect it is very similar but there > would be differences in things like struct stat which are bound to cause > breakage. So I think it has to run the correct kernel to get the correct > ABI, and in turn that kernel has to have null or pass-thru drivers to access > the host facilities.
I do hope to get this working someday, especially since > a full cross compile of 2.11BSD includes the f77 stuff which cannot > reasonably be compiled on a modern system given the compiler is not written > in C, I think it is PDP-11 assembly. qemu has a 'user mode' that lets one implement the 'kernel' inside qemu so that you can execute mips binaries on an x86 with full awareness of the host's filesystem, but with enough 'special case hooks' that things like shared libraries use the mips .so rather than the x86 .so. It emulates a bunch of systems, but none of them the PDP-11. It has both BSD and Linux user-mode support, and FreeBSD uses it to build armv7, armv8, mips and powerpc packages on x86_64 hosts. Warner From david at kdbarto.org Sun Jan 1 08:37:04 2017 From: david at kdbarto.org (David) Date: Sat, 31 Dec 2016 14:37:04 -0800 Subject: [TUHS] MacOS X is Unix (tm) In-Reply-To: References: Message-ID: <52C99F50-E24A-4BBF-A129-180A1271B4E3@kdbarto.org> > On Dec 31, 2016, at 8:58 AM, tuhs-request at minnie.tuhs.org wrote: > > From: Michael Kjörling > To: tuhs at tuhs.org > Subject: Re: [TUHS] Historic Linux versions not on kernel.org > Message-ID: <20161231111339.GK576 at yeono.kjorling.se> > Content-Type: text/plain; charset=utf-8 > > I might be colored by the fact that I'm running Linux myself, but I'd > say that those are almost certainly worth preserving somehow, > somewhere. Linux and OS X are the Unix-like systems people are most > likely to come in contact with these days MacOS X is a certified Unix (tm) OS. Not Unix-Like. http://www.opengroup.org/openbrand/register/apple.htm It has been so since 10.0. Since 10.5 (Leopard) it has been so noted on the above Open Group page. The Open Group only lists the most recent release, however. The Tech Brief for 10.7 (http://images.apple.com/media/us/osx/2012/docs/OSX_for_UNIX_Users_TB_July2011.pdf) also notes the compliance.
David From khm at sciops.net Sun Jan 1 09:00:33 2017 From: khm at sciops.net (Kurt H Maier) Date: Sat, 31 Dec 2016 15:00:33 -0800 Subject: [TUHS] MacOS X is Unix (tm) In-Reply-To: <52C99F50-E24A-4BBF-A129-180A1271B4E3@kdbarto.org> References: <52C99F50-E24A-4BBF-A129-180A1271B4E3@kdbarto.org> Message-ID: <20161231230033.GE17848@wopr> On Sat, Dec 31, 2016 at 02:37:04PM -0800, David wrote: > > MacOS X is a certified Unix (tm) OS. Not Unix-Like. I am confused by your apparent argument that a thing cannot be like itself. khm From downing.nick at gmail.com Sun Jan 1 10:43:09 2017 From: downing.nick at gmail.com (Nick Downing) Date: Sun, 1 Jan 2017 11:43:09 +1100 Subject: [TUHS] MacOS X is Unix (tm) In-Reply-To: References: <52C99F50-E24A-4BBF-A129-180A1271B4E3@kdbarto.org> Message-ID: One significant area of non-compliance with unix conventions is its non-case-sensitive filesystem (HFS and variants like HFS+ if I recall). I think this is partly for historical reasons, to make Classic / MacOS9 emulation easier during the transition. But I could never understand why they did this; they could have put case insensitivity in their shell and apps without breaking the filesystem. Anyway, despite its being unix, I can't really see it gaining much traction with serious unix users (when did you last get a 404 from a major website with the tagline "Apache running on MacOSX"?). The MacPorts and Fink repos are a really bad and patchy implementation of something like apt/ctan/cpan/etc (I think at least one of those repos builds from source, with the attendant advantages/problems), it does not support X properly, the dylibs are non-standard, and everything is a bit broken compared with Linux (or FreeBSD). Apple does not really have the motivation or the manpower to create the modern, clean system unix users expect.
Open sourcing Darwin was supposed to open it up to user-contributed enhancements, but Apple was never serious about this; it was just a sop to people who claimed (correctly) that Apple was riding on the back of open source and giving nothing back to the community. Since Apple refused to release any important code like drivers or bootstrap routines, the Darwin release was never really any more usable than something like 4.4BSD-Lite. People who loved their Macs and loved unix and dreamed of someday running the Mac UI on top of a proper unix put significant effort into supplying the missing pieces, but were rebuffed by Apple at every turn; Apple would constantly make new releases with even more missing pieces and breakage, and eventually stopped making any open source releases at all, leaving a lot of people crushed and very bitter. As for me, I got on the Apple bandwagon briefly in 2005 or so; at that time I was experimenting with RedHat, but my primary development machines were Windows 98 and 2000 (occasionally XP). My assessment was that RedHat was not ready for desktop use, since I had trouble with stuff like printers and scanners that required me to stay with Windows (actually this was probably surmountable, but I did not have the knowledge or really the desire to spend time debugging it). That's why I selected Apple as a "compromise unix" which should connect to my devices easily. I got enthusiastic and spent a good $4k on new hardware. Shortly afterwards Apple announced the Intel transition, so I realized my brand new gear would soon be obsolete and unsupported. I was still pretty happy though.
Two things eventually took the shine off: (a) I spilt champagne on my machine and tore it down, to discover my beautiful and elegant and spare (on the outside) machine was a horrible hodgepodge of strange piggyback PCBs and third-party gear (on the inside); this apparently happened because options like the backlit keyboard had become standard equipment at some point but Apple had never redesigned them into the motherboard. The whole thing was horribly complicated and fragile and never worked well after the teardown. (b) I got seriously into FreeBSD and Linux and soon discovered the shortcomings of the Mac as a serious development machine; everything was just slightly incompatible, leading to time waste. Happily, matters have improved a lot. Lately I was setting up some Windows 7 and 10 machines for my wife to use MS Office on for her uni work. Both had serious driver issues like "The graphics card has crashed and recovered". And on the Windows 10 machine, despite it being BRAND NEW out of the box and manufacturer preloaded, the wifi also did not work, constantly crashing and requiring a reboot. Windows Update did not fix these problems. Downloading and trying various updated drivers from the manufacturer's website seems to have fixed them for now, except on the Windows 7 machine, where the issue is noted and listed as "won't fix" because the graphics card is out of date; the fixed driver won't load on this machine. Given this seems to be the landscape even for people who are happy to spend the $$ on the official manufacturer-supported Windows based solutions, Linux looks pretty easy to install and use by comparison. Not problem-free, but it may have fewer problems, and easier-to-fix problems.
It appears to me that with the growing complexity of the hardware due to the millions of compatibility layers and ad hoc protocols built into it, the job of the manufacturers and official OS or driver writers gets harder and harder, whereas the crowdsourced principle of open source shows its value since the gear is better tested in a wider variety of realistic situations. cheers, Nick On Jan 1, 2017 9:46 AM, "David" wrote: > On Dec 31, 2016, at 8:58 AM, tuhs-request at minnie.tuhs.org wrote: > > From: Michael Kjörling > To: tuhs at tuhs.org > Subject: Re: [TUHS] Historic Linux versions not on kernel.org > Message-ID: <20161231111339.GK576 at yeono.kjorling.se> > Content-Type: text/plain; charset=utf-8 > > I might be colored by the fact that I'm running Linux myself, but I'd > say that those are almost certainly worth preserving somehow, > somewhere. Linux and OS X are the Unix-like systems people are most > likely to come in contact with these days MacOS X is a certified Unix (tm) OS. Not Unix-Like. http://www.opengroup.org/openbrand/register/apple.htm It has been so since 10.0. Since 10.5 (Leopard) it has been so noted on the above Open Group page. The Open Group only lists the most recent release however. The Tech Brief for 10.7 (http://images.apple.com/media/us/osx/2012/docs/OSX_ for_UNIX_Users_TB_July2011.pdf) also notes the compliance. David -------------- next part -------------- An HTML attachment was scrubbed... URL: From lm at mcvoy.com Sun Jan 1 14:32:35 2017 From: lm at mcvoy.com (Larry McVoy) Date: Sat, 31 Dec 2016 20:32:35 -0800 Subject: [TUHS] Historic Linux versions not on kernel.org In-Reply-To: <5867e25d.fFUOMfx6x+cso8oD%schily@schily.net> References: <5867e25d.fFUOMfx6x+cso8oD%schily@schily.net> Message-ID: <20170101043235.GN5983@mcvoy.com> On Sat, Dec 31, 2016 at 05:52:45PM +0100, Joerg Schilling wrote: > Warner Losh wrote: > > > I have a ImageMagic CD that I got back in 1994 that I found in my > > garage. 
It has a bunch of versions of linux that aren't on kernel.org. > > The 0.99 series, the 0.98 series and what looks like 1.0 alpha pl14 > > and pl15. > > Isn't everything in the source code controlsystem? > > I thought that everything has been integrated in BitKeeper with help from Larry > McVoy. Or are you interested in parts that are not in the kernel? I didn't help with all that, I was pretty butthurt at the time. I've gotten over that but I'm still butthurt that Git won. It's a really shitty answer. I can give you guys a writeup I did recently but I don't want to spam the list. I'd be 100% OK with BitKeeper not winning if Git was at parity or better than BitKeeper, but that's not the case. It's trivial to do a BK to Git exporter, it's really hard to do the other direction, Git records far less information. It's pretty much what I predicted back in the early 2000's. If you don't leave space for commercial companies to make money and pay people to try and reach for the better answer, you're gonna get mediocre answers. And here we are. From lm at mcvoy.com Sun Jan 1 15:00:11 2017 From: lm at mcvoy.com (Larry McVoy) Date: Sat, 31 Dec 2016 21:00:11 -0800 Subject: [TUHS] Unix stories Message-ID: <20170101050011.GQ5983@mcvoy.com> Inspired by: > Stephen Bourne after some time wrote a cron job that checked whether an > update in a binary also resulted in an updated man page and otherwise > removed the binary. This is why these programs have man pages. I want to tell a story about working at Sun. I feel like I've sent this but I can't find it in my outbox. If it's a repeat chalk it up to old age. I wanted to work there, they were the Bell Labs of the day, or as close as you could get. I got hired as a contractor through Lachman (anyone remember them?) to do POSIX conformance in SunOS (the 4.x stuff, not that Solaris crap that I hate). As such, I was frequently the last guy to touch any file in the kernel, my fingerprints were everywhere. 
So when there was a panic, it was frequently laid at my doorstep. So here is how I got a pager and learned about source management. Sun had two guys, who will remain nameless, but they were known as "the SCSI twins". These guys decided, based on feedback that "people can interrupt sun install", to go into the SCSI tape driver and disable SIGINT, in the driver. The kernel model doesn't allow for drivers messing with your signal mask so on exit, sometimes, we would get a "panic: psig". Somehow, I sure was because of the POSIX stuff, I ended up debugging this panic. It had nothing to with me, I'm not a driver person (I've written a few but I pretty much suck at them), but it landed in my lap. Once I figured it out (which was not easy, you had to hit ^C to trigger it so unless you did that, and who does that during an install) I tracked down the code to SCSI twins. No problem, everyone makes mistakes. Oh, wait. Over the next few months I'm tracking down more problems, that were blamed on me since I'm all over the kernel, but came from the twins. Suns integration machines were argon, radon, and krypton. I wrote scripts, awk I think, that watched every update to the tree on all of those machines and if anything came from the SCSI twins the script paged me. That way I could go build and test that kernel and get ahead of the bugs. If I could fix up their bugs before the rest of the team saw it then I wouldn't get blamed for them. I wish I could have figured out something like Steve did that would have made them not screw up so much but this was the next best thing. I actually got bad reviews because of their crap. My boss at the time, Eli Lamb, just said "you are in kadb too much". --lm From lm at mcvoy.com Sun Jan 1 15:13:42 2017 From: lm at mcvoy.com (Larry McVoy) Date: Sat, 31 Dec 2016 21:13:42 -0800 Subject: [TUHS] Another Unix hacker passes Message-ID: <20170101051342.GR5983@mcvoy.com> Mr /proc. RIP, Roger. He would have loved this list. 
http://thenewstack.io/remembering-roger-faulkner/ https://news.ycombinator.com/item?id=13293596 From downing.nick at gmail.com Sun Jan 1 16:48:24 2017 From: downing.nick at gmail.com (Nick Downing) Date: Sun, 1 Jan 2017 17:48:24 +1100 Subject: [TUHS] Unix stories In-Reply-To: <20170101050011.GQ5983@mcvoy.com> References: <20170101050011.GQ5983@mcvoy.com> Message-ID: I can never emphasize enough how much damage it does to just get people in on a contract and let them scribble all over the company's valuable IP. Soon they are gone, but the damage lives on, making the real programmers' lives very difficult. Luckily the thing that took the contract programmer months to do can usually be redone by the real programmer in a day or two, but if it's been released it takes careful planning and opportunism to get all the breakage out of the system. This was my life when I worked on cash registers, since software was basically a pimple on the side of hardware; my bosses didn't understand software (or worse, thought they understood it and hence trivialized it), so this would happen a lot; the software was seen as more of a vehicle to get the hardware where it needed to go, rather than an end in itself. Working there and dealing with customer complaints and overseeing the expansion of what was originally a few BASIC programs to print a report of the day's transactions on the receipt printer... to thousands and thousands of lines of code so the cash registers could participate in multiple networks, download software and price updates, report on takings and performance of different categories, order stock, track till balances etc etc etc... really taught me a MASSIVE respect for code quality. I almost never meet anyone who cares as much about code quality and careful analysis of the system's assumptions and invariants (like the assumption about drivers modifying a process's signal mask in your example)... as I do.
I can remember a conversation I had with a new hire in my research group later on (not cash registers this time)... this dude had a PhD and had written a hugely successful open source package that is still standard today for a lot of courses etc in our field... and was hired to rewrite some similar stuff created by our research group that was a bit of a dog's breakfast but nevertheless was in daily use and publicly disseminated. Well I had hacked on this dude's code a lot and I hated it, way overcomplicated and using a very awkward structure of millions of interdependent C++ templates and what-have-you. He showed me his progress after some months and I showed him a competing implementation (very immature) that I had put together in my summer holiday using Python. So I tried to sell him the idea of doing it in Python and structuring it all for simplicity and maintainability... he was not having it. I could see his code would rapidly descend into a dog's breakfast as soon as it was used to solve any real world problem, because he was repeating all the same mistakes as in his open source package. So fast forward 5 years or so and he has a usable system; it is in daily use and is being publicly disseminated. It is not too bad, until one looks under the hood. I used it as a module in one of my major research tools and it is great that it's available, BUT, it falls over miserably when you stray away from the normal standard use cases that his group have tested and made to work by extensive layers of band-aid fixes, leaving the code in an incomprehensible state. I would spend days debugging and then send him a comprehensive report on my investigation including several proposals for short- and long-term fixes; he was initially enthusiastic about this but lately my reports get labelled "won't fix" with weak excuses about it being outside the common use cases. "Can't fix" would be more accurate. In the process of all this I looked at the changelogs for the releases.
In the past 3 years there were a couple of feature releases and about 30 bugfix releases, each accompanied by a release note which just kind of casually passes this off as no big deal and implies the code is approaching a reliable state. Ha!! By contrast, in May/June this year I decided to enter my tool in a competition run by my research group and open to outside entrants; I think about 20 groups entered, including 3 or 4 internal entries like mine. Well my tool was far from perfect since I had embarked on a major rewrite of the frontend some months earlier and it was hard to produce anything working at all, let alone competition quality. Luckily I had help from the competition organizer: since internal entries are not eligible for prizes, he was happy to alert me to any problems he found and let me submit fixes if it did not mess up his schedule. Well he found quite a few issues and I fixed them and ended up having the fastest and best tool in the competition even though it was not eligible for prizes. But now to the point of the story: The CHARACTER of the problems he found. So much do I care about code quality that it turns out most of the problems amounted to basically an oversight, a misplaced comma that was hard to see, a pointer violation that occurred because a realloc had moved some data during the evaluation of an expression, that sort of thing. The fix never required any significant restructuring of the code, except in one case where it was basically caused by my using that other broken software as a module and I had to work around it. I am so happy that my basic assumptions and algorithms turned out to be robust, because this means that after some period of getting all the typos and minor oversights out, I will have a tool that is close to perfect despite its complexity and the things I still plan to refactor and rewrite. The guys who do not understand code quality will never experience this.
cheers, Nick On 01/01/2017 4:00 PM, "Larry McVoy" wrote: > Inspired by: > > > Stephen Bourne after some time wrote a cron job that checked whether an > > update in a binary also resulted in an updated man page and otherwise > > removed the binary. This is why these programs have man pages. > > I want to tell a story about working at Sun. I feel like I've sent this > but I can't find it in my outbox. If it's a repeat chalk it up to old > age. > > I wanted to work there, they were the Bell Labs of the day, or as close > as you could get. > > I got hired as a contractor through Lachman (anyone remember them?) to do > POSIX conformance in SunOS (the 4.x stuff, not that Solaris crap that I > hate). > > As such, I was frequently the last guy to touch any file in the kernel, > my fingerprints were everywhere. So when there was a panic, it was > frequently laid at my doorstep. > > So here is how I got a pager and learned about source management. > > Sun had two guys, who will remain nameless, but they were known as > "the SCSI twins". These guys decided, based on feedback that "people > can interrupt sun install", to go into the SCSI tape driver and disable > SIGINT, in the driver. The kernel model doesn't allow for drivers messing > with your signal mask so on exit, sometimes, we would get a "panic: psig". > > Somehow, I sure was because of the POSIX stuff, I ended up debugging this > panic. It had nothing to with me, I'm not a driver person (I've written > a few but I pretty much suck at them), but it landed in my lap. > > Once I figured it out (which was not easy, you had to hit ^C to trigger it > so unless you did that, and who does that during an install) I tracked down > the code to SCSI twins. > > No problem, everyone makes mistakes. Oh, wait. Over the next few months > I'm tracking down more problems, that were blamed on me since I'm all over > the kernel, but came from the twins. > > Suns integration machines were argon, radon, and krypton. 
I wrote > scripts, awk I think, that watched every update to the tree on all > of those machines and if anything came from the SCSI twins the script > paged me. > > That way I could go build and test that kernel and get ahead of the bugs. > If I could fix up their bugs before the rest of the team saw it then I > wouldn't get blamed for them. > > I wish I could have figured out something like Steve did that would have > made them not screw up so much but this was the next best thing. I > actually > got bad reviews because of their crap. My boss at the time, Eli Lamb, just > said "you are in kadb too much". > > --lm > -------------- next part -------------- An HTML attachment was scrubbed... URL: From kayparker at mailite.com Sun Jan 1 19:08:48 2017 From: kayparker at mailite.com (=?utf-8?Q?Kay=20Parker=20=09=20?=) Date: Sun, 01 Jan 2017 01:08:48 -0800 Subject: [TUHS] Another Unix hacker passes In-Reply-To: <20170101051342.GR5983@mcvoy.com> References: <20170101051342.GR5983@mcvoy.com> Message-ID: <1483261728.3894246.834186289.1406DDA5@webmail.messagingengine.com> RIP. For those interested. Link to Roger Faulkner's - The Process File System and Process Model in UNIX System V https://www.usenix.org/memoriam-roger-faulkner https://www.usenix.org/sites/default/files/usenix_winter91_faulkner.pdf On Sat, Dec 31, 2016, at 09:13 PM, Larry McVoy wrote: > > Mr /proc. RIP, Roger. He would have loved this list. 
> > http://thenewstack.io/remembering-roger-faulkner/ > https://news.ycombinator.com/item?id=13293596 -- Kay Parker kayparker at mailite.com -- http://www.fastmail.com - Send your email first class From tfb at tfeb.org Sun Jan 1 20:26:20 2017 From: tfb at tfeb.org (Tim Bradshaw) Date: Sun, 1 Jan 2017 10:26:20 +0000 Subject: [TUHS] MacOS X is Unix (tm) In-Reply-To: References: <52C99F50-E24A-4BBF-A129-180A1271B4E3@kdbarto.org> Message-ID: <3564F094-9B31-4492-8FDD-716160F45E84@tfeb.org> On 1 Jan 2017, at 00:43, Nick Downing wrote: > > One significant area of non compliance with unix conventions is its non case sensitive filesystem (HFS and variants like HFS+ if I recall). I think this is partly for historical reasons to make Classic / MacOS9 emulation easier during the transition. But I could never understand why they did this, they could have put case insensitivity in their shell and apps without breaking the filesystem. In fact this is an option: you can construct HFS+ filesystems which are case-sensitive and some are (I think the filesystem used for time machine is case-sensitive). FWIW case-insensitive-case-preserving (which is the default) seems to be the naming option which is least vulnerable to awful braindeath: case-sensitive is clearly purer but is ExtremelyVulnerable, while non-case-preserving ends up like THIS or requires magic hacks. More importantly, there was a fairly significant cohort of people -- I am one of them -- whose first serious exposure to Unix was BSD 4.x. Many of those people were then terribly scarred by Sun's defection to SysV (the early Solaris 2s were just seriously grim). 
If, 10-12 years ago, you wanted a desktop machine which ran a BSD-derived system, which did not require you to spend a lot of time grovelling around in the guts of broken device drivers and/or did not suffer from minor updates which caused the real-time-clock to stop or something, which had a window system which was not a crappy Windows knockoff and for which a reasonably competent set of desktopy applications was available if you wanted them, then MacOS was the only option. Indeed, it was the only option even if you relax the BSD requirement. Linux clearly is a lot better now from that perspective (although based on my experiences with Ubuntu they still do not really understand why 'let's just completely change how everything works every two years' is not a great idea for users: the CADET software-development model is alive and well). --tim -------------- next part -------------- An HTML attachment was scrubbed... URL: From ron at ronnatalie.com Sun Jan 1 23:01:25 2017 From: ron at ronnatalie.com (Ron Natalie) Date: Sun, 1 Jan 2017 08:01:25 -0500 Subject: [TUHS] MacOS X is Unix (tm) In-Reply-To: <3564F094-9B31-4492-8FDD-716160F45E84@tfeb.org> References: <52C99F50-E24A-4BBF-A129-180A1271B4E3@kdbarto.org> <3564F094-9B31-4492-8FDD-716160F45E84@tfeb.org> Message-ID: <02d201d2642f$2bcfe0d0$836fa270$@ronnatalie.com> OS/X (Mac) is Mach-derived; I think you do it a disservice to call it BSD-derived. While the kernel-to-application interface was compatible with 4.2 BSD, the kernel is largely of CMU's own creation. The thing came layered with Doug Gwyn's (where is he? I invited him) BRL SV on BSD user environment to silence the critics that it wasn't SVID compatible.
I hadn't even realized it until I got a few Mach kerneled machines (notably our NeXT cube) and found that it had my version of the Bourne shell with job control and command line editing hacked in (to battle the tcsh guys at BRL because I detested the csh syntax and Korn's shell hadn't gotten out of the labs yet at that point). -------------- next part -------------- An HTML attachment was scrubbed... URL: From ron at ronnatalie.com Sun Jan 1 23:11:56 2017 From: ron at ronnatalie.com (Ron Natalie) Date: Sun, 1 Jan 2017 08:11:56 -0500 Subject: [TUHS] Unix stories In-Reply-To: <20170101050011.GQ5983@mcvoy.com> References: <20170101050011.GQ5983@mcvoy.com> Message-ID: <02e201d26430$a111ecc0$e335c640$@ronnatalie.com> > That way I could go build and test that kernel and get ahead of the bugs. > If I could fix up their bugs before the rest of the team saw it, then I wouldn't get blamed for them. Ugh. I had a programmer who was sent over from the other side of our company to help out. The guy was a complete dolt and broke more than he fixed. Mostly, he never really caught on to how dynamic memory worked in C and was always trashing the malloc areas. So I just backed out every change he made (at least we had source code control going at that point, one of the first things I insisted happen when I took over). Finally, I was on my way in to fire him when I found he had quit an hour previously (fine, I love it when that works out). Unfortunately, he had checked in all his "work in progress", which didn't even compile. Backed all that out. Several years later I get a hold of a tool called Purify which finds memory leaks (among other things) in your code. I find a piece of code written by one of our better programmers, predating the source code control system, that's leaking memory. This can't be. Look through the edit history and there is one edit, my former programmer. He's deleted the free() call in the routine without explanation.
Obviously, he had corrupted the malloced area one day, and it crashed in the subsequent free in this routine, so he just deleted it. Bad programmers can hurt you for a good long time after they leave. The only other guy I had to fire at least had done absolutely nothing over his tenure, so we were unaffected in the long run. From michael at kjorling.se Sun Jan 1 23:28:12 2017 From: michael at kjorling.se (Michael Kjörling) Date: Sun, 1 Jan 2017 13:28:12 +0000 Subject: [TUHS] MacOS X is Unix (tm) In-Reply-To: References: <52C99F50-E24A-4BBF-A129-180A1271B4E3@kdbarto.org> Message-ID: <20170101132812.GW576@yeono.kjorling.se> On 1 Jan 2017 11:43 +1100, from downing.nick at gmail.com (Nick Downing): > One significant area of non compliance with unix conventions is its non > case sensitive filesystem (HFS and variants like HFS+ if I recall). I think > this is partly for historical reasons to make Classic / MacOS9 emulation > easier during the transition. Just a side note, but ZFS takes the middle ground of making this a trivially tunable option. Just set casesensitivity=[sensitive | insensitive | mixed] on the relevant file system (that's right, like almost everything else that matters in daily use in ZFS, it's a file system property, not a pool property). I suspect that the major use case for casesensitivity=insensitive is support for operating systems that are normally used with case insensitive file name matching. By setting `mixed`, userland software can apparently specify whether it wants case-sensitive or case-insensitive matching on a per-request basis. I suspect that requires ZFS-specific knowledge in the software. Of course the default is `sensitive`. Something similar to `mixed` could have been done in OS X and HFS/+; just make the default case sensitive, and adjust on a per-process basis if the process is running through the compatibility layer. (Or the other way around, if preferred.
It's not like either way would have been significantly more work than the other, and it _would_ have been moving the ecosystem in a Unixy direction.) -- Michael Kjörling • https://michael.kjorling.se • michael at kjorling.se “People who think they know everything really annoy those of us who know we don’t.” (Bjarne Stroustrup) From tfb at tfeb.org Sun Jan 1 23:56:33 2017 From: tfb at tfeb.org (Tim Bradshaw) Date: Sun, 1 Jan 2017 13:56:33 +0000 Subject: [TUHS] MacOS X is Unix (tm) In-Reply-To: <02d201d2642f$2bcfe0d0$836fa270$@ronnatalie.com> References: <52C99F50-E24A-4BBF-A129-180A1271B4E3@kdbarto.org> <3564F094-9B31-4492-8FDD-716160F45E84@tfeb.org> <02d201d2642f$2bcfe0d0$836fa270$@ronnatalie.com> Message-ID: <95D6B274-6D3F-4610-873A-76F4707AE89B@tfeb.org> Yes, I know it's Mach: I really meant the userland and (to a smaller extent) the system calls. Given my aim of a desktop machine on which to live that's actually all I care about: if I have to worry about the guts of the kernel then the system has failed to provide what I need from it. > On 1 Jan 2017, at 13:01, Ron Natalie wrote: > > OS/X (Mac) is Mach-derived I think you do it a disservice to call it BSD-derived. While the kernel-to-application interface was compatible with 4.2 BSD, the kernel is largely of CMU’s only creation. > The thing came layered with Doug Gwyn’s (where is he? I invited him) BRL SV on BSD user environment to silence the critics that it wasn’t SVID compatible. I hadn’t even realized it until I got a few Mach kerneled machines (notably our NeXT cube) and found that it had my version of the Bourne shell with job control and command line editing hacked in (to battle the tcsh guys at BRL because I detested the csh syntax and Korn’s shell hadn’t gotten out of the labs yet at that point). -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From jnc at mercury.lcs.mit.edu Mon Jan 2 02:50:56 2017 From: jnc at mercury.lcs.mit.edu (Noel Chiappa) Date: Sun, 1 Jan 2017 11:50:56 -0500 (EST) Subject: [TUHS] Unix stories Message-ID: <20170101165056.1179318C095@mercury.lcs.mit.edu> > From: Nick Downing > way overcomplicated and using a very awkward structure of millions of > interdependent C++ templates and what-have-you. > ... > the normal standard use cases that his group have tested and made to > work by extensive layers of band-aid fixes, leaving the code in an > incomprehensible state. Which just goes to provide support for my long-term contention, that language features can't help a bad programmer, or prevent them from writing garbage. Sure, you can take away 'goto' and other dangerous things, and add a lot of things that _can_ be used to write good code (e.g. complete typing and type checking), but that doesn't mean that a user _will_ write good code. I once did a lot of work with an OS written in a macro assembler, done by someone really good. (He'd even created macros to do structure declarations!) It was a joy to work with (very clean and simple), totally bug-free; and very easy to change/modify, while retaining those characteristics. (I modified the I/O system to use upcalls to signal asynchronous I/O completion, instead of IPC messages, and it was like falling off a log.) Thinking we can provide programming tools/languages which will make good programmers is like thinking we can provide sculpting equipment which will make good sculptors. I don't, alas, have any suggestions for what we _can_ do to make good programmers. It may be impossible (like making good sculptors - they are born, not made). I do recall talking to Jerry Saltzer about system architects, and he said something to the effect of 'we can run this stuff past students, and some of them get it, and some don't, and that's about all we can do'. 
Noel From david at kdbarto.org Mon Jan 2 05:33:45 2017 From: david at kdbarto.org (David) Date: Sun, 1 Jan 2017 11:33:45 -0800 Subject: [TUHS] MacOS X is Unix (tm) In-Reply-To: <95D6B274-6D3F-4610-873A-76F4707AE89B@tfeb.org> References: <52C99F50-E24A-4BBF-A129-180A1271B4E3@kdbarto.org> <3564F094-9B31-4492-8FDD-716160F45E84@tfeb.org> <02d201d2642f$2bcfe0d0$836fa270$@ronnatalie.com> <95D6B274-6D3F-4610-873A-76F4707AE89B@tfeb.org> Message-ID: MacOS is the best of a bad lot, in my opinion. Unix at the core and as a result pretty well able to withstand attacks from the outside world. For a desktop OS, not too bad. Linux is too diversified at this point to make it to the desktop any time soon. As a server OS it is a wonderful thing and I am willing to work with it there any time. Though the differences between Linux A and Linux B are sometimes grating on my ability to get the code I am paid to work on running in all environments. Windows is just broken. Yes, it is getting better (for some definition of better) with each release. The ability to hack and attack usually has this common vector. It is usually chosen by some IT guy because the initial cost is low and it is what they got trained to use sometime in history. As to Doug Gwyn and the BRL code, I ported all of that onto an early Celerity release. Celerity was 4.2 and I did the port to get the feature set that some customers were clamoring for. And Ron, you may have been the one to send me your patches for the Bourne shell way back when. I do remember doing that integration for Celerity as well. David > On Jan 1, 2017, at 5:56 AM, Tim Bradshaw wrote: > > Yes, I know it's Mach: I really meant the userland and (to a smaller extent) the system calls. Given my aim of a desktop machine on which to live that's actually all I care about: if I have to worry about the guts of the kernel then the system has failed to provide what I need from it. 
> > On 1 Jan 2017, at 13:01, Ron Natalie wrote: > >> OS/X (Mac) is Mach-derived I think you do it a disservice to call it BSD-derived. While the kernel-to-application interface was compatible with 4.2 BSD, the kernel is largely of CMU’s only creation. >> The thing came layered with Doug Gwyn’s (where is he? I invited him) BRL SV on BSD user environment to silence the critics that it wasn’t SVID compatible. I hadn’t even realized it until I got a few Mach kerneled machines (notably our NeXT cube) and found that it had my version of the Bourne shell with job control and command line editing hacked in (to battle the tcsh guys at BRL because I detested the csh syntax and Korn’s shell hadn’t gotten out of the labs yet at that point). From tfb at tfeb.org Mon Jan 2 06:12:47 2017 From: tfb at tfeb.org (Tim Bradshaw) Date: Sun, 1 Jan 2017 20:12:47 +0000 Subject: [TUHS] MacOS X is Unix (tm) In-Reply-To: References: <52C99F50-E24A-4BBF-A129-180A1271B4E3@kdbarto.org> <3564F094-9B31-4492-8FDD-716160F45E84@tfeb.org> <02d201d2642f$2bcfe0d0$836fa270$@ronnatalie.com> <95D6B274-6D3F-4610-873A-76F4707AE89B@tfe b.org> Message-ID: <45EEDF24-40D3-4ACD-8F67-9DA6424097FE@tfeb.org> Well: where I work the default desktop is RHEL as far as I know (there are management and admin people who have Windows desktops I think, and laptops are Windows). This is a scientific environment however: if there were not desktop linux systems they'd need an even bigger farm of headless machines (probably VMs) so there was cultural compatibility with the HPC. And of course mail is Outlook/Citrix so they cheat there. I think the answer is that it works if you remember that you are not deploying 'Linux' but RHEL or Ubuntu (or MacOS!) or whatever. Scale also helps. --tim > On 1 Jan 2017, at 19:33, David wrote: > > Linux is to diversified at this point to make it to the desktop any time soon. 
From khm at sciops.net Mon Jan 2 06:28:50 2017 From: khm at sciops.net (Kurt H Maier) Date: Sun, 1 Jan 2017 12:28:50 -0800 Subject: [TUHS] MacOS X is Unix (tm) In-Reply-To: References: <3564F094-9B31-4492-8FDD-716160F45E84@tfeb.org> <02d201d2642f$2bcfe0d0$836fa270$@ronnatalie.com> <95D6B274-6D3F-4610-873A-76F4707AE89B@tfeb.org> Message-ID: <20170101202850.GF17848@wopr> On Sun, Jan 01, 2017 at 11:33:45AM -0800, David wrote: > Linux is to diversified at this point to make it to the desktop any time soon. I haven't had a job that didn't provide a Linux workstation since the early 2000s. Linux has made it to the desktop already, regardless of what goes on in discount electronics stores. It's not particularly good at the desktop, but then, it's not particularly good at anything else, either. Like any other operating system, you get out what you put in. Happy new year. khm From lm at mcvoy.com Mon Jan 2 06:38:13 2017 From: lm at mcvoy.com (Larry McVoy) Date: Sun, 1 Jan 2017 12:38:13 -0800 Subject: [TUHS] MacOS X is Unix (tm) In-Reply-To: <20170101202850.GF17848@wopr> References: <3564F094-9B31-4492-8FDD-716160F45E84@tfeb.org> <02d201d2642f$2bcfe0d0$836fa270$@ronnatalie.com> <95D6B274-6D3F-4610-873A-76F4707AE89B@tfeb.org> <20170101202850.GF17848@wopr> Message-ID: <20170101203813.GV5983@mcvoy.com> On Sun, Jan 01, 2017 at 12:28:50PM -0800, Kurt H Maier wrote: > On Sun, Jan 01, 2017 at 11:33:45AM -0800, David wrote: > > Linux is to diversified at this point to make it to the desktop any time soon. > > I haven't had a job that didn't provide a Linux workstation since the > early 2000s. Linux has made it to the desktop already, regardless of what > goes on in discount electronics stores. It's not particularly good at > the desktop, but then, it's not particularly good at anything else, > either. I'd like to know where you can get a better performing OS. The file systems scream when compared to Windows or MacOS, they know about SSDs and do the right thing. 
The processes are lightweight, I regularly do "make -j" which on my machines just spawns as many processes as needed.

$ time make -j
real	0m17.336s
user	1m7.652s
sys	0m5.116s

$ time make -j12   # this is a 6 cpu/hyperthreaded to 12
real	0m16.473s
user	1m5.856s
sys	0m4.736s

So if I size it to the number of CPUs it is slightly faster. On the other hand, when I tell it to just spawn as many as it wants it peaks at about 267 processes running in parallel. Solaris, AIX, IRIX, HP-UX, MacOS would all thrash like crazy under that load, their context switch times are crappy. Source: author of LMbench which has been measuring this stuff since the mid 1990s. From berny at berwynlodge.com Mon Jan 2 07:10:34 2017 From: berny at berwynlodge.com (Berny Goodheart) Date: Sun, 1 Jan 2017 21:10:34 +0000 Subject: [TUHS] Another Unix hacker passes Message-ID: <387B45F2-2DDE-4B8B-9835-256ADB39AB64@berwynlodge.com> Very sad news. RIP Roger. From cym224 at gmail.com Mon Jan 2 07:45:30 2017 From: cym224 at gmail.com (Nemo) Date: Sun, 1 Jan 2017 16:45:30 -0500 Subject: [TUHS] Unix stories In-Reply-To: <20170101165056.1179318C095@mercury.lcs.mit.edu> References: <20170101165056.1179318C095@mercury.lcs.mit.edu> Message-ID: On 1 January 2017 at 11:50, Noel Chiappa wrote (in part): > Which just goes to provide support for my long-term contention, that language > features can't help a bad programmer, or prevent them from writing garbage. Indeed. In one of his books, Wirth laments about programmers proudly showing him terrible code written in Pascal and goes on to say the same thing. N. From downing.nick at gmail.com Mon Jan 2 12:42:31 2017 From: downing.nick at gmail.com (Nick Downing) Date: Mon, 2 Jan 2017 13:42:31 +1100 Subject: [TUHS] Unix stories In-Reply-To: <5257291ca0a0e1d80c646cab730129d589c5d707@webmail.yaccman.com> References: <5257291ca0a0e1d80c646cab730129d589c5d707@webmail.yaccman.com> Message-ID: This is something I think about a lot. 
And I can't really say how adding an option to gcc (for example) should be tested. In fact I'm a bit lazy when it comes to adding test cases even though I recognize their validity. Programming by test case is a good idea, but one problem is the setup cost, for a new project I tend to just write a two-line Makefile and start coding, and I really want to keep doing this. Once I start adding a testcase infrastructure then the Makefile is a lot bigger (make tests and so forth) and relies on external tools etc. Simplicity really can be better, I mean once you start using tools like cmake/smake, ant, or (THE HORRORS) maven, it's a time waste. But what I really want to say here, is a lot of the bloat arises because different ecosystems have different ways of doing things. For instance the object model. Well C++ has an object model. And C has an object model (based on void * pointers, lots of libraries export this kind of object model including FILE * in stdio). And gnome has an object model. And OpenOffice/StarOffice defines its own object model which is similar to Microsoft's COM. And so on and so forth. So, much as I like the democracy of an open ecosystem where everyone can define a competing way of doing the same thing and de facto standards develop... my own plan is a little bit different. What I want to do is go back to a REALLY SIMPLE unbloated system, which is why I am very interested in 2.11BSD (you probably saw my earlier posts about the 2.11BSD system and potential port to Z180 and so on). And then I want to define the ONE TRUE WAY(TM) of doing each thing. But before I do this I want to go right back to basics and look at the object model used in the operating system itself. For instance the stuff like the oft (open file table), inode table, the filesystem table (I mean Sun VFS which isn't in 2.11BSD AFAIK but eventually should be), the device table and so on. And also the user-visible objects like files, sockets etc which map to kernel objects. 
So once I have sorted all this out and created an object model that the kernel can use efficiently, with compiler support (like C++ without the bloat, like java with internal pointers and ability to get at the bits and bytes of your objects, like C without all the void * stuff and with automatic handling of stuff like vtables), and converted the kernel and all drivers to use it, then I think I will have an object model that is useful in practice. So my idea then is to export it to userlevel, so that userland programs can be rewritten into this new C-like object oriented language and calls like: count = read(fd, buf, size) would change to: count = fd.read(buf, size) since kernel objects are objects. There would also be a compatibility layer that allows you to keep a table of integer fds and their kernel objects in userspace for porting. After this I would start to look at popular libraries like the C library or the X-Window system, and convert them to use the new object model, while also providing the compatibility layer as I did for the kernel interface. Ultimately the result would be a bit like Java, but using all the familiar Unix objects and familiar Unix calling conventions (such as argument passing by reference or malloc/memcpy stuff that Java can't do). Also without any header files or boilerplate of any description, which is one of my pet peeves with Java, C, C++ etc. I really think that the solution to bloat is to go through and rewrite everything to do things in a more standardized way with more reuse. Also I think that the massive amount of bloat arises to some extent because the environment lends itself to writing non maintainable code (for example you have to write loads of boilerplate and synchronize function definitions in various places, which discourages you from changing a function definition even if that's the right thing to do in a situation). So there's always the temptation to add another compatibility layer rather than dealing with the bloat. 
Rewriting things in a much more minimal and maintainable style is the answer. Another reason for bloat is that authors have to support millions of slightly different systems. My idea is to totally standardize it, like POSIX but much more drastically so. Think about Java, it defines a strict virtual machine so there's nothing to change when you port your code to another platform. I haven't totally decided how to handle word-size issues in this context, but I am sure there is a way. Did I mention I also have plans for world domination? :) cheers, Nick On Mon, Jan 2, 2017 at 1:03 PM, Steve Johnson wrote: > > These stories certainly rang true to me. I think it's interesting to pose > the question, to be a bit more contemporary, "What do we need to do to make > open source code higher quality?" I think the original arguments (that open > source would be high quality because everybody would read the code and fix > the bugs) have a bit of validity. But, IMHO, it is swamped by the typical > in-coherence of the software. > > It seems to me to be glaringly obvious that if you add a single on/off > option to a program, and don't want the quality of the code to decrease, you > should a priori double the amount of testing you do before releasing it. > If you have a carefully designed program with multiple phases with firewalls > between them, you might be able to get away with only 10 or 20% more > testing. > > So look at gcc with nearly 600 lines in the man page describing just the > names of the options... It seems obvious that the actual amount of testing > of these combinations is a set of measure 0 in the space of all possible > invocations. And the observed fact is that if you try to do something at > all unusual, no matter how reasonable it may seem, it is likely to fail. > Additionally, it is kind of sad to see the same functionality (e.g., increasing > the default stack size) is done so differently on different OS's. 
Even > within Linux, setting the locale (at least at one time) was quite different > from Linux to Linux. > > And I think you can argue that gcc is a success story... > > But how much longer can this go on? What can we do to fight the exponential > bloat of this and other open-souce projects. All ideas cheerfully > entertained... > > Steve > From wes.parish at paradise.net.nz Mon Jan 2 12:53:28 2017 From: wes.parish at paradise.net.nz (Wesley Parish) Date: Mon, 02 Jan 2017 15:53:28 +1300 (NZDT) Subject: [TUHS] Unix stories In-Reply-To: References: <20170101165056.1179318C095@mercury.lcs.mit.edu> Message-ID: <1483325608.5869c0a87423e@www.paradise.net.nz> Don't we usually express that as "You can write FORTRAN in any language"? Wesley Parish Quoting Nemo : > On 1 January 2017 at 11:50, Noel Chiappa > wrote (in part): > > Which just goes to provide support for my long-term contention, that > language > > features can't help a bad programmer, or prevent them from writing > garbage. > > Indeed. In one of his books, Wirth laments about programmers proudly > showing him terrible code written in Pascal and goes on to say the > same thing. > > N. > "I have supposed that he who buys a Method means to learn it." - Ferdinand Sor, Method for Guitar "A verbal contract isn't worth the paper it's written on." -- Samuel Goldwyn From usotsuki at buric.co Mon Jan 2 16:01:32 2017 From: usotsuki at buric.co (Steve Nickolas) Date: Mon, 2 Jan 2017 01:01:32 -0500 (EST) Subject: [TUHS] Unix stories In-Reply-To: References: <5257291ca0a0e1d80c646cab730129d589c5d707@webmail.yaccman.com> Message-ID: On Mon, 2 Jan 2017, Nick Downing wrote: > What I want to do is go back to a REALLY SIMPLE unbloated system, > which is why I am very interested in 2.11BSD (you probably saw my > earlier posts about the 2.11BSD system and potential port to Z180 and > so on). And then I want to define the ONE TRUE WAY(TM) of doing each > thing. 
But before I do this I want to go right back to basics and look > at the object model used in the operating system itself. For instance > the stuff like the oft (open file table), inode table, the filesystem > table (I mean Sun VFS which isn't in 2.11BSD AFAIK but eventually > should be), the device table and so on. And also the user-visible > objects like files, sockets etc which map to kernel objects. > > So once I have sorted all this out and created an object model that > the kernel can use efficiently, with compiler support (like C++ > without the bloat, like java with internal pointers and ability to get > at the bits and bytes of your objects, like C without all the void * > stuff and with automatic handling of stuff like vtables), and > converted the kernel and all drivers to use it, then I think I will > have an object model that is useful in practice. So my idea then is to > export it to userlevel, so that userland programs can be rewritten > into this new C-like object oriented language and calls like: count = > read(fd, buf, size) would change to: count = fd.read(buf, size) since > kernel objects are objects. There would also be a compatibility layer > that allows you to keep a table of integer fds and their kernel > objects in userspace for porting. > > After this I would start to look at popular libraries like the C > library or the X-Window system, and convert them to use the new object > model, while also providing the compatibility layer as I did for the > kernel interface. Ultimately the result would be a bit like Java, but > using all the familiar Unix objects and familiar Unix calling > conventions (such as argument passing by reference or malloc/memcpy > stuff that Java can't do). Also without any header files or > boilerplate of any description, which is one of my pet peeves with > Java, C, C++ etc. > > I really think that the solution to bloat is to go through and rewrite > everything to do things in a more standardized way with more reuse. 
> Also I think that the massive amount of bloat arises to some extent > because the environment lends itself to writing non maintainable code > (for example you have to write loads of boilerplate and synchronize > function definitions in various places, which discourages you from > changing a function definition even if that's the right thing to do in > a situation). So there's always the temptation to add another > compatibility layer rather than dealing with the bloat. Rewriting > things in a much more minimal and maintainable style is the answer. > > Another reason for bloat is that authors have to support millions of > slightly different systems. My idea is to totally standardize it, like > POSIX but much more drastically so. Think about Java, it defines a > strict virtual machine so there's nothing to change when you port your > code to another platform. I haven't totally decided how to handle > word-size issues in this context, but I am sure there is a way. This vaguely reminds me of an idea I had a couple years ago, but never mentioned because I thought it might be perceived as daft. I had the idea of writing a virtual machine specifically tailored to running C code, with a virtual operating system similar to Unix. On the surface it would probably feel like running Unix under an emulator, but the design would probably be a bit less baroque, because everything would be designed to work together cleanly. Like the mutant child of Unix and UCSD Pascal, I suppose. -uso. From wkt at tuhs.org Mon Jan 2 16:21:59 2017 From: wkt at tuhs.org (Warren Toomey) Date: Mon, 2 Jan 2017 16:21:59 +1000 Subject: [TUHS] Unix stories In-Reply-To: References: <5257291ca0a0e1d80c646cab730129d589c5d707@webmail.yaccman.com> Message-ID: <20170102062159.GA3939@minnie.tuhs.org> On Mon, Jan 02, 2017 at 01:01:32AM -0500, Steve Nickolas wrote: > I had the idea of writing a virtual machine specifically tailored to running > C code, with a virtual operating system similar to Unix. 
On the surface it > would probably feel like running Unix under an emulator, but the design > would probably be a bit less baroque, because everything would be designed > to work together cleanly. I've been admiring Swieros: https://github.com/rswier/swieros which sort of does this. Cheers, Warren From downing.nick at gmail.com Mon Jan 2 16:25:16 2017 From: downing.nick at gmail.com (Nick Downing) Date: Mon, 2 Jan 2017 17:25:16 +1100 Subject: [TUHS] Unix stories In-Reply-To: References: <5257291ca0a0e1d80c646cab730129d589c5d707@webmail.yaccman.com> Message-ID: Yes, I agree, I want to do exactly that. And, I know my ideas are probably also perceived as daft. But it's OK :) Unfortunately a C interpreter does not really work in principle, because of the fact you can jump into loops and other kinds of abuse. So the correct execution model is a bytecode. This means the C compiler is pretty much unchanged, so there are no real simplifications there. Also, you can cast a pointer and access individual bytes of a structure and that kind of abuse. So the data model is pretty much unchanged from a real machine's too, and there are no real simplifications there either. But in my opinion that's not the end of the story. What I would like to do is to implement pointer safety, I know this is a pretty big ask and will break compatibility with a big number of C applications out there. But it might be surprising how many actually do use pointers in a safe way. So my idea was something like this: Suppose there is "struct foo { char *p; int *q; long b; short a; };", well the user wouldn't be allowed to access the individual bytes of the pointers p and q. But they would be allowed to access the individual bytes of a and b since this does not introduce anything dangerous. Therefore the struct would be internally converted to "struct foo { char *p; int *q; char data[6] };" allowing 4 bytes for the long and 2 for the short. Then, now suppose I call a function and I pass it "&a". 
This would be internally converted to "data + 4". But it would be passed as a triple (data, data + 4, data + 6) which gives the bounds of the underlying array. Or suppose the virtual machine was implemented in Java or Python or some such, it would be passed as a pair (data, 4) where data is a 6-byte array and 4 is an integer index into it. I was then going to make the compiler recognize some idiomatic C constructs like "struct foo *p = malloc(10 * sizeof(foo));" and internally translate them to something like "struct foo *p = new foo[10];". Hopefully there could eventually be discriminated unions so that I could declare something like "union foo { char *p; int *q; long b; short a; };" and this would be internally translated to "struct foo { int which; union { char *, int *, data[4] } };" and the field "which" would be set to 0, 1 or 2 depending on whether a char *, an int *, or plain old data had been stored there. That way, when you access a given field of the union it can validate that the correct thing was stored there earlier. Also, with a little bit of syntactic sugar we could implement intptr_t as "union { void *p; int a; };" to increase compatibility with C. If we do it like this, then your P-system idea works quite well, because the memory space doesn't exist anymore, it's an OO database. cheers, Nick On Mon, Jan 2, 2017 at 5:01 PM, Steve Nickolas wrote: > On Mon, 2 Jan 2017, Nick Downing wrote: > >> What I want to do is go back to a REALLY SIMPLE unbloated system, >> which is why I am very interested in 2.11BSD (you probably saw my >> earlier posts about the 2.11BSD system and potential port to Z180 and >> so on). And then I want to define the ONE TRUE WAY(TM) of doing each >> thing. But before I do this I want to go right back to basics and look >> at the object model used in the operating system itself. 
For instance >> the stuff like the oft (open file table), inode table, the filesystem >> table (I mean Sun VFS which isn't in 2.11BSD AFAIK but eventually >> should be), the device table and so on. And also the user-visible >> objects like files, sockets etc which map to kernel objects. >> >> So once I have sorted all this out and created an object model that >> the kernel can use efficiently, with compiler support (like C++ >> without the bloat, like java with internal pointers and ability to get >> at the bits and bytes of your objects, like C without all the void * >> stuff and with automatic handling of stuff like vtables), and >> converted the kernel and all drivers to use it, then I think I will >> have an object model that is useful in practice. So my idea then is to >> export it to userlevel, so that userland programs can be rewritten >> into this new C-like object oriented language and calls like: count = >> read(fd, buf, size) would change to: count = fd.read(buf, size) since >> kernel objects are objects. There would also be a compatibility layer >> that allows you to keep a table of integer fds and their kernel >> objects in userspace for porting. >> >> After this I would start to look at popular libraries like the C >> library or the X-Window system, and convert them to use the new object >> model, while also providing the compatibility layer as I did for the >> kernel interface. Ultimately the result would be a bit like Java, but >> using all the familiar Unix objects and familiar Unix calling >> conventions (such as argument passing by reference or malloc/memcpy >> stuff that Java can't do). Also without any header files or >> boilerplate of any description, which is one of my pet peeves with >> Java, C, C++ etc. >> >> I really think that the solution to bloat is to go through and rewrite >> everything to do things in a more standardized way with more reuse. 
>> Also I think that the massive amount of bloat arises to some extent >> because the environment lends itself to writing non maintainable code >> (for example you have to write loads of boilerplate and synchronize >> function definitions in various places, which discourages you from >> changing a function definition even if that's the right thing to do in >> a situation). So there's always the temptation to add another >> compatibility layer rather than dealing with the bloat. Rewriting >> things in a much more minimal and maintainable style is the answer. >> >> Another reason for bloat is that authors have to support millions of >> slightly different systems. My idea is to totally standardize it, like >> POSIX but much more drastically so. Think about Java, it defines a >> strict virtual machine so there's nothing to change when you port your >> code to another platform. I haven't totally decided how to handle >> word-size issues in this context, but I am sure there is a way. > > > This vaguely reminds me of an idea I had a couple years ago, but never > mentioned because I thought it might be perceived as daft. > > I had the idea of writing a virtual machine specifically tailored to running > C code, with a virtual operating system similar to Unix. On the surface it > would probably feel like running Unix under an emulator, but the design > would probably be a bit less baroque, because everything would be designed > to work together cleanly. > > Like the mutant child of Unix and UCSD Pascal, I suppose. > > -uso. From arnold at skeeve.com Mon Jan 2 17:29:12 2017 From: arnold at skeeve.com (arnold at skeeve.com) Date: Mon, 02 Jan 2017 00:29:12 -0700 Subject: [TUHS] Unix stories In-Reply-To: References: <5257291ca0a0e1d80c646cab730129d589c5d707@webmail.yaccman.com> Message-ID: <201701020729.v027TCh6025641@freefriends.org> Nick Downing wrote: > What I want to do is go back to a REALLY SIMPLE unbloated system, Ahem. This is not a new idea. 
Take a hard look at Plan 9 From Bell Labs, where the original Unix guys decided that it was worth starting over. They did A LOT of neat stuff. There's a lot of overlap of readership between TUHS and 9fans (not surprising); I'm surprised no one beat me to mentioning this. But before totally reinventing the wheel, I'd strongly recommend looking at what Ken et al did with Plan 9. Arnold P.S. And yes, we've gotten far off topic, I suppose. :-) From arnold at skeeve.com Mon Jan 2 20:06:39 2017 From: arnold at skeeve.com (arnold at skeeve.com) Date: Mon, 02 Jan 2017 03:06:39 -0700 Subject: [TUHS] MacOS X is Unix (tm) In-Reply-To: <02d201d2642f$2bcfe0d0$836fa270$@ronnatalie.com> References: <3564F094-9B31-4492-8FDD-716160F45E84@tfeb.org> <02d201d2642f$2bcfe0d0$836fa270$@ronnatalie.com> Message-ID: <201701021006.v02A6dwf032366@freefriends.org> "Ron Natalie" wrote: > OS/X (Mac) is Mach-derived I think you do it a disservice to call it > BSD-derived. While the kernel-to-application interface was compatible > with 4.2 BSD, the kernel is largely of CMU's only creation. > > The thing came layered with Doug Gwyn's (where is he? I invited him) BRL SV > on BSD user environment to silence the critics that it wasn't SVID > compatible. I hadn't even realized it until I got a few Mach kerneled > machines (notably our NeXT cube) and found that it had my version of the > Bourne shell with job control and command line editing hacked in (to battle > the tcsh guys at BRL because I detested the csh syntax and Korn's shell > hadn't gotten out of the labs yet at that point). > I remember Ron's job control stuff. I even backported it to the BSD v7 Bourne shell and posted it to one of the sources groups. I don't remember a history mechanism by Ron, but I do remember one that I did; it was very csh style. I don't think tcsh was around then. This is ~ 1984-1986 time frame. Ron - can you give more detail about your history mechanism? 
Thanks, Arnold From schily at schily.net Mon Jan 2 21:14:53 2017 From: schily at schily.net (Joerg Schilling) Date: Mon, 02 Jan 2017 12:14:53 +0100 Subject: [TUHS] Another Unix hacker passes In-Reply-To: <20170101051342.GR5983@mcvoy.com> References: <20170101051342.GR5983@mcvoy.com> Message-ID: <586a362d.zbHKmtPSFH27q3vy%schily@schily.net> Larry McVoy wrote: > > Mr /proc. RIP, Roger. He would have loved this list. > > http://thenewstack.io/remembering-roger-faulkner/ > https://news.ycombinator.com/item?id=13293596 Well, this happened in Summer and at that time, I was wondering whether I should send a note to this list. He had a heart attack in spring and returned to the weekly POSIX teleconferences after aprox. 6 weeks. After another 6 weeks where he attended the teleconferences again, we received the message that he passed away... BTW: Roger died a few days before this thread started: https://lists.gnu.org/archive/html/bug-tar/2016-07/msg00015.html The Linux people did not seem to know where /proc is from... Jörg -- EMail:joerg at schily.net (home) Jörg Schilling D-13353 Berlin joerg.schilling at fokus.fraunhofer.de (work) Blog: http://schily.blogspot.com/ URL: http://cdrecord.org/private/ http://sourceforge.net/projects/schilytools/files/ From schily at schily.net Mon Jan 2 21:31:47 2017 From: schily at schily.net (Joerg Schilling) Date: Mon, 02 Jan 2017 12:31:47 +0100 Subject: [TUHS] MacOS X is Unix (tm) In-Reply-To: <52C99F50-E24A-4BBF-A129-180A1271B4E3@kdbarto.org> References: <52C99F50-E24A-4BBF-A129-180A1271B4E3@kdbarto.org> Message-ID: <586a3a23.udW0nRrOopzHoQbP%schily@schily.net> David wrote: > MacOS X is a certified Unix (tm) OS. Not Unix-Like. Given that MacOS X is not POSIX compliant, I would call it a UNIX-alike. Note that passing the certification tests unfortunately does not grant POSIX compliance :-( Try e.g. 
this program on Mac OS X (the angle-bracket header names were eaten by the HTML archiver; the standard headers these calls require are restored below):

#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
#include <sys/wait.h>

/*
 * Non-standard compliant platforms may need
 * #include <sys/siginfo.h> or something similar
 * in addition to the include files above.
 */
int main()
{
	siginfo_t	si;
	pid_t		pid;
	int		ret;

	if ((pid = fork()) < 0)
		exit(1);
	if (pid == 0) {
		_exit(1234567890);
	}
	ret = waitid(P_PID, pid, &si, WEXITED);
	printf("ret: %d si_pid: %ld si_status: %d si_code: %d\n",
	    ret, (long) si.si_pid, si.si_status, si.si_code);
	if (pid != si.si_pid)
		printf("si_pid in struct siginfo should be %ld but is %ld\n",
		    (long) pid, (long) si.si_pid);
	if (si.si_status != 1234567890)
		printf("si_status in struct siginfo should be %d (0x%x) but is %d (0x%x)\n",
		    1234567890, 1234567890, si.si_status, si.si_status);
	if (si.si_code != CLD_EXITED)
		printf("si_code in struct siginfo should be %d (0x%x) but is %d (0x%x)\n",
		    CLD_EXITED, CLD_EXITED, si.si_code, si.si_code);
	if (CLD_EXITED != 1)
		printf("CLD_EXITED is %d on this platform\n", CLD_EXITED);
	return (0);
}

Jörg -- EMail:joerg at schily.net (home) Jörg Schilling D-13353 Berlin joerg.schilling at fokus.fraunhofer.de (work) Blog: http://schily.blogspot.com/ URL: http://cdrecord.org/private/ http://sourceforge.net/projects/schilytools/files/ From ron at ronnatalie.com Mon Jan 2 21:34:35 2017 From: ron at ronnatalie.com (Ron Natalie) Date: Mon, 2 Jan 2017 06:34:35 -0500 Subject: [TUHS] MacOS X is Unix (tm) In-Reply-To: <201701021006.v02A6dwf032366@freefriends.org> References: <3564F094-9B31-4492-8FDD-716160F45E84@tfeb.org> <02d201d2642f$2bcfe0d0$836fa270$@ronnatalie.com> <201701021006.v02A6dwf032366@freefriends.org> Message-ID: <032601d264ec$34b93c60$9e2bb520$@ronnatalie.com> I added TCSH command line editing. The history was you could scroll back through your previous commands. By default, the thing used an emacs-like command binding, but it was entirely configurable. Most of the Mach /bin/sh were my earlier shell with only job control. The command editing one didn't make Doug's distribution.
I also remember sitting down with some other guys and explaining how the BSD job control worked. For this I garnered a mention in some of the early Linux docs. I also remember having a nice discussion of shell internals with Dave Korn at another show. The two of us had a lot of stories about banging our heads on the shell internals. The most significant change out of the labs was between SV and SVR2 when someone finally got rid of all those Bourne macros that made the C look like Algol or whatever. From arnold at skeeve.com Mon Jan 2 22:24:43 2017 From: arnold at skeeve.com (arnold at skeeve.com) Date: Mon, 02 Jan 2017 05:24:43 -0700 Subject: [TUHS] MacOS X is Unix (tm) In-Reply-To: <032601d264ec$34b93c60$9e2bb520$@ronnatalie.com> References: <3564F094-9B31-4492-8FDD-716160F45E84@tfeb.org> <02d201d2642f$2bcfe0d0$836fa270$@ronnatalie.com> <201701021006.v02A6dwf032366@freefriends.org> <032601d264ec$34b93c60$9e2bb520$@ronnatalie.com> Message-ID: <201701021224.v02COhVn005971@freefriends.org> "Ron Natalie" wrote: > The command editing one didn't make Doug's distribution. That explains why I never saw it... :-) > The most significant change out of the labs was between SV and SVR2 when > someone finally got rid of all those Bourne macros that made the C look like > Algol or whatever. Yes, I remember that. It was a real breath of fresh air. It was really USG at that point and not "the Labs" (= the Research group). I think shell functions came in at that point, too. Thanks, Arnold From scj at yaccman.com Mon Jan 2 12:03:10 2017 From: scj at yaccman.com (Steve Johnson) Date: Sun, 01 Jan 2017 18:03:10 -0800 Subject: [TUHS] Unix stories In-Reply-To: Message-ID: <5257291ca0a0e1d80c646cab730129d589c5d707@webmail.yaccman.com> These stories certainly rang true to me. I think it's interesting to pose the question, to be a bit more contemporary, "What do we need to do to make open source code higher quality?"
I think the original argument (that open source would be high quality because everybody would read the code and fix the bugs) has a bit of validity. But, IMHO, it is swamped by the typical incoherence of the software. It seems to me to be glaringly obvious that if you add a single on/off option to a program, and don't want the quality of the code to decrease, you should _a_ _priori_ double the amount of testing you do before releasing it. If you have a carefully designed program with multiple phases with firewalls between them, you might be able to get away with only 10 or 20% more testing. So look at gcc with nearly 600 lines in the man page describing just the _names_ of the options... It seems obvious that the actual amount of testing of these combinations is a set of measure 0 in the space of all possible invocations. And the observed fact is that if you try to do something at all unusual, no matter how reasonable it may seem, it is likely to fail. Additionally, it is kind of sad to see the same functionality (e.g., increasing the default stack size) done so differently on different OS's. Even within Linux, setting the locale (at least at one time) was quite different from one distribution to another. And I think you can argue that gcc is a success story... But how much longer can this go on? What can we do to fight the exponential bloat of this and other open-source projects? All ideas cheerfully entertained... Steve -------------- next part -------------- An HTML attachment was scrubbed...
URL: From doug at cs.dartmouth.edu Tue Jan 3 00:30:22 2017 From: doug at cs.dartmouth.edu (Doug McIlroy) Date: Mon, 02 Jan 2017 09:30:22 -0500 Subject: [TUHS] Unix stories Message-ID: <201701021430.v02EUMvr092955@tahoe.cs.Dartmouth.EDU> > In one of his books, Wirth laments about programmers proudly > showing him terrible code written in Pascal For your amusement, here's Wirth himself committing that sin: http://www.cs.dartmouth.edu/~doug/wirth.pdf From cym224 at gmail.com Tue Jan 3 02:32:29 2017 From: cym224 at gmail.com (Nemo) Date: Mon, 2 Jan 2017 11:32:29 -0500 Subject: [TUHS] MacOS X is Unix (tm) In-Reply-To: <586a3a23.udW0nRrOopzHoQbP%schily@schily.net> References: <52C99F50-E24A-4BBF-A129-180A1271B4E3@kdbarto.org> <586a3a23.udW0nRrOopzHoQbP%schily@schily.net> Message-ID: On 2 January 2017 at 06:31, Joerg Schilling wrote: > David wrote: > >> MacOS X is a certified Unix (tm) OS. Not Unix-Like. > > Given that MacOS X is not POSIX compliant, I would call it a UNIX-alike. > > Note that passing the certification tests unfortunately does not grant > POSIX compliance :-( Interesting -- I had always thought that UNIX03 incorporated IEEE 1003.x (http://www.opengroup.org/openbrand/register/xym0.htm ). So what is missing and why? N. P.S. As UNIX is a registered trademark of X/Open, anything passing UNIX03 is UNIX by definition. From chet.ramey at case.edu Tue Jan 3 02:42:01 2017 From: chet.ramey at case.edu (Chet Ramey) Date: Mon, 2 Jan 2017 11:42:01 -0500 Subject: [TUHS] MacOS X is Unix (tm) In-Reply-To: <032601d264ec$34b93c60$9e2bb520$@ronnatalie.com> References: <3564F094-9B31-4492-8FDD-716160F45E84@tfeb.org> <02d201d2642f$2bcfe0d0$836fa270$@ronnatalie.com> <201701021006.v02A6dwf032366@freefriends.org> <032601d264ec$34b93c60$9e2bb520$@ronnatalie.com> Message-ID: <758ae628-201a-49b3-ecc7-f40f484870c4@case.edu> On 1/2/17 6:34 AM, Ron Natalie wrote: > I added TCSH command line editng. The history was you could scroll back > through your previous commands. 
By default, the thing used an emacs-like > command binding, but it was entirely configurable. > Most of the Mach /bin/sh were my earlier shell with only job control. The > command editing one didn't make Doug's distribution. I think the command editing lives on via pdksh and its descendants, in a more limited form. -- ``The lyf so short, the craft so long to lerne.'' - Chaucer ``Ars longa, vita brevis'' - Hippocrates Chet Ramey, UTech, CWRU chet at case.edu http://cnswww.cns.cwru.edu/~chet/ From chet.ramey at case.edu Tue Jan 3 02:44:06 2017 From: chet.ramey at case.edu (Chet Ramey) Date: Mon, 2 Jan 2017 11:44:06 -0500 Subject: [TUHS] MacOS X is Unix (tm) In-Reply-To: <586a3a23.udW0nRrOopzHoQbP%schily@schily.net> References: <52C99F50-E24A-4BBF-A129-180A1271B4E3@kdbarto.org> <586a3a23.udW0nRrOopzHoQbP%schily@schily.net> Message-ID: On 1/2/17 6:31 AM, Joerg Schilling wrote: > David wrote: > >> MacOS X is a certified Unix (tm) OS. Not Unix-Like.
> > > > Given that MacOS X is not POSIX compliant, I would call it a UNIX-alike. > > > > Note that passing the certification tests unfortunately does not grant > > POSIX compliance :-( > > That's pretty much exactly what it means. You have either found a bug or a > place where the interpretation is disputed. Joerg is pretty smart, he's been around the block. There have been multiple POSIX standards, just like there is C99, C11, etc. My guess is MacOS got certified for an earlier standard and hasn't kept up. From schily at schily.net Tue Jan 3 02:53:47 2017 From: schily at schily.net (Joerg Schilling) Date: Mon, 02 Jan 2017 17:53:47 +0100 Subject: [TUHS] MacOS X is Unix (tm) In-Reply-To: References: <52C99F50-E24A-4BBF-A129-180A1271B4E3@kdbarto.org> <586a3a23.udW0nRrOopzHoQbP%schily@schily.net> Message-ID: <586a859b.QVJrfxBi/plOuz0X%schily@schily.net> Nemo wrote: > On 2 January 2017 at 06:31, Joerg Schilling wrote: > > David wrote: > > Note that passing the certification tests unfortunately does not grant > > POSIX compliance :-( > > Interesting -- I had always thought that UNIX03 incorporated IEEE > 1003.x (http://www.opengroup.org/openbrand/register/xym0.htm ). > > So what is missing and why? The problem is that only through failures can a test suite approach sufficient code coverage. The current method is that every time a bug in the test suite or the POSIX text is discovered, the text and/or the test suite is fixed. Platforms that passed the test, however, do not have their certification withdrawn. If they implemented things the wrong way, they just need to fix the problem when they want certification for the next version of the standard. BTW: The OpenGroup manages the POSIX text and after a new version has been accepted, it is reviewed by the IEEE people. This usually causes approx. 6 months of delay, but no real changes in the text.
Jörg -- EMail:joerg at schily.net (home) Jörg Schilling D-13353 Berlin joerg.schilling at fokus.fraunhofer.de (work) Blog: http://schily.blogspot.com/ URL: http://cdrecord.org/private/ http://sourceforge.net/projects/schilytools/files/ From schily at schily.net Tue Jan 3 03:02:52 2017 From: schily at schily.net (Joerg Schilling) Date: Mon, 02 Jan 2017 18:02:52 +0100 Subject: [TUHS] MacOS X is Unix (tm) In-Reply-To: <20170102164913.GA5983@mcvoy.com> References: <52C99F50-E24A-4BBF-A129-180A1271B4E3@kdbarto.org> <586a3a23.udW0nRrOopzHoQbP%schily@schily.net> <20170102164913.GA5983@mcvoy.com> Message-ID: <586a87bc.Q5crYKWQG6onDDPY%schily@schily.net> Larry McVoy wrote: > Joerg is pretty smart, he's been around the block. There have been > multiple POSIX standards, just like there is C99, C11, etc. My guess is > MacOS got certified for an earlier standard and hasn't kept up. The test suite seems to be written to test for the problems of a generic UNIX. It does not test a lot of things that are known to be correct in e.g. Solaris. If you want to verify other implementations (like e.g. Mac OS X), you need to check other places in the interfaces, and the problem with "waitid()" is that the POSIX text was correct when waitid() was introduced in 1996, but later a bug slipped in. A bug that was fixed just a year ago (in the case of e.g. waitid()). In order to get a fix for waitid(), we would need to wait until Apple certifies their OS for ISSUE 7 + TC2. This is because ISSUE 7 + TC2 has been accepted by IEEE in December 2016...
Jörg -- EMail:joerg at schily.net (home) Jörg Schilling D-13353 Berlin joerg.schilling at fokus.fraunhofer.de (work) Blog: http://schily.blogspot.com/ URL: http://cdrecord.org/private/ http://sourceforge.net/projects/schilytools/files/ From chet.ramey at case.edu Tue Jan 3 03:05:15 2017 From: chet.ramey at case.edu (Chet Ramey) Date: Mon, 2 Jan 2017 12:05:15 -0500 Subject: [TUHS] MacOS X is Unix (tm) In-Reply-To: <20170102164913.GA5983@mcvoy.com> References: <52C99F50-E24A-4BBF-A129-180A1271B4E3@kdbarto.org> <586a3a23.udW0nRrOopzHoQbP%schily@schily.net> <20170102164913.GA5983@mcvoy.com> Message-ID: <82cf47cd-e8d6-d335-e901-9c10ecf28c5e@case.edu> On 1/2/17 11:49 AM, Larry McVoy wrote: > On Mon, Jan 02, 2017 at 11:44:06AM -0500, Chet Ramey wrote: >> On 1/2/17 6:31 AM, Joerg Schilling wrote: >>> David wrote: >>> >>>> MacOS X is a certified Unix (tm) OS. Not Unix-Like. >>> >>> Given that MacOS X is not POSIX compliant, I would call it a UNIX-alike. >>> >>> Note that passing the certification tests unfortunately does not grant >>> POSIX compliance :-( >> >> That's pretty much exactly what it means. You have either found a bug or a >> place where the interpretation is disputed. > > Joerg is pretty smart, he's been around the block. There have been > multiple POSIX standards, just like there is C9, C11, etc. My guess is > MacOS got certified for an earlier standard and hasn't kept up. This came up on the Posix list. He found what I consider to be a bug. The fact that Mac OS X passed the compliance test and got the branding is not in dispute. 
> -- ``The lyf so short, the craft so long to lerne.'' - Chaucer ``Ars longa, vita brevis'' - Hippocrates Chet Ramey, UTech, CWRU chet at case.edu http://cnswww.cns.cwru.edu/~chet/ From lm at mcvoy.com Tue Jan 3 03:32:26 2017 From: lm at mcvoy.com (Larry McVoy) Date: Mon, 2 Jan 2017 09:32:26 -0800 Subject: [TUHS] MacOS X is Unix (tm) In-Reply-To: <82cf47cd-e8d6-d335-e901-9c10ecf28c5e@case.edu> References: <52C99F50-E24A-4BBF-A129-180A1271B4E3@kdbarto.org> <586a3a23.udW0nRrOopzHoQbP%schily@schily.net> <20170102164913.GA5983@mcvoy.com> <82cf47cd-e8d6-d335-e901-9c10ecf28c5e@case.edu> Message-ID: <20170102173226.GB5983@mcvoy.com> On Mon, Jan 02, 2017 at 12:05:15PM -0500, Chet Ramey wrote: > On 1/2/17 11:49 AM, Larry McVoy wrote: > > On Mon, Jan 02, 2017 at 11:44:06AM -0500, Chet Ramey wrote: > >> On 1/2/17 6:31 AM, Joerg Schilling wrote: > >>> David wrote: > >>> > >>>> MacOS X is a certified Unix (tm) OS. Not Unix-Like. > >>> > >>> Given that MacOS X is not POSIX compliant, I would call it a UNIX-alike. > >>> > >>> Note that passing the certification tests unfortunately does not grant > >>> POSIX compliance :-( > >> > >> That's pretty much exactly what it means. You have either found a bug or a > >> place where the interpretation is disputed. > > > > Joerg is pretty smart, he's been around the block. There have been > > multiple POSIX standards, just like there is C99, C11, etc. My guess is > > MacOS got certified for an earlier standard and hasn't kept up. > > This came up on the Posix list. He found what I consider to be a bug. > The fact that Mac OS X passed the compliance test and got the branding > is not in dispute. Only somewhat on point but... My first job at Sun was doing POSIX conformance in SunOS 4.x. Nobody else wanted to do it so they hired me (through Lachman) as a contractor to do it. As an aside, I'm super stoked I got to do this work, it was early enough in my career that there were plenty of holes in my knowledge.
Having to do POSIX really plugged those holes and brought the whole kernel into focus if that makes sense. Anyhoo, if the POSIX test suite now is anything like the one back then (late 1980's), it's a steaming pile of shit. It was absolutely trivial to pass that test and have all sorts of stuff that wasn't POSIX compliant. I ended up writing a lot of my own tests to make sure SunOS actually conformed to the spec. It was really painstaking work but it gave me some good habits. My POSIX book is all marked up with one color for when it passed the test suite and another color for when I believed SunOS conformed. As a further aside, this was one of the best years in my life. I was young and fit, and Sun wouldn't let me bill more than an average of 40 hours/week. I lived close to campus and it wasn't hard for me to do 80 hours in a week (which was more productive than 80 hours in two weeks, longer bursts of work meant I didn't have to ramp up state as often). So I'd work a week and then take the week + 2 weekends off. I skied 29 days that winter and spent 6 weeks backpacking in the Sierras. It was glorious. And I still got SunOS to conform in about a year. Gave a presentation about it at some conference and the NCR or NEC team came up to me and asked "where's the rest of your team?". I asked them if they were talking about my docs person (Kennan Rossi, great guy, either he or I hacked vi to allow for multiple targets on the same tag so you could tag on VOP_OPEN and it would say 1 of 19, the first tag got you to the macro, the next one got you to vop_open, the next got you to ufs_open, etc, through all the file systems. Without that you really couldn't say if open() had POSIX semantics in all cases). Anyhoo, they said "no, the rest of the kernel hacking team". I just looked at them and said "it's just me". They had 3 kernel people and had been working on it for years and were not done.
My guess is that SunOS was far closer to conforming than whatever kernel they were working on, they didn't strike me as stupid people. I loved that project. Hard work but man did you see how Unix was hung together (and where it had wandered) as a result. Super useful education for me. And I was parked in Steve Kleinman's office (he was over doing a sun4 hardware bringup). I read all the technical stuff he had stashed there and decided that this guy had really good taste. So when Sun wanted to hire me (they bought out the Lachman contract) I said "sure - on two conditions: (a) I work for Steve (who I hadn't even met, was just sure he was very smart) and (b) I get a modification to my employee contract that says I can't be penalized for skipping meetings (I had figured out that most, not all, but most meetings were a waste of time)." Sun gave me both after some amount of churn :) Fun times. Steve and I hit it off famously, one of the most fun people I've ever worked with because we both had pretty deep knowledge about the stuff we worked on and we could have super fast technical discussions (that nobody else could understand because we jumped forward so fast). From chneukirchen at gmail.com Tue Jan 3 03:37:24 2017 From: chneukirchen at gmail.com (Christian Neukirchen) Date: Mon, 02 Jan 2017 18:37:24 +0100 Subject: [TUHS] MacOS X is Unix (tm) In-Reply-To: <82cf47cd-e8d6-d335-e901-9c10ecf28c5e@case.edu> (Chet Ramey's message of "Mon, 2 Jan 2017 12:05:15 -0500") References: <52C99F50-E24A-4BBF-A129-180A1271B4E3@kdbarto.org> <586a3a23.udW0nRrOopzHoQbP%schily@schily.net> <20170102164913.GA5983@mcvoy.com> <82cf47cd-e8d6-d335-e901-9c10ecf28c5e@case.edu> Message-ID: <87tw9hjrcb.fsf@gmail.com> Chet Ramey writes: > On 1/2/17 11:49 AM, Larry McVoy wrote: >> On Mon, Jan 02, 2017 at 11:44:06AM -0500, Chet Ramey wrote: >>> On 1/2/17 6:31 AM, Joerg Schilling wrote: >>>> David wrote: >>>> >>>>> MacOS X is a certified Unix (tm) OS. Not Unix-Like.
>>>> >>>> Given that MacOS X is not POSIX compliant, I would call it a UNIX-alike. >>>> >>>> Note that passing the certification tests unfortunately does not grant >>>> POSIX compliance :-( >>> >>> That's pretty much exactly what it means. You have either found a bug or a >>> place where the interpretation is disputed. >> >> Joerg is pretty smart, he's been around the block. There have been >> multiple POSIX standards, just like there is C99, C11, etc. My guess is >> MacOS got certified for an earlier standard and hasn't kept up. > > This came up on the Posix list. He found what I consider to be a bug. > The fact that Mac OS X passed the compliance test and got the branding > is not in dispute. I seem to remember HFS+ had non-POSIX-compliant rename(2) semantics with respect to atomicity... so the tests ran on UFS? (And what good is the result then, when no one else uses that?) -- Christian Neukirchen http://chneukirchen.org From chet.ramey at case.edu Tue Jan 3 03:53:57 2017 From: chet.ramey at case.edu (Chet Ramey) Date: Mon, 2 Jan 2017 12:53:57 -0500 Subject: [TUHS] MacOS X is Unix (tm) In-Reply-To: <20170102173226.GB5983@mcvoy.com> References: <52C99F50-E24A-4BBF-A129-180A1271B4E3@kdbarto.org> <586a3a23.udW0nRrOopzHoQbP%schily@schily.net> <20170102164913.GA5983@mcvoy.com> <82cf47cd-e8d6-d335-e901-9c10ecf28c5e@case.edu> <20170102173226.GB5983@mcvoy.com> Message-ID: On 1/2/17 12:32 PM, Larry McVoy wrote: > On Mon, Jan 02, 2017 at 12:05:15PM -0500, Chet Ramey wrote: >> On 1/2/17 11:49 AM, Larry McVoy wrote: >>> On Mon, Jan 02, 2017 at 11:44:06AM -0500, Chet Ramey wrote: >>>> On 1/2/17 6:31 AM, Joerg Schilling wrote: >>>>> David wrote: >>>>> >>>>>> MacOS X is a certified Unix (tm) OS. Not Unix-Like.
You have either found a bug or a >>>> place where the interpretation is disputed. >>> >>> Joerg is pretty smart, he's been around the block. There have been >>> multiple POSIX standards, just like there is C9, C11, etc. My guess is >>> MacOS got certified for an earlier standard and hasn't kept up. I totally agree that this happened. Apple ships bash-3.2 as their shell, and there is a combination of bug fixes and subsequent interpretations that renders it non-conformant to the Posix of today. Lots of things change in 14 years. My objection was that the original statement lacked nuance. > Anyhoo, if the POSIX test suite now is anything like the one back then > (late 1980's), it's a steaming pile of shit. It was absolutely trivial to > pass that test and have all sorts of stuff that wasn't POSIX compliant. I find this very easy to believe. I ran bash through several versions of the 1003.2 -- back then -- conformance test, and the earliest ones (early to mid 90s) were obviously written to ensure that ksh passed. Just chock full of constructs and builtins that were ksh-specific. Most of that stuff got cleaned up eventually, but the first versions were clearly the product of someone who was not familiar with Posix or historical versions of sh at all. The worst part was the multiple subsequent discussions on the austin-group list that ended up with some resolution like "yes, I know this is what the text of the standard says, but this is what we really meant." Definitely a moving target. > As a further aside, this was one of the best years in my life. I was > young and fit, Weren't we all. 
-- ``The lyf so short, the craft so long to lerne.'' - Chaucer ``Ars longa, vita brevis'' - Hippocrates Chet Ramey, UTech, CWRU chet at case.edu http://cnswww.cns.cwru.edu/~chet/ From crossd at gmail.com Tue Jan 3 04:36:45 2017 From: crossd at gmail.com (Dan Cross) Date: Mon, 2 Jan 2017 13:36:45 -0500 Subject: [TUHS] Unix stories In-Reply-To: <201701021430.v02EUMvr092955@tahoe.cs.Dartmouth.EDU> References: <201701021430.v02EUMvr092955@tahoe.cs.Dartmouth.EDU> Message-ID: On Mon, Jan 2, 2017 at 9:30 AM, Doug McIlroy wrote: > > In one of his books, Wirth laments about programmers proudly > > showing him terrible code written in Pascal > > For your amusement, here's Wirth himself committing that sin: > > http://www.cs.dartmouth.edu/~doug/wirth.pdf > Doug, I'm trying to understand the provenance of that paper. It appears to be a variant of a part of CSTR #155, but it is not the same as "Ellipses Not Yet Made Easy" from that report. It also appears that this particular document is a scan; is it perhaps a pre-print version of the paper? Or perhaps it was expanded from what was published in #155? - Dan C. -------------- next part -------------- An HTML attachment was scrubbed... URL: From rminnich at gmail.com Tue Jan 3 08:22:48 2017 From: rminnich at gmail.com (ron minnich) Date: Mon, 02 Jan 2017 22:22:48 +0000 Subject: [TUHS] Looking for two papers someone may have Message-ID: I recall reading a long time ago a sentence in a paper Dennis wrote which went something like "Unix is profligate with processes". The word profligate sticks in my mind. This is a 30+-year-old memory of a probably 35+-year-old paper, from back in the day when running a shell as a user level process was very controversial. I've scanned the papers (and BSTJ) I can find but can't find that quote. Geez, is my memory that bad? Don't answer that! Rob Pike did a talk in the early 90s about right and wrong ways to expose the network stack in a synthetic file system. 
I'd like to find those slides, because people keep implementing synthetics for network stacks and they always look like the "wrong" version from Rob's slides. I've asked him but he can't find it. I've long since lost the email with the slides, several jobs back ... thanks ron -------------- next part -------------- An HTML attachment was scrubbed... URL: From dave at horsfall.org Tue Jan 3 08:52:51 2017 From: dave at horsfall.org (Dave Horsfall) Date: Tue, 3 Jan 2017 09:52:51 +1100 (EST) Subject: [TUHS] Unix stories In-Reply-To: <5257291ca0a0e1d80c646cab730129d589c5d707@webmail.yaccman.com> References: <5257291ca0a0e1d80c646cab730129d589c5d707@webmail.yaccman.com> Message-ID: On Sun, 1 Jan 2017, Steve Johnson wrote: > But how much longer can this go on?  What can we do to fight the > exponential bloat of this and other open-souce projects.  All ideas > cheerfully entertained... Line up all the (ir)responsible programmers against a wall? Computer programming is the only discipline I've seen where no formal qualifications are required; any idiot can call themselves a "programmer" if they've barely learned how to write in BASIC... Yes, I have a BSc majoring in Computer Science and Mathematics (both pure and applied), so I guess I'm a bit elitist :-) -- Dave Horsfall DTM (VK2KFU) "Those who don't understand security will suffer." From lm at mcvoy.com Tue Jan 3 08:56:39 2017 From: lm at mcvoy.com (Larry McVoy) Date: Mon, 2 Jan 2017 14:56:39 -0800 Subject: [TUHS] Unix stories In-Reply-To: References: <5257291ca0a0e1d80c646cab730129d589c5d707@webmail.yaccman.com> Message-ID: <20170102225639.GH5983@mcvoy.com> On Tue, Jan 03, 2017 at 09:52:51AM +1100, Dave Horsfall wrote: > On Sun, 1 Jan 2017, Steve Johnson wrote: > > > But how much longer can this go on??? What can we do to fight the > > exponential bloat of this and other open-souce projects.?? All ideas > > cheerfully entertained... > > Line up all the (ir)responsible programmers against a wall? 
A buddy of mine at RedHat, in the early days, gaveth me this quote: "Average programmers should be rounded up and placed in internment camps to keep their fingers away from keyboards." That was ~20 years ago. Not looking like anything has changed. --lm From ron at ronnatalie.com Tue Jan 3 08:58:15 2017 From: ron at ronnatalie.com (Ronald Natalie) Date: Mon, 2 Jan 2017 17:58:15 -0500 Subject: [TUHS] Unix stories In-Reply-To: References: <5257291ca0a0e1d80c646cab730129d589c5d707@webmail.yaccman.com> Message-ID: Johns Hopkins had no Computer Science department at the time I was a student there. You came out with either a degree in EE or Math if you wanted to do computers. Of course, due to circumstances, I came out of there with nearly four years of UNIX internals experience. I also learned in the school of hard knocks about decent code design. I’ve generally found that having a degree in computer science didn’t indicate that the applicant had any qualification. I look for prior experience (or at least some sort of indication of programming experience). Within 90 days I could generally tell if people were worth retaining (reasonable skills or ability to learn at least). _Ron From ron at ronnatalie.com Tue Jan 3 08:59:19 2017 From: ron at ronnatalie.com (Ronald Natalie) Date: Mon, 2 Jan 2017 17:59:19 -0500 Subject: [TUHS] Unix stories In-Reply-To: <20170102225639.GH5983@mcvoy.com> References: <5257291ca0a0e1d80c646cab730129d589c5d707@webmail.yaccman.com> <20170102225639.GH5983@mcvoy.com> Message-ID: <4FA98219-D73D-47DA-A2C4-8B512C1034CD@ronnatalie.com> I once threatened to break one of my programmer’s fingers if he continued to make the same mistake. I told another programmer I was going to remove the copy/paste facility from his text editor if he didn’t stop just COPYING the program over and over, rather than using some common code.
From tfb at tfeb.org Tue Jan 3 09:23:47 2017 From: tfb at tfeb.org (Tim Bradshaw) Date: Mon, 2 Jan 2017 23:23:47 +0000 Subject: [TUHS] Unix stories In-Reply-To: <5257291ca0a0e1d80c646cab730129d589c5d707@webmail.yaccman.com> References: <5257291ca0a0e1d80c646cab730129d589c5d707@webmail.yaccman.com> Message-ID: <42922C34-342F-4E86-83E2-3618129139B2@tfeb.org> If you think open source is bad you haven't seen much closed-source software, because a lot of it is deeply terrible. I claim that all large software systems which are not designed to be used by naive users are shit (and most systems which are are also shit). > On 2 Jan 2017, at 02:03, Steve Johnson wrote: > > But how much longer can this go on? What can we do to fight the exponential bloat of this and other open-source projects. All ideas cheerfully entertained... From lm at mcvoy.com Tue Jan 3 10:49:59 2017 From: lm at mcvoy.com (Larry McVoy) Date: Mon, 2 Jan 2017 16:49:59 -0800 Subject: [TUHS] Unix stories In-Reply-To: <42922C34-342F-4E86-83E2-3618129139B2@tfeb.org> References: <5257291ca0a0e1d80c646cab730129d589c5d707@webmail.yaccman.com> <42922C34-342F-4E86-83E2-3618129139B2@tfeb.org> Message-ID: <20170103004959.GA29088@mcvoy.com> It's simply a lack of craftsman-level thinking. Managers think that people are fungible, whatever that means, I think it means anyone can do this work. That's simply not the case, some people get the big picture and the details and some people don't. There is also a culture of the cool kids vs the not cool kids. For example, at Sun, the kernel group was the top of the heap. When I was doing nselite which begat Teamware then BitKeeper then Git et al, I was in the kernel group. They wanted me to go over to the tools group. I looked over there and saw people that weren't as good as the kernel people and passed. Same thing with testing. So many bad test harnesses. Because testing isn't respected so they get the crappy programmers.
One of the best things I did at BitKeeper was to put testing on the same level as the code and the documentation. As a result, we have a kick ass testing harness. Here's a sample t.file in our test harness (re-flowed onto separate lines; the here-document redirection was partly eaten by the HTML archiver and is reconstructed here):

echo $N Check that component rename is blocked ......................$NL
nested product
cd "$HERE/product"
bk mv gcc gccN 2>ERR && fail -f ERR
cat <<EOF >WANT
mvdir: gcc is a component
Component renaming is unsupported in this release
EOF
cmpfiles WANT ERR
echo OK

The harness, which is all open source under the Apache license at bitkeeper.org, lets you write tiny shell scripts; they either echo OK (pass), echo (bug) (pass, but not really), or anything else (fail). The harness allows for parallel runs. In the early days we ran the tests in alpha order, later we got a huge speed up by running them in most recently modified order (fastest time to failure). All of that is the result of me (semi good, setting it up) and Wayne Scott (crazy good, he made it work in parallel and made a bunch of other improvements) being the guys who worked on it. We're $250K/year people on average. On Mon, Jan 02, 2017 at 11:23:47PM +0000, Tim Bradshaw wrote: > If you think open source is bad you haven't seen much closed-source software, because a lot of it is deeply terrible. I claim that all large software systems which are not designed to be used by naive users are shit (and most systems which are are also shit). > > > On 2 Jan 2017, at 02:03, Steve Johnson wrote: > > > > But how much longer can this go on? What can we do to fight the exponential bloat of this and other open-source projects. All ideas cheerfully entertained...
-- --- Larry McVoy lm at mcvoy.com http://www.mcvoy.com/lm From stephen.strowes at gmail.com Tue Jan 3 20:44:08 2017 From: stephen.strowes at gmail.com (sds) Date: Tue, 3 Jan 2017 11:44:08 +0100 Subject: [TUHS] Leap Second In-Reply-To: <20161229002105.GB94858@server.rulingia.com> References: <20161229002105.GB94858@server.rulingia.com> Message-ID: <0d5eeef9-3dbb-0ddd-1b22-51fecee735d8@gmail.com> On 29/12/2016 01:21, Peter Jeremy wrote: > On 2016-Dec-29 10:59:32 +1100, Dave Horsfall wrote: >> (Yes, a repeat, but this momentous event only happens every few years.) > Actually, they've been more frequent of late. Important question: did anybody have an "exciting" new year because of a leap second bug? >> The International Earth Rotation Service has announced that there will be >> a Leap Second inserted at 23:59:59 UTC on the 31st December, due to the >> earth slowly slowing down. It's fun to listen to see how the time beeps >> handle it; will your GPS clock display 23:59:60, or will it go nuts >> (because the programmer was an idiot)? > Google chose an alternative approach to avoiding the 23:59:60 issue and will > smear the upcoming leap second across the period 2016-12-31 14:00:00 UTC > through 2017-01-01 10:00:00 UTC (see https://developers.google.com/time/smear). > Of course, this means that mixing time.google.com with normal NTP servers > will have "interesting" effects. FWIW, Akamai and AWS are similar (but different) implementations of leap second smear: * https://blogs.akamai.com/2016/11/planning-for-the-end-of-2016-a-leap-second-and-the-end-of-support-for-sha-1-tls-certificates.html * https://aws.amazon.com/blogs/aws/look-before-you-leap-the-coming-leap-second-and-aws/ S. 
From schily at schily.net Tue Jan 3 21:36:31 2017 From: schily at schily.net (Joerg Schilling) Date: Tue, 03 Jan 2017 12:36:31 +0100 Subject: [TUHS] Unix stories In-Reply-To: <20170103004959.GA29088@mcvoy.com> References: <5257291ca0a0e1d80c646cab730129d589c5d707@webmail.yaccman.com> <42922C34-342F-4E86-83E2-3618129139B2@tfeb.org> <20170103004959.GA29088@mcvoy.com> Message-ID: <586b8cbf.IEv0eb/3/JZMg7TB%schily@schily.net> Larry McVoy wrote: > There is also a culture of the cool kids vs the not cool kids. For example, > at Sun, the kernel group was the top of the heap. When I was doing nselite > which begat Teamware then BitKeeper then Git et al, I was in the kernel > group. They wanted me to go over to the tools group. I looked over there > and saw people that weren't as good as the kernel people and passed. From looking at various changes, it is obvious that the tools group also had brilliant people, but it is also obvious that the tools in general do not have the overall quality level seen in the kernel. From looking at projects or features, it seems that the kernel people added new ideas whenever they had them. From looking at the progress made in the tools, it seems that this mostly happened when a manager decided that it would be time to put effort into a tool.
Jörg -- EMail:joerg at schily.net (home) Jörg Schilling D-13353 Berlin joerg.schilling at fokus.fraunhofer.de (work) Blog: http://schily.blogspot.com/ URL: http://cdrecord.org/private/ http://sourceforge.net/projects/schilytools/files/ From schily at schily.net Tue Jan 3 23:17:00 2017 From: schily at schily.net (Joerg Schilling) Date: Tue, 03 Jan 2017 14:17:00 +0100 Subject: [TUHS] MacOS X is Unix (tm) In-Reply-To: <20170101203813.GV5983@mcvoy.com> References: <3564F094-9B31-4492-8FDD-716160F45E84@tfeb.org> <02d201d2642f$2bcfe0d0$836fa270$@ronnatalie.com> <95D6B274-6D3F-4610-873A-76F4707AE89B@tfeb.org> <20170101202850.GF17848@wopr> <20170101203813.GV5983@mcvoy.com> Message-ID: <586ba44c.dnHd1Caeq6INr3FG%schily@schily.net> Larry McVoy wrote: > I'd like to know where you can get a better performing OS. The file systems > scream when compared to Windows or MacOS, they know about SSDs and do the > right thing. The processes are light weight, I regularly do "make -j" > which on my machines just spawns as many processes as needed. ... > So if I size it to the number of CPUs it is slightly faster. On the other > hand, when I tell it just spawn as many as it wants it peaks at about 267 > processes running in parallel. > > Solaris, AIX, IRIX, HP-UX, MacOS would all thrash like crazy under that > load, their context switch times are crappy. > > Source: author of LMbench which has been measuring this stuff since the > mid 1990s. Could you verify this claim, please? My experiences are different. From what I can tell, the filesystem concepts in Linux are slow and it is usually not possible to tell what happened in a given time frame. It however creates the impression that it is fast if you are the only user on a system, but it makes it hard to gauge a time that is comparable to a time retrieved from a different OS. This is because you usually don't know what happened in a given time. If you e.g.
use gtar to unpack a Linux kernel archive on your local disk, the wall clock run time of gtar is low, but it takes some time before the first disk I/O takes place and the disk I/O continues until long after gtar has already returned control to the shell. If you however use star for the same task and do not specify the "-no-fsync" option, star takes 4x longer than gtar to return control to the shell. If you do the same on Solaris using the same hardware, a standard star extract (with the default fsync) takes only 10% more real time with UFS compared to the time without fsync, and the time is approx. the same as the unpack time on Linux using gtar. This is because on Solaris, disk I/O starts immediately even when you use gtar and there is no time wasted with filling the kernel cache before I/O starts. Approx. 12 years ago, I converted the central web server for the OSS hosting platform berlios.de (*) (at that time the second largest one) from Linux to Solaris and the performance increased considerably.... Even worse, at the same time, I did a test where I unpacked the Linux kernel archive using gtar and switched off the power just after gtar returned control to the shell. The result was a rotten filesystem that could not be repaired by fsck. *** So my question is: did you manage to find a method to gauge something on Linux that is comparable to other platforms, or do you also suffer from the problem that Linux tries to hide the real time needed for filesystem operations? *** BTW: ZFS has a similar problem to Linux: it is extremely slow when you ask it to do things in a way that results in a known state. ZFS however does not result in a rotten FS when you switch the system off while it is updating the FS. *) The reason for this conversion was that Linux completely stalled 3-4 times a day and only a reset did help. There was no way to get the reason for that problem using Linux debug tools.
After the conversion to Solaris, it turned out that memory overcommitment on Linux was the reason for the freeze. Since we had two CPUs, the Linux kernel could copy on write pages faster than searching for processes to kill in order to recover. Jörg -- EMail:joerg at schily.net (home) Jörg Schilling D-13353 Berlin joerg.schilling at fokus.fraunhofer.de (work) Blog: http://schily.blogspot.com/ URL: http://cdrecord.org/private/ http://sourceforge.net/projects/schilytools/files/ From david at kdbarto.org Wed Jan 4 00:06:15 2017 From: david at kdbarto.org (David) Date: Tue, 3 Jan 2017 06:06:15 -0800 Subject: [TUHS] MacOS X is Unix (tm) In-Reply-To: <586a3a23.udW0nRrOopzHoQbP%schily@schily.net> References: <52C99F50-E24A-4BBF-A129-180A1271B4E3@kdbarto.org> <586a3a23.udW0nRrOopzHoQbP%schily@schily.net> Message-ID: <8168FD75-9C3E-47C1-9BA8-EADAD7D33C38@kdbarto.org> MacOS passes this except for the si_status test. MacOS uses a signed int there. I’m not sure what the standard says. David > On Jan 2, 2017, at 3:31 AM, Joerg Schilling wrote: > > David wrote: > >> MacOS X is a certified Unix (tm) OS. Not Unix-Like. > > Given that MacOS X is not POSIX compliant, I would call it a UNIX-alike. > > Note that passing the certification tests unfortunately does not grant > POSIX compliance :-( > > Try e.g. this program on Mac OS X: > > #include <sys/wait.h> > #include <unistd.h> > #include <stdio.h> > #include <stdlib.h> > /* > * Non-standard compliant platforms may need > * #include <siginfo.h> or something similar > * in addition to the include files above.
> */ > > int > main() > { > siginfo_t si; > pid_t pid; > int ret; > > if ((pid = fork()) < 0) > exit(1); > if (pid == 0) { > _exit(1234567890); > } > ret = waitid(P_PID, pid, &si, WEXITED); > printf("ret: %d si_pid: %ld si_status: %d si_code: %d\n", > ret, > (long) si.si_pid, si.si_status, si.si_code); > if (pid != si.si_pid) > printf("si_pid in struct siginfo should be %ld but is %ld\n", > (long) pid, (long) si.si_pid); > if (si.si_status != 1234567890) > printf("si_status in struct siginfo should be %d (0x%x) but is %d (0x%x)\n", > 1234567890, 1234567890, si.si_status, si.si_status); > if (si.si_code != CLD_EXITED) > printf("si_code in struct siginfo should be %d (0x%x) but is %d (0x%x)\n", > CLD_EXITED, CLD_EXITED, si.si_code, si.si_code); > if (CLD_EXITED != 1) > printf("CLD_EXITED is %d on this platform\n", CLD_EXITED); > return (0); > } > > Jörg > > -- > EMail:joerg at schily.net (home) Jörg Schilling D-13353 Berlin > joerg.schilling at fokus.fraunhofer.de (work) Blog: http://schily.blogspot.com/ > URL: http://cdrecord.org/private/ http://sourceforge.net/projects/schilytools/files/ From david at kdbarto.org Wed Jan 4 00:11:18 2017 From: david at kdbarto.org (David) Date: Tue, 3 Jan 2017 06:11:18 -0800 Subject: [TUHS] MacOS X is Unix (tm) In-Reply-To: <45EEDF24-40D3-4ACD-8F67-9DA6424097FE@tfeb.org> References: <52C99F50-E24A-4BBF-A129-180A1271B4E3@kdbarto.org> <3564F094-9B31-4492-8FDD-716160F45E84@tfeb.org> <02d201d2642f$2bcfe0d0$836fa270$@ronnatalie.com> <95D6B274-6D3F-4610-873A-76F4707AE89B@tfe b.org> <45EEDF24-40D3-4ACD-8F67-9DA6424097FE@tfeb.org> Message-ID: My complaint about Linux here is not that it isn’t useful or isn’t a development workstation for some people, it is more that it won’t be a desktop for your mother/father/brother-in-law any time soon. The fragmentation of Linux is what is holding it back. If I purchase a Mac or Windows box I know what I’m getting and I know that millions of others have the same thing. 
If I want support there are store fronts for both that I can walk into and get expert help without much hassle. No such thing exists for Linux and I don’t think it is going to happen this year. As in: this is not the year that Linux takes over the desktop. I don’t think it will happen next year either. :-) David > On Jan 1, 2017, at 12:12 PM, Tim Bradshaw wrote: > > Well: where I work the default desktop is RHEL as far as I know (there are management and admin people who have Windows desktops I think, and laptops are Windows). This is a scientific environment however: if there were not desktop linux systems they'd need an even bigger farm of headless machines (probably VMs) so there was cultural compatibility with the HPC. And of course mail is Outlook/Citrix so they cheat there. > > I think the answer is that it works if you remember that you are not deploying 'Linux' but RHEL or Ubuntu (or MacOS!) or whatever. Scale also helps. > > --tim > >> On 1 Jan 2017, at 19:33, David wrote: >> >> Linux is to diversified at this point to make it to the desktop any time soon. > From random832 at fastmail.com Wed Jan 4 00:33:44 2017 From: random832 at fastmail.com (Random832) Date: Tue, 03 Jan 2017 09:33:44 -0500 Subject: [TUHS] MacOS X is Unix (tm) In-Reply-To: <8168FD75-9C3E-47C1-9BA8-EADAD7D33C38@kdbarto.org> References: <52C99F50-E24A-4BBF-A129-180A1271B4E3@kdbarto.org> <586a3a23.udW0nRrOopzHoQbP%schily@schily.net> <8168FD75-9C3E-47C1-9BA8-EADAD7D33C38@kdbarto.org> Message-ID: <1483454024.1265981.835921849.4F752F10@webmail.messagingengine.com> On Tue, Jan 3, 2017, at 09:06, David wrote: > MacOS passes this except for the si_status test. MacOS uses a signed int > there. I’m not sure what the standard says. The problem isn't the fact that it's signed, it's the fact that it's only a 24-bit value (i.e. the high 8 bits are replaced with sign-extension of bit 23).
Look at the hex - expected 0x499602d2 vs actual 0xff9602d2 However, OSX only claims compliance to Issue 6 (unistd.h _XOPEN_VERSION 600), and the text requiring that the full 32-bit value be preserved is new to Issue 7. From schily at schily.net Wed Jan 4 00:49:00 2017 From: schily at schily.net (Joerg Schilling) Date: Tue, 03 Jan 2017 15:49:00 +0100 Subject: [TUHS] MacOS X is Unix (tm) In-Reply-To: <8168FD75-9C3E-47C1-9BA8-EADAD7D33C38@kdbarto.org> References: <52C99F50-E24A-4BBF-A129-180A1271B4E3@kdbarto.org> <586a3a23.udW0nRrOopzHoQbP%schily@schily.net> <8168FD75-9C3E-47C1-9BA8-EADAD7D33C38@kdbarto.org> Message-ID: <586bb9dc.iVkFRSLWnXd79ger%schily@schily.net> David wrote: > MacOS passes this except for the si_status test. MacOS uses a signed int there. I’m not sure what the standard says. The standard says that si_status is a signed int. Which version are you using? It seems that Apple changed things... recently? Jörg -- EMail:joerg at schily.net (home) Jörg Schilling D-13353 Berlin joerg.schilling at fokus.fraunhofer.de (work) Blog: http://schily.blogspot.com/ URL: http://cdrecord.org/private/ http://sourceforge.net/projects/schilytools/files/ From schily at schily.net Wed Jan 4 01:08:56 2017 From: schily at schily.net (Joerg Schilling) Date: Tue, 03 Jan 2017 16:08:56 +0100 Subject: [TUHS] MacOS X is Unix (tm) In-Reply-To: <1483454024.1265981.835921849.4F752F10@webmail.messagingengine.com> References: <52C99F50-E24A-4BBF-A129-180A1271B4E3@kdbarto.org> <586a3a23.udW0nRrOopzHoQbP%schily@schily.net> <8168FD75-9C3E-47C1-9BA8-EADAD7D33C38@kdbarto.org> <1483454024.1265981.835921849.4F752F10@webmail.messagingengine.com> Message-ID: <586bbe88.njLSal35/ZSrGHMO%schily@schily.net> Random832 wrote: > On Tue, Jan 3, 2017, at 09:06, David wrote: > > MacOS passes this except for the si_status test. MacOS uses a signed int > > there. I’m not sure what the standard says. > > The problem isn't the fact that it's signed, it's the fact that it's > only a 24-bit value (i.e.
the high 8 bits are replaced with > sign-extension of bit 23). Look at the hex - expected 0x499602d2 vs > actual 0xff9602d2 > > However, OSX only claims compliance to Issue 6 (unistd.h _XOPEN_VERSION > 600), and the text requiring that the full 32-bit value be preserved is > new to Issue 7. The text requiring the full 32-bit value is not new.... It was required in 1996 already, but then somebody introduced a bug into the text and that was not aligned with the expected behavior. BTW: the 24 bits are a result of coding a new wait interface into the historic ABI by using the top 16 bits in the wait() argument value. I should write a test program that retrieved the waitid() results from the SIGCHLD handler and see whether that is OK as well. Here is the new code:

#include <sys/wait.h>
#include <unistd.h>
#include <stdio.h>
#include <stdlib.h>
/*
 * Non-standard compliant platforms may need
 * #include <siginfo.h> or something similar
 * in addition to the include files above.
 */
extern void handler(int sig, siginfo_t *sip, void *context);
extern void dosig(void);

pid_t cpid;

int
main()
{
	siginfo_t si;
	pid_t pid;
	int ret;

	dosig();
	if ((pid = fork()) < 0)
		exit(1);
	cpid = pid;
	if (pid == 0) {
		_exit(1234567890);
	}
	ret = waitid(P_PID, pid, &si, WEXITED);
	printf("ret: %d si_pid: %ld si_status: %d si_code: %d\n",
		ret,
		(long) si.si_pid, si.si_status, si.si_code);
	if (pid != si.si_pid)
		printf("si_pid in struct siginfo should be %ld but is %ld\n",
			(long) pid, (long) si.si_pid);
	if (si.si_status != 1234567890)
		printf("si_status in struct siginfo should be %d (0x%x) but is %d (0x%x)\n",
			1234567890, 1234567890, si.si_status, si.si_status);
	if (si.si_code != CLD_EXITED)
		printf("si_code in struct siginfo should be %d (0x%x) but is %d (0x%x)\n",
			CLD_EXITED, CLD_EXITED, si.si_code, si.si_code);
	if (CLD_EXITED != 1)
		printf("CLD_EXITED is %d on this platform\n", CLD_EXITED);
	return (0);
}
/*
 * Include it here to allow to verify that #include <sys/wait.h>
 * makes siginfo_t available
 */
#include <signal.h>

void
handler(int sig, siginfo_t *sip, void *context)
{
	printf("received SIGCHLD (%d), si_pid: %ld si_status: %d si_code: %d\n",
		sig,
		(long) sip->si_pid, sip->si_status, sip->si_code);
	if (sip->si_pid != cpid)
		printf("SIGCHLD: si_pid in struct siginfo should be %ld but is %ld\n",
			(long) cpid, (long) sip->si_pid);
	if (sip->si_status != 1234567890)
		printf("SIGCHLD: si_status in struct siginfo should be %d (0x%x) but is %d (0x%x)\n",
			1234567890, 1234567890, sip->si_status, sip->si_status);
	if (sip->si_code != CLD_EXITED)
		printf("SIGCHLD: si_code in struct siginfo should be %d (0x%x) but is %d (0x%x)\n",
			CLD_EXITED, CLD_EXITED, sip->si_code, sip->si_code);
}

void
dosig()
{
	struct sigaction sa;

	sa.sa_handler = handler;
	sigemptyset(&sa.sa_mask);
	sa.sa_flags = SA_RESTART|SA_SIGINFO;
	sigaction(SIGCHLD, &sa, NULL);
}

Jörg -- EMail:joerg at schily.net (home) Jörg Schilling D-13353 Berlin joerg.schilling at fokus.fraunhofer.de (work) Blog: http://schily.blogspot.com/ URL: http://cdrecord.org/private/ http://sourceforge.net/projects/schilytools/files/ From dot at dotat.at Wed Jan 4 01:06:11 2017 From: dot at dotat.at (Tony Finch) Date: Tue, 3 Jan 2017 15:06:11 +0000 Subject: [TUHS] Leap Second In-Reply-To: <0d5eeef9-3dbb-0ddd-1b22-51fecee735d8@gmail.com> References: <20161229002105.GB94858@server.rulingia.com> <0d5eeef9-3dbb-0ddd-1b22-51fecee735d8@gmail.com> Message-ID: sds wrote: > Important question: did anybody have an "exciting" new year because of a leap > second bug? I've been collecting failure reports on the LEAPSECS list https://pairlist6.pair.net/pipermail/leapsecs/2017-January/thread.html Tony. -- f.anthony.n.finch http://dotat.at/ - I xn--zr8h punycode Fitzroy, Sole: Easterly or southeasterly 5 to 7, occasionally gale 8 at first in west. Moderate or rough, occasionally very rough at first in west. Rain. Moderate or good.
From schily at schily.net Wed Jan 4 01:29:23 2017 From: schily at schily.net (Joerg Schilling) Date: Tue, 03 Jan 2017 16:29:23 +0100 Subject: [TUHS] Leap Second In-Reply-To: References: <20161229002105.GB94858@server.rulingia.com> <0d5eeef9-3dbb-0ddd-1b22-51fecee735d8@gmail.com> Message-ID: <586bc353.tOFm/S0IGecYYlh6%schily@schily.net> Tony Finch wrote: > sds wrote: > > > Important question: did anybody have an "exciting" new year because of a leap > > second bug? > > I've been collecting failure reports on the LEAPSECS list https://blog.cloudflare.com/how-and-why-the-leap-second-affected-cloudflare-dns/ "go" seems to have a related bug. BTW: The POSIX standard intentionally does not include leap seconds in the UNIX time interface as it seems that this would cause more problems than it claims to fix. Jörg -- EMail:joerg at schily.net (home) Jörg Schilling D-13353 Berlin joerg.schilling at fokus.fraunhofer.de (work) Blog: http://schily.blogspot.com/ URL: http://cdrecord.org/private/ http://sourceforge.net/projects/schilytools/files/ From michael at kjorling.se Wed Jan 4 01:52:52 2017 From: michael at kjorling.se (Michael =?utf-8?B?S2rDtnJsaW5n?=) Date: Tue, 3 Jan 2017 15:52:52 +0000 Subject: [TUHS] ZFS (was: Re: MacOS X is Unix (tm)) In-Reply-To: <586ba44c.dnHd1Caeq6INr3FG%schily@schily.net> References: <3564F094-9B31-4492-8FDD-716160F45E84@tfeb.org> <02d201d2642f$2bcfe0d0$836fa270$@ronnatalie.com> <95D6B274-6D3F-4610-873A-76F4707AE89B@tfeb.org> <20170101202850.GF17848@wopr> <20170101203813.GV5983@mcvoy.com> <586ba44c.dnHd1Caeq6INr3FG%schily@schily.net> Message-ID: <20170103155252.GG15153@yeono.kjorling.se> On 3 Jan 2017 14:17 +0100, from schily at schily.net (Joerg Schilling): > BTW: ZFS has a similar problem as Linux: It is extremely slow when you ask it > to to things in a way that result in a known state. ZFS however does not result > in a rotten FS when you switch the system off while it is updating the FS. 
In all fairness to it, ZFS never (at least that I have seen) _claims_ to be a high performance file system. Lots of people (myself included) point out that it is _anything but_. Its Merkle tree data structures on disk, copy-on-write behavior and ubiquitous, nearly free snapshots are, in fact, prone to encourage fragmentation along with causing normal I/O patterns to require massive seeking on rotational media. Its saving grace in terms of performance is caching; just try setting primarycache=none or sync=always and observe your system grind to a halt I/O-wise almost no matter what else you do. Yes, I use ZFS; nearly exclusively so on my main machine. Yes, it has saved my rear a few times with disk/cabling/controller combinations that have somehow not been quite happy together (for whatever reason, LSI 9211, SFF-8087 to SFF-8482 breakout cables, and HGST SATA disks is not a good combination). But not even ZFS is perfect, and its design does have several sharp edges to watch out for alongside the performance implications. Where ZFS shines IMO is in the end-to-end integrity guarantees department and its integration of the volume and file system managers into a cohesive whole. 
-- Michael Kjörling • https://michael.kjorling.se • michael at kjorling.se “People who think they know everything really annoy those of us who know we don’t.” (Bjarne Stroustrup) From dfawcus+lists-tuhs at employees.org Wed Jan 4 02:09:26 2017 From: dfawcus+lists-tuhs at employees.org (Derek Fawcus) Date: Tue, 3 Jan 2017 16:09:26 +0000 Subject: [TUHS] MacOS X is Unix (tm) In-Reply-To: <586bbe88.njLSal35/ZSrGHMO%schily@schily.net> References: <52C99F50-E24A-4BBF-A129-180A1271B4E3@kdbarto.org> <586a3a23.udW0nRrOopzHoQbP%schily@schily.net> <8168FD75-9C3E-47C1-9BA8-EADAD7D33C38@kdbarto.org> <1483454024.1265981.835921849.4F752F10@webmail.messagingengine.com> <586bbe88.njLSal35/ZSrGHMO%schily@schily.net> Message-ID: <20170103160926.GA83502@cowbell.employees.org> On Tue, Jan 03, 2017 at 04:08:56pm +0100, Joerg Schilling wrote: > > I should write a test program that retrieved the waitid() results from the > SIGCHLD handler and see whether that is OK as well. > > Here is the new code: FWIW, on 10.10.5 [1] this gives essentially the same result as the prior program, the signal handler seeing the same sign extended 24 bit value:

$ ./posix2
received SIGCHLD (20), si_pid: 2281 si_status: -6946094 si_code: 1
SIGCHLD: si_status in struct siginfo should be 1234567890 (0x499602d2) but is -6946094 (0xff9602d2)
ret: 0 si_pid: 2281 si_status: -6946094 si_code: 1
si_status in struct siginfo should be 1234567890 (0x499602d2) but is -6946094 (0xff9602d2)

Mind, one should probably assign the handler to sa.sa_sigaction, as someone could implement the struct w/o both fields in a union. As I recall, there are also other bugs in OSX to do with poll() handling, so that would be another area where the conformance tests fall short.
DF [1] Darwin Old-MBA.local 14.5.0 Darwin Kernel Version 14.5.0: Sun Sep 25 22:07:15 PDT 2016; root:xnu-2782.50.9~1/RELEASE_X86_64 x86_64 From schily at schily.net Wed Jan 4 02:41:31 2017 From: schily at schily.net (Joerg Schilling) Date: Tue, 03 Jan 2017 17:41:31 +0100 Subject: [TUHS] ZFS (was: Re: MacOS X is Unix (tm)) In-Reply-To: <20170103155252.GG15153@yeono.kjorling.se> References: <3564F094-9B31-4492-8FDD-716160F45E84@tfeb.org> <02d201d2642f$2bcfe0d0$836fa270$@ronnatalie.com> <95D6B274-6D3F-4610-873A-76F4707AE89B@tfeb.org> <20170101202850.GF17848@wopr> <20170101203813.GV5983@mcvoy.com> <586ba44c.dnHd1Caeq6INr3FG%schily@schily.net> <20170103155252.GG15153@yeono.kjorling.se> Message-ID: <586bd43b.ggkkH8kIn7K+V47n%schily@schily.net> Michael Kjörling wrote: > On 3 Jan 2017 14:17 +0100, from schily at schily.net (Joerg Schilling): > > BTW: ZFS has a similar problem as Linux: It is extremely slow when you ask it > > to do things in a way that result in a known state. ZFS however does not result > > in a rotten FS when you switch the system off while it is updating the FS. > > In all fairness to it, ZFS never (at least that I have seen) _claims_ > to be a high performance file system. Lots of people (myself included) > point out that it is _anything but_. Its Merkle tree data structures Well in practice ZFS is amazingly fast. What I wanted to mention is that some methods to make things fast or to look like they were fast may result in a slow system in case you ask it to produce a known state at a given arbitrary time.
Jörg -- EMail:joerg at schily.net (home) Jörg Schilling D-13353 Berlin joerg.schilling at fokus.fraunhofer.de (work) Blog: http://schily.blogspot.com/ URL: http://cdrecord.org/private/ http://sourceforge.net/projects/schilytools/files/ From schily at schily.net Wed Jan 4 02:47:00 2017 From: schily at schily.net (Joerg Schilling) Date: Tue, 03 Jan 2017 17:47:00 +0100 Subject: [TUHS] MacOS X is Unix (tm) In-Reply-To: <20170103160926.GA83502@cowbell.employees.org> References: <52C99F50-E24A-4BBF-A129-180A1271B4E3@kdbarto.org> <586a3a23.udW0nRrOopzHoQbP%schily@schily.net> <8168FD75-9C3E-47C1-9BA8-EADAD7D33C38@kdbarto.org> <1483454024.1265981.835921849.4F752F10@webmail.messagingengine.com> <586bbe88.njLSal35/ZSrGHMO%schily@schily.net> <20170103160926.GA83502@cowbell.employees.org> Message-ID: <586bd584.SKpA3hA4oaJSGLri%schily@schily.net> Derek Fawcus wrote: > FWIW, on 10.10.5 [1] this gives essentially the same result as the prior program, > the signal handler seeing the same sign extended 24 bit value: > > $ ./posix2 > received SIGCHLD (20), si_pid: 2281 si_status: -6946094 si_code: 1 > SIGCHLD: si_status in struct siginfo should be 1234567890 (0x499602d2) but is -6946094 (0xff9602d2) > ret: 0 si_pid: 2281 si_status: -6946094 si_code: 1 > si_status in struct siginfo should be 1234567890 (0x499602d2) but is -6946094 (0xff9602d2) OK, then the main change during the past 8 years was that Apple now includes the siginfo structure in sys/wait.h and that si_pid and si_code are now filled in. > Mind, one should probably assign the handler to sa.sa_sigaction, as someone > could implement the struct w/o both fields in a union. Done - thank you.... 
Jörg -- EMail:joerg at schily.net (home) Jörg Schilling D-13353 Berlin joerg.schilling at fokus.fraunhofer.de (work) Blog: http://schily.blogspot.com/ URL: http://cdrecord.org/private/ http://sourceforge.net/projects/schilytools/files/ From random832 at fastmail.com Wed Jan 4 03:29:00 2017 From: random832 at fastmail.com (Random832) Date: Tue, 03 Jan 2017 12:29:00 -0500 Subject: [TUHS] MacOS X is Unix (tm) In-Reply-To: <586bbe88.njLSal35/ZSrGHMO%schily@schily.net> References: <52C99F50-E24A-4BBF-A129-180A1271B4E3@kdbarto.org> <586a3a23.udW0nRrOopzHoQbP%schily@schily.net> <8168FD75-9C3E-47C1-9BA8-EADAD7D33C38@kdbarto.org> <1483454024.1265981.835921849.4F752F10@webmail.messagingengine.com> <586bbe88.njLSal35/ZSrGHMO%schily@schily.net> Message-ID: <1483464540.1306689.836096801.2D7CA4E9@webmail.messagingengine.com> On Tue, Jan 3, 2017, at 10:08, Joerg Schilling wrote: > Random832 wrote: > > However, OSX only claims compliance to Issue 6 (unistd.h _XOPEN_VERSION > > 600), and the text requiring that the full 32-bit value be preserved is > > new to Issue 7. > > The text requiring the full 32-bit value is not new.... Is there some other text you have in mind other than "The exit value in si_status shall be equal to the full exit value (that is, the value passed to _exit(), _Exit(), or exit(), or returned from main()); it shall not be limited to the least significant eight bits of the value." in the description of signal.h (not present in Issue 6 or SUSv2)? Or maybe something from "2.13. Status Information" (whole section is new in Issue 7). In SUSv2, the text of exit() states "If the parent process of the calling process is executing a wait(), wait3(), waitid() or waitpid(), [...] it is notified of the calling process' termination and the low-order eight bits (that is, bits 0377) of status are made available to it." with no indication of any of these functions allowing the parent process to get more bits of status. 
More or less the same text appears in Issue 6, with some rearrangement due to waitid being part of the XSI option. > It was required in 1996 already, but then somebody introduced a bug into > the text and that was not aligned with the expected behavior. Oh, so there was a bug in *the text*. That would be the text of the standard that OSX conforms to? From david at kdbarto.org Wed Jan 4 03:39:36 2017 From: david at kdbarto.org (David) Date: Tue, 3 Jan 2017 09:39:36 -0800 Subject: [TUHS] MacOS X is Unix (tm) In-Reply-To: <586bb9dc.iVkFRSLWnXd79ger%schily@schily.net> References: <52C99F50-E24A-4BBF-A129-180A1271B4E3@kdbarto.org> <586a3a23.udW0nRrOopzHoQbP%schily@schily.net> <8168FD75-9C3E-47C1-9BA8-EADAD7D33C38@kdbarto.org> <586bb9dc.iVkFRSLWnXd79ger%schily@schily.net> Message-ID: I’m running Yosemite, Sierra won’t run on my hardware. Does the standard expect an int to be a specific size? I can’t imagine this to be the case. On Mac ints are 32 bits, as are longs. Unlike Linux where long defaults to 64 bits. So keeping the code I work on portable between Linux and the Mac requires more than a bit of ‘ifdef’ hell. David > On Jan 3, 2017, at 6:49 AM, Joerg Schilling wrote: > > David wrote: > >> MacOS passes this except for the si_status test. MacOS uses a signed int there. I’m not sure what the standard says. > > The standard says that si_status is a signed int. > > Which version are you using? > > It seems that Apple changed things... recently?
> > > > Jörg > > -- > EMail:joerg at schily.net (home) Jörg Schilling D-13353 Berlin > joerg.schilling at fokus.fraunhofer.de (work) Blog: http://schily.blogspot.com/ > URL: http://cdrecord.org/private/ http://sourceforge.net/projects/schilytools/files/ From schily at schily.net Wed Jan 4 03:51:36 2017 From: schily at schily.net (Joerg Schilling) Date: Tue, 03 Jan 2017 18:51:36 +0100 Subject: [TUHS] MacOS X is Unix (tm) In-Reply-To: <1483464540.1306689.836096801.2D7CA4E9@webmail.messagingengine.com> References: <52C99F50-E24A-4BBF-A129-180A1271B4E3@kdbarto.org> <586a3a23.udW0nRrOopzHoQbP%schily@schily.net> <8168FD75-9C3E-47C1-9BA8-EADAD7D33C38@kdbarto.org> <1483454024.1265981.835921849.4F752F10@webmail.messagingengine.com> <586bbe88.njLSal35/ZSrGHMO%schily@schily.net> <1483464540.1306689.836096801.2D7CA4E9@webmail.messagingengine.com> Message-ID: <586be4a8.IhycRVfXy8f03gHY%schily@schily.net> Random832 wrote: > On Tue, Jan 3, 2017, at 10:08, Joerg Schilling wrote: > > Random832 wrote: > > > However, OSX only claims compliance to Issue 6 (unistd.h _XOPEN_VERSION > > > 600), and the text requiring that the full 32-bit value be preserved is > > > new to Issue 7. > > > > The text requiring the full 32-bit value is not new.... > > Is there some other text you have in mind other than "The exit value in > si_status shall be equal to the full exit value (that is, the value > passed to _exit(), _Exit(), or exit(), or returned from main()); it > shall not be limited to the least significant eight bits of the value." > in the description of signal.h (not present in Issue 6 or SUSv2)? Or > maybe something from "2.13. Status Information" (whole section is new in > Issue 7). This has now been worded to make it obvious that masking off bits is not permitted - except for the historic UNIX wait()/waitpid() interfaces. > In SUSv2, the text of exit() states "If the parent process of the > calling process is executing a wait(), wait3(), waitid() or waitpid(), > [...] 
it is notified of the calling process' termination and the > low-order eight bits (that is, bits 0377) of status are made available > to it." with no indication of any of these functions allowing the parent > process to get more bits of status. More or less the same text appears > in Issue 6, with some rearrangement due to waitid being part of the XSI > option. SUSv2 is the standard with the correct text. Here is a part of its wait() description:

WEXITSTATUS(stat_val)
    If the value of WIFEXITED(stat_val) is non-zero, this macro evaluates to the low-order 8 bits of the status argument that the child process passed to _exit() or exit(), or the value the child process returned from main().

and from the signal.h description:

    int si_status    exit value or signal

So masking is only mentioned for wait() and waitpid(). > > It was required in 1996 already, but then somebody introduced a bug into > > the text and that was not aligned with the expected behavior. > > Oh, so there was a bug in *the text*. That would be the text of the > standard that OSX conforms to? In SUSv2, it just described the waitid() interface without mentioning that there is a mask. People at that time seemed to believe that everybody knew that si_status is a full int. Note that the WEXITSTATUS() macro does not apply to waitid(). Later, the wrong masking text was added. POSIX never introduced its own inventions, but rather standardized existing features. The siginfo/waitid() interface was introduced by SVr4 in 1989, and platforms that correctly implement SVr4 compliance of course all return the full int. Apple seems to have known that the compliance test only checked for "more than 8 bits", by using a 16-bit exit() value in the tests...
> > > > Jörg > -- > EMail:joerg at schily.net (home) Jörg Schilling D-13353 Berlin > joerg.schilling at fokus.fraunhofer.de (work) Blog: http://schily.blogspot.com/ > URL: http://cdrecord.org/private/ http://sourceforge.net/projects/schilytools/files/ From dfawcus+lists-tuhs at employees.org Wed Jan 4 03:59:26 2017 From: dfawcus+lists-tuhs at employees.org (Derek Fawcus) Date: Tue, 3 Jan 2017 17:59:26 +0000 Subject: [TUHS] MacOS X is Unix (tm) In-Reply-To: References: <52C99F50-E24A-4BBF-A129-180A1271B4E3@kdbarto.org> <586a3a23.udW0nRrOopzHoQbP%schily@schily.net> <8168FD75-9C3E-47C1-9BA8-EADAD7D33C38@kdbarto.org> <586bb9dc.iVkFRSLWnXd79ger%schily@schily.net> Message-ID: <20170103175926.GA4824@cowbell.employees.org> On Tue, Jan 03, 2017 at 09:39:36am -0800, David wrote: > I’m running Yosemite, Sierra won’t run on my hardware. > > Does the standard expect an int to be a specific size? I can’t imagine this to be the case. > On Mac ints are 32 bits, as are longs. Unlike Linux where long defaults to 64 bits. Depends:

$ uname -a
Darwin Old-MBA.local 14.5.0 Darwin Kernel Version 14.5.0: Sun Sep 25 22:07:15 PDT 2016; root:xnu-2782.50.9~1/RELEASE_X86_64 x86_64
$ cat size.c
#include <stdio.h>
int main() { printf("sz(int) = %lu, sz(long) = %lu\n", (unsigned long)sizeof(int), (unsigned long)sizeof(long)); return 0; }
$ cc -o size size.c
$ ./size
sz(int) = 4, sz(long) = 8
$ file size
size: Mach-O 64-bit executable x86_64
$ cc -m32 -o size size.c
$ ./size
sz(int) = 4, sz(long) = 4
$ file size
size: Mach-O executable i386

As I recall the same applies on linux for amd64, with the size of long changing depending upon whether one compiles as x86 or amd64.
ILP32 vs LP64 DF From schily at schily.net Wed Jan 4 04:04:35 2017 From: schily at schily.net (Joerg Schilling) Date: Tue, 03 Jan 2017 19:04:35 +0100 Subject: [TUHS] MacOS X is Unix (tm) In-Reply-To: References: <52C99F50-E24A-4BBF-A129-180A1271B4E3@kdbarto.org> <586a3a23.udW0nRrOopzHoQbP%schily@schily.net> <8168FD75-9C3E-47C1-9BA8-EADAD7D33C38@kdbarto.org> <586bb9dc.iVkFRSLWnXd79ger%schily@schily.net> Message-ID: <586be7b3.TVbwM5I7Y6v2DJC8%schily@schily.net> David wrote: > I’m running Yosemite, Sierra won’t run on my hardware. > > Does the standard expect an int to be a specific size? I can’t imagine this to be the case. On Mac ints are 32 bits, as are longs. Unlike Linux where long defaults to 64 bits. POSIX requires "int" to be at least 32 bits, and all UNIX implementations I am aware of use the LP64 model for the compiler. Here int is 32 and long is 64 bits. Microsoft still does and "True64" did use the ILP64 model. This allows lazily written software to work even though it does not use the right data types for pointer arithmetic. Jörg -- EMail:joerg at schily.net (home) Jörg Schilling D-13353 Berlin joerg.schilling at fokus.fraunhofer.de (work) Blog: http://schily.blogspot.com/ URL: http://cdrecord.org/private/ http://sourceforge.net/projects/schilytools/files/ From lm at mcvoy.com Wed Jan 4 04:20:54 2017 From: lm at mcvoy.com (Larry McVoy) Date: Tue, 3 Jan 2017 10:20:54 -0800 Subject: [TUHS] MacOS X is Unix (tm) In-Reply-To: <586ba44c.dnHd1Caeq6INr3FG%schily@schily.net> References: <3564F094-9B31-4492-8FDD-716160F45E84@tfeb.org> <02d201d2642f$2bcfe0d0$836fa270$@ronnatalie.com> <95D6B274-6D3F-4610-873A-76F4707AE89B@tfeb.org> <20170101202850.GF17848@wopr> <20170101203813.GV5983@mcvoy.com> <586ba44c.dnHd1Caeq6INr3FG%schily@schily.net> Message-ID: <20170103182054.GB12264@mcvoy.com> On Tue, Jan 03, 2017 at 02:17:00PM +0100, Joerg Schilling wrote: > Larry McVoy wrote: > > > I'd like to know where you can get a better performing OS.
The file systems > > scream when compared to Windows or MacOS, they know about SSDs and do the > > right thing. The processes are light weight, I regularly do "make -j" > > which on my machines just spawns as many processs as needed. > ... > > > So if I size it to the number of CPUs it is slightly faster. On the other > > hand, when I tell it just spawn as many as it wants it peaks at about 267 > > processes running in parallel. > > > > Solaris, AIX, IRIX, HP-UX, MacOS would all thrash like crazy under that > > load, their context switch times are crappy. > > > > Source: author of LMbench which has been measuring this stuff since the > > mid 1990s. > > Could you give a verification for this claim please? First, it was two claims, fast file system, and fast processes. You seem to have ignored the second one. That second one is a big deal for multi process/multi processor jobs. If you have access to solaris and linux running on the same hardware, get a copy of lmbench and run it. I can walk you through the results and if LMbench has bit rotted I'll fix it. http://mcvoy.com/lm/bitmover/lmbench/lmbench2.tar.gz > >From what I can tell, the filesystem concepts in Linux are slow and it is > usually not possible to tell what happened in a given time frame. It however > creates the impression that it is fast if you are the only user on a system, > but it makes it hard to gauge a time that is comparable to a time retrived > from a different OS. This is because you usually don't know what happened in > a given time. You are right that Linux has struggled under some multi user loads (but then so does Netapp and a lot of other vendors whose job it is to serve up files). There are things you can do to make it faster. 
We used to have thousands of clones of the linux kernel and we absolutely KILLED performance by walking all those and hardlinking the files that were the same (BitKeeper, unlike Git, has a history file for each user file so lots of them are the same across different clones). The reason it killed performance is that Linux, like any reasonable OS, will put files next to each other. If you unpack /usr/include/*.h it is very likely that they are on the same cylinder[s]. When you start hardlinking you break that locality and reading back and one directory's set of files is going to force a lot of seeks. So don't do that. Another thing that Linux [used to have] has is an fsync problem, an fsync on any file in any file system caused the ENTIRE cache to be flushed. I dunno if they have fixed that, I suspect so. My experience is that for a developer, Linux just crushes the performance found on any other system. My typical test is to get two boxes and time $ cd /usr && time tar cf - . | rsh other_box "cd /tmp && time tar xf -; rm -rf /tmp/usr" I suspect your test would be more like $ cat > /tmp/SH < References: <52C99F50-E24A-4BBF-A129-180A1271B4E3@kdbarto.org> <586a3a23.udW0nRrOopzHoQbP%schily@schily.net> <8168FD75-9C3E-47C1-9BA8-EADAD7D33C38@kdbarto.org> <586bb9dc.iVkFRSLWnXd79ger%schily@schily.net> <586be7b3.TVbwM5I7Y6v2DJC8%schily@schily.net> Message-ID: <001001d265ef$c832d790$589886b0$@ronnatalie.com> > Microsoft still does and "True64" did use the ILP64 model. This allows lazy written software to work even though it does not the right data types for pointer arithmetic. Well, there's a predisposition in C for "int" to be the standard word size of the machine. On designing for a 64-bit machine, one needs to think long and hard about the integral sizes. When we wrote the compilers for the Denelcor HEP, we had that problem. The thing had 64-bit words and a partial word mode for 16 and 32 bits. It didn't even have the concept of 8-bit math. 
We decided that rather than arbitrarily sizing an int to 32, so we had the char-short-int-long progression, we'd make it the word size. This means we had to introduce another keyword for the 32-bit type. After dismissing "short long" or "long short" as a type, the suggestion was "medium," but we settled for int32 (this was before the language standards considered how to manage the system namespaces). An amusing thing we found basing our code on 4.2 BSD was that there was a really bad latent bug. There was a kernel structure that looked like this: union ptrs { int* p_int; short* p_short; char* p_char; long* p_long; }; The kernel would store one sort of pointer in one element and then retrieve it from another. This "conversion by union" didn't work on the HEP as the word pointers had the partial word size encoded in the low order bits. Reading out a short* where an int* was stored caused the processor to access the wrong sized entity (though roughly in the proper location). I had to run all over the kernel changing those to a void* so that the generated code could properly convert to the right pointer type. Don't get me started on the inanity that states that "char" is the smallest addressable integer. Char should have been divorced from the integer sizings and let be the native CHARACTER size alone. From clemc at ccc.com Wed Jan 4 04:33:24 2017 From: clemc at ccc.com (Clem Cole) Date: Tue, 3 Jan 2017 13:33:24 -0500 Subject: [TUHS] MacOS X is Unix (tm) In-Reply-To: <586be7b3.TVbwM5I7Y6v2DJC8%schily@schily.net> References: <52C99F50-E24A-4BBF-A129-180A1271B4E3@kdbarto.org> <586a3a23.udW0nRrOopzHoQbP%schily@schily.net> <8168FD75-9C3E-47C1-9BA8-EADAD7D33C38@kdbarto.org> <586bb9dc.iVkFRSLWnXd79ger%schily@schily.net> <586be7b3.TVbwM5I7Y6v2DJC8%schily@schily.net> Message-ID: On Tue, Jan 3, 2017 at 1:04 PM, Joerg Schilling wrote: > "True64" did use the ILP64 model DEC's Tru64 for Alpha defaulted to LP64 (ex-DECie, Tru64 hacker et al). 
You could get ILP64 with compiler switches, but Tru64 was the first of the LP64 systems. We chaired the 64 bit team between the vendors and ISVs etc, but I'll not drag that dirt here. As I like to point out, the greatest gift DEC gave to the industry was their 64 bit (LP64) DEC C and C++ compiler, out 3-5 years before SGI, Sun et al. (much of that work lives today as the Intel compiler BTW). As a result the ISVs had to make their code clean. When Sun, IBM et al followed suit a few years later, they had better code to work with***. Besides the Gimpel FlexeLint (a very cool program and I recommend it highly for serious programmers, in the key of what Larry has been discussing), Judy Ward's incredible diagnostics in the DEC C++ front-end were the only tool that I know of that really told you what was going on in your code (funny, I literally am typing this message after I just returned from lunch with her today, and we were talking about what we had to do in those days to help people). Since I've taken on the topic in Quora, I'll not overly repeat myself here - there is more detail in that post. But it really comes down to this: early (v7 and before) C code assumed sizeof(int) == sizeof(int *). When the Vax was introduced, it was just easier to go ILP32 when moving code from the old compilers, and in the case of the Vax that made sense but was not always true. I personally built an early 68000 compiler that was LP32 and the MIT guys did it ILP32 in their compiler. I had no tools like we would have later, so porting code to the 68000 could >>sometimes<< be an issue. The MIT compiler (or its children) became the standard C compiler for most of the 68000 code [we both were "right" depending on your design preference - speed of port vs speed of execution]. When we did Alpha, we had >>long discussions<< about word size. It was agreed that LPxx was not reasonably safe (and that proved to be true).
The problem was that -1 @ 64bits is not 0xFFFFFFFF and those sorts of errors >>were<< riddled through much code. Today, I think many of those errors have been found, and the fact that most Intel*64 compilers use LP64 also and code pretty much works is a tribute to that work. That said, when we start using a 128bit int, I'm sure "dirty" code will appear. If I recall, the Gimpel guys will warn you when you have a bit mask. *** Picking on a Sun and the Sun ISV's here a little. I love to tell the stories (<-- plural) of ISV's that found their bug rates on Solaris drop after the Tru64 port; because we forced them to clean up their act some. The Sun compiler would accept most anything and generate code, and as Larry says, much of the resultant code was garbage. While I can write a crappy program in any language, a compiler with excellent warnings or tools like FlexeLint will not rid you of structural stuff, but it will at least make what is written unambiguous and thus less likely to take a surprising nose dive in the field. -------------- next part -------------- An HTML attachment was scrubbed... URL: From clemc at ccc.com Wed Jan 4 04:35:37 2017 From: clemc at ccc.com (Clem Cole) Date: Tue, 3 Jan 2017 13:35:37 -0500 Subject: [TUHS] MacOS X is Unix (tm) In-Reply-To: References: <52C99F50-E24A-4BBF-A129-180A1271B4E3@kdbarto.org> <586a3a23.udW0nRrOopzHoQbP%schily@schily.net> <8168FD75-9C3E-47C1-9BA8-EADAD7D33C38@kdbarto.org> <586bb9dc.iVkFRSLWnXd79ger%schily@schily.net> <586be7b3.TVbwM5I7Y6v2DJC8%schily@schily.net> Message-ID: On Tue, Jan 3, 2017 at 1:33 PM, Clem Cole wrote: > LPxx was not reasonably safe grrr - dyslexia-r-me LPX was NOW reasonably safe -------------- next part -------------- An HTML attachment was scrubbed...
URL: From ron at ronnatalie.com Wed Jan 4 04:45:33 2017 From: ron at ronnatalie.com (Ron Natalie) Date: Tue, 3 Jan 2017 13:45:33 -0500 Subject: [TUHS] MacOS X is Unix (tm) In-Reply-To: References: <52C99F50-E24A-4BBF-A129-180A1271B4E3@kdbarto.org> <586a3a23.udW0nRrOopzHoQbP%schily@schily.net> <8168FD75-9C3E-47C1-9BA8-EADAD7D33C38@kdbarto.org> <586bb9dc.iVkFRSLWnXd79ger%schily@schily.net> <586be7b3.TVbwM5I7Y6v2DJC8%schily@schily.net> Message-ID: <003501d265f1$910085e0$b30191a0$@ronnatalie.com> Yeah, we were kind of unique in developing a few products that cut across many UNIX architectures:

Sun 4.1.3 / Solaris 2.0
DEC Alpha
HP 9000 (in various incarnations)
SGI in various incarnations (Oxygen, O2, Onyx, …)
Intel processors in both 32 and 64 bit modes
Ardent Stellar G1000
MIPS (both MIPS’s native workstation and the DEC SPIM machine)
Some i860 machines from IBM and Oki
IBM RS6000
Cray YMP

The latter was the one that really had some issues. The thing really only had char and word. Int, short, and long were all 64 bits. That one discovered a portability hack. At least I had put a diagnostic in to catch the fact I hadn’t implemented such a case in the generic code. I got a call from the guy doing the port (he had to go to the Cray offices) to tell me that the first thing the product said was “You’ve got to be kidding.” Later we bopped back and forth between various NT-based systems including Intel at 32 and 64 bits (don’t get me started about the inane DWORD_PTR type, which is neither a pointer nor a double word) and on the iTanium (which we dubbed the iTanic). Never got around to trying the NT Alpha. Not only type sizing issues but having to worry about byte order, etc… I still remember finding a #define notyet 1 in one piece of code on the Ardent… that one was scary. -------------- next part -------------- An HTML attachment was scrubbed...
URL: From doug at cs.dartmouth.edu Wed Jan 4 06:19:08 2017 From: doug at cs.dartmouth.edu (Doug McIlroy) Date: Tue, 03 Jan 2017 15:19:08 -0500 Subject: [TUHS] Mac OS X is Unix Message-ID: <201701032019.v03KJ8oq028944@tahoe.cs.Dartmouth.EDU> > keeping the code I work on portable between Linux and the Mac requires > more than a bit of ‘ifdef’ hell. Curmudgeonly comment: I bristle at the coupling of "ifdef" and "portable". Ifdefs that adjust code for different systems are prima facie evidence of NON-portability. I'll buy "configurable" as a descriptor for such ifdef'ed code, but not "portable". And, while I am venting about ifdef: As a matter of style, ifdefs are global constructs. Yet they often have local effects like an if statement. Why do we almost always write

#ifdef LINUX
	linux code
#else
	default unix code
#endif

instead of the much cleaner

if(LINUX)
	linux code
else
	default unix code

In early days the latter would have cluttered precious memory with unreachable code, but now we have optimizing compilers that will excise the useless branch just as effectively as cpp. Much as the trait of overeating has been ascribed to our hunter ancestors' need to eat fully when nature provided, so overuse of ifdef echoes coding practices tuned to the capabilities of bygone computing systems. "Ifdef hell" is a fitting image for what has to be one of Unix's least felicitous contributions to computing. Down with ifdef! Doug From dds at aueb.gr Wed Jan 4 06:54:57 2017 From: dds at aueb.gr (Diomidis Spinellis) Date: Tue, 3 Jan 2017 22:54:57 +0200 Subject: [TUHS] Pipes in the Third Edition Unix Message-ID: Peter Salus writes "The other innovation present in the Third Edition was the pipe" ("A Quarter Century of Unix", p. 50). Yet, in the corresponding sys/ken/sysent.c, the pipe system call seems to be a stump.
	1, &fpe,	/* 40 = fpe */
	0, &dup,	/* 41 = dup */
	0, &nosys,	/* 42 = pipe */
	1, &times,	/* 43 = times */

On the other hand, the Fourth Edition manual documents the pipe system call, the construction of pipelines through the shell, and the use of wc as a filter (without an input file, as was required in the Second Edition). Would it therefore be correct to say that pipes were introduced in the Fourth rather than the Third Edition? From charles.unix.pro at gmail.com Wed Jan 4 07:05:53 2017 From: charles.unix.pro at gmail.com (Charles Anthony) Date: Tue, 3 Jan 2017 13:05:53 -0800 Subject: [TUHS] Mac OS X is Unix In-Reply-To: <201701032019.v03KJ8oq028944@tahoe.cs.Dartmouth.EDU> References: <201701032019.v03KJ8oq028944@tahoe.cs.Dartmouth.EDU> Message-ID: On Tue, Jan 3, 2017 at 12:19 PM, Doug McIlroy wrote: > > keeping the code I work on portable between Linux and the Mac requires > > more than a bit of ‘ifdef’ hell. > > Curmudgeonly comment: I bristle at the coupling of "ifdef" and "portable". > Ifdefs that adjust code for different systems are prima facie > evidence of NON-portability. I'll buy "configurable" as a descriptor > for such ifdef'ed code, but not "portable". > > And, while I am venting about ifdef: > As a matter of style, ifdefs are global constructs. Yet they often > have local effects like an if statement. Why do we almost always write
>
> #ifdef LINUX
> 	linux code
> #else
> 	default unix code
> #endif
>
> instead of the much cleaner
>
> if(LINUX)
> 	linux code
> else
> 	default unix code
>
> In early days the latter would have cluttered precious memory > with unreachable code, but now we have optimizing compilers > that will excise the useless branch just as effectively as cpp. > > I have seen an interesting failure for the latter construct; I was compiling some [unremembered] chess program for some [unremembered] UNIX workstation in the late '80s.
The code had bit array data structures with get/set routines that optimized for host word size, with code something like:

if (sizeof (unsigned int) == 64) {
	// cast structure into array of 64 bit unsigned ints and use bit operators
} else { // sizeof (int) == 32
	// cast structure into array of 32 bit unsigned ints and use bit operators
}

(It might have been 32/16; I don't remember, but it isn't relevant.) The '64' branch had an expression containing something like '1u << 60' in it. I was compiling on a 32 bit int machine; the compiler flagged the '1u << 60' as a fatal error due to the size of the shift -- on this compiler the expression evaluator was running before the dead code remover. -- Charles -------------- next part -------------- An HTML attachment was scrubbed... URL: From michael at kjorling.se Wed Jan 4 07:33:31 2017 From: michael at kjorling.se (Michael Kjörling) Date: Tue, 3 Jan 2017 21:33:31 +0000 Subject: [TUHS] When was #if introduced in C? (was: Re: Mac OS X is Unix) In-Reply-To: References: <201701032019.v03KJ8oq028944@tahoe.cs.Dartmouth.EDU> Message-ID: <20170103213331.GN31772@yeono.kjorling.se> On 3 Jan 2017 13:05 -0800, from charles.unix.pro at gmail.com (Charles Anthony): > I was compiling on a 32 bit int machine; the compiler flagged the '1u << > 60' as a fatal error due to the size of the shift -- on this compiler the > expression evaluator was running before the dead code remover. That was my thought too; the only way to guarantee that the code is removed before the compiler sees it is to do so through the preprocessor, thus #ifdef. Of course, #ifdef is rather limited. The #if preprocessor directive is more generic, but still significantly less versatile than the if() language keyword. Which makes me curious... Does anyone here happen to know when #if was introduced in C?
I suspect #ifdef came earlier simply by virtue of being (at least to a naive first approximation) far easier to implement, as all that would be required would be to look at the macro expansion table (already required by #define) and see if that particular name had previously been #defined, as opposed to actually evaluating an expression. -- Michael Kjörling • https://michael.kjorling.se • michael at kjorling.se “People who think they know everything really annoy those of us who know we don’t.” (Bjarne Stroustrup) From clemc at ccc.com Wed Jan 4 07:35:38 2017 From: clemc at ccc.com (Clem Cole) Date: Tue, 3 Jan 2017 16:35:38 -0500 Subject: [TUHS] Mac OS X is Unix In-Reply-To: <201701032019.v03KJ8oq028944@tahoe.cs.Dartmouth.EDU> References: <201701032019.v03KJ8oq028944@tahoe.cs.Dartmouth.EDU> Message-ID: On Tue, Jan 3, 2017 at 3:19 PM, Doug McIlroy wrote: > Curmudgeonly comment: I bristle at the coupling of "ifdef" and "portable". > Ifdefs that adjust code for different systems are prima facie > evidence of NON-portability. I'll buy "configurable" as a descriptor > for such ifdef'ed code, but not "portable". > I hear you and partially agree. But I would point out that, in my experience over 40+ years of professional "coding", the programs that have "lasted" the longest and matured the most over time were all written in a language that had some sort of preprocessor. Using a preprocessor is not a bad thing; it's the misuse that is the problem, and leaving it out I think weakens the language. IIRC, Brian made this same point in "Why Pascal is not my Favorite Programming Language." I think your point about overuse is valid (like symlinks in the file system - my curmudgeonly beef) - just because we >>have<< a feature does not make it the best way to do something. I like to think of this sort of choice as "good taste" -- simple and elegant. ifdefs are a real solution to a problem.
But like you, I >>hate<< seeing them all over the place and they certainly can muck up the code. BTW: I much prefer a local_func.c which sucks in unix_bindings.c, vms_bindings.c, winders_bindings.c or the like, with the ifdefs and all the configuration control needed in >>one<< place; and then all of the rest of the code tries to be as pure as possible, as you describe. FWIW: One of the best I ever saw was the old FORTRAN SPICE 2G6 of Ellis Cohen and Tom Quarles. That program runs everywhere (still does). Ellis and Tom used m4 to preprocess it, IIRC, and kept the localization in a very controlled space. If you ever looked at it, Ellis does some really imaginative coding (compiling the internal loop into a data block and jumping to it), yet it is all portable and actually very easy to understand. For one thing, the comments are there and very clear. But Ellis has great style and has mopped up after a number of the other UCB greats in those days; I'd much rather deal with Ellis's and TQ's code base than some of my other peeps'. Anyway - I agree - ifdef >>is<< misused in way too much code, but I think part of that is that so many programmers never had a real software engineering course (a thought I will follow up in another thread). Clem -------------- next part -------------- An HTML attachment was scrubbed... URL: From clemc at ccc.com Wed Jan 4 07:50:57 2017 From: clemc at ccc.com (Clem Cole) Date: Tue, 3 Jan 2017 16:50:57 -0500 Subject: [TUHS] Pipes in the Third Edition Unix In-Reply-To: References: Message-ID: Hmm.. it's all about where you count. Clearly, he was working on it if the stub is there. If the only difference between 3rd and 4th is the pipe code, then it would be fair to say that. You might say something like: Pipes were developed in a 3rd edition kernel, where there is evidence of a nascent idea (it has a name and there is a stub for it), but the code to fully support it is lacking in the 3rd release. Pipes became a completed feature in the 4th edition.
On Tue, Jan 3, 2017 at 3:54 PM, Diomidis Spinellis wrote: > Peter Salus writes "The other innovation present in the Third Edition was > the pipe" ("A Quarter Century of Unix", p. 50). Yet, in the corresponding > sys/ken/sysent.c, the pipe system call seems to be a stump. >
> 	1, &fpe,	/* 40 = fpe */
> 	0, &dup,	/* 41 = dup */
> 	0, &nosys,	/* 42 = pipe */
> 	1, &times,	/* 43 = times */
>
> On the other hand, the Fourth Edition manual documents the pipe system > call, the construction of pipelines through the shell, and the use of wc as > a filter (without an input file, as was required in the Second Edition). > > Would it therefore be correct to say that pipes were introduced in the > Fourth rather than the Third Edition? > -------------- next part -------------- An HTML attachment was scrubbed... URL: From rmswierczek at gmail.com Wed Jan 4 07:53:02 2017 From: rmswierczek at gmail.com (Robert Swierczek) Date: Tue, 3 Jan 2017 16:53:02 -0500 Subject: [TUHS] When was #if introduced in C? (was: Re: Mac OS X is Unix) In-Reply-To: <20170103213331.GN31772@yeono.kjorling.se> References: <201701032019.v03KJ8oq028944@tahoe.cs.Dartmouth.EDU> <20170103213331.GN31772@yeono.kjorling.se> Message-ID: > Which makes me curious... Does anyone here happen to know when #if was introduced in C?
>From what I can gather from studying the TUHS archives, the earliest cpp that supports #if is in PWB: http://minnie.tuhs.org/cgi-bin/utree.pl?file=PWB1/sys/c/c/cpp.c Before that, the pre-processor resides within the cc.c command and #if is not supported: http://minnie.tuhs.org/cgi-bin/utree.pl?file=V6/usr/source/s1/cc.c From wkt at tuhs.org Wed Jan 4 07:53:10 2017 From: wkt at tuhs.org (Warren Toomey) Date: Wed, 4 Jan 2017 07:53:10 +1000 Subject: [TUHS] Pipes in the Third Edition Unix In-Reply-To: References: Message-ID: <20170103215310.GA26242@minnie.tuhs.org> On Tue, Jan 03, 2017 at 10:54:57PM +0200, Diomidis Spinellis wrote: > Peter Salus writes "The other innovation present in the Third Edition was > the pipe" ("A Quarter Century of Unix", p. 50). Yet, in the corresponding > sys/ken/sysent.c, the pipe system call seems to be a stump. The Third edition was still written in assembly code. The Fourth edition was the first to be rewritten in C. So there was a time when both existed in parallel. > 1, &fpe, /* 40 = fpe */ > 0, &dup, /* 41 = dup */ > 0, &nosys, /* 42 = pipe */ > 1, ×, /* 43 = times */ This code, from the nsys kernel, clearly shows this. The kernel was being rewritten in C. The C version had not yet caught up with the functionality in the assembly version of the kernel, which did have pipes. Cheers, Warren From clemc at ccc.com Wed Jan 4 07:56:38 2017 From: clemc at ccc.com (Clem Cole) Date: Tue, 3 Jan 2017 16:56:38 -0500 Subject: [TUHS] When was #if introduced in C? (was: Re: Mac OS X is Unix) In-Reply-To: <20170103213331.GN31772@yeono.kjorling.se> References: <201701032019.v03KJ8oq028944@tahoe.cs.Dartmouth.EDU> <20170103213331.GN31772@yeono.kjorling.se> Message-ID: The cpp.c code in 6th edition only supported #ifdef. IIRC the v7 cpp had #if, but it may have been in typesetter C. I think mashey needed it for SCCS (I don't have sources easy to look at PWB 1.0), so my guess is it was developed during that transition. 
The key is that PWB 1.0 was a v6++ system. UNIX/TS and later V7 had the newer kernel. PWB 2.0 was based on that one. The typesetter compiler was developed in parallel to those projects, which also put constraints on the language. Clem On Tue, Jan 3, 2017 at 4:33 PM, Michael Kjörling wrote: > On 3 Jan 2017 13:05 -0800, from charles.unix.pro at gmail.com (Charles > Anthony): > > I was compiling on a 32 bit int machine; the compiler flagged the '1u << > > 60' as a fatal error due to the size of the shift -- on this compiler the > > expression evaluator was running before the dead code remover. > > That was my thought too; the only way to guarantee that the code is > removed before the compiler sees it is to do so through the > preprocessor, thus #ifdef. > > Of course, #ifdef is rather limited. The #if preprocessor directive is > more generic, but still significantly less versatile than the if() > language keyword. > > Which makes me curious... Does anyone here happen to know when #if was > introduced in C? I suspect #ifdef came earlier simply by virtue of > being (at least to a naiive first approximation) far easier to > implement, as all that would be required would be to look at the macro > expansion table (already required by #define) and see if that > particular name had previously been #defined, as opposed to actually > evaluating an expression. > > -- > Michael Kjörling • https://michael.kjorling.se • michael at kjorling.se > “People who think they know everything really annoy > those of us who know we don’t.” (Bjarne Stroustrup) > -------------- next part -------------- An HTML attachment was scrubbed... URL: From clemc at ccc.com Wed Jan 4 07:57:55 2017 From: clemc at ccc.com (Clem Cole) Date: Tue, 3 Jan 2017 16:57:55 -0500 Subject: [TUHS] When was #if introduced in C? (was: Re: Mac OS X is Unix) In-Reply-To: References: <201701032019.v03KJ8oq028944@tahoe.cs.Dartmouth.EDU> <20170103213331.GN31772@yeono.kjorling.se> Message-ID: Great minds think alike.. 
This lines up with my memory. On Tue, Jan 3, 2017 at 4:53 PM, Robert Swierczek wrote: > > Which makes me curious... Does anyone here happen to know when #if was > introduced in C? > > From what I can gather from studying the TUHS archives, the earliest > cpp that supports #if is in PWB: > http://minnie.tuhs.org/cgi-bin/utree.pl?file=PWB1/sys/c/c/cpp.c > > Before that, the pre-processor resides within the cc.c command and #if > is not supported: > http://minnie.tuhs.org/cgi-bin/utree.pl?file=V6/usr/source/s1/cc.c > -------------- next part -------------- An HTML attachment was scrubbed... URL: From wkt at tuhs.org Wed Jan 4 08:04:07 2017 From: wkt at tuhs.org (Warren Toomey) Date: Wed, 4 Jan 2017 08:04:07 +1000 Subject: [TUHS] Pipes in the Third Edition Unix In-Reply-To: <20170103215310.GA26242@minnie.tuhs.org> References: <20170103215310.GA26242@minnie.tuhs.org> Message-ID: <20170103220407.GA29268@minnie.tuhs.org> On Wed, Jan 04, 2017 at 07:53:10AM +1000, Warren Toomey wrote: > The Third edition was still written in assembly code. The Fourth edition > was the first to be rewritten in C. So there was a time when both > existed in parallel. I should have waited to add this. The nsys kernel is dated August 31, 1973 (see http://www.tuhs.org/Archive/PDP-11/Distributions/research/Dennis_v3/Readme.nsys) and the Third Edition manuals are dated February 1973 (see http://minnie.tuhs.org/cgi-bin/utree.pl?file=V3/man/man0/intro) The 3e manuals have a pipe syscall: http://minnie.tuhs.org/cgi-bin/utree.pl?file=V3/man/man2/pipe.2 so pipes existed in February 1973. 
In fact, they existed as early as January 15, 1973, as Doug McIlroy put out the notice for a talk which described the state of UNIX at that time; page 4 describes SYS PIPE and its implementation (see http://www.tuhs.org/Archive/Documentation/Papers/Unix_Users_Talk_Notes_Jan73.pdf) Interestingly, the pipe manpage says: SYNOPSIS sys pipe / pipe = 42.; not in assembler and I don't quite understand the comment :-) Other manpages with the same comment are boot(2), csw(2), fpe(2), kill(2), rele(2), sleep(2), sync(2) and times(2). So it's not particular to pipe(2). Can anybody help explain the "not in assembler" comment? Thanks, Warren From lyndon at orthanc.ca Wed Jan 4 07:39:40 2017 From: lyndon at orthanc.ca (Lyndon Nerenberg) Date: Tue, 3 Jan 2017 13:39:40 -0800 Subject: [TUHS] Mac OS X is Unix In-Reply-To: <201701032019.v03KJ8oq028944@tahoe.cs.Dartmouth.EDU> References: <201701032019.v03KJ8oq028944@tahoe.cs.Dartmouth.EDU> Message-ID: <81B054A7-A361-49F4-B974-51370500D60D@orthanc.ca> > #ifdef LINUX > linux code > #else > default unix code > #endif > > instead of the much cleaner > > if(LINUX) > linux code > else > default unix code > > In early days the latter would have cluttered precious memory > with unreachable code, but now we have optimizing compilers > that will excise the useless branch just as effectively as cpp. Plan 9 refreshingly evicted this nonsense from the native compilers (mostly) and the code base.[1] I remember reading a Usenet post from the mid-late 80s that showed a roughly 40-line sequence of #foo hell from some bit of SVRx code. There wasn't a single line of actual C there. That it involved conditionalizing around the tty/termio drivers and some machine-specific ioctl goop ... well, let's not go *there*. It might have been posted as an example for the Obfuscated C Contest. It certainly could have won. (Assuming an entry without any actual C code was eligible. Vague memories say anything that survived 'cc -o foo xxx.c [...]' was allowed.)
--lyndon [1] Eliminating many binary APIs -- e.g. ioctl() -- in favour of textual ones, was a stroke of genius. Not just with fileservers (/net et al), but also with things like dial(). From lyndon at orthanc.ca Wed Jan 4 08:10:41 2017 From: lyndon at orthanc.ca (Lyndon Nerenberg) Date: Tue, 3 Jan 2017 14:10:41 -0800 Subject: [TUHS] Mac OS X is Unix In-Reply-To: References: <201701032019.v03KJ8oq028944@tahoe.cs.Dartmouth.EDU> Message-ID: <8B6E4879-2D81-4A24-A3B7-0AB38EA68EE6@orthanc.ca> > On Jan 3, 2017, at 1:35 PM, Clem Cole wrote: > > If you even looked at it, Ellis does some really imaginative coding (compiling the internal loop in a data block and jumping to it); it's all portable and actually very easy to understand. For one thing, the comments are there and very clear. But that's not going to survive modern W^X. Unless you compile to interpreted byte-code, which mostly wipes out the inline loop unroll to machine code, no? From rminnich at gmail.com Wed Jan 4 08:12:04 2017 From: rminnich at gmail.com (ron minnich) Date: Tue, 03 Jan 2017 22:12:04 +0000 Subject: [TUHS] Mac OS X is Unix In-Reply-To: <81B054A7-A361-49F4-B974-51370500D60D@orthanc.ca> References: <201701032019.v03KJ8oq028944@tahoe.cs.Dartmouth.EDU> <81B054A7-A361-49F4-B974-51370500D60D@orthanc.ca> Message-ID: On Tue, Jan 3, 2017 at 2:07 PM Lyndon Nerenberg wrote: > > > Plan 9 refreshingly evicted this nonsense from the native compilers > (mostly) and the code base.[1] > Yes, you can write portable code without #ifdef, configure scripts, and libtool. Plan 9 shows how. Some people get upset at mentions of Plan 9, however, so for a more current example, the Go source tree is a good reference. There's no cpp in Go, thank goodness, and they've shown superior portability to systems that revolve around #ifdef. ron -------------- next part -------------- An HTML attachment was scrubbed...
URL: From pnr at planet.nl Wed Jan 4 08:20:42 2017 From: pnr at planet.nl (Paul Ruizendaal) Date: Tue, 3 Jan 2017 23:20:42 +0100 Subject: [TUHS] Pipes in the Third Edition Unix In-Reply-To: <20170103220407.GA29268@minnie.tuhs.org> References: <20170103215310.GA26242@minnie.tuhs.org> <20170103220407.GA29268@minnie.tuhs.org> Message-ID: <0FF148A7-9394-4BCA-8A0C-E94247AAF0C2@planet.nl> > Can anybody help explain the "not in assembler" comment? In early 'as' some syscall mnemonics were predefined, see for instance: http://minnie.tuhs.org/cgi-bin/utree.pl?file=V6/usr/source/as/as29.s 'pipe' isn't one of those. Paul From norman at oclsc.org Wed Jan 4 08:14:33 2017 From: norman at oclsc.org (Norman Wilson) Date: Tue, 03 Jan 2017 17:14:33 -0500 Subject: [TUHS] Pipes in the Third Edition Unix Message-ID: <1483481677.19822.for-standards-violators@oclsc.org> Warren: Can anybody help explain the "not in assembler" comment? ==== I think it means `as(1) has predefined symbols with the numbers of many system calls, but not this one.' Norman Wilson Toronto ON From random832 at fastmail.com Wed Jan 4 08:31:42 2017 From: random832 at fastmail.com (Random832) Date: Tue, 03 Jan 2017 17:31:42 -0500 Subject: [TUHS] MacOS X is Unix (tm) In-Reply-To: <586be7b3.TVbwM5I7Y6v2DJC8%schily@schily.net> References: <52C99F50-E24A-4BBF-A129-180A1271B4E3@kdbarto.org> <586a3a23.udW0nRrOopzHoQbP%schily@schily.net> <8168FD75-9C3E-47C1-9BA8-EADAD7D33C38@kdbarto.org> <586bb9dc.iVkFRSLWnXd79ger%schily@schily.net> <586be7b3.TVbwM5I7Y6v2DJC8%schily@schily.net> Message-ID: <1483482702.1374828.836447849.6B86D17C@webmail.messagingengine.com> On Tue, Jan 3, 2017, at 13:04, Joerg Schilling wrote: > Microsoft still does and "True64" did use the ILP64 model. This allows > lazy > written software to work even though it does not the right data types for > pointer arithmetic. Microsoft uses IL32LLP64 - I don't know what Tru64 uses - google suggests it's LP64. 
Wikipedia suggests there was an early port of Solaris that used ILP64. From tfb at tfeb.org Wed Jan 4 09:39:01 2017 From: tfb at tfeb.org (Tim Bradshaw) Date: Tue, 3 Jan 2017 23:39:01 +0000 Subject: [TUHS] Mac OS X is Unix In-Reply-To: References: <201701032019.v03KJ8oq028944@tahoe.cs.Dartmouth.EDU> <81B054A7-A361-49F4-B974-51370500D60D@orthanc.ca> Message-ID: <9FC94387-9453-4BCF-8FBE-37193D70E512@tfeb.org> I think you can do so only if every language processor you ever expect to deal with your code is lexically-compatible: you *can't* do so if the lexer will puke: you need some frontend which will prevent the lexer ever seeing the toxin, and that thing is what Lisp would call read-time conditionalization. Plan 9 and Go both avoid this problem by being single-implementation or nearly-single-implementation systems: many things are easier with that assumption. > On 3 Jan 2017, at 22:12, ron minnich wrote: > > > >> On Tue, Jan 3, 2017 at 2:07 PM Lyndon Nerenberg wrote: >> >> >> Plan 9 refreshingly evicted this nonsense from the native compilers (mostly) and the code base.[1] > > Yes, You can write portable code without #ifdef, configure scripts, and libtool. Plan 9 shows how. > > Some people get upset at mentions of Plan 9, however, so for a more current example, the Go source tree is a good reference. There's no cpp in Go, thank goodness, and they've shown superior portability to systems that revolve around #ifdef. > > ron > > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From jnc at mercury.lcs.mit.edu Wed Jan 4 09:52:14 2017 From: jnc at mercury.lcs.mit.edu (Noel Chiappa) Date: Tue, 3 Jan 2017 18:52:14 -0500 (EST) Subject: [TUHS] Pipes in the Third Edition Unix Message-ID: <20170103235214.C3DF918C0BA@mercury.lcs.mit.edu> > From: Clem Cole > You might say something like: Pipes were developed in a 3rd edition > kernel, where there is evidence of a nascent idea (it has a name and > there are subs for it), but the code to fully support it is lacking in > the 3rd release. Pipes became a completed feature in the 4th edition. To add to what others have pointed out (about the assembler and C kernels), let me add one more data-bit. In the Unix oral histories done by Michael S. Mahoney, there's this: McIlroy: .. And one day I came up with a syntax for the shell that went along with the piping, and Ken said, "I'm going to do it!" He was tired of hearing all this stuff, and that was - you've read about it several times, I'm sure - that was absolutely a fabulous day the next day. He said, "I'm going to do it." He didn't do exactly what I had proposed for the pipe system call; he invented a slightly better one that finally got changed once more to what we have today. He did use my clumsy syntax. He put pipes into Unix, he put this notation [Here McIlroy pointed to the board, where he had written f > g > c] into shell, all in one night. The next morning, we had this - people came in, and we had - oh, and he also changed a lot of - most of the programs up to that time couldn't take standard input, because there wasn't the real need. So they all had file arguments; grep had a file argument, and cat had a file argument, and Thompson saw that that wasn't going to fit with this scheme of things and he went in and changed all those programs in the same night. I don't know how ... And the next morning we had this orgy of one-liners. So I don't think that suggested text, that it was added slowly, is appropriate.
If this account is correct, it was pretty atomic. It sounds more like the correct answer to the stuff in the source is the one proposed: that it got added to the assembler version of the system before it was done in the C version. Noel From rminnich at gmail.com Wed Jan 4 10:12:04 2017 From: rminnich at gmail.com (ron minnich) Date: Wed, 04 Jan 2017 00:12:04 +0000 Subject: [TUHS] Mac OS X is Unix In-Reply-To: <9FC94387-9453-4BCF-8FBE-37193D70E512@tfeb.org> References: <201701032019.v03KJ8oq028944@tahoe.cs.Dartmouth.EDU> <81B054A7-A361-49F4-B974-51370500D60D@orthanc.ca> <9FC94387-9453-4BCF-8FBE-37193D70E512@tfeb.org> Message-ID: On Tue, Jan 3, 2017 at 3:39 PM Tim Bradshaw wrote: > I think you can do so only if every language processor you ever expect to > deal with your code is lexically-compatible: you *can't* do so if the lexer > will puke: you need some frontend which will prevent the lexer ever seeing > the toxin, and that thing is what Lisp would call read-time > conditionalization. Plan 9 and Go both avoid this problem by being > single-implementation or nearly-single-implementation systems: many things > are easier with that assumption. > > > Well, if by single-implementation, you mean single-compiler, I have a counter example: Harvey is Plan 9 with just enough changed to be C11 compliant. All of userland, libraries, and kernel build and boot with gcc 4, 5, 6 and clang 3.5, 3.6, and 3.8. Harvey builds and boots on amd64 and riscv. We've adhered as much as we can to the coding style of Plan 9 and we've seen the benefits. Probably the only major change was removal of anonymous struct members, and that actually improved the code as it uncovered some long-standing driver bugs. In contrast, it's very hard to build portable code that uses extensive ifdefs and 'modern' tools like libtool today. Just the other day I had a build failure because something needed autotools x.6.61 and I only had x.y.60. Portable seems to now mean 'builds on ubuntu AND Fedora!'
and even that's rare. The portability principles used in Plan 9 work just fine with 'modern' compilers. The cpp abuse we see in so much code today seems completely unnecessary to me. C code written with #ifdefs and libtool is far less portable than it might be. ron -------------- next part -------------- An HTML attachment was scrubbed... URL: From scj at yaccman.com Wed Jan 4 10:13:54 2017 From: scj at yaccman.com (Steve Johnson) Date: Tue, 03 Jan 2017 16:13:54 -0800 Subject: [TUHS] Mac OS X is Unix In-Reply-To: <201701032019.v03KJ8oq028944@tahoe.cs.Dartmouth.EDU> Message-ID: <74a6eaa2dc832b400b58a56a4fa1f4bc00398314@webmail.yaccman.com> Following on Doug's comment, when I wrote the portable C compiler my vision was to separate out the machine independent parts of the compiler (e.g, the lexer and parser) from the machine dependent parts (those parts involving stack frames, instructions, etc.).  Then to port the compiler, you could leave the machine independent code alone (much of which was rather hairy, involving symbol tables, optimizations, etc.) and simply describe the instructions and calling sequence in the machine dependent files.  The preprocessor was actually pretty important in carrying this out, because there were a fair number of machine characteristics ( such as byte order and word size) that were handled identically across different architectures but the actual values were different.  These were handled by defining some preprocessor macros. Several years later, I had moved to development and was responsible for shipping a variety of different compilers, most resting on the PCC base.  At one point, I was appalled to see that somebody had put some code into one of the machine-independent files that was bracketed with "# ifdef VAX".  There followed a most difficult conversation with the perpetrator who kept insisting that, after all, the code WAS the same on all machines... 
----- Original Message ----- From: "Doug McIlroy" To: Cc: Sent: Tue, 03 Jan 2017 15:19:08 -0500 Subject: Re: [TUHS] Mac OS X is Unix > keeping the code I work on portable between Linux and the Mac requires > more than a bit of ‘ifdef’ hell. Curmudgeonly comment: I bristle at the coupling of "ifdef" and "portable". Ifdefs that adjust code for different systems are prima facie evidence of NON-portability. I'll buy "configurable" as a descriptor for such ifdef'ed code, but not "portable". And, while I am venting about ifdef: As a matter of style, ifdefs are global constructs. Yet they often have local effects like an if statement. Why do we almost always write #ifdef LINUX linux code #else default unix code #endif instead of the much cleaner if(LINUX) linux code else default unix code In early days the latter would have cluttered precious memory with unreachable code, but now we have optimizing compilers that will excise the useless branch just as effectively as cpp. Much as the trait of overeating has been ascribed to our hunter ancestors' need to eat fully when nature provided, so overuse of ifdef echoes coding practices tuned to the capabilities of bygone computing systems. "Ifdef hell" is a fitting image for what has to be one of Unix's least felicitous contributions to computing. Down with ifdef! Doug -------------- next part -------------- An HTML attachment was scrubbed... URL: From lm at mcvoy.com Wed Jan 4 12:41:27 2017 From: lm at mcvoy.com (Larry McVoy) Date: Tue, 3 Jan 2017 18:41:27 -0800 Subject: [TUHS] the guy who brought up SVr4 on Sun machines Message-ID: <20170104024127.GN12264@mcvoy.com> I was in building 5 at Sun when they were switching to SVr4 which became Solaris 2.0 (I think). Building 5 housed the kernel people at Sun. John Pope was the poor bastard who got stuck with doing the bring up. Everyone hated him for doing it, we all wanted it to fail.
I was busting my ass on something in SunOS 4.x and I was there late into the night, frequently to around midnight or beyond. So was John. We became close friends. We both moved to San Francisco and ended up commuting to Mountain View together (and hit the bars together). John was just at my place, here's a few pictures for those who might be interested. He's a great guy, got stuck with a shitty job. http://www.mcvoy.com/lm/2016-pope/ --lm From imp at bsdimp.com Wed Jan 4 13:00:05 2017 From: imp at bsdimp.com (Warner Losh) Date: Tue, 3 Jan 2017 20:00:05 -0700 Subject: [TUHS] the guy who brought up SVr4 on Sun machines In-Reply-To: <20170104024127.GN12264@mcvoy.com> References: <20170104024127.GN12264@mcvoy.com> Message-ID: On Tue, Jan 3, 2017 at 7:41 PM, Larry McVoy wrote: > I was in building 5 at Sun when they were switching to SVr4 which became > Solaris 2.0 (I think). Solaris 2.0 was the first SVr4 version of Solaris. 4.1.{1,2,3} were still BSD based, and Solaris 2.0 was SunOS 5.0 and OpenWindows. I recently came across a CD ROM that was labeled Solaris 2.0 Preview and Solbourne's name written in Magic Marker on it. I have no clue how I came to have it, but it was mixed in my ancient CDROM disc collection... Warner From crossd at gmail.com Wed Jan 4 13:23:28 2017 From: crossd at gmail.com (Dan Cross) Date: Tue, 3 Jan 2017 22:23:28 -0500 Subject: [TUHS] the guy who brought up SVr4 on Sun machines In-Reply-To: References: <20170104024127.GN12264@mcvoy.com> Message-ID: On Tue, Jan 3, 2017 at 10:00 PM, Warner Losh wrote: > On Tue, Jan 3, 2017 at 7:41 PM, Larry McVoy wrote: > > I was in building 5 at Sun when they were switching to SVr4 which became > > Solaris 2.0 (I think). > > Solaris 2.0 was the first SVr4 version of Solaris. 4.1.{1,2,3} were > still BSD based, and Solaris 2.0 was SunOS 5.0 and OpenWindows. > My favorite version number was SunOS 4.1.4U1: I was told that the ``U1'' meant, "you won", as in "you won. Here's another BSD-based release." 
Whether that's true or not, I have no idea. I recently came across a CD ROM that was labeled Solaris 2.0 Preview > and Solbourne's name written in Magic Marker on it. I have no clue how > I came to have it, but it was mixed in my ancient CDROM disc > collection... > Wow. Solbourne; that's a name I haven't heard in a while. - Dan C. -------------- next part -------------- An HTML attachment was scrubbed... URL: From lm at mcvoy.com Wed Jan 4 13:35:12 2017 From: lm at mcvoy.com (Larry McVoy) Date: Tue, 3 Jan 2017 19:35:12 -0800 Subject: [TUHS] the guy who brought up SVr4 on Sun machines In-Reply-To: References: <20170104024127.GN12264@mcvoy.com> Message-ID: <20170104033512.GA22116@mcvoy.com> On Tue, Jan 03, 2017 at 10:23:28PM -0500, Dan Cross wrote: > On Tue, Jan 3, 2017 at 10:00 PM, Warner Losh wrote: > > > On Tue, Jan 3, 2017 at 7:41 PM, Larry McVoy wrote: > > > I was in building 5 at Sun when they were switching to SVr4 which became > > > Solaris 2.0 (I think). > > > > Solaris 2.0 was the first SVr4 version of Solaris. 4.1.{1,2,3} were > > still BSD based, and Solaris 2.0 was SunOS 5.0 and OpenWindows. > > > > My favorite version number was SunOS 4.1.4U1: I was told that the ``U1'' > meant, "you won", as in "you won. Here's another BSD-based release." That might have been the Greg Limes release. I may be all wrong but someone, I think it was Greg, busted their ass to try and make SunOS 4.x scale up on SMP machines. There were a lot of us at the time that hated the SVr4 thing, it was such a huge step backwards. I dunno how much you care about Sun history, but SunOS, the BSD based stuff before 5.0, the engineers and the customers *loved* it. I was not the first guy who worked until midnight on that OS, I wasn't even on the radar screen. Guy Harris worked on it, tons of people worked on it, tons of people poured their heart and soul into it. It crushed us when they went to SVr4, that shit sucked. 
My boss, Ken Okin, paid me for 6 months to go fight management to stop the switch to SVr4. It was more than a decade later that I learned that the reason for the switch was that Sun was out of money and AT&T bought $200M of Sun stock at 35% over market but the deal was no more SunOS, it had to be SVr4. I really wonder what the world would look like right now if Sun had open sourced SunOS 4.x and put energy behind it. I wrote a paper about it, I still wonder. http://www.mcvoy.com/lm/bitmover/lm/papers/srcos.html From crossd at gmail.com Wed Jan 4 13:50:18 2017 From: crossd at gmail.com (Dan Cross) Date: Tue, 3 Jan 2017 22:50:18 -0500 Subject: [TUHS] Mac OS X is Unix In-Reply-To: <201701032019.v03KJ8oq028944@tahoe.cs.Dartmouth.EDU> References: <201701032019.v03KJ8oq028944@tahoe.cs.Dartmouth.EDU> Message-ID: On Tue, Jan 3, 2017 at 3:19 PM, Doug McIlroy wrote: > > keeping the code I work on portable between Linux and the Mac requires > > more than a bit of ‘ifdef’ hell. > > Curmudgeonly comment: I bristle at the coupling of "ifdef" and "portable". > Ifdefs that adjust code for different systems are prima facie > evidence of NON-portability. I'll buy "configurable" as a descriptor > for such ifdef'ed code, but not "portable". > Hear, hear. And, while I am venting about ifdef: > As a matter of style, ifdefs are global constructs. Yet they often > have local effects like an if statement. Why do we almost always write > > #ifdef LINUX > linux code > #else > default unix code > #endif > > instead of the much cleaner > > if(LINUX) > linux code > else > default unix code > > In early days the latter would have cluttered precious memory > with unreachable code, but now we have optimizing compilers > that will excise the useless branch just as effectively as cpp. > Interesting, but I'm curious how this would work in the context of C (or a C-like variant)? The code must parse and type-check in accordance with the existing standard, no?
So if the `if(LINUX)` branch referred to, say, Linux-specific structure members, then how would the compiler recognize that and avoid spitting out a diagnostic or erroring out? The existing C language seems defined to expressly disallow this sort of thing. What might be interesting would be an additional lexical token that would tag a particular expression as, "compile-time evaluated" with the consequence that the compiler would ignore the following statement or block if it appeared in a conditional (or something along those lines). It would be an error if such a conditional did not evaluate to a compile-time constant. So one might say something along the lines of, `if #(LINUX) { ... }` (or whatever syntax was reasonable and allowable with respect to the existing language) and this would be ignored by the compiler on systems for which LINUX evaluated to logical false (0 or whatever the kids call it these days). I imagine this would open other cans of worms, though. Much as the trait of overeating has been ascribed to our > hunter ancestors' need to eat fully when nature provided, > so overuse of ifdef echoes coding practices tuned to > the capabilities of bygone computing systems. > > "Ifdef hell" is a fitting image for what has to be one of > Unix's least felicitous contributions to computing. Down > with ifdef! > I've learned two hard lessons about #ifdef. First, one should strive to avoid them by programming to an abstract interface, with "system" specific implementations of that interface that are selected by the build system (make, mk, whatever). I've further found that one rarely needs complicated shell scripts for things like that. Second, if one cannot avoid them, one should #ifdef around a particular feature, not a system. For example, instead of `#if defined(__linux__) || defined(__FreeBSD__) || defined(__OpenBSD__) || ...
// Use mmap()` all over the place, one should just, `#ifdef PROJECT_USE_MMAP` and `#define PROJECT_USE_MMAP` for one's project somewhere (possibly based on system there or whatever). Then adding a new system is relatively easier: one defines a set of symbols for the new system, instead of trying to find every place where __linux__ is tested and adding to that. Ideally, one would avoid #if, though. - Dan C. -------------- next part -------------- An HTML attachment was scrubbed... URL: From usotsuki at buric.co Wed Jan 4 14:07:38 2017 From: usotsuki at buric.co (Steve Nickolas) Date: Tue, 3 Jan 2017 23:07:38 -0500 (EST) Subject: [TUHS] Unix stories In-Reply-To: References: <5257291ca0a0e1d80c646cab730129d589c5d707@webmail.yaccman.com> Message-ID: On Mon, 2 Jan 2017, Nick Downing wrote: > Yes, I agree, I want to do exactly that. And, I know my ideas are > probably also perceived as daft. But it's OK :) > > Unfortunately a C interpreter does not really work in principle, > because of the fact you can jump into loops and other kinds of abuse. > So the correct execution model is a bytecode. I was mainly looking at UCSD Pascal as a model, which iirc uses a bytecode optimized for Pascal. C isn't too different so the major difference is probably its implementation of pseudothreads/multitasking, which I don't think UCSD Pascal supported (at least on the Apple ][ where it was actually somewhat common). Well, there's also the issue of UCSD Pascal having a really braindead filesystem (even MS-DOS 1.25 was more advanced). I guess the question would be, what precisely would be needed for a reasonably simple Unix-alike (I suppose V7 would be a good initial target?) to be run on such, and then I'd have to figure out how to come up with the system from that. 
Not exactly my strong point =P That said, it would perhaps result in being able to run a (slow) Unix-alike on smaller systems which might not otherwise be able to run such an OS because of CPU limitations - like the Apple //e I have collecting dust off in the back room. ;) -uso. From downing.nick at gmail.com Wed Jan 4 15:46:43 2017 From: downing.nick at gmail.com (Nick Downing) Date: Wed, 4 Jan 2017 16:46:43 +1100 Subject: [TUHS] First release of 2.11BSD conversion Message-ID: hi all, For those who have been following my 2.11BSD conversion, I was working on this in about 2005 and I might have posted about it then, and then nothing much happened while I did a university degree and so on, but recently I picked it up again. When I left it, I was partway through an ambitious conversion of the BSD build system to my own design (a file called "defs.mk" was included in all Makefiles apparently), and I threw this out because it was too much work upfront. The important build tools like "cc" were working, but I have since reviewed all changes and done things differently. The result is I can now build the C and kernel libraries and a kernel, and they work OK. This seems like a pretty good milestone so I'm releasing the code on bitbucket. See https://bitbucket.org/nick_d2/ for the list of my repositories, there is another one there called "uzi" which is a related project, but not documented, so I will write about it later; in the meantime anyone is welcome to view the source and changelogs of it. The 2.11BSD repository is at the following link: https://bitbucket.org/nick_d2/211bsd There is a detailed readme.txt in the root of the repository which explains exactly how I approached the conversion and gives build instructions, caveats and so forth. To avoid duplication I won't post this in the list, but I suggest people read it as a post, since it's extremely interesting to see all the porting issues laid out together.
See https://bitbucket.org/nick_d2/211bsd/src/27343e0e0b273c2df1de958db2ef5528cccd0725/readme.txt?at=master Happy browsing :) cheers, Nick From tfb at tfeb.org Wed Jan 4 19:11:48 2017 From: tfb at tfeb.org (Tim Bradshaw) Date: Wed, 4 Jan 2017 09:11:48 +0000 Subject: [TUHS] Mac OS X is Unix In-Reply-To: References: <201701032019.v03KJ8oq028944@tahoe.cs.Dartmouth.EDU> <81B054A7-A361-49F4-B974-51370500D60D@orthanc.ca> <9FC94387-9453-4BCF-8FBE-37193D70E512@tfeb.org> Message-ID: So, can you build the system with both a C11 compiler and its older compiler, from the same sources (so, no keeping two copies of the same functionality and selecting at build time based on the compiler)? If you can I'm impressed. > On 4 Jan 2017, at 00:12, ron minnich wrote: > > Well, if by single-implementation, you mean single-compiler, I have a counter example: Harvey is Plan 9 with just enough changed to be C11 compliant. From elbingmiss at gmail.com Wed Jan 4 20:04:37 2017 From: elbingmiss at gmail.com (Álvaro Jurado) Date: Wed, 4 Jan 2017 11:04:37 +0100 Subject: [TUHS] Mac OS X is Unix In-Reply-To: References: <201701032019.v03KJ8oq028944@tahoe.cs.Dartmouth.EDU> <81B054A7-A361-49F4-B974-51370500D60D@orthanc.ca> <9FC94387-9453-4BCF-8FBE-37193D70E512@tfeb.org> Message-ID: No. Plan 9 developed a significant dependency on its original compilers. Plan 9 cc is a caller-save compiler, so major parts of the code couldn't work when adapted in Harvey for gcc, clang and icc (Intel), which are all callee-save. But it's not a C problem, it's a compiler issue. Probably with Steve's pcc you could too. Never tried, but it could be a good experiment. In the end it is extremely difficult not to be toolchain-dependent at some point; if the original Plan 9 compilers could be improved enough to build a standard ANSI program out of the tree (they are strongly dependent on the Plan 9 environment), and likewise the Plan 9 code, then the way you're suggesting would be possible.
Anyway, for that you wouldn't need any preprocessor code, and you needn't build amd64 or riscv from the same sources. Álvaro On 04/01/2017 10:12, "Tim Bradshaw" wrote: > So, can you build the system with both a C11 compiler and its older > compiler, from the same sources (so, no keeping two copies of the same > functionality and selecting at build time based on the compiler)? If you > can I'm impressed. > > > On 4 Jan 2017, at 00:12, ron minnich wrote: > > > > Well, if by single-implementation, you mean single-compiler, I have a > counter example: Harvey is Plan 9 with just enough changed to be C11 > compliant. > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From erc at pobox.com Wed Jan 4 22:24:30 2017 From: erc at pobox.com (Ed Carp) Date: Wed, 4 Jan 2017 06:24:30 -0600 Subject: [TUHS] the guy who brought up SVr4 on Sun machines In-Reply-To: <20170104033512.GA22116@mcvoy.com> References: <20170104024127.GN12264@mcvoy.com> <20170104033512.GA22116@mcvoy.com> Message-ID: That's a good question. I was working for Sun at the time, and no one I know was in favor of the switch - all we knew was that Scott McNealy was cramming it down the throats of both Sun and Sun's customers (or that was the perception, anyway). I think they lost a lot of customers because of that. On 1/3/17, Larry McVoy wrote: > On Tue, Jan 03, 2017 at 10:23:28PM -0500, Dan Cross wrote: >> On Tue, Jan 3, 2017 at 10:00 PM, Warner Losh wrote: >> >> > On Tue, Jan 3, 2017 at 7:41 PM, Larry McVoy wrote: >> > > I was in building 5 at Sun when they were switching to SVr4 which >> > > became >> > > Solaris 2.0 (I think). >> > >> > Solaris 2.0 was the first SVr4 version of Solaris. 4.1.{1,2,3} were >> > still BSD based, and Solaris 2.0 was SunOS 5.0 and OpenWindows. >> > >> >> My favorite version number was SunOS 4.1.4U1: I was told that the ``U1'' >> meant, "you won", as in "you won. Here's another BSD-based release." > > That might have been the Greg Limes release.
I may be all wrong but > someone, I think it was Greg, busted their ass to try and make SunOS > 4.x scale up on SMP machines. There were a lot of us at the time that > hated the SVr4 thing, it was such a huge step backwards. > > I dunno how much you care about Sun history, but SunOS, the BSD based > stuff before 5.0, the engineers and the customers *loved* it. I was > not the first guy who worked until midnight on that OS, I wasn't even > on the radar screen. Guy Harris worked on it, tons of people worked > on it, tons of people poured their heart and soul into it. It crushed > us when they went to SVr4, that shit sucked. > > My boss, Ken Okin, paid me for 6 months to go fight management to stop > the switch to SVr4. It was more than a decade later that I learned > that the reason for the switch was that Sun was out of money and AT&T > bought $200M of Sun stock at 35% over market but the deal was no more > SunOS, it had to be SVr4. > > I really wonder what the world would look like right now if Sun had > open sourced SunOS 4.x and put energy behind it. I wrote a paper > about it, I still wonder. > > http://www.mcvoy.com/lm/bitmover/lm/papers/srcos.html > From tfb at tfeb.org Wed Jan 4 22:26:45 2017 From: tfb at tfeb.org (Tim Bradshaw) Date: Wed, 4 Jan 2017 12:26:45 +0000 Subject: [TUHS] Mac OS X is Unix In-Reply-To: References: <201701032019.v03KJ8oq028944@tahoe.cs.Dartmouth.EDU> Message-ID: On 4 Jan 2017, at 03:50, Dan Cross wrote: > > Interesting, but I'm curious how this would work in the context of C (or a C-like variant)? The code must parse and type-check in accordance with the existing standard, no? So if the `if(LINUX)` branch referred to, say, Linux-specific structure members, then how would the compiler recognize that avoid spitting out a diagnostic/erroring out? The existing C language seems defined to expressly disallow this sort of thing. 
Common Lisp has a notion of 'suppressing the reader' which basically means that the reader (which in CL is the thing which turns a stream of characters into the data structure that is the source code of the language) will do just enough to consume a form, but not any more than that. In particular it will ignore all sorts of things which would make it very unhappy if it looked too closely at them. And there are then read-time conditionals which will cause the reader to suppress the following thing, or not. It seems to me that, even without defining how things work in the very fine-grained way that CL does (where the data structure the reader produces is defined and you can program the reader itself), a C-like language could define what it means to 'suppress' a form, and support conditionals which did that. I think, reading again, that this might be quite close to your compile-time-evaluated idea. The thing to avoid is 'language in a string', where one language contains another language in strings (or equivalent), because then you end up putting the inner language together by concatenating strings, which can cross-cut constructs in the inner language in a horrible way. C is the language in a string of the C preprocessor. Where I work we use a tool which has a deeply horrible preprocessor which has the main syntax as its language in a string. That syntax *in turn* has a whole other language in its strings. Every time I look at this I want to hit someone. --tim -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From steffen at sdaoden.eu Wed Jan 4 23:04:34 2017 From: steffen at sdaoden.eu (Steffen Nurpmeso) Date: Wed, 04 Jan 2017 14:04:34 +0100 Subject: [TUHS] Unix stories In-Reply-To: <20170103004959.GA29088@mcvoy.com> References: <5257291ca0a0e1d80c646cab730129d589c5d707@webmail.yaccman.com> <42922C34-342F-4E86-83E2-3618129139B2@tfeb.org> <20170103004959.GA29088@mcvoy.com> Message-ID: <20170104130434.NQFzLGpVU%steffen@sdaoden.eu> Larry McVoy wrote: |It's simply a lack of craftsmen level thinking. Managers think that people |are fungable, whatever that means, I think it means anyone can do this \ |work. | |That's simply not the case, some people get the big picture and the details |and some people don't. | |There is also a culture of the cool kids vs the not cool kids. For \ |example, |at Sun, the kernel group was the top of the heap. When I was doing nselite |which begat Teamware then BitKeeper then Git et al, I was in the kernel |group. They wanted me to go over to the tools group. I looked over there |and saw people that weren't as good as the kernel people and passed. | |Same thing with testing. So many bad test harnesses. Because testing |isn't respected so they get the crappy programmers. One of the best |things I did at BitKeeper was to put testing on the same level as the |code and the documentation. As a result, we have a kick ass testing I had the thought of this being standardized, as part of ISO 9001? Groups of three which iterate in a rotation over the three parts of a codebase, documentation / implementation / tests, also rotating internally, inside the group? And having some student satellites. Or atoms and electrons that form molecules, to be more bionic. I think much of the grief is owed to money, a.k.a. short-lived interests.
I think it is crucial, and very likely even essential for survival, that there are Universities where people have the possibility to linger badly paid in dark caves for years and years, to finally come to a spiral conclusion, so to say. If you doom a young human or student to spend a month doing nothing but writing tests, you surely do no good, and gain the same in return, and that probably not only in the short term. Donald E. Knuth really got that right in his TeXbook, with those funny homework exercises for over the weekend, say, "write an operating system". Laziness is another problem. I for one dislike a lot what C now is, where one has to write functions like explicit_bzero() / _memset() in order to keep compilers from optimizing away something that seems to "go away anyway" at the end of a scope. Or code bloat due to automatic inlining and loop unrolling. Or terrible aliasing and "sequence point" rules, where i think it is clear what i mean when i write "i = j + ++i" (i believe this is undefined behaviour). Explicit instrumentation via new language constructs would require more manpower than paying a standards committee to define semantics which effectively allow more compiler optimization and gain more performance and thus a sellable catchphrase, but in the long term this surely soils the ground on which we stand. I for one maintain a codebase that has grown over now almost four decades and i still cannot say i stand on grounds of pureness, beauty and elegance; attributes which, where possible, brighten up everyday work and make a day.
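One widely used portable workaround for the dead-store elimination being complained about here (a sketch, not from the thread; explicit_bzero itself is a BSD/glibc extension) is to route memset through a volatile function pointer, so the compiler cannot prove the call free of side effects and delete it:

```c
#include <string.h>

/* Calling memset through a const volatile function pointer: the
 * compiler must assume the pointer could point anywhere at run time,
 * so it cannot treat the final store as dead and remove it. */
static void *(*const volatile memset_v)(void *, int, size_t) = memset;

/* Zero a buffer in a way the optimizer will not elide. */
void secure_zero(void *p, size_t n) {
    memset_v(p, 0, n);
}
```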
--steffen From steffen at sdaoden.eu Wed Jan 4 23:32:02 2017 From: steffen at sdaoden.eu (Steffen Nurpmeso) Date: Wed, 04 Jan 2017 14:32:02 +0100 Subject: [TUHS] Leap Second In-Reply-To: <586bc353.tOFm/S0IGecYYlh6%schily@schily.net> References: <20161229002105.GB94858@server.rulingia.com> <0d5eeef9-3dbb-0ddd-1b22-51fecee735d8@gmail.com> <586bc353.tOFm/S0IGecYYlh6%schily@schily.net> Message-ID: <20170104133202.VIWUz5j-a%steffen@sdaoden.eu> schily at schily.net (Joerg Schilling) wrote: |Tony Finch wrote: |> sds wrote: |>> Important question: did anybody have an "exciting" new year because \ |>> of a leap |>> second bug? |> |> I've been collecting failure reports on the LEAPSECS list | |https://blog.cloudflare.com/how-and-why-the-leap-second-affected-cloudflare\ |-dns/ | |"go" seems to have a related bug. | |BTW: The POSIX standard intentionally does not include leap seconds \ |in the UNIX |time interface as it seems that this would cause more problems than \ |it claims |to fix. I think it is a problem, or better a gap, a void, with the current standard that software has no option to become informed of the event of a leap second for one, but furthermore that CLOCK_TAI is not available. I think it would make things easier if software which wants just that can get it, e.g., for periodic timer events etc. This is surely no cure, given that most timestamps etc. are based on UTC, but i think the severity of the problems could possibly be lowered. Especially now that multi-hour smears seem to be coming into use at big companies it seems to be important to have a correct clock available. This is in fact something i don't really understand, at _that_ level that is to say. If, e.g., Google and Bloomberg had both stated instead that they slewed the leap second, then only a single second would have been affected, instead of multiple hours.
--steffen From random832 at fastmail.com Wed Jan 4 23:49:40 2017 From: random832 at fastmail.com (Random832) Date: Wed, 04 Jan 2017 08:49:40 -0500 Subject: [TUHS] Mac OS X is Unix In-Reply-To: References: <201701032019.v03KJ8oq028944@tahoe.cs.Dartmouth.EDU> Message-ID: <1483537780.1568758.837041257.2B050532@webmail.messagingengine.com> On Wed, Jan 4, 2017, at 07:26, Tim Bradshaw wrote: > The thing to avoid is 'language in a string', where one language contains > another language in strings (or equivalent), because then you end up > putting the inner language together by concatenating strings, which can > cross-cut constructs in the inner language in a horrible way. What about having the *same* language in strings? Like TCL? From random832 at fastmail.com Thu Jan 5 00:07:11 2017 From: random832 at fastmail.com (Random832) Date: Wed, 04 Jan 2017 09:07:11 -0500 Subject: [TUHS] Unix stories In-Reply-To: <20170104130434.NQFzLGpVU%steffen@sdaoden.eu> References: <5257291ca0a0e1d80c646cab730129d589c5d707@webmail.yaccman.com> <42922C34-342F-4E86-83E2-3618129139B2@tfeb.org> <20170103004959.GA29088@mcvoy.com> <20170104130434.NQFzLGpVU%steffen@sdaoden.eu> Message-ID: <1483538831.1573798.837053385.2EB8CAC9@webmail.messagingengine.com> On Wed, Jan 4, 2017, at 08:04, Steffen Nurpmeso wrote: > terrible aliasing and "sequence point" rules, where i think it is > clear what i mean when i write "i = j + ++i" (i believe this is > undefined behaviour). I assume you're imagining it as being equivalent to i = j + i + 1, with a redundant store operation. But why couldn't it equally well mean i = 0; i += j; i += ++i i = 0; i += j; i += (i += 1) Suppose an architecture were such that the most efficient way to assign an additive expression to a variable was to zero it out and add each successive operand to it. The example seems contrived, because it's honestly hard to make a reasonable-sounding case for the prefix operators, and my usual go-to examples require postfix operators and/or pointers.
But to be fair, your example is contrived too; why wouldn't you just do i += j + 1? But for a better example, I was in a discussion a couple weeks ago with someone who thought it was clear what they meant by an expression like this: *a++ = *a And not only was I easily able to come up with two reasonable-looking implementations where it means different things, I guessed wrong on which one they thought it should mean. My examples were stack-based architectures with a "store to address" instruction taking operands in each of the two possible orders, making it natural to evaluate either the RHS or the address-of-LHS first. A more realistic register-based architecture with pipelining might make it more efficient to evaluate one or the other first, or parts of each [assuming more complex expressions than in the example] mixed together, depending on the exact data dependencies in both expressions. > Explicit instrumentation via new language constructs would require > more man power than paying a standard gremium to define semantics > which effectively allow more compiler optimization and gain more > performance and thus a sellable catchphrase, but on the long term > this surely soils the ground on which we stand. > > I for one maintain a codebase that has grown over now almost four > decades and i still cannot say i stand on grounds of pureness, > beauty and elegance; attributes which, were possible, brighten up > everydays work and make a day. > > --steffen From david at kdbarto.org Thu Jan 5 00:51:35 2017 From: david at kdbarto.org (David) Date: Wed, 4 Jan 2017 06:51:35 -0800 Subject: [TUHS] Ifdef hell Message-ID: From: "Doug McIlroy" Subject:Re: [TUHS] Mac OS X is Unix > keeping the code I work on portable between Linux and the Mac requires > more than a bit of ‘ifdef’ hell. | Curmudgeonly comment: I bristle at the coupling of "ifdef” and "portable". | Ifdefs that adjust code for different systems are prima facie | evidence of NON-portability. 
I'll buy "configurable" as a descriptor | for such ifdef'ed code, but not "portable". | | "Ifdef hell" is a fitting image for what has to be one of | Unix's least felicitous contributions to computing. Down | with ifdef! | Doug Doug makes a very good point about ifdef hell. Though I’d claim that it isn’t even “configurable” at some level. Several years ago I was working at Megatek, a graphics h/w vendor. We were porting the X11 suite to various new boards at the rate of about 1 a week it seemed. Needless to say the code became such a mishmash of ifdef’s that you couldn’t figure out what some functions did any longer. You just hoped and prayed that your patch worked properly on the various hardware you were targeting and didn’t break it for anyone else. You ran the unit tests; if they passed, you pushed your change and ran and hid under a rock for a while until you were sure it was safe to come out again. David From ron at ronnatalie.com Thu Jan 5 00:54:14 2017 From: ron at ronnatalie.com (Ron Natalie) Date: Wed, 4 Jan 2017 09:54:14 -0500 Subject: [TUHS] Unix stories In-Reply-To: <1483538831.1573798.837053385.2EB8CAC9@webmail.messagingengine.com> References: <5257291ca0a0e1d80c646cab730129d589c5d707@webmail.yaccman.com> <42922C34-342F-4E86-83E2-3618129139B2@tfeb.org> <20170103004959.GA29088@mcvoy.com> <20170104130434.NQFzLGpVU%steffen@sdaoden.eu> <1483538831.1573798.837053385.2EB8CAC9@webmail.messagingengine.com> Message-ID: <012e01d2669a$6b2f89c0$418e9d40$@ronnatalie.com> > I assume you're imagining it as being equivalent to i = j + i + 1, with a redundant store operation. It's what the language standard specifies, not imagination. C and C++ state that modifying twice between sequence points or using the value other than to compute the value for a store is undefined behavior. The languages put no constraint on what may happen when you do this.
From crossd at gmail.com Thu Jan 5 01:02:41 2017 From: crossd at gmail.com (Dan Cross) Date: Wed, 4 Jan 2017 10:02:41 -0500 Subject: [TUHS] Mac OS X is Unix In-Reply-To: References: <201701032019.v03KJ8oq028944@tahoe.cs.Dartmouth.EDU> Message-ID: On Wed, Jan 4, 2017 at 7:26 AM, Tim Bradshaw wrote: > On 4 Jan 2017, at 03:50, Dan Cross wrote: > > Interesting, but I'm curious how this would work in the context of C (or a > C-like variant)? The code must parse and type-check in accordance with the > existing standard, no? So if the `if(LINUX)` branch referred to, say, > Linux-specific structure members, then how would the compiler recognize > that avoid spitting out a diagnostic/erroring out? The existing C language > seems defined to expressly disallow this sort of thing. > > Common Lisp has a notion of 'suppressing the reader' which basically means > that the reader (which in CL is the thing which turns a stream of > characters into the data structure that is the source code of the language) > will do just enough to consume a form, but not any more than that. In > particular it will ignore all sorts of things which would make it very > unhappy if it looked too closely at them. And there are then read-time > conditionals which will cause the reader to suppress the following thing, > or not. It seems to me that, even without defining how things work in the > very fine-grained way that CL does (where the data structure the reader > produces is defined and you can program the reader itself), a C-like > language could define what it means to 'suppress' a form, and support > conditionals which did that. I think, reading again, that this might be > quite close to your compile-time-evaluated idea. > What I'm proposing is almost exactly like Common Lisp's `#-` and `#+` (these use reader suppression, of course). 
Delving further into the realm of reader macros and other Lisp-like reader things is, I think, a mistake: the complexity of the reader is arguably a wart on the side of Common Lisp. Of course, in C one doesn't have the notion of forms in the Lispy sense; it's a statement based language. So the syntactical construct consumed by a compile-time conditional would have to be specified to a greater level of precision (e.g., an `if` selection-statement in C is followed by general statement, in all of its richness). But the idea here is to create something that works with the syntax of the language, not separate from it, which is Lispy in character. A major problem of the preprocessor is that it's sort of bolted onto the side of the language and doesn't work with it very cleanly. On the other hand, it's kind of neat that one can use the preprocessor for things that aren't C. The thing to avoid is 'language in a string', where one language contains > another language in strings (or equivalent), because then you end up > putting the inner language together by concatenating strings, which can > cross-cut constructs in the inner language in a horrible way. C is the > language in a string of the C preprocessor. Where I work we use a tool > which has a deeply horrible preprocessor which has the main syntax as its > language in a string. That syntax *in turn* has a whole other language in > its strings. Every time I look at this I want to hit someone. > Precisely why I wouldn't want reader macros in all of their hideous glory. Consider LOOP. Ugh. - Dan C. -------------- next part -------------- An HTML attachment was scrubbed... URL: From aps at ieee.org Thu Jan 5 00:59:56 2017 From: aps at ieee.org (Armando Stettner) Date: Wed, 4 Jan 2017 06:59:56 -0800 Subject: [TUHS] unsubsribe Message-ID: An HTML attachment was scrubbed... 
URL: From stephen.strowes at gmail.com Thu Jan 5 01:54:35 2017 From: stephen.strowes at gmail.com (sds) Date: Wed, 4 Jan 2017 16:54:35 +0100 Subject: [TUHS] Leap Second In-Reply-To: References: <20161229002105.GB94858@server.rulingia.com> <0d5eeef9-3dbb-0ddd-1b22-51fecee735d8@gmail.com> Message-ID: <93b155fe-be96-c812-7b67-810d8ecc9e8e@gmail.com> On 03/01/2017 16:06, Tony Finch wrote: > sds wrote: > >> Important question: did anybody have an "exciting" new year because of a leap >> second bug? > I've been collecting failure reports on the LEAPSECS list > > https://pairlist6.pair.net/pipermail/leapsecs/2017-January/thread.html This is great, thanks! S. From random832 at fastmail.com Thu Jan 5 01:59:03 2017 From: random832 at fastmail.com (Random832) Date: Wed, 04 Jan 2017 10:59:03 -0500 Subject: [TUHS] Unix stories In-Reply-To: <012e01d2669a$6b2f89c0$418e9d40$@ronnatalie.com> References: <5257291ca0a0e1d80c646cab730129d589c5d707@webmail.yaccman.com> <42922C34-342F-4E86-83E2-3618129139B2@tfeb.org> <20170103004959.GA29088@mcvoy.com> <20170104130434.NQFzLGpVU%steffen@sdaoden.eu> <1483538831.1573798.837053385.2EB8CAC9@webmail.messagingengine.com> <012e01d2669a$6b2f89c0$418e9d40$@ronnatalie.com> Message-ID: <1483545543.1599443.837188969.6EAAD62B@webmail.messagingengine.com> On Wed, Jan 4, 2017, at 09:54, Ron Natalie wrote: > > I assume you're imagining it as being equivalent to i = j + i + 1, with a redundant store operation. > > It's what the language standard specifies, not imagination. C and C++ > state that modifying twice between sequence points or using the value > other than to compute the value for a store is undefined behavior. > The languages put no constraint on what may happen when you do this. 
But I'm talking about the alternate universe in which the person I was replying to is justified in thinking that it's clear what he means, vs a 'plausible' implementation that could arise from methods of translating expressions into machine operations (since people don't tend to respond to "it's undefined because it is, and the compiler can arbitrarily mess things up because it's allowed to by the fact that it's undefined" without a plausible theory of why something might ever behave in a way other than the obvious way) From rminnich at gmail.com Thu Jan 5 02:13:51 2017 From: rminnich at gmail.com (ron minnich) Date: Wed, 04 Jan 2017 16:13:51 +0000 Subject: [TUHS] Ifdef hell In-Reply-To: References: Message-ID: I forgot another good example of portability *mostly* minus ifdef, mainly plan9ports. This is the full set of plan 9 tools, including graphics tools like the rio window manager, and it's pretty much ifdef-free. I just did a quick grep for ifdef and it's basically ifdef cplusplus in some include files, which seems unavoidable nowadays, and of course anything imported from elsewhere has the usual pile of ifdefs. the configure script is just echo read the README file same for Makefile. To build, you just run an INSTALL script which builds it all. Compilers are fast. Makefile errors are avoided by just building it all, each time. Would that the gnubin tools had learned these lessons. On Wed, Jan 4, 2017 at 6:52 AM David wrote: > From: "Doug McIlroy" > Subject:Re: [TUHS] Mac OS X is Unix > > > keeping the code I work on portable between Linux and the Mac requires > > more than a bit of ‘ifdef’ hell. > > | Curmudgeonly comment: I bristle at the coupling of "ifdef” and > "portable". > | Ifdefs that adjust code for different systems are prima facie > | evidence of NON-portability. I'll buy "configurable" as a descriptor > | for such ifdef'ed code, but not "portable". 
> > | > > | "Ifdef hell" is a fitting image for what has to be one of > | Unix's least felicitous contributions to computing. Down > | with ifdef! > | Doug > > Doug makes a very good point about ifdef hell. Though I’d claim that it > isn’t even “configurable” at some level. > > Several years ago I was working at Megatek, a graphics h/w vendor. We were > porting the X11 suite to various new boards at the rate of about 1 a week > it seemed. Needless to say the code became such a mishmash of ifdef’s that > you couldn’t figure out what some functions did any longer. You just hoped > and prayed that your patch worked properly on the various hardware you were > targeting and didn’t break it for anyone else. You ran the unit tests, if > they passed you pushed your change and ran an hid under a rock for a while > until you were sure it was safe to come out again. > > David -------------- next part -------------- An HTML attachment was scrubbed... URL: From rminnich at gmail.com Thu Jan 5 02:17:59 2017 From: rminnich at gmail.com (ron minnich) Date: Wed, 04 Jan 2017 16:17:59 +0000 Subject: [TUHS] the guy who brought up SVr4 on Sun machines In-Reply-To: <20170104033512.GA22116@mcvoy.com> References: <20170104024127.GN12264@mcvoy.com> <20170104033512.GA22116@mcvoy.com> Message-ID: Larry, had Sun open sourced SunOS, as you fought so hard to make happen, Linux might not have happened as it did. SunOS was really good. Chalk up another win for ATT! OTOH, I bet Sun would have done a CDL type license, which would have made it all pointless. To this day, I run into problems talking ZFS to people, entirely because of the CDL. ron -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From steffen at sdaoden.eu Thu Jan 5 02:22:38 2017 From: steffen at sdaoden.eu (Steffen Nurpmeso) Date: Wed, 04 Jan 2017 17:22:38 +0100 Subject: [TUHS] Unix stories In-Reply-To: <1483538831.1573798.837053385.2EB8CAC9@webmail.messagingengine.com> References: <5257291ca0a0e1d80c646cab730129d589c5d707@webmail.yaccman.com> <42922C34-342F-4E86-83E2-3618129139B2@tfeb.org> <20170103004959.GA29088@mcvoy.com> <20170104130434.NQFzLGpVU%steffen@sdaoden.eu> <1483538831.1573798.837053385.2EB8CAC9@webmail.messagingengine.com> Message-ID: <20170104162238.qUWzAcIu7%steffen@sdaoden.eu> Random832 wrote: |On Wed, Jan 4, 2017, at 08:04, Steffen Nurpmeso wrote: |> terrible aliasing and "sequence point" rules, where i think it is |> clear what i mean when i write "i = j + ++i" (i believe this is |> undefined behaviour). | |I assume you're imagining it as being equivalent to i = j + i + 1, with |a redundant store operation. | |But why couldn't it equally well mean No i don't, and the thing is that it could definitely not equally mean anything. That is exactly the point. I skip quite a lot here. ... |example is contrived too; why wouldn't you just do i += j + 1? But for a |better example, I was in a discussion a couple weeks ago with someone |who thought it was clear what they meant by an expression like this: | |*a++ = *a | |And not only was I easily able to come up with two reasonable-looking |implementations where it means different things, I guessed wrong on |which one they thought it should mean. My examples were stack-based |architectures with a "store to address" instruction taking operands in So if we agree that a high level language should abstract such problems from the programmer, with a right hand side that is evaluated and stored in the target of the left hand side, then all is fine. |each of the two possible orders, making it natural to evaluate either |the RHS or the address-of-LHS first.
A more realistic register-based |architecture with pipelining might make it more efficient to evaluate |one or the other first, or parts of each [assuming more complex |expressions than in the example] mixed together, depending on the exact |data dependencies in both expressions. It is all right by me. "Can't say that on the radio." --steffen From steffen at sdaoden.eu Thu Jan 5 02:30:17 2017 From: steffen at sdaoden.eu (Steffen Nurpmeso) Date: Wed, 04 Jan 2017 17:30:17 +0100 Subject: [TUHS] Unix stories In-Reply-To: <1483545543.1599443.837188969.6EAAD62B@webmail.messagingengine.com> References: <5257291ca0a0e1d80c646cab730129d589c5d707@webmail.yaccman.com> <42922C34-342F-4E86-83E2-3618129139B2@tfeb.org> <20170103004959.GA29088@mcvoy.com> <20170104130434.NQFzLGpVU%steffen@sdaoden.eu> <1483538831.1573798.837053385.2EB8CAC9@webmail.messagingengine.com> <012e01d2669a$6b2f89c0$418e9d40$@ronnatalie.com> <1483545543.1599443.837188969.6EAAD62B@webmail.messagingengine.com> Message-ID: <20170104163017.XtxbzN7PQ%steffen@sdaoden.eu> Random832 wrote: |On Wed, Jan 4, 2017, at 09:54, Ron Natalie wrote: |>> I assume you're imagining it as being equivalent to i = j + i + \ |>> 1, with a redundant store operation. |> |> It's what the language standard specifies, not imagination. C and C++ |> state that modifying twice between sequence points or using the value |> other than to compute the value for a store is undefined behavior. |> The languages put no constraint on what may happen when you do this. 
| |But I'm talking about the alternate universe in which the person I was |replying to is justified in thinking that it's clear what he means, vs a |'plausible' implementation that could arise from methods of translating |expressions into machine operations (since people don't tend to respond |to "it's undefined because it is, and the compiler can arbitrarily mess |things up because it's allowed to by the fact that it's undefined" |without a plausible theory of why something might ever behave in a way |other than the obvious way) It is clear in assembler, and C was meant, as i understand it, as a higher-level portable abstraction of assembler. Which alternate universe do you refer to? --steffen From schily at schily.net Thu Jan 5 02:31:09 2017 From: schily at schily.net (Joerg Schilling) Date: Wed, 04 Jan 2017 17:31:09 +0100 Subject: [TUHS] the guy who brought up SVr4 on Sun machines In-Reply-To: References: <20170104024127.GN12264@mcvoy.com> <20170104033512.GA22116@mcvoy.com> Message-ID: <586d234d.vf4JCu1Ye3gumwfc%schily@schily.net> ron minnich wrote: > Larry, had Sun open sourced SunOS, as you fought so hard to make happen, > Linux might not have happened as it did. SunOS was really good. Chalk up > another win for ATT! Well, Sun opensourced Solaris and Solaris is based on the SunOS sources. Note that the Svr4 kernel was derived from the SunOS-4.0 kernel.
Jörg -- EMail:joerg at schily.net (home) Jörg Schilling D-13353 Berlin joerg.schilling at fokus.fraunhofer.de (work) Blog: http://schily.blogspot.com/ URL: http://cdrecord.org/private/ http://sourceforge.net/projects/schilytools/files/ From random832 at fastmail.com Thu Jan 5 02:32:24 2017 From: random832 at fastmail.com (Random832) Date: Wed, 04 Jan 2017 11:32:24 -0500 Subject: [TUHS] Unix stories In-Reply-To: <20170104163017.XtxbzN7PQ%steffen@sdaoden.eu> References: <5257291ca0a0e1d80c646cab730129d589c5d707@webmail.yaccman.com> <42922C34-342F-4E86-83E2-3618129139B2@tfeb.org> <20170103004959.GA29088@mcvoy.com> <20170104130434.NQFzLGpVU%steffen@sdaoden.eu> <1483538831.1573798.837053385.2EB8CAC9@webmail.messagingengine.com> <012e01d2669a$6b2f89c0$418e9d40$@ronnatalie.com> <1483545543.1599443.837188969.6EAAD62B@webmail.messagingengine.com> <20170104163017.XtxbzN7PQ%steffen@sdaoden.eu> Message-ID: <1483547544.1606930.837228297.014AB061@webmail.messagingengine.com> On Wed, Jan 4, 2017, at 11:30, Steffen Nurpmeso wrote: > It is clear in assembler The operation you described does not exist as a single-statement construct in assembler for any architecture I'm familiar with. > , and C was ment, as i understand it, as > a higher-level portable abstraction of assembler. Which alternate > universe do you refer to? > > --steffen From rminnich at gmail.com Thu Jan 5 02:34:52 2017 From: rminnich at gmail.com (ron minnich) Date: Wed, 04 Jan 2017 16:34:52 +0000 Subject: [TUHS] the guy who brought up SVr4 on Sun machines In-Reply-To: <586d234d.vf4JCu1Ye3gumwfc%schily@schily.net> References: <20170104024127.GN12264@mcvoy.com> <20170104033512.GA22116@mcvoy.com> <586d234d.vf4JCu1Ye3gumwfc%schily@schily.net> Message-ID: On Wed, Jan 4, 2017 at 8:31 AM Joerg Schilling wrote: > > > Well, Sun opensourced Solaris and Solaris is based on the SunOS sources. > > > too little, years too late, with a license few (including me) really felt comfortable with. 
Larry IIRC was pushing for a basic BSD license. I don't know if any of us really understood the full implications of the GPL for kernels back then -- both positive and negative. -------------- next part -------------- An HTML attachment was scrubbed... URL: From random832 at fastmail.com Thu Jan 5 02:35:11 2017 From: random832 at fastmail.com (Random832) Date: Wed, 04 Jan 2017 11:35:11 -0500 Subject: [TUHS] Unix stories In-Reply-To: <20170104162238.qUWzAcIu7%steffen@sdaoden.eu> References: <5257291ca0a0e1d80c646cab730129d589c5d707@webmail.yaccman.com> <42922C34-342F-4E86-83E2-3618129139B2@tfeb.org> <20170103004959.GA29088@mcvoy.com> <20170104130434.NQFzLGpVU%steffen@sdaoden.eu> <1483538831.1573798.837053385.2EB8CAC9@webmail.messagingengine.com> <20170104162238.qUWzAcIu7%steffen@sdaoden.eu> Message-ID: <1483547711.1607300.837230857.4B111F27@webmail.messagingengine.com> On Wed, Jan 4, 2017, at 11:22, Steffen Nurpmeso wrote: > Random832 wrote: > |On Wed, Jan 4, 2017, at 08:04, Steffen Nurpmeso wrote: > |> terrible aliasing and "sequence point" rules, where i think it is > |> clear what i mean when i write "i = j + ++i" (i believe this is > |> undefined behaviour). > | > |I assume you're imagining it as being equivalent to i = j + i + 1, with > |a redundant store operation. > | > |But why couldn't it equally well mean > > No i don't, Then I guessed wrong. Again. (So much for "clear", I suppose). But you're the one who "think[s] it's clear what [you] mean by it"; so you simply *must* have a meaning in mind. Why not explain what it is? From rminnich at gmail.com Thu Jan 5 02:41:06 2017 From: rminnich at gmail.com (ron minnich) Date: Wed, 04 Jan 2017 16:41:06 +0000 Subject: [TUHS] # as first character of file Message-ID: I just went looking at the v6 source to confirm a memory, namely that cpp was only invoked if a # was the first character in the file. 
Hence, this: https://github.com/dspinellis/unix-history-repo/blob/Research-V6-Snapshot-Development/usr/source/c/c01.c#L1 People occasionally forgot this, and hilarity ensued. Now I'm curious. Anyone know when that convention ended? ron -------------- next part -------------- An HTML attachment was scrubbed... URL: From lm at mcvoy.com Thu Jan 5 02:46:30 2017 From: lm at mcvoy.com (Larry McVoy) Date: Wed, 4 Jan 2017 08:46:30 -0800 Subject: [TUHS] the guy who brought up SVr4 on Sun machines In-Reply-To: <586d234d.vf4JCu1Ye3gumwfc%schily@schily.net> References: <20170104024127.GN12264@mcvoy.com> <20170104033512.GA22116@mcvoy.com> <586d234d.vf4JCu1Ye3gumwfc%schily@schily.net> Message-ID: <20170104164630.GA3405@mcvoy.com> On Wed, Jan 04, 2017 at 05:31:09PM +0100, Joerg Schilling wrote: > ron minnich wrote: > > > Larry, had Sun open sourced SunOS, as you fought so hard to make happen, > > Linux might not have happened as it did. SunOS was really good. Chalk up > > another win for ATT! > > Well, Sun opensourced Solaris and Solaris is based on the SunOS sources. > > Note that the Svr4 kernel was derived from the SunOS-4.0 kernel. I was in the kernel group at Sun at the time. The pictures I posted are of the guy that did the bring up. In no way was SVr4 even remotely derived from the SunOS 4.0 kernel. The only relation the two had was that both were derived from the original Unix sources but by this time they had diverged so much there was very little in common. Very little. There was good reason for all the SunOS people being butt hurt, Scooter threw out a lot of very hard work that he wasn't smart enough to value. I get the $200M part, he didn't get the value part. From clemc at ccc.com Thu Jan 5 02:46:58 2017 From: clemc at ccc.com (Clem Cole) Date: Wed, 4 Jan 2017 11:46:58 -0500 Subject: [TUHS] # as first character of file In-Reply-To: References: Message-ID: IIRC the V7 compiler did not require it. It's possible Typesetter C may have also. 
On Wed, Jan 4, 2017 at 11:41 AM, ron minnich wrote: > I just went looking at the v6 source to confirm a memory, namely that cpp > was only invoked if a # was the first character in the file. Hence, this: > https://github.com/dspinellis/unix-history-repo/blob/Research-V6-Snapshot- > Development/usr/source/c/c01.c#L1 > > People occasionally forgot this, and hilarity ensued. > > Now I'm curious. Anyone know when that convention ended? > > ron > -------------- next part -------------- An HTML attachment was scrubbed... URL: From steffen at sdaoden.eu Thu Jan 5 02:51:20 2017 From: steffen at sdaoden.eu (Steffen Nurpmeso) Date: Wed, 04 Jan 2017 17:51:20 +0100 Subject: [TUHS] Unix stories In-Reply-To: <1483547544.1606930.837228297.014AB061@webmail.messagingengine.com> References: <5257291ca0a0e1d80c646cab730129d589c5d707@webmail.yaccman.com> <42922C34-342F-4E86-83E2-3618129139B2@tfeb.org> <20170103004959.GA29088@mcvoy.com> <20170104130434.NQFzLGpVU%steffen@sdaoden.eu> <1483538831.1573798.837053385.2EB8CAC9@webmail.messagingengine.com> <012e01d2669a$6b2f89c0$418e9d40$@ronnatalie.com> <1483545543.1599443.837188969.6EAAD62B@webmail.messagingengine.com> <20170104163017.XtxbzN7PQ%steffen@sdaoden.eu> <1483547544.1606930.837228297.014AB061@webmail.messagingengine.com> Message-ID: <20170104165120.rYUyVGovj%steffen@sdaoden.eu> Random832 wrote: |On Wed, Jan 4, 2017, at 11:30, Steffen Nurpmeso wrote: |> It is clear in assembler | |The operation you described does not exist as a single-statement |construct in assembler for any architecture I'm familiar with. Ok, but that quite clearly was not what i have meant. I meant that if you program in assembler, well, all those newer assembler languages that i have seen, the target of an operation is the target of a store, and say if it is a register that is also one of the sources, it means nothing, from the language side. 
ARM has even predicates that perform operations on that value before the store, even if the source is the same as the destination. It simply strikes me as absurd that i, in C, cannot simply say what i want and let the C compiler with all its knowledge of the target system decide what to do about it. --steffen From random832 at fastmail.com Thu Jan 5 02:54:02 2017 From: random832 at fastmail.com (Random832) Date: Wed, 04 Jan 2017 11:54:02 -0500 Subject: [TUHS] Unix stories In-Reply-To: <20170104165120.rYUyVGovj%steffen@sdaoden.eu> References: <5257291ca0a0e1d80c646cab730129d589c5d707@webmail.yaccman.com> <42922C34-342F-4E86-83E2-3618129139B2@tfeb.org> <20170103004959.GA29088@mcvoy.com> <20170104130434.NQFzLGpVU%steffen@sdaoden.eu> <1483538831.1573798.837053385.2EB8CAC9@webmail.messagingengine.com> <012e01d2669a$6b2f89c0$418e9d40$@ronnatalie.com> <1483545543.1599443.837188969.6EAAD62B@webmail.messagingengine.com> <20170104163017.XtxbzN7PQ%steffen@sdaoden.eu> <1483547544.1606930.837228297.014AB061@webmail.messagingengine.com> <20170104165120.rYUyVGovj%steffen@sdaoden.eu> Message-ID: <1483548842.1612851.837252537.064CFDA2@webmail.messagingengine.com> On Wed, Jan 4, 2017, at 11:51, Steffen Nurpmeso wrote: > Ok, but that quite clearly was not what i have meant. I meant > that if you program in assembler, well, all those newer assembler > languages that i have seen, the target of an operation is the > target of a store, and say if it is a register that is also one of > the sources, it means nothing, from the language side. Yes but you are storing *twice*, two different values, to the same variable, in the same statement. There's no operation in any assembler language that does that, and at this point I honestly don't know what value you expect to 'win'. > ARM has > even predicates that perform operations on that value before the store, even if the source is the same as the destination.
It > simply strikes me as absurd that i, in C, cannot simply say what > i want Why do you think that "i = ... + ++i" is a reasonable way to say what you want? > and let the C compiler with all its knowledge of the target > system decide what to do about it. > > --steffen From schily at schily.net Thu Jan 5 02:57:33 2017 From: schily at schily.net (Joerg Schilling) Date: Wed, 04 Jan 2017 17:57:33 +0100 Subject: [TUHS] the guy who brought up SVr4 on Sun machines In-Reply-To: References: <20170104024127.GN12264@mcvoy.com> <20170104033512.GA22116@mcvoy.com> <586d234d.vf4JCu1Ye3gumwfc%schily@schily.net> Message-ID: <586d297d.1Xe1F+MbZU5jlMCH%schily@schily.net> ron minnich wrote: > too little, years too late, with a license few (including me) really felt > comfortable with. Larry IIRC was pushing for a basic BSD license. I don't > know if any of us really understood the full implications of the GPL for > kernels back then -- both positive and negative. The Sun employees have been asked whether they would support the BSD license and many of them said that they would terminate their contracts if Sun used BSD. The GPL cannot be used as it is far too limiting. I am happy with the license - the problem you may have is FUD against the CDDL spread by the FSF...
Jörg -- EMail:joerg at schily.net (home) Jörg Schilling D-13353 Berlin joerg.schilling at fokus.fraunhofer.de (work) Blog: http://schily.blogspot.com/ URL: http://cdrecord.org/private/ http://sourceforge.net/projects/schilytools/files/ From ron at ronnatalie.com Thu Jan 5 02:58:41 2017 From: ron at ronnatalie.com (Ron Natalie) Date: Wed, 4 Jan 2017 11:58:41 -0500 Subject: [TUHS] Unix stories In-Reply-To: <1483548842.1612851.837252537.064CFDA2@webmail.messagingengine.com> References: <5257291ca0a0e1d80c646cab730129d589c5d707@webmail.yaccman.com> <42922C34-342F-4E86-83E2-3618129139B2@tfeb.org> <20170103004959.GA29088@mcvoy.com> <20170104130434.NQFzLGpVU%steffen@sdaoden.eu> <1483538831.1573798.837053385.2EB8CAC9@webmail.messagingengine.com> <012e01d2669a$6b2f89c0$418e9d40$@ronnatalie.com> <1483545543.1599443.837188969.6EAAD62B@webmail.messagingengine.com> <20170104163017.XtxbzN7PQ%steffen@sdaoden.eu> <1483547544.1606930.837228297.014AB061@webmail.messagingengine.com> <20170104165120.rYUyVGovj%steffen@sdaoden.eu> <1483548842.1612851.837252537.064CFDA2@webmail.messagingengine.com> Message-ID: <017201d266ab$cda885a0$68f990e0$@ronnatalie.com> There's a trade-off between allowing the compiler to reorder things and having a defined order of operations. Steps like that are well-defined in Java for instance. C lets the compiler do what it sees fit. Note that it's not necessarily any better in assembler. There are RISC architectures where load-followed-by-store and vice versa may not always be valid if done in quick succession. Requiring the compiler to insert sequence points typically wastes a lot of cycles. Assembler programmers tend to think about what they are doing, the C compiler tries to do some of this on its own and it's not clairvoyant.
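For readers following the thread: the construct under discussion, `i = j + ++i`, modifies `i` twice with no sequence point between the modifications, which the C standard leaves undefined (6.5p2 in C99/C11 terms). A sketch of a well-defined rewrite that makes the intended ordering explicit; the helper name is made up for illustration:

```c
#include <assert.h>

/* "i = j + ++i" stores to i twice with no intervening sequence
 * point, so C promises nothing about the result.  Splitting the
 * expression into two statements sequences the side effects. */
static int add_after_increment(int i, int j)
{
    ++i;        /* the increment completes first ...         */
    i = j + i;  /* ... then the addition and the final store */
    return i;
}
```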
From steffen at sdaoden.eu Thu Jan 5 03:03:24 2017 From: steffen at sdaoden.eu (Steffen Nurpmeso) Date: Wed, 04 Jan 2017 18:03:24 +0100 Subject: [TUHS] Unix stories In-Reply-To: <1483547711.1607300.837230857.4B111F27@webmail.messagingengine.com> References: <5257291ca0a0e1d80c646cab730129d589c5d707@webmail.yaccman.com> <42922C34-342F-4E86-83E2-3618129139B2@tfeb.org> <20170103004959.GA29088@mcvoy.com> <20170104130434.NQFzLGpVU%steffen@sdaoden.eu> <1483538831.1573798.837053385.2EB8CAC9@webmail.messagingengine.com> <20170104162238.qUWzAcIu7%steffen@sdaoden.eu> <1483547711.1607300.837230857.4B111F27@webmail.messagingengine.com> Message-ID: <20170104170324.cIsJDV7So%steffen@sdaoden.eu> Random832 wrote: |On Wed, Jan 4, 2017, at 11:22, Steffen Nurpmeso wrote: |> Random832 wrote: |>|On Wed, Jan 4, 2017, at 08:04, Steffen Nurpmeso wrote: |>|> terrible aliasing and "sequence point" rules, where i think it is |>|> clear what i mean when i write "i = j + ++i" (i believe this is |>|> undefined behaviour). |>| |>|I assume you're imagining it as being equivalent to i = j + i + 1, with |>|a redundant store operation. |>| |>|But why couldn't it equally well mean |> |> No i don't, | |Then I guessed wrong. Again. (So much for "clear", I suppose). But |you're the one who "think[s] it's clear what [you] mean by it"; so you |simply *must* have a meaning in mind. Why not explain what it is? Hey. I was a football (we play it with the feet) goalkeeper, and i can assure you i can trump louder than many alike. 
--steffen From schily at schily.net Thu Jan 5 03:02:51 2017 From: schily at schily.net (Joerg Schilling) Date: Wed, 04 Jan 2017 18:02:51 +0100 Subject: [TUHS] the guy who brought up SVr4 on Sun machines In-Reply-To: <20170104164630.GA3405@mcvoy.com> References: <20170104024127.GN12264@mcvoy.com> <20170104033512.GA22116@mcvoy.com> <586d234d.vf4JCu1Ye3gumwfc%schily@schily.net> <20170104164630.GA3405@mcvoy.com> Message-ID: <586d2abb.A5j4GovJtyzlD+AQ%schily@schily.net> Larry McVoy wrote: > > Note that the Svr4 kernel was derived from the SunOS-4.0 kernel. > > I was in the kernel group at Sun at the time. The pictures I posted > are of the guy that did the bring up. > > In no way was SVr4 even remotely derived from the SunOS 4.0 kernel. I cannot confirm this at all. I have access to both SunOS-4.x and Solaris sources and it is obvious that the SVr4 and Solaris kernel code is very similar. You could e.g. convert device drivers easily from SunOS-4.x to Solaris and I did this for my drivers. I remember one notable difference between SunOS-4.x and Svr4/Solaris: The kernel function as_hole() has been renamed to as_gap() on request from AT&T ;-) On the other side, the kernels SVr3 and SVr4 are very different and it is close to impossible to port a SVr3 device driver to SVr4. 
Jörg -- EMail:joerg at schily.net (home) Jörg Schilling D-13353 Berlin joerg.schilling at fokus.fraunhofer.de (work) Blog: http://schily.blogspot.com/ URL: http://cdrecord.org/private/ http://sourceforge.net/projects/schilytools/files/ From lm at mcvoy.com Thu Jan 5 03:06:35 2017 From: lm at mcvoy.com (Larry McVoy) Date: Wed, 4 Jan 2017 09:06:35 -0800 Subject: [TUHS] the guy who brought up SVr4 on Sun machines In-Reply-To: <586d297d.1Xe1F+MbZU5jlMCH%schily@schily.net> References: <20170104024127.GN12264@mcvoy.com> <20170104033512.GA22116@mcvoy.com> <586d234d.vf4JCu1Ye3gumwfc%schily@schily.net> <586d297d.1Xe1F+MbZU5jlMCH%schily@schily.net> Message-ID: <20170104170635.GB3405@mcvoy.com> On Wed, Jan 04, 2017 at 05:57:33PM +0100, Joerg Schilling wrote: > ron minnich wrote: > > > too little, years too late, with a license few (including me) really felt > > comfortable with. Larry IIRC was pushing for a basic BSD license. I don't > > know if any of us really understood the full implications of the GPL for > > kernels back then -- both positive and negative. > > The Sun employees have been asked whether they would support the BSD license > and many of then said that they will terminate their contract if Sun uses BSD. Huh? First of all, Sun employees don't have a contract, they are (or were) at will employees. So I think you mean they would quit if it were BSD licensed. Second, I've _never_ heard a single Sun person say they would quit if Sun open sourced something under the BSD license. I'm sure I've heard someone say they didn't like that license but never heard anyone giving up their job over it. 
From steffen at sdaoden.eu Thu Jan 5 03:08:48 2017 From: steffen at sdaoden.eu (Steffen Nurpmeso) Date: Wed, 04 Jan 2017 18:08:48 +0100 Subject: [TUHS] Unix stories In-Reply-To: <1483548842.1612851.837252537.064CFDA2@webmail.messagingengine.com> References: <5257291ca0a0e1d80c646cab730129d589c5d707@webmail.yaccman.com> <42922C34-342F-4E86-83E2-3618129139B2@tfeb.org> <20170103004959.GA29088@mcvoy.com> <20170104130434.NQFzLGpVU%steffen@sdaoden.eu> <1483538831.1573798.837053385.2EB8CAC9@webmail.messagingengine.com> <012e01d2669a$6b2f89c0$418e9d40$@ronnatalie.com> <1483545543.1599443.837188969.6EAAD62B@webmail.messagingengine.com> <20170104163017.XtxbzN7PQ%steffen@sdaoden.eu> <1483547544.1606930.837228297.014AB061@webmail.messagingengine.com> <20170104165120.rYUyVGovj%steffen@sdaoden.eu> <1483548842.1612851.837252537.064CFDA2@webmail.messagingengine.com> Message-ID: <20170104170848.SHZpzJfXR%steffen@sdaoden.eu> Random832 wrote: |On Wed, Jan 4, 2017, at 11:51, Steffen Nurpmeso wrote: |> Ok, but that quite clearly was not what i have meant. I meant |> that if you program in assembler, well, all those newer assembler |> languages that i have seen, the target of an operation is the |> target of a store, and say if it is a register that is also one of |> the sources, it means nothing, from the language side. | |Yes but you are storing *twice*, two different values, to the same |variable, in the same statement. There's no operation in any assembler |language that does that, and at this point I honestly don't know what |value you expect to 'win'. Hm. Yet this is exactly what i want? (Hihi. Don't be offended, i really have already forgotten the example. It was something like "*i = j + *i++" or the like..) |> ARM has |> even predicates that perform operations on that value before the |> store, even if the source is the same as the destination. It |> simply strives me absurd that i, in C, cannot simply say what |> i want | |Why do you think that "i = ... 
+ ++i" is a reasonable way to say what |you want? Man, i write it down, and it even stands several code iterations? That must be it, then! |> and let the C compiler with all its knowledge of the target |> system decide what to do about it. Ciao. --steffen From clemc at ccc.com Thu Jan 5 03:08:40 2017 From: clemc at ccc.com (Clem Cole) Date: Wed, 4 Jan 2017 12:08:40 -0500 Subject: [TUHS] SunOS vs Linux Message-ID: On Wed, Jan 4, 2017 at 11:17 AM, ron minnich wrote: > Larry, had Sun open sourced SunOS, as you fought so hard to make happen, > Linux might not have happened as it did. SunOS was really good. Chalk up > another win for ATT! > FWIW: I disagree. For details, look at my discussion of rewriting Linux in Rust on Quora. But a quick point is this .... Linux originally took off (and was successful) not because of GPL, but in spite of it, and later the GPL would help it. But it was not the GPL per se that made Linux vs BSD vs SunOS et al. What made Linux happen was the BSDi/UCB vs AT&T case. At the time, a lot of hackers (myself included) thought the case was about *copyright*. It was not, it was about *trade secret* and the ideas around UNIX. *i.e.* folks like us were "mentally contaminated" with the AT&T Intellectual Property. When the case came, folks like me that were running 386BSD, which would later beget FreeBSD et al, got scared. At that time, *BSD (and SunOS) were much farther along in development and stability. But .... many of us thought Linux would insulate us from losing UNIX on cheap HW because there was no AT&T copyrighted code in it. Sadly, the truth is that if AT&T had won the case, *all UNIX-like systems* would have had to be removed from the market in the USA and EU [NATO-allies for sure]. That said, the fact that *BSD and Linux were in the wild would have made it hard to enforce, and at a "Free" (as in beer) price it may have been hard to make it stick.
But the point is that it was a misunderstanding of a legal thing that made Linux "valuable" to us, not the implementation. If SunOS had been available, it would not have been any different. It would have been thought of as based on the AT&T IP, through trade secret rather than copyright. Clem -------------- next part -------------- An HTML attachment was scrubbed... URL: From lm at mcvoy.com Thu Jan 5 03:10:33 2017 From: lm at mcvoy.com (Larry McVoy) Date: Wed, 4 Jan 2017 09:10:33 -0800 Subject: [TUHS] the guy who brought up SVr4 on Sun machines In-Reply-To: <586d2abb.A5j4GovJtyzlD+AQ%schily@schily.net> References: <20170104024127.GN12264@mcvoy.com> <20170104033512.GA22116@mcvoy.com> <586d234d.vf4JCu1Ye3gumwfc%schily@schily.net> <20170104164630.GA3405@mcvoy.com> <586d2abb.A5j4GovJtyzlD+AQ%schily@schily.net> Message-ID: <20170104171033.GC3405@mcvoy.com> On Wed, Jan 04, 2017 at 06:02:51PM +0100, Joerg Schilling wrote: > Larry McVoy wrote: > > > > Note that the Svr4 kernel was derived from the SunOS-4.0 kernel. > > > > I was in the kernel group at Sun at the time. The pictures I posted > > are of the guy that did the bring up. > > > > In no way was SVr4 even remotely derived from the SunOS 4.0 kernel. > > I cannot confirm this at all. > > I have access to both SunOS-4.x and Solaris sources and it is obvious that the I'm not sure how you have legal access to the SunOS 4.x code. I'd love a copy of that source but so far as I know it's locked up. > SVr4 and Solaris kernel code is very similar. Sure it's similar. The process was:
    untar the SVr4 code
    take anything useful from the SunOS code
    try and make it compat
If you want to call that derived from that's your call. In my mind "derived from" would mean start with the SunOS code and make it SVr4 compat. That is *not* what AT&T paid $200M to have happen. They knew that System V was a non-starter and they wanted all the SunOS goodness in System V.
From schily at schily.net Thu Jan 5 03:11:20 2017 From: schily at schily.net (Joerg Schilling) Date: Wed, 04 Jan 2017 18:11:20 +0100 Subject: [TUHS] the guy who brought up SVr4 on Sun machines In-Reply-To: <20170104170635.GB3405@mcvoy.com> References: <20170104024127.GN12264@mcvoy.com> <20170104033512.GA22116@mcvoy.com> <586d234d.vf4JCu1Ye3gumwfc%schily@schily.net> <586d297d.1Xe1F+MbZU5jlMCH%schily@schily.net> <20170104170635.GB3405@mcvoy.com> Message-ID: <586d2cb8.bX/Qr+lraMBvukp9%schily@schily.net> Larry McVoy wrote: > > The Sun employees have been asked whether they would support the BSD license > > and many of then said that they will terminate their contract if Sun uses BSD. > > Huh? First of all, Sun employees don't have a contract, they are (or were) > at will employees. So I think you mean they would quit if it were BSD > licensed. > > Second, I've _never_ heard a single Sun person say they would quit if Sun > open sourced something under the BSD license. I'm sure I've heard someone > say they didn't like that license but never heard anyone giving up their > job over it. I discussed the possible options for an OpenSolaris license with Andrew Tucker in September 2004 during a dinner. Andrew Tucker was "distinguished engineer" and the chief architect for the OpenSolaris creation. BTW: this fact has been confirmed by Simon Phipps, so I am very sure about it. 
Jörg -- EMail:joerg at schily.net (home) Jörg Schilling D-13353 Berlin joerg.schilling at fokus.fraunhofer.de (work) Blog: http://schily.blogspot.com/ URL: http://cdrecord.org/private/ http://sourceforge.net/projects/schilytools/files/ From tfb at tfeb.org Thu Jan 5 03:14:33 2017 From: tfb at tfeb.org (tfb at tfeb.org) Date: Wed, 4 Jan 2017 17:14:33 +0000 Subject: [TUHS] Mac OS X is Unix In-Reply-To: References: <201701032019.v03KJ8oq028944@tahoe.cs.Dartmouth.EDU> Message-ID: <2CB47146-0555-46C9-8A2A-F028EC1DE263@tfeb.org> On 4 Jan 2017, at 15:02, Dan Cross wrote: > > What I'm proposing is almost exactly like Common Lisp's `#-` and `#+` (these use reader suppression, of course). Delving further into the realm of reader macros and other Lisp-like reader things is, I think, a mistake: the complexity of the reader is arguably a wart on the side of Common Lisp. I kind of disagree about the CL reader in the context of CL, but strongly agree that such a thing would be a mistake for a C-family language. but yes, #+ & #- are what I'd like for a C-family language. As you say statement languages make this a bit harder: I've never understood why anyone thought they were a good idea (I think a C-level expression-language ought to be reasonably easy to do). -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From lm at mcvoy.com Thu Jan 5 03:15:50 2017 From: lm at mcvoy.com (Larry McVoy) Date: Wed, 4 Jan 2017 09:15:50 -0800 Subject: [TUHS] the guy who brought up SVr4 on Sun machines In-Reply-To: <586d2cb8.bX/Qr+lraMBvukp9%schily@schily.net> References: <20170104024127.GN12264@mcvoy.com> <20170104033512.GA22116@mcvoy.com> <586d234d.vf4JCu1Ye3gumwfc%schily@schily.net> <586d297d.1Xe1F+MbZU5jlMCH%schily@schily.net> <20170104170635.GB3405@mcvoy.com> <586d2cb8.bX/Qr+lraMBvukp9%schily@schily.net> Message-ID: <20170104171550.GD3405@mcvoy.com> On Wed, Jan 04, 2017 at 06:11:20PM +0100, Joerg Schilling wrote: > Larry McVoy wrote: > > > > The Sun employees have been asked whether they would support the BSD license > > > and many of then said that they will terminate their contract if Sun uses BSD. > > > > Huh? First of all, Sun employees don't have a contract, they are (or were) > > at will employees. So I think you mean they would quit if it were BSD > > licensed. > > > > Second, I've _never_ heard a single Sun person say they would quit if Sun > > open sourced something under the BSD license. I'm sure I've heard someone > > say they didn't like that license but never heard anyone giving up their > > job over it. > > I discussed the possible options for an OpenSolaris license with Andrew Tucker > in September 2004 during a dinner. Andrew Tucker was "distinguished engineer" > and the chief architect for the OpenSolaris creation. > > BTW: this fact has been confirmed by Simon Phipps, so I am very sure about it. "this fact" being that "many" Sun employees were prepared to quit if Solaris was BSD licensed? I'd like to see a list of those people, I find it extremely hard to believe, but data will change my opinion. 
From david at kdbarto.org Thu Jan 5 03:22:14 2017 From: david at kdbarto.org (David) Date: Wed, 4 Jan 2017 09:22:14 -0800 Subject: [TUHS] Mentally Contaminated In-Reply-To: References: Message-ID: <96E8D150-E30F-4AC5-BEA1-C7DDC000D22A@kdbarto.org> > > What made Linux happen was the BSDi/UCB vs AT&T case. At the time, a > lot of hackers (myself included) thought the case was about *copyright*. > It was not, it was about *trade secret* and the ideas around UNIX. * i.e.* > folks like, we "mentally contaminated" with the AT&T Intellectual Property. > Wasn’t there a Usenix button with the phrase “Mentally Contaminated” on it? I’m sure I’ve got it around here somewhere. Or is my memory suffering from parity errors? David From clemc at ccc.com Thu Jan 5 03:23:42 2017 From: clemc at ccc.com (Clem Cole) Date: Wed, 04 Jan 2017 17:23:42 +0000 Subject: [TUHS] Mentally Contaminated In-Reply-To: <96E8D150-E30F-4AC5-BEA1-C7DDC000D22A@kdbarto.org> References: <96E8D150-E30F-4AC5-BEA1-C7DDC000D22A@kdbarto.org> Message-ID: Yes, i have one somewhere On Wed, Jan 4, 2017 at 12:22 PM David wrote: > > What made Linux happen was the BSDi/UCB vs AT&T case. At the time, a > lot of hackers (myself included) thought the case was about *copyright*. > It was not, it was about *trade secret* and the ideas around UNIX. * i.e.* > folks like, we "mentally contaminated" with the AT&T Intellectual Property. > Wasn’t there a Usenix button with the phrase “Mentally Contaminated” on it? I’m sure I’ve got it around here somewhere. Or is my memory suffering from parity errors? David -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From rminnich at gmail.com Thu Jan 5 03:36:00 2017 From: rminnich at gmail.com (ron minnich) Date: Wed, 04 Jan 2017 17:36:00 +0000 Subject: [TUHS] the guy who brought up SVr4 on Sun machines In-Reply-To: <586d297d.1Xe1F+MbZU5jlMCH%schily@schily.net> References: <20170104024127.GN12264@mcvoy.com> <20170104033512.GA22116@mcvoy.com> <586d234d.vf4JCu1Ye3gumwfc%schily@schily.net> <586d297d.1Xe1F+MbZU5jlMCH%schily@schily.net> Message-ID: On Wed, Jan 4, 2017 at 8:57 AM Joerg Schilling wrote: > > > I am happy with the license - the problem you may have is FUD against the > CDDL > spread by the FSF... > > > I am glad you are happy with the license. Somebody had to be :-) -------------- next part -------------- An HTML attachment was scrubbed... URL: From steffen at sdaoden.eu Thu Jan 5 03:38:04 2017 From: steffen at sdaoden.eu (Steffen Nurpmeso) Date: Wed, 04 Jan 2017 18:38:04 +0100 Subject: [TUHS] Unix stories In-Reply-To: <017201d266ab$cda885a0$68f990e0$@ronnatalie.com> References: <5257291ca0a0e1d80c646cab730129d589c5d707@webmail.yaccman.com> <42922C34-342F-4E86-83E2-3618129139B2@tfeb.org> <20170103004959.GA29088@mcvoy.com> <20170104130434.NQFzLGpVU%steffen@sdaoden.eu> <1483538831.1573798.837053385.2EB8CAC9@webmail.messagingengine.com> <012e01d2669a$6b2f89c0$418e9d40$@ronnatalie.com> <1483545543.1599443.837188969.6EAAD62B@webmail.messagingengine.com> <20170104163017.XtxbzN7PQ%steffen@sdaoden.eu> <1483547544.1606930.837228297.014AB061@webmail.messagingengine.com> <20170104165120.rYUyVGovj%steffen@sdaoden.eu> <1483548842.1612851.837252537.064CFDA2@webmail.messagingengine.com> <017201d266ab$cda885a0$68f990e0$@ronnatalie.com> Message-ID: <20170104173804.xrkvjknBx%steffen@sdaoden.eu> "Ron Natalie" wrote: |There's a trademark between allowing the compiler to reorder things \ |and having a defined order of operations. |Steps like that are well-defined in Java for instance. C lets the \ |compiler do what it sees fit. 
| |Note that it's not necessarily any better in assembler. There are \ |RISC architectures where load-followed-by-store and vice versa may \ |not always be valid if done in quick succession. Requiring the compiler \ |to insert sequence points typically wastes a lot of cycles. Assembler \ |programmers tend to think about what they are doing, the C compiler \ |tries to do some of this on its own and its not clairvoyant. I have just read again Clive Feather's ISO/IEC JTC1/SC22/WG14 N925 draft on sequence points, and i seem to be wrong, especially about the shown example, and Random knew that earlier. I first read that document in the context of aliasing issues a few years back, when i saw some BSD changesets fly by, and i remember a thread on a FreeBSD list, too, where objects backing pointers could no longer be accessed directly, but first need to be copied over to some -- then newly introduced -- local scope storage before being used, because of new aliasing rules of the C language. It seems i hyperventilated in the sequence point document back then. --steffen From schily at schily.net Thu Jan 5 03:39:25 2017 From: schily at schily.net (Joerg Schilling) Date: Wed, 04 Jan 2017 18:39:25 +0100 Subject: [TUHS] the guy who brought up SVr4 on Sun machines In-Reply-To: <20170104171033.GC3405@mcvoy.com> References: <20170104024127.GN12264@mcvoy.com> <20170104033512.GA22116@mcvoy.com> <586d234d.vf4JCu1Ye3gumwfc%schily@schily.net> <20170104164630.GA3405@mcvoy.com> <586d2abb.A5j4GovJtyzlD+AQ%schily@schily.net> <20170104171033.GC3405@mcvoy.com> Message-ID: <586d334d.XcKOxzKwrzmvL326%schily@schily.net> Larry McVoy wrote: > > I cannot confirm this at all. > > > > I have access to both SunOS-4.x and Solaris sources and it is obvious that the > > I'm not sure how you have legal access to the SunOS 4.x code. I'd love a > copy of that source but so far as I know it's locked up. You did not make a backup while you worked at Sun?
Well, I am working in a government-owned research unit and we did buy the SunOS-4.x sources for 100$ via the university access program. Before, I was working at H.Berthold AG, the first OEM customer for Sun equipment. Given that H.Berthold AG sold approx. 25% of all Suns made in the 1980s, I had partial source access since 1986 and in 1988, I received a SunOS-4.0 kernel source tape from Bill Joy after the Sun Europe CEO asked him whether Bill could help me with SunOS sources for my diploma thesis, a Copy on Write filesystem for optical media (WOFS). While I cannot OSS this filesystem for SunOS-4.x, I am still planning to port it to OpenSolaris as this would permit me to OSS it. Hint: I have been told by Sun employees that the Sun ZFS group did read my diploma thesis before they started with ZFS even though it is written in German ;-) My diploma thesis was also used as the VFS documentation for people who intended to write a new filesystem. > > SVr4 and Solaris kernel code is very similar. > > Sure it's similar. The process was: > > untar the SVr4 code But the SVr4 code has been created from modifying the SunOS-4.0 sources. BTW: AFAIK, Solaris 2 has been derived from SunOS-4.1.4 by adding the few parts of the SVr4 code that really differ from SunOS-4.x. There seems to be a general misunderstanding: I do not call SunOS-4.x a "BSD based OS" as SunOS-4.0 introduced a new memory management subsystem in the kernel. AT&T was very interested in this feature and because of this subsystem, the SVr4 kernel had to be derived from the SunOS-4.0 kernel. This has been mentioned in talks at the Sun User Group meeting in December 1987. I am not sure whether this was a talk from Bill Joy or from other people from the SunOS kernel group. The userland code from SVr4 however is fully derived from SVr3, ignoring all enhancements and fixes that appeared in BSD and SunOS before.
From what I have been told, the belief that Solaris is slow was mainly caused by the fact that there was a "dd" based benchmark that did a lot of 512-byte block transfers, and since AT&T did not understand that an OS with virtual memory needs to use page-aligned transfer buffers, the AT&T "dd" until 1994 used "malloc()" instead of "valloc()" and this usually caused a 512-byte transfer in "dd" to be split into two kernel transfers. Jörg -- EMail:joerg at schily.net (home) Jörg Schilling D-13353 Berlin joerg.schilling at fokus.fraunhofer.de (work) Blog: http://schily.blogspot.com/ URL: http://cdrecord.org/private/ http://sourceforge.net/projects/schilytools/files/ From schily at schily.net Thu Jan 5 03:40:07 2017 From: schily at schily.net (Joerg Schilling) Date: Wed, 04 Jan 2017 18:40:07 +0100 Subject: [TUHS] the guy who brought up SVr4 on Sun machines In-Reply-To: <20170104171550.GD3405@mcvoy.com> References: <20170104024127.GN12264@mcvoy.com> <20170104033512.GA22116@mcvoy.com> <586d234d.vf4JCu1Ye3gumwfc%schily@schily.net> <586d297d.1Xe1F+MbZU5jlMCH%schily@schily.net> <20170104170635.GB3405@mcvoy.com> <586d2cb8.bX/Qr+lraMBvukp9%schily@schily.net> <20170104171550.GD3405@mcvoy.com> Message-ID: <586d3377.O9F94JXabKYeeaLf%schily@schily.net> Larry McVoy wrote: > > BTW: this fact has been confirmed by Simon Phipps, so I am very sure about it. > > "this fact" being that "many" Sun employees were prepared to quit if > Solaris was BSD licensed? I'd like to see a list of those people, > I find it extremely hard to believe, but data will change my opinion. Try to ask Simon.....
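Jörg's earlier dd point comes down to buffer alignment: a 512-byte transfer from a page-aligned buffer stays within one page, while a buffer from a plain malloc() may straddle a page boundary, splitting the I/O into two kernel transfers. A minimal sketch of that check, assuming a 4096-byte page; the helper name is hypothetical:

```c
#include <assert.h>
#include <stdint.h>
#include <stdlib.h>

#define PAGE_SIZE 4096  /* assumed page size for this sketch */

/* Does a transfer of len bytes starting at buf cross a page
 * boundary?  If it does, the kernel must split it in two. */
static int spans_page_boundary(const void *buf, size_t len)
{
    uintptr_t start = (uintptr_t)buf;
    return (start / PAGE_SIZE) != ((start + len - 1) / PAGE_SIZE);
}
```

With posix_memalign() (the modern analogue of valloc()) the 512-byte buffer never crosses a page; an arbitrary malloc() pointer carries no such guarantee.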
Jörg -- EMail:joerg at schily.net (home) Jörg Schilling D-13353 Berlin joerg.schilling at fokus.fraunhofer.de (work) Blog: http://schily.blogspot.com/ URL: http://cdrecord.org/private/ http://sourceforge.net/projects/schilytools/files/ From schily at schily.net Thu Jan 5 03:41:40 2017 From: schily at schily.net (Joerg Schilling) Date: Wed, 04 Jan 2017 18:41:40 +0100 Subject: [TUHS] the guy who brought up SVr4 on Sun machines In-Reply-To: References: <20170104024127.GN12264@mcvoy.com> <20170104033512.GA22116@mcvoy.com> <586d234d.vf4JCu1Ye3gumwfc%schily@schily.net> <586d297d.1Xe1F+MbZU5jlMCH%schily@schily.net> Message-ID: <586d33d4.iqNjumG+K3wZD/gM%schily@schily.net> ron minnich wrote: > On Wed, Jan 4, 2017 at 8:57 AM Joerg Schilling wrote: > > I am happy with the license - the problem you may have is FUD against the > > CDDL > > spread by the FSF... > > > I am glad you are happy with the license. Somebody had to be :-) Well, the final CDDL text has been changed on my request after the first published draft in order to make me happy ;-) Jörg -- EMail:joerg at schily.net (home) Jörg Schilling D-13353 Berlin joerg.schilling at fokus.fraunhofer.de (work) Blog: http://schily.blogspot.com/ URL: http://cdrecord.org/private/ http://sourceforge.net/projects/schilytools/files/ From lm at mcvoy.com Thu Jan 5 03:42:17 2017 From: lm at mcvoy.com (Larry McVoy) Date: Wed, 4 Jan 2017 09:42:17 -0800 Subject: [TUHS] the guy who brought up SVr4 on Sun machines In-Reply-To: <586d3377.O9F94JXabKYeeaLf%schily@schily.net> References: <20170104033512.GA22116@mcvoy.com> <586d234d.vf4JCu1Ye3gumwfc%schily@schily.net> <586d297d.1Xe1F+MbZU5jlMCH%schily@schily.net> <20170104170635.GB3405@mcvoy.com> <586d2cb8.bX/Qr+lraMBvukp9%schily@schily.net> <20170104171550.GD3405@mcvoy.com> <586d3377.O9F94JXabKYeeaLf%schily@schily.net> Message-ID: <20170104174217.GG3405@mcvoy.com> On Wed, Jan 04, 2017 at 06:40:07PM +0100, Joerg Schilling wrote: > Larry McVoy wrote: > > > > BTW: this fact has been confirmed by 
Simon Phipps, so I am very sure about it. > > > > "this fact" being that "many" Sun employees were prepared to quit if > > Solaris was BSD licensed? I'd like to see a list of those people, > > I find it extremely hard to believe, but data will change my opinion. > > Try to ask Simon..... > > J?rg You're the guy making the claim, onus is on you to back it up. That's how things work. From steffen at sdaoden.eu Thu Jan 5 03:47:57 2017 From: steffen at sdaoden.eu (Steffen Nurpmeso) Date: Wed, 04 Jan 2017 18:47:57 +0100 Subject: [TUHS] Unix stories In-Reply-To: <20170104173804.xrkvjknBx%steffen@sdaoden.eu> References: <5257291ca0a0e1d80c646cab730129d589c5d707@webmail.yaccman.com> <42922C34-342F-4E86-83E2-3618129139B2@tfeb.org> <20170103004959.GA29088@mcvoy.com> <20170104130434.NQFzLGpVU%steffen@sdaoden.eu> <1483538831.1573798.837053385.2EB8CAC9@webmail.messagingengine.com> <012e01d2669a$6b2f89c0$418e9d40$@ronnatalie.com> <1483545543.1599443.837188969.6EAAD62B@webmail.messagingengine.com> <20170104163017.XtxbzN7PQ%steffen@sdaoden.eu> <1483547544.1606930.837228297.014AB061@webmail.messagingengine.com> <20170104165120.rYUyVGovj%steffen@sdaoden.eu> <1483548842.1612851.837252537.064CFDA2@webmail.messagingengine.com> <017201d266ab$cda885a0$68f990e0$@ronnatalie.com> <20170104173804.xrkvjknBx%steffen@sdaoden.eu> Message-ID: <20170104174757.17kurddFA%steffen@sdaoden.eu> And Random was sorted out of Cc:, i'm digging into it ;) --steffen From schily at schily.net Thu Jan 5 03:48:06 2017 From: schily at schily.net (Joerg Schilling) Date: Wed, 04 Jan 2017 18:48:06 +0100 Subject: [TUHS] the guy who brought up SVr4 on Sun machines In-Reply-To: <20170104174217.GG3405@mcvoy.com> References: <20170104033512.GA22116@mcvoy.com> <586d234d.vf4JCu1Ye3gumwfc%schily@schily.net> <586d297d.1Xe1F+MbZU5jlMCH%schily@schily.net> <20170104170635.GB3405@mcvoy.com> <586d2cb8.bX/Qr+lraMBvukp9%schily@schily.net> <20170104171550.GD3405@mcvoy.com> <586d3377.O9F94JXabKYeeaLf%schily@schily.net> 
<20170104174217.GG3405@mcvoy.com> Message-ID: <586d3556.RxSZPogSIiAqCHBk%schily@schily.net> Larry McVoy wrote: > On Wed, Jan 04, 2017 at 06:40:07PM +0100, Joerg Schilling wrote: > > Larry McVoy wrote: > > > > > > BTW: this fact has been confirmed by Simon Phipps, so I am very sure about it. > > > > > > "this fact" being that "many" Sun employees were prepared to quit if > > > Solaris was BSD licensed? I'd like to see a list of those people, > > > I find it extremely hard to believe, but data will change my opinion. > > > > Try to ask Simon..... > > > > Jörg > > You're the guy making the claim, onus is on you to back it up. That's > > how things work. Well, I thought using google should be simple: https://en.wikipedia.org/wiki/Common_Development_and_Distribution_License Check the video mentioned there, as the page just lists what Simon did say. BTW: Danese Cooper was (from what I know) not involved in the CDDL at all. I had a 2 hour telephone conference with Andrew Tucker, a Sun lawyer and a lady from Sun (I no longer remember her name but it was definitely not Danese). The reason for the telephone conference was to discuss the changes for the final CDDL license text.
Jörg -- EMail:joerg at schily.net (home) Jörg Schilling D-13353 Berlin joerg.schilling at fokus.fraunhofer.de (work) Blog: http://schily.blogspot.com/ URL: http://cdrecord.org/private/ http://sourceforge.net/projects/schilytools/files/ From lm at mcvoy.com Thu Jan 5 03:52:27 2017 From: lm at mcvoy.com (Larry McVoy) Date: Wed, 4 Jan 2017 09:52:27 -0800 Subject: [TUHS] the guy who brought up SVr4 on Sun machines In-Reply-To: <586d334d.XcKOxzKwrzmvL326%schily@schily.net> References: <20170104024127.GN12264@mcvoy.com> <20170104033512.GA22116@mcvoy.com> <586d234d.vf4JCu1Ye3gumwfc%schily@schily.net> <20170104164630.GA3405@mcvoy.com> <586d2abb.A5j4GovJtyzlD+AQ%schily@schily.net> <20170104171033.GC3405@mcvoy.com> <586d334d.XcKOxzKwrzmvL326%schily@schily.net> Message-ID: <20170104175227.GH3405@mcvoy.com> On Wed, Jan 04, 2017 at 06:39:25PM +0100, Joerg Schilling wrote: > Larry McVoy wrote: > > > > I cannot confirm this at all. > > > > > > I have access to both SunOS-4.x and Solaris sources and it is obvious that the > > > > I'm not sure how you have legal access to the SunOS 4.x code. I'd love a > > copy of that source but so far as I know it's locked up. > > You did not make a backup while you worked at Sun? Apparently your ethics and my ethics differ. It was Sun's property, not mine. > Hint: I have been told > from Sun employees that the Sun ZFS group did read my diploma thesis before > they started with ZFS even though it is written in German ;-) Huh, interesting. I'll check that out. Both Jeff Bonwick and Bill Moore have worked for me. Bonwick was one of my students at Stanford and I hired him into the kernel group. Bill worked for me on BitKeeper. I'll let you know what they say. > There seems to be a general missunderstandings: > > I do not call SunOS-4.x a "BSD based OS" as SunOS-4.0 introduced a new memory > management subsystem in the kernel. I think we can stop here. The rest of the world at the time described SunOS as "a bug fixed BSD". 
The mmap() interface was designed by Bill Joy while at UCB and was documented but not implemented in 4.2 BSD.[*] To say that SunOS 4.x wasn't BSD based is ludicrous. And that's coming from the guy who made it conform to POSIX and in the process wrote lint libraries for SunOS, BSD, Posix, and System V. You're arguing with someone who was in the kernel group at Sun at the time and is close friends with the guy who did the bringup. I'm not sure you could get a better source but if you want to keep pushing your version of history I'll be here to point out where you get it wrong. [*] https://en.wikipedia.org/wiki/Mmap Though that wiki page is suspect, did 4.3BSD-Reno really have the Mach VM system? From lm at mcvoy.com Thu Jan 5 03:57:50 2017 From: lm at mcvoy.com (Larry McVoy) Date: Wed, 4 Jan 2017 09:57:50 -0800 Subject: [TUHS] the guy who brought up SVr4 on Sun machines In-Reply-To: <586d3556.RxSZPogSIiAqCHBk%schily@schily.net> References: <586d234d.vf4JCu1Ye3gumwfc%schily@schily.net> <586d297d.1Xe1F+MbZU5jlMCH%schily@schily.net> <20170104170635.GB3405@mcvoy.com> <586d2cb8.bX/Qr+lraMBvukp9%schily@schily.net> <20170104171550.GD3405@mcvoy.com> <586d3377.O9F94JXabKYeeaLf%schily@schily.net> <20170104174217.GG3405@mcvoy.com> <586d3556.RxSZPogSIiAqCHBk%schily@schily.net> Message-ID: <20170104175750.GI3405@mcvoy.com> On Wed, Jan 04, 2017 at 06:48:06PM +0100, Joerg Schilling wrote: > Larry McVoy wrote: > > > On Wed, Jan 04, 2017 at 06:40:07PM +0100, Joerg Schilling wrote: > > > Larry McVoy wrote: > > > > > > > > BTW: this fact has been confirmed by Simon Phipps, so I am very sure about it. > > > > > > > > "this fact" being that "many" Sun employees were prepared to quit if > > > > Solaris was BSD licensed? I'd like to see a list of those people, > > > > I find it extremely hard to believe, but data will change my opinion. > > > > > > Try to ask Simon..... > > > > > > Jörg > > > > You're the guy making the claim, onus is on you to back it up. That's > > how things work.
> > Well, I thought using google should be simple: > > https://en.wikipedia.org/wiki/Common_Development_and_Distribution_License Yeah, read the whole thing. Still looking for a list of Sun employees who were willing to quit if Sun chose the BSD license. > Check the video mentioned there as this just lists what Simon did say. What video? From schily at schily.net Thu Jan 5 04:23:12 2017 From: schily at schily.net (Joerg Schilling) Date: Wed, 04 Jan 2017 19:23:12 +0100 Subject: [TUHS] the guy who brought up SVr4 on Sun machines In-Reply-To: <20170104175227.GH3405@mcvoy.com> References: <20170104024127.GN12264@mcvoy.com> <20170104033512.GA22116@mcvoy.com> <586d234d.vf4JCu1Ye3gumwfc%schily@schily.net> <20170104164630.GA3405@mcvoy.com> <586d2abb.A5j4GovJtyzlD+AQ%schily@schily.net> <20170104171033.GC3405@mcvoy.com> <586d334d.XcKOxzKwrzmvL326%schily@schily.net> <20170104175227.GH3405@mcvoy.com> Message-ID: <586d3d90.oAzCBIUMx+CcWar6%schily@schily.net> Larry McVoy wrote: > > You did not make a backup while you worked at Sun? > > Apparently your ethics and my ethics differ. It was Sun's property, not mine. You do not understand jokes? > > Hint: I have been told > > from Sun employees that the Sun ZFS group did read my diploma thesis before > > they started with ZFS even though it is written in German ;-) > > Huh, interesting. I'll check that out. Both Jeff Bonwick and Bill Moore > have worked for me. Bonwick was one of my students at Stanford and I > hired him into the kernel group. Bill worked for me on BitKeeper. > I'll let you know what they say. I had a long discussion about this background in September 2004 with Jeff in his office while we discussed how a new secure interface that does not need root privileges could be added to support reading hole lists for files. This resulted in the design of SEEK_HOLE/SEEK_DATA.
Before that (around 1992) I had a really long meeting with Wolfgang Thaler (the designer and author of DDI/DKI); he mentioned that there are many people inside Sun who understand German, and that my diploma thesis http://cdrtools.sourceforge.net/private/WoFS.pdf is used by many Sun kernel engineers, as there was no similar paper from Sun. > > There seems to be a general missunderstandings: > > > > I do not call SunOS-4.x a "BSD based OS" as SunOS-4.0 introduced a new memory > > management subsystem in the kernel. > > I think we can stop here. The rest of the world at the time described > SunOS as "a bug fixed BSD". The mmap() interface was designed by Bill > Joy while at UCB and was documented but not implemented in 4.2 BSD [*] This is definitely a misunderstanding: Bill did create a mmap() interface for BSD while at UCB, but this was already around 1984 and it was hard to use, as there was no universal address space description inside the kernel. What you could do with the old interface, which was also available in e.g. SunOS-3.5, is map user space addresses to VME addresses, but you first had to valloc() the space to get a mmap() target address. Once you then called mmap(), you wasted all the swap space that was needed to hold the address space description. We used this method at H.Berthold AG in 1986 for the Berthold image processor to get direct access to the 256 MB of image memory in the image processor. We needed to waste a whole disk for swap to get the initial mapping into the address space descriptor for the related userland process. For SunOS-4.0, Bill did a complete rewrite of the whole virtual memory subsystem. This rewrite included the filesystem, and since SunOS-4.0 all file access is done via mmap(). Even read()-based file I/O basically maps the related parts of a file into a transient kernel area from where a copyout() is done.
With SunOS-4.x, mmap() became much easier to use, as there now was an object-oriented linked list of address space descriptions. If you want to know why Sun could not donate the new virtual memory implementation to BSD, this is because of the contract signed with AT&T in late 1987 - even before SunOS-4.0 was sent to customers. AT&T knew about the memory subsystem from a talk at USENIX in spring 1987. What I had to do when the Sparcstation came up was to write a segment driver to support the MMU in our VME<->S-Bus adaptor in order to get around the 32 MByte limitation for the addresses in an S-Bus slot. BTW: in order to avoid more misunderstandings, could you mention when you were in the Sun kernel group and what kind of things you did with the kernel? Jörg -- EMail:joerg at schily.net (home) Jörg Schilling D-13353 Berlin joerg.schilling at fokus.fraunhofer.de (work) Blog: http://schily.blogspot.com/ URL: http://cdrecord.org/private/ http://sourceforge.net/projects/schilytools/files/ From crossd at gmail.com Thu Jan 5 04:24:13 2017 From: crossd at gmail.com (Dan Cross) Date: Wed, 4 Jan 2017 13:24:13 -0500 Subject: [TUHS] the guy who brought up SVr4 on Sun machines In-Reply-To: <20170104175750.GI3405@mcvoy.com> References: <586d234d.vf4JCu1Ye3gumwfc%schily@schily.net> <586d297d.1Xe1F+MbZU5jlMCH%schily@schily.net> <20170104170635.GB3405@mcvoy.com> <586d2cb8.bX/Qr+lraMBvukp9%schily@schily.net> <20170104171550.GD3405@mcvoy.com> <586d3377.O9F94JXabKYeeaLf%schily@schily.net> <20170104174217.GG3405@mcvoy.com> <586d3556.RxSZPogSIiAqCHBk%schily@schily.net> <20170104175750.GI3405@mcvoy.com> Message-ID: On Wed, Jan 4, 2017 at 12:57 PM, Larry McVoy wrote: > On Wed, Jan 04, 2017 at 06:48:06PM +0100, Joerg Schilling wrote: > > Larry McVoy wrote: > > > > > On Wed, Jan 04, 2017 at 06:40:07PM +0100, Joerg Schilling wrote: > > > > Larry McVoy wrote: > > > > > > > > > > BTW: this fact has been confirmed by Simon Phipps, so I am very > sure about it.
> > > > > > > > > > "this fact" being that "many" Sun employees were prepared to quit > if > > > > > Solaris was BSD licensed? I'd like to see a list of those people, > > > > > I find it extremely hard to believe, but data will change my > opinion. > > > > > > > > Try to ask Simon..... > > > > > > > > J?rg > > > > > > You're the guy making the claim, onus is on you to back it up. That's > > > how things work. > > > > Well, I thought using google should be simple: > > > > https://en.wikipedia.org/wiki/Common_Development_and_ > Distribution_License > > Yeah, read the whole thing. Still looking for a list of Sun employees > who were willing to quit if Sun chose the BSD license. > > > Check the video mentioned there as this just lists what Simon did say. > > What video? > Larry, There are links to a recording of a presentation at DebConf 2016 in the "References" section of the Wikipedia page: numbers 19 and 20. I haven't watched it myself because I've literally got a sleeping baby on my shoulder and often painful experience has taught me to not mess with such things. However, the section of the Wikipedia CDDL article on GPL compatibility mentions this: 'Simon Phipps (Sun's Chief Open Source Officer at the time), who had introduced Ms. Cooper as "the one who actually wrote the CDDL",[19] did not immediately comment, but later in the same video, he says, referring back to the license issue, "I actually disagree with Danese to some degree",[20] while describing the strong preference among the engineers who wrote the code for a BSD-like license, which was in conflict with Sun's preference for something copyleft, and that waiting for legal clearance to release some parts of the code under the then unreleased GNU GPL v3 would have taken several years, and would probably also have involved massed resignations from engineers (unhappy with either the delay, the GPL, or both—this is not clear from the video).' 
This implies to me that the attitude among employees at Sun was that a BSD-style license was *preferred*, and licensing under the *GPL* was the thing that would (potentially) have brought about a mass exodus of unhappy engineers. That is, it seems to be the inverse of what Joerg is suggesting. But I think two things are being conflated: Solaris 2.0 was open-sourced over a period of several years starting in the mid-2000s, and extending through the end of the decade (2008, I guess). But your earlier proposal for SunOS 4 (retroactively renamed Solaris 1.x...) dates from 1993, more than a decade prior. It was my sense in the early 90s that licenses weren't given nearly as much thought or consideration by individual engineers as they are now. While I wasn't there, I imagine that at the time, folks were probably more of the attitude, "open up SunOS? Yeah, that'd be cool...License? Uh, I think my buddy from Berkeley has something about this...." Anyway, I don't think one can directly compare the two because a *lot* changed in that decade in between. - Dan C. -------------- next part -------------- An HTML attachment was scrubbed... URL: From schily at schily.net Thu Jan 5 04:25:28 2017 From: schily at schily.net (Joerg Schilling) Date: Wed, 04 Jan 2017 19:25:28 +0100 Subject: [TUHS] the guy who brought up SVr4 on Sun machines In-Reply-To: <20170104175750.GI3405@mcvoy.com> References: <586d234d.vf4JCu1Ye3gumwfc%schily@schily.net> <586d297d.1Xe1F+MbZU5jlMCH%schily@schily.net> <20170104170635.GB3405@mcvoy.com> <586d2cb8.bX/Qr+lraMBvukp9%schily@schily.net> <20170104171550.GD3405@mcvoy.com> <586d3377.O9F94JXabKYeeaLf%schily@schily.net> <20170104174217.GG3405@mcvoy.com> <586d3556.RxSZPogSIiAqCHBk%schily@schily.net> <20170104175750.GI3405@mcvoy.com> Message-ID: <586d3e18.u3/INdZqC5FHaFcz%schily@schily.net> Larry McVoy wrote: > > https://en.wikipedia.org/wiki/Common_Development_and_Distribution_License > > Yeah, read the whole thing. 
Still looking for a list of Sun employees > who were willing to quit if Sun chose the BSD license. > > > Check the video mentioned there as this just lists what Simon did say. > > What video? See reference 19 and 20. Jörg -- EMail:joerg at schily.net (home) Jörg Schilling D-13353 Berlin joerg.schilling at fokus.fraunhofer.de (work) Blog: http://schily.blogspot.com/ URL: http://cdrecord.org/private/ http://sourceforge.net/projects/schilytools/files/ From lm at mcvoy.com Thu Jan 5 04:27:23 2017 From: lm at mcvoy.com (Larry McVoy) Date: Wed, 4 Jan 2017 10:27:23 -0800 Subject: [TUHS] the guy who brought up SVr4 on Sun machines In-Reply-To: <586d3d90.oAzCBIUMx+CcWar6%schily@schily.net> References: <20170104033512.GA22116@mcvoy.com> <586d234d.vf4JCu1Ye3gumwfc%schily@schily.net> <20170104164630.GA3405@mcvoy.com> <586d2abb.A5j4GovJtyzlD+AQ%schily@schily.net> <20170104171033.GC3405@mcvoy.com> <586d334d.XcKOxzKwrzmvL326%schily@schily.net> <20170104175227.GH3405@mcvoy.com> <586d3d90.oAzCBIUMx+CcWar6%schily@schily.net> Message-ID: <20170104182723.GC3006@mcvoy.com> On Wed, Jan 04, 2017 at 07:23:12PM +0100, Joerg Schilling wrote: > Larry McVoy wrote: > > I think we can stop here. The rest of the world at the time described > > SunOS as "a bug fixed BSD". The mmap() interface was designed by Bill > > Joy while at UCB and was documented but not implemented in 4.2 BSD [*] > > This is definitely a missunderstanding: You got that right. > For SunOS-4.0, Bill did a complete rewrite of the whole virtual memory > subsystem. Bill did no such thing. The rewrite was by Joe Moran. You're spouting a lot of misinformation and it's getting old. 
From schily at schily.net Thu Jan 5 04:29:47 2017 From: schily at schily.net (Joerg Schilling) Date: Wed, 04 Jan 2017 19:29:47 +0100 Subject: [TUHS] the guy who brought up SVr4 on Sun machines In-Reply-To: <20170104182723.GC3006@mcvoy.com> References: <20170104033512.GA22116@mcvoy.com> <586d234d.vf4JCu1Ye3gumwfc%schily@schily.net> <20170104164630.GA3405@mcvoy.com> <586d2abb.A5j4GovJtyzlD+AQ%schily@schily.net> <20170104171033.GC3405@mcvoy.com> <586d334d.XcKOxzKwrzmvL326%schily@schily.net> <20170104175227.GH3405@mcvoy.com> <586d3d90.oAzCBIUMx+CcWar6%schily@schily.net> <20170104182723.GC3006@mcvoy.com> Message-ID: <586d3f1b.wbiEn0FZgT341tKK%schily@schily.net> Larry McVoy wrote: > > This is definitely a missunderstanding: > > You got that right. > > > For SunOS-4.0, Bill did a complete rewrite of the whole virtual memory > > subsystem. > > Bill did no such thing. The rewrite was by Joe Moran. You're spouting > a lot of misinformation and it's getting old. OK, I may be mistaken about the author, but what you write is mainly wrong, so please explain when you were in the Sun kernel group and what kind of work you did in the kernel.
Jörg -- EMail:joerg at schily.net (home) Jörg Schilling D-13353 Berlin joerg.schilling at fokus.fraunhofer.de (work) Blog: http://schily.blogspot.com/ URL: http://cdrecord.org/private/ http://sourceforge.net/projects/schilytools/files/ From crossd at gmail.com Thu Jan 5 04:30:11 2017 From: crossd at gmail.com (Dan Cross) Date: Wed, 4 Jan 2017 13:30:11 -0500 Subject: [TUHS] the guy who brought up SVr4 on Sun machines In-Reply-To: References: <586d234d.vf4JCu1Ye3gumwfc%schily@schily.net> <586d297d.1Xe1F+MbZU5jlMCH%schily@schily.net> <20170104170635.GB3405@mcvoy.com> <586d2cb8.bX/Qr+lraMBvukp9%schily@schily.net> <20170104171550.GD3405@mcvoy.com> <586d3377.O9F94JXabKYeeaLf%schily@schily.net> <20170104174217.GG3405@mcvoy.com> <586d3556.RxSZPogSIiAqCHBk%schily@schily.net> <20170104175750.GI3405@mcvoy.com> Message-ID: On Wed, Jan 4, 2017 at 1:24 PM, Dan Cross wrote: > [snip There are links to a recording of a presentation at DebConf 2016 > in the "References" section of the Wikipedia page: numbers 19 and 20. [snip] > Oops, I'm sorry, I meant Debconf 20*0*6, not 2016. - Dan C. -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From schily at schily.net Thu Jan 5 04:32:14 2017 From: schily at schily.net (Joerg Schilling) Date: Wed, 04 Jan 2017 19:32:14 +0100 Subject: [TUHS] the guy who brought up SVr4 on Sun machines In-Reply-To: <586d3556.RxSZPogSIiAqCHBk%schily@schily.net> References: <20170104033512.GA22116@mcvoy.com> <586d234d.vf4JCu1Ye3gumwfc%schily@schily.net> <586d297d.1Xe1F+MbZU5jlMCH%schily@schily.net> <20170104170635.GB3405@mcvoy.com> <586d2cb8.bX/Qr+lraMBvukp9%schily@schily.net> <20170104171550.GD3405@mcvoy.com> <586d3377.O9F94JXabKYeeaLf%schily@schily.net> <20170104174217.GG3405@mcvoy.com> <586d3556.RxSZPogSIiAqCHBk%schily@schily.net> Message-ID: <586d3fae.uf5FiS568GsBeCKB%schily@schily.net> schily at schily.net (Joerg Schilling) wrote: > https://en.wikipedia.org/wiki/Common_Development_and_Distribution_License > > Check the video mentioned there as this just lists what Simon did say. > > BTW: Danese Cooper was (from what I know) not involved in the CDDL at all. > > I had a 2 hour telephone conference with Andrew Tucker, a Sun lawyer and > a lady from Sun (I do no longer remember her name but it was definitely not > Danese). The reason for the telephone confernce was to discuss the changes for > the final CDDL license text. I just remembered the name: the lady who was in the teleconference was Claire Giordano.
Jörg -- EMail:joerg at schily.net (home) Jörg Schilling D-13353 Berlin joerg.schilling at fokus.fraunhofer.de (work) Blog: http://schily.blogspot.com/ URL: http://cdrecord.org/private/ http://sourceforge.net/projects/schilytools/files/ From lm at mcvoy.com Thu Jan 5 04:44:48 2017 From: lm at mcvoy.com (Larry McVoy) Date: Wed, 4 Jan 2017 10:44:48 -0800 Subject: [TUHS] the guy who brought up SVr4 on Sun machines In-Reply-To: <586d3d90.oAzCBIUMx+CcWar6%schily@schily.net> References: <20170104033512.GA22116@mcvoy.com> <586d234d.vf4JCu1Ye3gumwfc%schily@schily.net> <20170104164630.GA3405@mcvoy.com> <586d2abb.A5j4GovJtyzlD+AQ%schily@schily.net> <20170104171033.GC3405@mcvoy.com> <586d334d.XcKOxzKwrzmvL326%schily@schily.net> <20170104175227.GH3405@mcvoy.com> <586d3d90.oAzCBIUMx+CcWar6%schily@schily.net> Message-ID: <20170104184448.GD3006@mcvoy.com> On Wed, Jan 04, 2017 at 07:23:12PM +0100, Joerg Schilling wrote: > BTW: in order to avoid more missunderstandings, could you mention when you have > been in the Sun kernel group and what kind of things you did with the kernel? Sure. Here's some notes I put together for Eli Lamb when I was thinking about moving to DEC (to work for Jim Gray). The date on the file is 1992 so I had been there about 4 years. I was in the kernel group from 1988 to about 1992, then moved over to hardware where I did a cluster-based NFS server and LMbench. Then I went to SGI and did a new name server that could serve all of California on a 200 MHz server, and made NFS serve up files at 60MB/sec per file (we could do as many streams in parallel as we had network cards). --lm I showed up in October 1988. This is what I can remember that I've done since I've been here. When I interviewed at DEC, their HR people thought I was lying and I went through two more interviews before they finally believed me. * Doubled file system throughput. Publication. Generated sales. Talk to Steve Kleiman for confirmation.
* Single-handedly implemented POSIX conformance in the 4.x OS. Bullet item on lots of sales. Talk to Don Cragun for confirmation. * Implemented smoosh - basis for Avocet and nselite. Talk to Shannon for confirmation. * Implemented nselite - almost *all* kernel development on 5.0 and 4.x is currently under nselite. Nselite has saved man-years of time (see Karl Danz and Larry Bassel for mgmt confirmation; Len Brown & Roger Faulkner for engineering confirmation; I also have statistics of usage: nselite is more widely used than the NSE or Avocet). * VM, swap, tmpfs performance. I improved tmpfs write rates from 300KB/second to 7MB/second. Talk to Howard Chartok, Steve Kleiman, Peter Snyder for confirmation. * STREAMS, tty enhancements. Done under POSIX but had nothing to do with POSIX. * Porting tools for SunOS 4.x to any known Unix implementation. Talk to Rob Gingell for confirmation. * More fires in the kernel than I care to think about. I can run through bug traq to find these, many are boring, but all consumed substantial time. I have somewhat of a reputation of a kernel hack largely because of these firedrills. * Designed and built the first Sun clustered system, Sunbox. Hired and managed a team. * Taught two Quarters of Graduate level OS at Stanford while working full time at Sun. TA-ed the same course before that, Stanford asked me to teach it when Bob Hagmann retired. * Extensive consulting with other groups: - Lisp people, VM issues, Cris Perdue. - Fortran crowd, I/O issues, Robert Corbett. - SWSMON - kernel tuning, Anh Nuygun. - Dragon crowd I/O issues, SCSI performance, Jean-Marc Frailong. - Pluto people picked up many of the ideas in the SCSI card proposal, Dave Banks. - Avocet crowd is picking up all the positive ideas in nselite due to my team player efforts with them. Talk to Marla and Giordano for confirmation. - Okins group, SunBox, Okin for confirmation. - Mike Scott, HA NFS. - Disk performance, Rich Clewett. - Performance benchmarking, etc, Nhan Chu & group.
- Big memory systems, Bill Peterson. - NFS group, performance, cache consistency, John Corbin. - UFS crowd, delayed I/O, quickcheck, Tom Wong, Blake Lewis. - SMCC, presto, omni, SCSI. From crossd at gmail.com Thu Jan 5 04:46:01 2017 From: crossd at gmail.com (Dan Cross) Date: Wed, 4 Jan 2017 13:46:01 -0500 Subject: [TUHS] the guy who brought up SVr4 on Sun machines In-Reply-To: <586d3fae.uf5FiS568GsBeCKB%schily@schily.net> References: <20170104033512.GA22116@mcvoy.com> <586d234d.vf4JCu1Ye3gumwfc%schily@schily.net> <586d297d.1Xe1F+MbZU5jlMCH%schily@schily.net> <20170104170635.GB3405@mcvoy.com> <586d2cb8.bX/Qr+lraMBvukp9%schily@schily.net> <20170104171550.GD3405@mcvoy.com> <586d3377.O9F94JXabKYeeaLf%schily@schily.net> <20170104174217.GG3405@mcvoy.com> <586d3556.RxSZPogSIiAqCHBk%schily@schily.net> <586d3fae.uf5FiS568GsBeCKB%schily@schily.net> Message-ID: On Wed, Jan 4, 2017 at 1:32 PM, Joerg Schilling wrote: > schily at schily.net (Joerg Schilling) wrote: > > > https://en.wikipedia.org/wiki/Common_Development_and_ > Distribution_License > > > > Check the video mentioned there as this just lists what Simon did say. > > > > BTW: Danese Cooper was (from what I know) not involved in the CDDL at > all. > > > > I had a 2 hour telephone conference with Andrew Tucker, a Sun lawyer and > > a lady from Sun (I do no longer remember her name but it was definitely > not > > Danese). The reason for the telephone confernce was to discuss the > changes for > > the final CDDL license text. > > I just discovered the name again: The lady that was in the teleconference > has > been Claire Giordano. > FYI, I watched the video you referred to (my daughter having woken up) and Simon's comments seem to be in direct contradiction of your earlier statement. The relevant comments start at around 35:30, and he says that the Sun engineering community pretty clearly favored a BSD-style license. He mentions that trying to use the GPL would have pushed out their timeline by several years. 
The ``you do that and we quit'' comment (around the 38-minute mark) was quite clearly in response to using GPL for OpenSolaris. - Dan C. -------------- next part -------------- An HTML attachment was scrubbed... URL: From scj at yaccman.com Thu Jan 5 04:51:37 2017 From: scj at yaccman.com (Steve Johnson) Date: Wed, 04 Jan 2017 10:51:37 -0800 Subject: [TUHS] Unix stories In-Reply-To: <017201d266ab$cda885a0$68f990e0$@ronnatalie.com> Message-ID: <6a969f4310c22211e164a971e27a3af9121176cd@webmail.yaccman.com> Let me contaminate this philosophical discussion with some history. Long long ago, computers were slow and didn't have much memory. Because C was targeting system code, it was important to make things run efficiently. And the PDP-11 had autoincrement and autodecrement hardware. Early machines also had some kinds of memory management, but most specified a base and limit. DEC allowed you to protect the end of a block of memory, which made it possible to grow the stack backwards and still be able to add more stack space if you ran out. But many other machines required that the stack grow upwards. The problem this caused was when you had foo( f(), g() ) In backward-growing stacks, the most efficient thing was to call g first, then f. In forward-growing stacks, the most efficient thing was to call f first, then g. For whatever reason, Dennis decided that efficiency on a particular architecture was more important than consistency, so when f() and g() had side effects, their order was undefined. Autoincrement and Autodecrement also got tarred by the same brush: foo( *p++, *p++ ) had a slew of "correct" implementations, including ones where p was incremented twice AFTER the call of foo had returned. The situation became critical when getc() was implemented as a macro that pulled bytes out of an I/O buffer and used autoincrement to do so.
After some discussion, what I implemented in PCC was that all side effects of an argument must be carried out before the next argument was evaluated. This still didn't solve the argument order problem, but it did cut down the space of astonishing surprises. These rules provided rich fodder for Lint, when it came along, although the function side effect issue was beyond its ken. Steve ----- Original Message ----- From: "Ron Natalie" To:"Random832" , "Steffen Nurpmeso" Cc: Sent:Wed, 4 Jan 2017 11:58:41 -0500 Subject:Re: [TUHS] Unix stories There's a tradeoff between allowing the compiler to reorder things and having a defined order of operations. Steps like that are well-defined in Java for instance. C lets the compiler do what it sees fit. Note that it's not necessarily any better in assembler. There are RISC architectures where load-followed-by-store and vice versa may not always be valid if done in quick succession. Requiring the compiler to insert sequence points typically wastes a lot of cycles. Assembler programmers tend to think about what they are doing; the C compiler tries to do some of this on its own and it's not clairvoyant. -------------- next part -------------- An HTML attachment was scrubbed... URL: From nevin at eviloverlord.com Thu Jan 5 04:56:03 2017 From: nevin at eviloverlord.com (Nevin Liber) Date: Wed, 4 Jan 2017 12:56:03 -0600 Subject: [TUHS] the guy who brought up SVr4 on Sun machines In-Reply-To: <20170104033512.GA22116@mcvoy.com> References: <20170104024127.GN12264@mcvoy.com> <20170104033512.GA22116@mcvoy.com> Message-ID: On Tue, Jan 3, 2017 at 9:35 PM, Larry McVoy wrote: > On Tue, Jan 03, 2017 at 10:23:28PM -0500, Dan Cross wrote: > > My favorite version number was SunOS 4.1.4U1: I was told that the ``U1'' > > meant, "you won", as in "you won. Here's another BSD-based release." > > That might have been the Greg Limes release.
I may be all wrong but > someone, I think it was Greg, busted their ass to try and make SunOS > 4.x scale up on SMP machines. There were a lot of us at the time that > hated the SVr4 thing, it was such a huge step backwards. > Greg Limes says: Larry has it very nearly right, or at least very nearly matches my memories. The few exceptions are only important in light of this being an attempt to record history as accurately as possible. Yeah, I was the naive kid who pushed and pushed and pushed until it happened. I had and still probably have absolutely *NO* idea how many other people were pushing along with me, but I do know that I had the full support of at least three layers of management, and I do know that many of the changes were only possible thanks to the hard work of the other engineers that worked on the Sun-4M (4/600 series) port. While there was heroic effort involved, it was not the result of the effort of a lone hero. The release was (or was supposed to be, and I remember it as) "SunOS 4.1.3 u1" because we were told in no uncertain terms that there would be no release called "SunOS 4.1.4" but it was OK to send out an update release rolling up patches previously sent. I was *never* told why, which only made me (and my management chain) push harder. There were enough changes to warrant U1, U2, and U3 releases; I know U1 went out the door, and I know that U3 was ready for release when I departed, I don't recall whether U2 made it out the door or not. I do not recall the method we used to triage the changes into three releases. There was really no explicit "try to make SunOS 4 scale up on SMP machines" in this code -- in fact, for many common workloads, things scaled surprisingly well. The NFS crew in particular indicated they were quite happy with our scaling, but I would defer to Neal Nuckolls on that score.
The purpose of U1 and subsequent updates was to bring a number of kernel bug fixes back into the mainline sources (um, maybe some of these fixes improved scaling, but it was not the basis for the release). Non-historical observation ... the interesting thing about the paper Larry linked, for me, is that it exactly describes the huge sucking black hole that made Linux (or something very much like it) inevitable. It is no coincidence that the same passion we found at Sun working on SunOS, we also find in the community of developers working on the Linux kernel. I always wondered how many of Sun's Kernel Hackers found their path there. -- Nevin ":-)" Liber +1-847-691-1404 -------------- next part -------------- An HTML attachment was scrubbed... URL: From imp at bsdimp.com Thu Jan 5 05:05:48 2017 From: imp at bsdimp.com (Warner Losh) Date: Wed, 4 Jan 2017 12:05:48 -0700 Subject: [TUHS] the guy who brought up SVr4 on Sun machines In-Reply-To: References: <20170104024127.GN12264@mcvoy.com> <20170104033512.GA22116@mcvoy.com> Message-ID: On Wed, Jan 4, 2017 at 11:56 AM, Nevin Liber wrote: > The release was (or was supposed to be, and I remember it as) "SunOS 4.1.3 > u1" because we were told on no uncertain terms that there would be no > release called "SunOS 4.1.4" but it was OK to send out an update release > rolling up patches previously sent. I was *never* told why, which only made > me (and my management chain) push harder. There were enough changes to > warrant U1, U2, and U3 releases; I know U1 went out the door, and I know > that U3 was ready for release when I departed, I don't recall whether U2 > made it out the door or not. I do not recall the method we used to triage > the changes into three releases. I don't think so. Solbourne had a OS/MP based on 4.1.3 (I think OS/MP 4.1C) and another based on 4.1.3u1 (OS/MP 4.1D), but there was never an OS/MP 4.1E. 
> There was really no explicit "try to make SunOS 4 scale up on SMP machines" > in this code -- in fact, for many common workloads, things scaled > surprisingly well. The NFS crew in particular indicated they were quite > happy with our scaling, but I would defer to Neal Nuckolls on that score. > The purpose of U1 and subsequent updates was to bring a number of kernel bug > fixes back into the mainline sources (um, maybe some of these fixes improved > scaling, but it was not the basis for the release). The only group I'm aware of was the Solbourne Kernel team that produced an ASMP version based on 4.0 and hired David Barak to make it SMP for the OS/MP 4.1 based on SunOS 4.1. It scaled to about 16 CPUs, IIRC, based on Solbourne's own MP designs. I worked at Solbourne at the time on the other interesting technology to come out of Solbourne (the OI GUI toolkit, which has become at best a historical footnote). Warner From clemc at ccc.com Thu Jan 5 06:00:58 2017 From: clemc at ccc.com (Clem Cole) Date: Wed, 4 Jan 2017 15:00:58 -0500 Subject: [TUHS] the guy who brought up SVr4 on Sun machines In-Reply-To: References: <20170104024127.GN12264@mcvoy.com> <20170104033512.GA22116@mcvoy.com> Message-ID: On Wed, Jan 4, 2017 at 1:56 PM, Nevin Liber wrote: > Non-historical observation ... the interesting thing about the paper Larry > linked, for me, is that it exactly describes the huge sucking black hole > that made Linux (or something very much like it) inevitable. Brother, I think you have that right, although I believe it can be said of a number of the better early OS teams from those days. > It is no coincidence that the same passion we found at Sun working > on SunOS, we also find in the community of developers working on the Linux > kernel. +1 for Masscomp, SGI and Apollo > I always wondered how many of Sun's Kernel Hackers found their path there.
From my alumni lists of the teams I was part of, many of us are happily hacking away in the FOSS community although, as other responsibilities have come to my life - less and less time for some of it. Clem -------------- next part -------------- An HTML attachment was scrubbed... URL: From wkt at tuhs.org Thu Jan 5 06:17:21 2017 From: wkt at tuhs.org (Warren Toomey) Date: Thu, 5 Jan 2017 06:17:21 +1000 Subject: [TUHS] OK, time to step back from the keyboard (for a bit) Message-ID: <20170104201721.GA13719@minnie.tuhs.org> Goodness, I go to sleep, wake up 8 hours later and there's 50 messages in the TUHS mailing list. Some of these do relate to the history of Unix, but some are getting quite off-topic. So, can I get you all to just pause before you send in a reply and ask: is this really relevant to the history of Unix, and does it contribute in a meaningful way to the conversation. Looks like we lost Armando, that's a real shame. Cheers, Warren From kayparker at mailite.com Thu Jan 5 06:59:27 2017 From: kayparker at mailite.com (=?utf-8?Q?Kay=20Parker=20=09=20?=) Date: Wed, 04 Jan 2017 12:59:27 -0800 Subject: [TUHS] OK, time to step back from the keyboard (for a bit) In-Reply-To: <20170104201721.GA13719@minnie.tuhs.org> References: <20170104201721.GA13719@minnie.tuhs.org> Message-ID: <1483563567.821474.837517801.28E855B3@webmail.messagingengine.com> I'm really happy that there is some traffic in the list, which there wasn't when I joined a month ago or so. And yes, I didn't read any BS messages. All are/were really relevant to the history of Unix! Keep on posting that good stuff! On Wed, Jan 4, 2017, at 12:17 PM, Warren Toomey wrote: > Goodness, I go to sleep, wake up 8 hours later and there's 50 messages in > the TUHS mailing list. Some of these do relate to the history of Unix, > but > some are getting quite off-topic.
> > So, can I get you all to just pause before you send in a reply and ask: > is this really relevant to the history of Unix, and does it contribute > in a meaningful way to the conversation. > > Looks like we lost Armando, that's a real shame. > > Cheers, Warren -- Kay Parker kayparker at mailite.com -- http://www.fastmail.com - mmm... Fastmail... From rminnich at gmail.com Thu Jan 5 07:24:37 2017 From: rminnich at gmail.com (ron minnich) Date: Wed, 04 Jan 2017 21:24:37 +0000 Subject: [TUHS] lost ports Message-ID: So there are a few ports I know of that I wonder if they ever made it back into that great github repo. I don't think they did.

harris
gould
That weird BBN 20-bit machine
(20 bits? true story: 5 4-bit modules fit in a 19" rack. So 20 bits)
Alpha port (Tru64)
Precision Architecture
Unix port to Cray vector machines

others? What's the list of "lost machines" look like? Would companies consider a donation, do you think? If that Cray port is of any interest I have a thread I can push on maybe. but another true story: I visited DEC in 2000 or so, as LANL was about to spend about $120M on an Alpha system. The question came up about the SRM firmware for Alpha. As it was described to me, it was written in BLISS and the only machine left that could build it was an 11/750, "somewhere in the basement, man, we haven't turned that thing on in years". I suspect there's a lot of these containing oxide oersteds of interest. ron -------------- next part -------------- An HTML attachment was scrubbed...
URL: From brad at anduin.eldar.org Thu Jan 5 07:20:40 2017 From: brad at anduin.eldar.org (Brad Spencer) Date: Wed, 04 Jan 2017 16:20:40 -0500 Subject: [TUHS] the guy who brought up SVr4 on Sun machines In-Reply-To: (message from Nevin Liber on Wed, 4 Jan 2017 12:56:03 -0600) Message-ID: Nevin Liber writes: [snip] > The release was (or was supposed to be, and I remember it as) "SunOS 4.1.3 > u1" because we were told on no uncertain terms that there would be no > release called "SunOS 4.1.4" but it was OK to send out an update release > rolling up patches previously sent. I was *never* told why, which only made > me (and my management chain) push harder. There were enough changes to > warrant U1, U2, and U3 releases; I know U1 went out the door, and I know > that U3 was ready for release when I departed, I don't recall whether U2 > made it out the door or not. I do not recall the method we used to triage > the changes into three releases. [snip] A small addition.. there was a 4.1.4, a.k.a. Solaris 1.1.2 [I think] that made it out the door [Google around for references]. I remember having the cd and installing it. I also remember 4.1.3_U1, but not U2 or U3, but I wouldn't be suprised that they existed. I was at AT&T at the time and the group I was in resisted going to Solaris 2.x for as long as we could. We were mostly interested in desktop and small server stuff, so SMP need not apply and Solaris 2.x where x <= 4 was painful. I remember the Sparc 5 Model 170 with SunOS 4.x which ran, for us, just as well as the Ultra 1. Fun times.... -- Brad Spencer - brad at anduin.eldar.org - KC8VKS http://anduin.eldar.org - & - http://anduin.ipv6.eldar.org [IPv6 only] From ron at ronnatalie.com Thu Jan 5 07:41:07 2017 From: ron at ronnatalie.com (Ron Natalie) Date: Wed, 4 Jan 2017 16:41:07 -0500 Subject: [TUHS] lost ports In-Reply-To: References: Message-ID: <01c001d266d3$42294820$c67bd860$@ronnatalie.com> I worked a lot on the Gould SEL machines. 
I believe they co-opted George Goble from Purdue and some of his gang to do a lot of the initial OS work as the machines were dual processors and George had done the multiprocessor kernel for his “dual VAX” hack. We met with the project leading vice president, Jim Clark, when they were planning this and really drove them towards a BSD based kernel. His eyes lit up when we told him of Doug Gwyn’s SV on BSD dist which sealed the deal. Amusingly, the SEL UNIX didn’t put a memory page at location zero by default. This should have been fine. In the PDP-11 kernel, location zero usually held the first few instructions of the program (notably a setd instruction and a few others that would cause printf(“%s”, 0) to print p&P6). The VAX BSD kernel put a zero at location zero which allowed all sorts of bugs to hide. We didn’t really mind the SEL behavior until we found a few programs that we didn’t have source code for crashing (notably Oracle). We had to put a hack in that if the a.out had a non-zero value in one of the unused fields it would put it into “braindamaged VAX compatibility mode” mapping a zero at zero. This allowed us to poke the afflicted binaries. Years later a friend of mine was saying…here’s something you don’t see every day…a black computer company VP. I pointed out that I had worked with Jim Clark at Gould so there must be at least two. Turns out the article he was reading was about Jim joining AT&T. He’s still around somewhere (he’s on the board of the EAA right now). He might be a good guy to invite to the list. The BBN C machines were indeed potentially 20 bits. They were designed to be a generic hardware emulator, specifically to replace the Honeywell 516s that were being used for IMPS and TIPS at the time. They then sold someone (DARPA I suspect) the idea that they could write an instruction set that would be ideal for the C language and UNIX. I’m pretty sure that it was only doing 16 bit operations rather than 20.
If I recall properly the systems were kind of klunky in practice. The Army had a few of them around. I never heard the 5 4-bit modules fit into a rack. The thing was pretty monolithic looking (about 3’ of 19” rack) and not modular at all. I did kernel work on the PA for HP also worked on their X server (did a few other X servers over the years). The hard part would be finding anybody from these companies who could even remember they made computers let alone had UNIX software. -------------- next part -------------- An HTML attachment was scrubbed... URL: From pechter at gmail.com Thu Jan 5 07:57:12 2017 From: pechter at gmail.com (William Pechter) Date: Wed, 4 Jan 2017 16:57:12 -0500 Subject: [TUHS] the guy who brought up SVr4 on Sun machines In-Reply-To: References: Message-ID: <736bf64a-50d6-95d5-d3de-9449735f7909@gmail.com> Brad Spencer wrote: > Nevin Liber writes: > > [snip] > >> The release was (or was supposed to be, and I remember it as) "SunOS 4.1.3 >> u1" because we were told on no uncertain terms that there would be no >> release called "SunOS 4.1.4" but it was OK to send out an update release >> rolling up patches previously sent. I was *never* told why, which only made >> me (and my management chain) push harder. There were enough changes to >> warrant U1, U2, and U3 releases; I know U1 went out the door, and I know >> that U3 was ready for release when I departed, I don't recall whether U2 >> made it out the door or not. I do not recall the method we used to triage >> the changes into three releases. > [snip] > > A small addition.. there was a 4.1.4, a.k.a. Solaris 1.1.2 [I think] > that made it out the door [Google around for references]. I remember > having the cd and installing it. I also remember 4.1.3_U1, but not U2 > or U3, but I wouldn't be suprised that they existed. I was at AT&T at > the time and the group I was in resisted going to Solaris 2.x for as > long as we could. 
We were mostly interested in desktop and small server > stuff, so SMP need not apply and Solaris 2.x where x <= 4 was painful. > I remember the Sparc 5 Model 170 with SunOS 4.x which ran, for us, just > as well as the Ultra 1. Fun times.... I was keeping a group at Lucent running on some creaky Sparcstation2's used as the department servers (we probably had the worst collection of hardware in Lucent, but it was paid for and fully depreciated...) I had to do the Y2k patches on the boxes and the corporate types were pushing us to do weekly updates. Patches dribbled out over a couple of months requiring repeated passes over the hardware in a maintenance window. Ugh. I wrote a patch script and update cd and tape and applied the tar to all the Sun machines the night before y2k -- avoiding the repeated patching they (corporate IT) were requiring. Of course most departments just dumped the old hardware and updated to non-antiques. The windows boxes were another nightmare. After doing all of this I heard from an AT&T Manager that Lucent had a product update tape they sold to AT&T that did all the Y2k patches for one of their Sparcstation2 products. Too bad this wasn't available in-house. Anyone archive the y2k patches for SunOS 4.1.3_U1 and 4.1.4. I lost my fixes in a move and I'd like to bring a couple of the Sparcstations up again for fun. I really liked that OS and hated patching the libc for the resolver+ fixes. I think the main patches were for the date command, ms macros and some stuff like diag reporting. Realistically the Unix boxes were the easiest to deal with. Most y2k stuff was in application programs. If I still have the Sparc2 booting in 2038 it will be interesting. 
As I recall, it was an ioctl that just fed me back everything I needed, it wasn't a kmem driver at all, this was my first real job after grad school and I had no idea how to write a driver :) On Wed, Jan 04, 2017 at 09:24:37PM +0000, ron minnich wrote: > So there are a few ports I know of that I wonder if they ever made it back > into that great github repo.I don't think they did. > > harris > gould > That weird BBN 20-bit machine > (20 bits? true story: 5 4-bit modules fit in a 19" rack. So 20 bits) > Alpha port (Tru64) > Precision Architecture > Unix port to Cray vector machines > > others? What's the list of "lost machines" look like? Would companies > consider a donation, do you think? > > If that Cray port is of any interest I have a thread I can push on maybe. > > but another true story: I visited DEC in 2000 or so, as LANL was about to > spend about $120M on an Alpha system. The question came up about the SRM > firmware for Alpha. As it was described to me, it was written in BLISS and > the only machine left that could build it was an 11/750, "somewhere in the > basement, man, we haven't turned that thing on in years". I suspect there's > a lot of these containing oxide oersteds of interest. > > ron -- --- Larry McVoy lm at mcvoy.com http://www.mcvoy.com/lm From rmswierczek at gmail.com Thu Jan 5 08:01:31 2017 From: rmswierczek at gmail.com (Robert Swierczek) Date: Wed, 4 Jan 2017 17:01:31 -0500 Subject: [TUHS] lost ports In-Reply-To: References: Message-ID: I have run into interesting Unix code on the bitsavers site under bits/Unisoft and bits/SGI. In addition, archive.org has some interesting snapshots as well. I have always assumed they may be tainted in some way that discourages further exposure? Do these sites have some kind of "library" status that provides cover to host these artifacts? 
From crossd at gmail.com Thu Jan 5 08:01:39 2017 From: crossd at gmail.com (Dan Cross) Date: Wed, 4 Jan 2017 17:01:39 -0500 Subject: [TUHS] lost ports In-Reply-To: References: Message-ID: Along those lines.... I once heard about a paper that was presented at some conference titled something along the lines of, "My Goodness: It Still Runs?!". The topic was some sort of early version of Unix running on some ancient piece of hardware doing some sort of industrial control. When I heard about it, a notable part of the paper was a mention that it was believed they had removed all bugs from the implementation. Not quite a lost version of Unix, but almost a lost+found version. Has anyone else heard of this paper? Perhaps it is apocryphal? I've always wanted to read it, but never found a copy "in the wild." - Dan C. On Wed, Jan 4, 2017 at 4:24 PM, ron minnich wrote: > So there are a few ports I know of that I wonder if they ever made it back > into that great github repo.I don't think they did. > > harris > gould > That weird BBN 20-bit machine > (20 bits? true story: 5 4-bit modules fit in a 19" rack. So 20 bits) > Alpha port (Tru64) > Precision Architecture > Unix port to Cray vector machines > > others? What's the list of "lost machines" look like? Would companies > consider a donation, do you think? > > If that Cray port is of any interest I have a thread I can push on maybe. > > but another true story: I visited DEC in 2000 or so, as LANL was about to > spend about $120M on an Alpha system. The question came up about the SRM > firmware for Alpha. As it was described to me, it was written in BLISS and > the only machine left that could build it was an 11/750, "somewhere in the > basement, man, we haven't turned that thing on in years". I suspect there's > a lot of these containing oxide oersteds of interest. > > ron > > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From imp at bsdimp.com Thu Jan 5 09:13:58 2017 From: imp at bsdimp.com (Warner Losh) Date: Wed, 4 Jan 2017 16:13:58 -0700 Subject: [TUHS] Leap Second In-Reply-To: <20170104133202.VIWUz5j-a%steffen@sdaoden.eu> References: <20161229002105.GB94858@server.rulingia.com> <0d5eeef9-3dbb-0ddd-1b22-51fecee735d8@gmail.com> <586bc353.tOFm/S0IGecYYlh6%schily@schily.net> <20170104133202.VIWUz5j-a%steffen@sdaoden.eu> Message-ID: On Wed, Jan 4, 2017 at 6:32 AM, Steffen Nurpmeso wrote: > schily at schily.net (Joerg Schilling) wrote: > |Tony Finch wrote: > |> sds wrote: > |>> Important question: did anybody have an "exciting" new year because \ > |>> of a leap > |>> second bug? > |> > |> I've been collecting failure reports on the LEAPSECS list > | > |https://blog.cloudflare.com/how-and-why-the-leap-second-affected-cloudflare\ > |-dns/ > | > |"go" seems to have a related bug. > | > |BTW: The POSIX standard intentionally does not include leap seconds \ > |in the UNIX > |time interface as it seems that this would cause more problems than \ > |it claims > |to fix. > > I think it is a problem, or better a gap, a void, with the current > standard that software has no option to become informed of the > event of a leap second for one, but further more that CLOCK_TAI is > not available. And even if it was, nobody would use it. It's not used in legacy code, and the subtle differences between the different CLOCK_xxx aren't well enough documented for programmers to get it right. And even if it were, the issue is a lot more subtle than that. If you use CLOCK_TAI, then if the system has the proper TAI offset to UTC, calling things like timegm will produce a time that's 40s different than the current UTC time if you aren't also running the proper "right" timezone files, and people will think your code is buggy.
But if you get a UTC time, then you have an ambiguous encoding of the leap second (though CLOCK_UTC, where implemented, tries to cope with that by having a denormalized ts_nsec field). It's a big can of worms since most programmers expect time to be a uniform radix, and UTC transforms time of day into a non-uniform radix on an unpredictable timetable. But that's starting to get far afield for the historical unix group... > I think it would make things easier if software > which wants just that can get it, e.g., for periodic timer events > etc. CLOCK_MONOTONIC already exists for these things, and programmers still screw it up :( > This is surely not a healing given that most timestamps etc. > are based on UTC, but i think the severity of the problems could > possibly be lowered. Especially now that multi-hour smears seem > to become used by big companies it seems to be important to have > a correct clock available. This is in fact something i don't > really understand, at _that_ level that is to say. If, e.g., > Google and Bloomberg both would have stated instead that they > slew the leap second, then only a single second would have been > affected, instead of multiple hours. You can't just slew the one second. It introduces too large of a frequency error in the time base. ntpd will view it as a large error and freak out. Programs that want to sleep for 100ms will wind up sleeping for 200ms instead, which could be a big problem. With the slew over several hours, programs wind up sleeping for 100.01ms instead, which is down in the noise of the error you get from a sleep.
Google is trading a small phase error and frequency error against the real UTC timestamp to maintain a well-defined monotonically increasing time series with no repeating seconds as its method of coping with POSIX's deliberate decision to not define what happens over a leap second, provide no encoding for a leap second and generally specifies an interface in which it is nearly impossible to get the leap second pedantically correct. Makes one question whether leap seconds are a good idea or not, but that's a political discussion for another group. Warner From dave at horsfall.org Thu Jan 5 10:36:27 2017 From: dave at horsfall.org (Dave Horsfall) Date: Thu, 5 Jan 2017 11:36:27 +1100 (EST) Subject: [TUHS] the guy who brought up SVr4 on Sun machines In-Reply-To: <20170104033512.GA22116@mcvoy.com> References: <20170104024127.GN12264@mcvoy.com> <20170104033512.GA22116@mcvoy.com> Message-ID: On Tue, 3 Jan 2017, Larry McVoy wrote: > I really wonder what the world would look like right now if Sun had open > sourced SunOS 4.x and put energy behind it. [...] My guess is it would be a lot like FreeBSD. -- Dave Horsfall DTM (VK2KFU) "Those who don't understand security will suffer." From lm at mcvoy.com Thu Jan 5 10:43:53 2017 From: lm at mcvoy.com (Larry McVoy) Date: Wed, 4 Jan 2017 16:43:53 -0800 Subject: [TUHS] the guy who brought up SVr4 on Sun machines In-Reply-To: References: <20170104024127.GN12264@mcvoy.com> <20170104033512.GA22116@mcvoy.com> Message-ID: <20170105004353.GB6931@mcvoy.com> On Thu, Jan 05, 2017 at 11:36:27AM +1100, Dave Horsfall wrote: > On Tue, 3 Jan 2017, Larry McVoy wrote: > > > I really wonder what the world would look like right now if Sun had open > > sourced SunOS 4.x and put energy behind it. [...] > > My guess is it would be a lot like FreeBSD. Not really. FreeBSD is open source and was way way behind SunOS. That's why so many Sun engineers were hugely butthurt when they were forced onto a far inferior System V source base. 
Scooter just didn't understand how much polish had gone into SunOS 4.x. It was a very talented group of engineers, many of whom had no social life (he says looking in the mirror :) so they poured all their energy into making SunOS great. I'm biased because I worked there, but I've run code on all of the major Unix offerings (AIX, IRIX, Ultrix, HP-UX and SunOS) and SunOS was hands down a better experience, inside the kernel, as a user of the syscalls, and in user space. It's what a geeky engineer would want with the polish needed to allow customers to have a good experience. I think a free SunOS would have had a cult following. I'd still be working on it. Sun's management just didn't realize what they were throwing away. Whimper. From pechter at gmail.com Thu Jan 5 10:50:27 2017 From: pechter at gmail.com (William Pechter) Date: Wed, 4 Jan 2017 19:50:27 -0500 Subject: [TUHS] the guy who brought up SVr4 on Sun machines In-Reply-To: <20170105004353.GB6931@mcvoy.com> References: <20170104024127.GN12264@mcvoy.com> <20170104033512.GA22116@mcvoy.com> <20170105004353.GB6931@mcvoy.com> Message-ID: <4c14e37a-f959-d625-b877-f498a644415c@gmail.com> Larry McVoy wrote: > On Thu, Jan 05, 2017 at 11:36:27AM +1100, Dave Horsfall wrote: >> On Tue, 3 Jan 2017, Larry McVoy wrote: >> >>> I really wonder what the world would look like right now if Sun had open >>> sourced SunOS 4.x and put energy behind it. [...] >> My guess is it would be a lot like FreeBSD. > Not really. FreeBSD is open source and was way way behind SunOS. That's > why so many Sun engineers were hugely butthurt when they were forced onto > a far inferior System V source base. Scooter just didn't understand > how much polish had gone into SunOS 4.x. It was a very talented group > of engineers, many of whom had no social life (he says looking in the > mirror :) so they poured all their energy into making SunOS great.
> > I'm biased because I worked there, but I've run code on all of the major > Unix offerings (AIX, IRIX, Ultrix, HP-UX and SunOS) and SunOS was hands > down a better experience, inside the kernel, as a user of the syscalls, > and in user space. It's what a geeky engineer would want with the polish > needed to allow customers to have a good experience. > > I think a free SunOS would have had a cult following. I'd still be working > on it. Sun's management just didn't realize what they were throwing away. > > Whimper. Where would the current FreeBSD be if you compared it with SunOS4? Bill From dave at horsfall.org Thu Jan 5 10:51:22 2017 From: dave at horsfall.org (Dave Horsfall) Date: Thu, 5 Jan 2017 11:51:22 +1100 (EST) Subject: [TUHS] Pipes in the Third Edition Unix In-Reply-To: <20170103220407.GA29268@minnie.tuhs.org> References: <20170103215310.GA26242@minnie.tuhs.org> <20170103220407.GA29268@minnie.tuhs.org> Message-ID: On Wed, 4 Jan 2017, Warren Toomey wrote: > Interestingly, the pipe manpage says: > SYNOPSIS sys pipe / pipe = 42.; not in assembler > > and I don't quite understand the comment :-) Other manpages with > the same comment are boot(2), csw(2), fpe(2), kill(2), rele(2), sleep(2), > sync(2) and times(2). So it's not particular to pipe(2). > > Can anybody help explain the "not in assembler" comment? As I recall, it means that those symbols are not recognised by the assembler, so had to be defined by hand. -- Dave Horsfall DTM (VK2KFU) "Those who don't understand security will suffer."
From lm at mcvoy.com Thu Jan 5 11:01:49 2017 From: lm at mcvoy.com (Larry McVoy) Date: Wed, 4 Jan 2017 17:01:49 -0800 Subject: [TUHS] the guy who brought up SVr4 on Sun machines In-Reply-To: <4c14e37a-f959-d625-b877-f498a644415c@gmail.com> References: <20170104024127.GN12264@mcvoy.com> <20170104033512.GA22116@mcvoy.com> <20170105004353.GB6931@mcvoy.com> <4c14e37a-f959-d625-b877-f498a644415c@gmail.com> Message-ID: <20170105010148.GC6931@mcvoy.com> On Wed, Jan 04, 2017 at 07:50:27PM -0500, William Pechter wrote: > Where would the current FreeBSD be if you compared it with SunOS4? That's a good, and hard question. One of the nice things about SunOS4 was the VM system and the VFS layer and the VNODE layer. Those were really well thought out. They all, so far as I know, were Bill Joy dreams, but Steve Kleiman was the primary driver of the vnode design and I think Joe Moran was the main coder of all of that. It's one of those things that people copy but don't get right. I think Linux got closer than FreeBSD did. I haven't dug into the FreeBSD kernel in years so who knows, maybe it is fantastic. When I last looked it was lagging way behind SunOS (which isn't fair, Sun was a business and as such had buildings full of motivated people who were making it better. There was a building with just networking people in, we're talking a two story building with I dunno, ~100 offices). They threw more resources at it than FreeBSD has ever had. If you took the ~1992 SunOS and stacked it up against the 2016 FreeBSD, well I would hope that FreeBSD would be better but I wouldn't bet on it across the board. It would certainly have more drivers (and if we're being honest, that's 99% of the work, all this generic kernel stuff is super fun to talk about but all the real coding is in the drivers). I think the more interesting question is would {Free,Net,Open}BSD even exist if there had been a Free SunOS. I'm 100% convinced the answer to that is a resounding no.
From cym224 at gmail.com Thu Jan 5 11:30:48 2017 From: cym224 at gmail.com (Nemo) Date: Wed, 4 Jan 2017 20:30:48 -0500 Subject: [TUHS] What sparked lint? [Was: Unix stories] Message-ID: On 4 January 2017 at 13:51, Steve Johnson wrote (in part): > These rules provided rich fodder for Lint, when it came along, [...] All this lint talk caused me to reread your Lint article but no history there. Was there a specific incident that begat lint? N. From lm at mcvoy.com Thu Jan 5 11:39:13 2017 From: lm at mcvoy.com (Larry McVoy) Date: Wed, 4 Jan 2017 17:39:13 -0800 Subject: [TUHS] What sparked lint? [Was: Unix stories] In-Reply-To: References: Message-ID: <20170105013907.GE6931@mcvoy.com> On Wed, Jan 04, 2017 at 08:30:48PM -0500, Nemo wrote: > On 4 January 2017 at 13:51, Steve Johnson wrote (in part): > > These rules provided rich fodder for Lint, when it came along, [...] > > All this lint talk caused me to reread your Lint article but no > history there. Was there a specific incident that begat lint? That would be cool to know. Another thing I wish they had put in was the ability to print the type (underlying type, not the typedef) names. My first job was porting /usr/src/cmd to the ETA-10 which had bit pointers. Yup, hardware pointers pointed at bits, not bytes. The C compiler people had to decide what to do if you took an int and cast it into a C pointer. They chose to shift it from bit to byte. So an int could contain an address, but it contained a bit address. Consider this code:

    foo(size)
    {
            char *p = (char*)malloc(size);  /* little white lie? uh-uh. */
            ....
    }

If there is no #include then the compiler thought that malloc returned an int. The conversion caused a 3 bit shift when it shouldn't. I whacked lint to print out the type names and any time I saw a ptr/int type mismatch I just #included all the header files like malloc.h et al. Turned a 6 month job into a 2 week job. I believe at some point System V made the same changes.
I'm sort of sure that I sent my diffs to Dennis, not positive. They may have come up with the same idea on their own. Lint is a great tool. From lm at mcvoy.com Thu Jan 5 11:52:04 2017 From: lm at mcvoy.com (Larry McVoy) Date: Wed, 4 Jan 2017 17:52:04 -0800 Subject: [TUHS] lost ports In-Reply-To: <20170104215839.GA6931@mcvoy.com> References: <20170104215839.GA6931@mcvoy.com> Message-ID: <20170105015204.GA31939@mcvoy.com> On Wed, Jan 04, 2017 at 01:58:39PM -0800, Larry McVoy wrote: > I worked on the ETA-10 (CDC spinoff, Neil Lincoln was the architect): > > https://en.wikipedia.org/wiki/ETA10 > > No idea if the code is still around, I would guess it's lost. Be fun > if it showed up, I wrote a kmem "driver" so I could get my own version > of ps(1) to run. As I recall, it was an ioctl that just fed me back > everything I needed, it wasn't a kmem driver at all, this was my first > real job after grad school and I had no idea how to write a driver :) I am apparently wrong, this has made me go through my notes. I actually wrote some drivers for the ETA project (I'm sure by copying other ones and hacking them). The ETA had local memory and then a big pool of shared memory but it was bcopy() shared. I wrote the driver that let you access the shared memory. The proc/kmem thing was driven by the fact that nlist was slow. So I wrote a driver that gave you the process table. Pretty simple. There was something called an Ibis disk, I didn't do the controller part, I did the Unix part that talked to the controller. My notes say "I wrote the Unix side of the driver which involved mapping the IOU into shared memory. Both interrupt and polled versions were used, currently the polled is used due to interrupt problems with I/O channels." All that said, I'm not a driver person, I suck at that.
From wes.parish at paradise.net.nz Thu Jan 5 12:26:47 2017 From: wes.parish at paradise.net.nz (Wesley Parish) Date: Thu, 05 Jan 2017 15:26:47 +1300 (NZDT) Subject: [TUHS] the guy who brought up SVr4 on Sun machines In-Reply-To: <20170104171033.GC3405@mcvoy.com> References: <20170104024127.GN12264@mcvoy.com> <20170104033512.GA22116@mcvoy.com> <586d234d.vf4JCu1Ye3gumwfc%schily@schily.net> <20170104164630.GA3405@mcvoy.com> <586d2abb.A5j4GovJtyzlD+AQ%schily@schily.net> <20170104171033.GC3405@mcvoy.com> Message-ID: <1483583207.586daee72f6f2@www.paradise.net.nz> Quoting Larry McVoy : > On Wed, Jan 04, 2017 at 06:02:51PM +0100, Joerg Schilling wrote: > > I have access to both SunOS-4.x and Solaris sources and it is obvious > that the > > I'm not sure how you have legal access to the SunOS 4.x code. I'd love > a > copy of that source but so far as I know it's locked up. > Seconded. I'd love to get my hands on it as well. When I let my fascination with computers derail my BA(Classics) in 1991, I was told the OS choices included Unix - BSD naturally - but only if I could pony up a king's ransom to pay for an unused AT&T license. I wanted Sun because of the cool factor, but couldn't afford a workstation so I went with a PC, and the cost of downloading Linux or 386BSD correlated to the cost of floppies, and Linux was marginally cheaper, so Linux won. My copy of SLS is now on the Bochs images page. I found a Sun 68000 pizza box at a computer recycling place in 2002, but the cool had evaporated by then. But I'd still like to get my hands on the SunOS source. Who should we contact? Wesley Parish "I have supposed that he who buys a Method means to learn it." - Ferdinand Sor, Method for Guitar "A verbal contract isn't worth the paper it's written on." 
-- Samuel Goldwyn From wes.parish at paradise.net.nz Thu Jan 5 13:00:06 2017 From: wes.parish at paradise.net.nz (Wesley Parish) Date: Thu, 05 Jan 2017 16:00:06 +1300 (NZDT) Subject: [TUHS] the guy who brought up SVr4 on Sun machines In-Reply-To: <20170105010148.GC6931@mcvoy.com> References: <20170104024127.GN12264@mcvoy.com> <20170104033512.GA22116@mcvoy.com> <20170105004353.GB6931@mcvoy.com> <4c14e37a-f959-d625-b877-f498a644415c@gmail.com> <20170105010148.GC6931@mcvoy.com> Message-ID: <1483585206.586db6b64f706@www.paradise.net.nz> Quoting Larry McVoy : > On Wed, Jan 04, 2017 at 07:50:27PM -0500, William Pechter wrote: > > Where would the current FreeBSD be if you compared it with SunOS4? > > That's a good, and hard question. One of the nice things about SunOS4 > was the VM system and the VFS layer and the VNODE layer. Those were > really well thought out. They all, so far as I know, were Bill Joy > dreams, but Steve Kleiman was the primary driver of the vnode design > but I think Joe Moran was the main coder of all of that. It's one of > those things that people copy but don't get right. I think Linux got > closer than FreeBSD did. > > I haven't dug into the FreeBSD kernel in years so who knows, maybe > it is fantastic. When I last looked it was lagging way behind SunOS > (which isn't fair, Sun was a business and as such had buildings full of > motivated people who were making it better. There was a building with > just networking people in, we're talking a two story building with I > dunno, ~100 offices). They threw more resources at it that FreeBSD has > ever had. > > If you took the ~1992 SunOS and stacked it up against the 2016 FreeBSD, > well I would hope that FreeBSD would be better but I wouldn't bet on it > across the board. It would certainly have more drivers (and if we're > being honest, that's 99% of the work, all this generic kernel stuff > is super fun to talk about but all the real coding is in the drivers). 
> > I think the more interesting question is would {Free,Net,Open}BSD even > exist if there had been a Free SunOS. I'm 100% convinced the answer > to that is a resounding no. > My understanding which was that of an interested layman in 1991 and just bitten by the bug, and based upon the comments of some of the computer science staff of the U of Canterbury, NZ, at that time, is that 386BSD held everybody's attention. (I mentioned in 1992 reading about Linux in a computer mag to one of them and he told me 386BSD was where the action was.) i80386 PCs were relatively cheap, BSD was (relatively) free from AT&T's legal claims, and 386BSD was even freer and targeted that cheap powerhorse. My guess is that if Sun had spun off a Free SunOS, it would've been ported to the 386. What would've happened then is anyone's guess. Wesley Parish "I have supposed that he who buys a Method means to learn it." - Ferdinand Sor, Method for Guitar "A verbal contract isn't worth the paper it's written on." -- Samuel Goldwyn From lm at mcvoy.com Thu Jan 5 13:13:29 2017 From: lm at mcvoy.com (Larry McVoy) Date: Wed, 4 Jan 2017 19:13:29 -0800 Subject: [TUHS] the guy who brought up SVr4 on Sun machines In-Reply-To: <1483585206.586db6b64f706@www.paradise.net.nz> References: <20170104024127.GN12264@mcvoy.com> <20170104033512.GA22116@mcvoy.com> <20170105004353.GB6931@mcvoy.com> <4c14e37a-f959-d625-b877-f498a644415c@gmail.com> <20170105010148.GC6931@mcvoy.com> <1483585206.586db6b64f706@www.paradise.net.nz> Message-ID: <20170105031329.GB32104@mcvoy.com> On Thu, Jan 05, 2017 at 04:00:06PM +1300, Wesley Parish wrote: > My understanding which was that of an interested layman in 1991 and just > bitten by the bug, and based upon the comments of some of the computer > science staff of the U of Canterbury, NZ, at that time, is that 386BSD > held everybody's attention. (I mentioned in 1992 reading about Linux in > a computer mag to one of them and he told me 386BSD was where the action > was.) 
i80386 PCs were relatively cheap, BSD was (relatively) free from > AT&T's legal claims, and 386BSD was even freer and targeted that cheap > powerhorse. My guess is that if Sun had spun off a Free SunOS, it would've > been ported to the 386. What would've happened then is anyone's guess. So I know the 386BSD guy, Bill Jolitz. He worked for me at Sun, I hired him because of, well some Usenix details that are best left untold. He was unfairly hurt by Usenix, that's as much as I'll say. He's a good guy, a little weird, but so am I. He did some great work in 386BSD, it was ahead of Linux. I remember going into Fry's and sticking a 386BSD floppy in to see if it would boot. It usually did. From scj at yaccman.com Thu Jan 5 13:20:55 2017 From: scj at yaccman.com (Steve Johnson) Date: Wed, 04 Jan 2017 19:20:55 -0800 Subject: [TUHS] What sparked lint? [Was: Unix stories] In-Reply-To: Message-ID: <730228a04039dc983eaed5f78a3d817ea80e79bd@webmail.yaccman.com> OK, more history... In 1973, I spent a 9 month Sabbatical at the University of Waterloo in Canada.  When I left, B was the dominant language on Unix -- when I came back, C had taken over.  I had made a B compiler for the Honeywell mainframe, as much to test out Yacc as anything.  When I came back,  an Intern from MIT, Al Snyder, had rewritten my B compilers into C compilers.  He left, and I took back his code.  Dennis' compiler was still the touchstone on Unix, but Al's was working on the Honeywell and there was a lot of interest in making one for OS 360 and also for an internal switching machine.   Al's compiler for the Honeywell still had a fair number of bugs, mostly in the code generation phase (the code to decide what to do when the compiler ran out of registers went from one page to two pages to four pages and there were still bugs).  
Also, about this time I had a fateful discussion with Dennis, in which he said "I think it may be easier to port Unix to a new piece of hardware than to port a complex application from Unix to a new OS" (Most OS's in those days were unique to their hardware, and written in assembler).  Clearly, such a plan required a portable compiler and I agreed to take a stab at it...    I started with the existing compilers and began editing them to make the similar code in the various compilers identical.  I started with the front end--the back-end work seemed to be (and was) rather harder.   The grammar and lexer were fairly easy to clean up, but I really didn't have a good way to test them...  And the type system was new and evolving. At that time, C was still fairly close to B.  For example, there were no function prototypes, so a common source of bugs was to change a function but miss some of the invocations, leading, usually, to crashes.  There were also no header files -- the system structures were printed in the manual (!).   And, 32-bit machines were beginning to come along and looked attractive.   I realized that I could kill several birds with one stone by using my nascent front end to parse C programs and then print out a line for every function call and definition with the function name, location, and argument types.  A bit of Unix magic took these lines and sorted them, and a small program then read the combined file and complained when a function was called or defined inconsistently.  And, as a side effect, I found and fixed a number of front-end bugs. We decided to purchase an Interdata 8/32, a 32-bit machine with an instruction set that looked like a cleaned-up IBM 360, and all of a sudden portability became much more important.  We had to do something about all those structure definitions printed in the manual.
Over the summer, the concept of header files was developed, and Lint at one point required that system calls use the structure definitions in these header files -- a copy from the manual was no longer acceptable.  Also, as the compiler developed, we added additional portability messages and some useful things like flagging expressions that didn't do anything and statements that could not be reached. Lint continued for a long time to share the front end of the portable C compiler. I should mention that I was not the first person to write a program to criticize other programs.  Barbara Ryder at the Labs had written, in FORTRAN, a program called PFORT that would look at FORTRAN programs and flag constructions that would not work on the 6 major FORTRAN compilers in use at the time.  There were a surprising number of differences between these languages, and we were able to write a symbolic algebra system in FORTRAN that ran on all these systems largely because of the effectiveness of PFORT. Finally, the name...  At the time Lint first appeared, I had two young children at home.  This was in the era of cloth diapers, and we were doing a LOT of wash.  One day, as I was cleaning out the lint trap in our dryer, I realized that the program I was writing was performing a similar function, and the name was born.  Since then, I have written a number of Lint-like programs -- I'm proudest of the one I did when working for the MathWorks.  It was integrated into the MATLAB editor, and could give nearly instant feedback when you typed in faulty code. ----- Original Message ----- From: "Nemo" To:"Steve Johnson" Cc:"TUHS main list" Sent:Wed, 4 Jan 2017 20:30:48 -0500 Subject:What sparked lint? [Was: Unix stories] On 4 January 2017 at 13:51, Steve Johnson wrote (in part): > These rules provided rich fodder for Lint, when it came along, [...] All this lint talk caused me to reread your Lint article but no history there. Was there a specific incident that begat lint? N.
-------------- next part -------------- An HTML attachment was scrubbed... URL: From grog at lemis.com Thu Jan 5 13:37:17 2017 From: grog at lemis.com (Greg 'groggy' Lehey) Date: Thu, 5 Jan 2017 14:37:17 +1100 Subject: [TUHS] OK, time to step back from the keyboard (for a bit) In-Reply-To: <20170104201721.GA13719@minnie.tuhs.org> References: <20170104201721.GA13719@minnie.tuhs.org> Message-ID: <20170105033717.GD99823@eureka.lemis.com> On Thursday, 5 January 2017 at 6:17:21 +1000, Warren Toomey wrote: > Goodness, I go to sleep, wake up 8 hours later and there's 50 messages in > the TUHS mailing list. Some of these do relate to the history of Unix, but > some are getting quite off-topic. > > So, can I get you all to just pause before you send in a reply and ask: > is this really relevant to the history of Unix, and does it contribute > in a meaningful way to the conversation. Is this really necessary? There's a certain amount of traffic necessary to keep a list going, and restricting it can have unwanted results. Yes, I agree, there's a lot of traffic at the moment, and I'm having trouble with it too, but it'll subside of its own accord. And who knows, maybe some unexpected gem might pop up. Greg -- Sent from my desktop computer. Finger grog at FreeBSD.org for PGP public key. See complete headers for address and phone numbers. This message is digitally signed. If your Microsoft mail program reports problems, please read http://lemis.com/broken-MUA -------------- next part -------------- A non-text attachment was scrubbed... 
Name: signature.asc Type: application/pgp-signature Size: 163 bytes Desc: not available URL: From mascheck+tuhs at in-ulm.de Thu Jan 5 14:24:03 2017 From: mascheck+tuhs at in-ulm.de (Sven Mascheck) Date: Thu, 5 Jan 2017 05:24:03 +0100 Subject: [TUHS] # as first character of file In-Reply-To: References: Message-ID: <20170105042403.GA652972@lisa.in-ulm.de> On Wed, Jan 04, 2017 at 04:41:06PM +0000, ron minnich wrote: > I just went looking at the v6 source to confirm a memory, namely that cpp > was only invoked if a # was the first character in the file. Hence, this: > https://github.com/dspinellis/unix-history-repo/blob/Research-V6-Snapshot-Development/usr/source/c/c01.c#L1 In v6 cc(1) still does the job itself. Here is where it actually happens, in expand(): https://github.com/dspinellis/unix-history-repo/blob/Research-V6-Snapshot-Development/usr/source/s1/cc.c#L249 > Now I'm curious. Anyone know when that convention ended? In v7 various files still have # as first character, but the requirement has gone, https://github.com/dspinellis/unix-history-repo/tree/Research-V7/usr/src/cmd/cpp

# cat x.h
/* */
#define macro value
macro
# /lib/cpp x.h
# 1 "x.h"
value
#

From akosela at andykosela.com Thu Jan 5 18:12:19 2017 From: akosela at andykosela.com (Andy Kosela) Date: Thu, 5 Jan 2017 02:12:19 -0600 Subject: [TUHS] the guy who brought up SVr4 on Sun machines In-Reply-To: <20170105031329.GB32104@mcvoy.com> References: <20170104024127.GN12264@mcvoy.com> <20170104033512.GA22116@mcvoy.com> <20170105004353.GB6931@mcvoy.com> <4c14e37a-f959-d625-b877-f498a644415c@gmail.com> <20170105010148.GC6931@mcvoy.com> <1483585206.586db6b64f706@www.paradise.net.nz> <20170105031329.GB32104@mcvoy.com> Message-ID: On Wednesday, January 4, 2017, Larry McVoy wrote: > On Thu, Jan 05, 2017 at 04:00:06PM +1300, Wesley Parish wrote: > > My understanding which was that of an interested layman in 1991 and just > > bitten by the bug, and based upon the comments of some of the computer > > science staff
of the U of Canterbury, NZ, at that time, is that 386BSD > > held everybody's attention. (I mentioned in 1992 reading about Linux in > > a computer mag to one of them and he told me 386BSD was where the action > > was.) i80386 PCs were relatively cheap, BSD was (relatively) free from > > AT&T's legal claims, and 386BSD was even freer and targeted that cheap > > powerhorse. My guess is that if Sun had spun off a Free SunOS, it > would've > > been ported to the 386. What would've happened then is anyone's guess. > > So I know the 386BSD guy, Bill Jolitz. He worked for me at Sun, I hired > him because of, well some Usenix details that are best left untold. He > was unfairly hurt by Usenix, that's as much as I'll say. > > He's a good guy, a little weird, but so am I. He did some great work > in 386BSD, it was ahead of Linux. I remember going into Fry's and > sticking a 386BSD floppy in to see if it would boot. It usually did. > It had tons of bugs though. That is why Jordan Hubbard and Rod Grimes started unofficial 386BSD patchkit which transformed into FreeBSD; "unofficial" because Bill Jolitz was very hard to work with, to say the least... --Andy -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From schily at schily.net Thu Jan 5 21:18:45 2017 From: schily at schily.net (Joerg Schilling) Date: Thu, 05 Jan 2017 12:18:45 +0100 Subject: [TUHS] the guy who brought up SVr4 on Sun machines In-Reply-To: References: <20170104033512.GA22116@mcvoy.com> <586d234d.vf4JCu1Ye3gumwfc%schily@schily.net> <586d297d.1Xe1F+MbZU5jlMCH%schily@schily.net> <20170104170635.GB3405@mcvoy.com> <586d2cb8.bX/Qr+lraMBvukp9%schily@schily.net> <20170104171550.GD3405@mcvoy.com> <586d3377.O9F94JXabKYeeaLf%schily@schily.net> <20170104174217.GG3405@mcvoy.com> <586d3556.RxSZPogSIiAqCHBk%schily@schily.net> <586d3fae.uf5FiS568GsBeCKB%schily@schily.net> Message-ID: <586e2b95.qSZFAOk0Wy6HVTlq%schily@schily.net> Dan Cross wrote: > FYI, I watched the video you referred to (my daughter having woken up) and > Simon's comments seem to be in direct contradiction of your earlier > statement. The relevant comments start at around 35:30, and he says that > the Sun engineering community pretty clearly favored a BSD-style license. > He mentions that trying to use the GPL would have pushed out their timeline > by several years. The ``you do that and we quit'' comment (around the 38 > minute mark) was quite clearly in response to using GPL for OpenSolaris. This is strange, I have in mind that Simon said BSD was not wanted by the programmers and GPL was not practical because Sun then could not give away binaries from Closed Source parts from other code owners that are in Sun Solaris but could not be in OpenSolaris. IIRC, the only thing I did get from that video is the confirmation that Simon was extremely unhappy with Danese claiming that Sun did like to have something that is deliberately incompatible to the GPL. My discussion with Andrew Tucker resulted in getting to know that while there have been _some_ people inside Sun that would like to go this way, it was not what Sun as a whole intended.
I also remember that I got my recollection on the BSD thing from a face-to-face discussion with Simon and not from this video. So one of the statements from Simon may have been mistaken. What I definitely remember from a private discussion with Simon is that Danese was very unhappy that Sun did not choose GPL and she never understood why this was not possible for a product like Sun Solaris that had to allow being combined with closed source parts from other vendors, as done before. She left Sun soon after the CDDL was chosen and started to spread her view in a way that unfortunately harmed OpenSolaris. Jörg -- EMail:joerg at schily.net (home) Jörg Schilling D-13353 Berlin joerg.schilling at fokus.fraunhofer.de (work) Blog: http://schily.blogspot.com/ URL: http://cdrecord.org/private/ http://sourceforge.net/projects/schilytools/files/ From schily at schily.net Thu Jan 5 21:50:18 2017 From: schily at schily.net (Joerg Schilling) Date: Thu, 05 Jan 2017 12:50:18 +0100 Subject: [TUHS] the guy who brought up SVr4 on Sun machines In-Reply-To: <20170104184448.GD3006@mcvoy.com> References: <20170104033512.GA22116@mcvoy.com> <586d234d.vf4JCu1Ye3gumwfc%schily@schily.net> <20170104164630.GA3405@mcvoy.com> <586d2abb.A5j4GovJtyzlD+AQ%schily@schily.net> <20170104171033.GC3405@mcvoy.com> <586d334d.XcKOxzKwrzmvL326%schily@schily.net> <20170104175227.GH3405@mcvoy.com> <586d3d90.oAzCBIUMx+CcWar6%schily@schily.net> <20170104184448.GD3006@mcvoy.com> Message-ID: <586e32fa.75dTuXajWSNmdzuM%schily@schily.net> Larry McVoy wrote: > On Wed, Jan 04, 2017 at 07:23:12PM +0100, Joerg Schilling wrote: > > BTW: in order to avoid more misunderstandings, could you mention when you have > > been in the Sun kernel group and what kind of things you did with the kernel? > > Sure. Here's some notes I put together for Eli Lamb when I was thinking > about moving to DEC (to work for Jim Gray). The date on the file is > 1992 so I had been there about 4 years.
I was in the kernel group from > 1988 to about 1992, then moved over to hardware where I did a cluster > based NFS server and LMbench. Then I went to SGI and did a new name > server that could serve all of California on a 200 MHz server, made > NFS serve up files at 60MB/sec per file (we could do as many > streams in parallel as we had network cards). Thank you for the list! > I showed up in October 1988. This is what I can remember that I've done > since I've been here. When I interviewed at DEC, their HR people thought > I was lying and I went through two more interviews before they finally > believed me. > > * Doubled file system throughput. Publication. Generated sales. Talk to > Steve Kleiman for confirmation. So it seems that you worked on code for SunOS-4.x but not on code for SunOS-5.x. With that background I could understand your view. Please note that I had access to the BSD-4.3 sources and SVr2/SVr3 in the university. In addition, I am the author of the Joliet and ISO-9660:1999 support in both UnixWare 7 and Solaris, so I know about the differences between SunOS-5.x and the AT&T based SVr4 as well as I had legal access to the SCO UNIX sources for this and another project. I warned SCO about their filesystem code that would need a lot of attention to work correctly on a 64 bit platform. I have a good overview on the differences and common elements of BSD, SunOS and AT&T based UNIX versions. I worked on code for the SunOS-4.x kernel and for the SunOS-5.x kernel and I ported drivers from SunOS-4.x to SunOS-5.x, so I am pretty sure about what I write and you may have gotten your impression because you did not compare the code we are talking about now. Because you worked on filesystem throughput, you should know the new memory subsystem from SunOS-4.x well.... This is a big part of the SunOS-4.x kernel and if you check the OpenSolaris kernel sources with your knowledge of the SunOS-4.x kernel, you should be able to confirm my statements.
> * Single-handedly implemented POSIX conformance in the 4.x OS. Bullet item > on lots of sales. Talk to Don Cragun for confirmation. Good hint, I'll ask him ;-) Today is the next POSIX teleconference call and he is still in the group of core people. > * Implemented smoosh - basis for Avocet and nselite. Talk to Shannon for > confirmation. Interesting: Do you mean "Bill Shannon"? Was he involved in SCCS or smoosh as well? I know Bill as the author of "cstyle" and I pushed him to make it OSS in 2001 already, before it appeared in OpenSolaris. In January 2015, I talked with Glenn Skinner about SCCS and smoosh and he pointed me to his smoosh patent: http://patentimages.storage.googleapis.com/pdfs/US5481722.pdf that expired in late 2014. I received a lisp prototype implementation for Glenn's idea. Did you write the C implementation? Have you been involved in the .ml prototype as well? Jörg -- EMail:joerg at schily.net (home) Jörg Schilling D-13353 Berlin joerg.schilling at fokus.fraunhofer.de (work) Blog: http://schily.blogspot.com/ URL: http://cdrecord.org/private/ http://sourceforge.net/projects/schilytools/files/ From rudi.j.blom at gmail.com Thu Jan 5 22:28:23 2017 From: rudi.j.blom at gmail.com (Rudi Blom) Date: Thu, 5 Jan 2017 19:28:23 +0700 Subject: [TUHS] lost ports Message-ID: >Date: Wed, 4 Jan 2017 16:41:07 -0500 >From: "Ron Natalie" >To: "'ron minnich'" , >Subject: Re: [TUHS] lost ports >Message-ID: <01c001d266d3$42294820$c67bd860$@ronnatalie.com> >Content-Type: text/plain; charset="utf-8" ... >I did kernel work on the PA for HP also worked on their X server (did a few other X server >over the years). >The hard part would be finding anybody from these companies who could even remember >they made computers let alone had UNIX software. I worked for the computer division in Philips Electronics, DEC, Compaq, HP, HPE and still remember some of it :-) I wasn't involved in OS development, but in testing, turnover to National Sales Organisations, etc.
Even now at some customer sites I still have a few aDEC400xP servers from 1992 running SCO UNIX 3.2V4.2 (last update 1999). Also a few AlphaServers with Digital UNIX, Tru64; finally some Itanium servers with HP-UX 11.23/11.31. Especially the big-/little-endian issue gave our customer (and therefore myself) a few headaches. Imagine getting a chunk of shared memory and casting pointers assuming the 'system' takes care of alignment. Big surprise for the customer moving from Tru64 to HP-UX. From steffen at sdaoden.eu Fri Jan 6 00:29:57 2017 From: steffen at sdaoden.eu (Steffen Nurpmeso) Date: Thu, 05 Jan 2017 15:29:57 +0100 Subject: [TUHS] Leap Second In-Reply-To: References: <20161229002105.GB94858@server.rulingia.com> <0d5eeef9-3dbb-0ddd-1b22-51fecee735d8@gmail.com> <586bc353.tOFm/S0IGecYYlh6%schily@schily.net> <20170104133202.VIWUz5j-a%steffen@sdaoden.eu> Message-ID: <20170105142957.juFic0nGw%steffen@sdaoden.eu> Warner Losh wrote: |On Wed, Jan 4, 2017 at 6:32 AM, Steffen Nurpmeso \ |wrote: |> schily at schily.net (Joerg Schilling) wrote: |>|Tony Finch wrote: |>|> sds wrote: |>|>> Important question: did anybody have an "exciting" new year because \ |>|>> of a leap |>|>> second bug? |>|> |>|> I've been collecting failure reports on the LEAPSECS list |>| |>|https://blog.cloudflare.com/how-and-why-the-leap-second-affected-cloudfla\ |>|re\ |>|-dns/ |>| |>|"go" seems to have a related bug. |>| |>|BTW: The POSIX standard intentionally does not include leap seconds \ |>|in the UNIX |>|time interface as it seems that this would cause more problems than \ |>|it claims |>|to fix. |> |> I think it is a problem, or better a gap, a void, with the current |> standard that software has no option to become informed of the |> event of a leap second for one, but further more that CLOCK_TAI is |> not available. | |And even if it was, nobody would use it.
It's not used in legacy code, |and the subtle differences between the different CLOCK_xxx aren't well |enough documented for programmers to get it right. And even if |it were, the issue is a lot more subtle than that. If you use |CLOCK_TAI, then if the system has the proper TAI offset to UTC, |calling things like timegm will produce a time that's 40s different |than the current UTC time if you aren't also running the proper |"right" timezone files, and people will think your code is buggy. But |if you get a UTC time, then you have an ambiguous encoding of the leap |second (though CLOCK_UTC, where implemented, tries to cope with that |by having a denormalized ts_nsec field). It's a big can of worms since |most programmers expect time to be a uniform radix, and UTC transforms |time of day into a non-uniform radix on an unpredictable timetable. You quote the Markus Kuhn proposal. We would need more time interfaces that work with clockid_t, like much more differentiated and known people have developed not few years ago. But i really would like to see this basic and portable software infrastructure improved. I personally am bothering with the natural language support the most (and if it is as simple as wcwidth(3) missing from ISO C), but this is also true for time and date management. I really wonder why no University ever took part and did it right. That is such a crucial part?! For example, the Berlin (and Hamburg) Universities supported "significantly" the RIOT-OS development -- and that is a complete operating system! |But that's starting to get far afield for the historical unix group... | |> I think it would make things easier if software |> which wants just that can get it, e.g., for periodic timer events |> etc. | |CLOCK_MONOTONIC already exists for these things, and programmers still |screw it up :( For the context of this list that clock is a new invention. I've used gettimeofday(2) for my small thing and am thus guilty.
|> This is surely not a healing given that most timestamps etc. |> are based on UTC, but i think the severity of the problems could |> possibly be lowered. Especially now that multi-hour smears seem |> to become used by big companies it seems to be important to have |> a correct clock available. This is in fact something i don't |> really understand, at _that_ level that is to say. If, e.g., |> Google and Bloomberg both would have stated instead that they |> slew the leap second, then only a single second would have been |> affected, instead of multiple hours. | |You can't just slew the one second. It introduces too large of a |frequency error in the time base. ntpd will view it as a large error |and freak out. Programs that want to sleep for 100ms will wind up |sleeping for 200ms instead, which could be a big problem. With the Yes, that is understood. But if you have absolute control over the environment then you most likely can also verify that or whether it can be driven with a very short slew or not, that is what i have meant. In fact i just followed the RedHat link you have posted on the leapsecond list, and was lead from there to a page with the several possible options, and how Chrony and NTPD behave for them. Note that for me personally this really does no(t) (longer) matter (except for my hard-NTP driven VM-based server), the low-level software i use has a very unprecise hardware clock, even so much that i practically have to set the time hard because NTP adjustments don't really make it (after hardware wakeup). |slew over several hours, programs wind up sleeping for 100.01ms |instead, which is down in the noise of the error you get from a sleep. 
|Google is trading a small phase error and frequency error against the |real UTC timestamp to maintain a well-defined monotonically increasing |time series with no repeating seconds as its method of coping with |POSIX's deliberate decision to not define what happens over a leap |second, provide no encoding for a leap second and generally specifies |an interface in which it is nearly impossible to get the leap second |pedantically correct. Makes one question whether leap seconds are a |good idea or not, but that's a political discussion for another group. I absolutely think that the knowledge mankind has gained over so many years, which manifests in the leap second, is a cultural achievement that cannot be overestimated; and there I do not even take into account the social importance that I think is involved. --steffen From clemc at ccc.com Fri Jan 6 02:01:49 2017 From: clemc at ccc.com (Clem Cole) Date: Thu, 5 Jan 2017 11:01:49 -0500 Subject: [TUHS] lost ports In-Reply-To: References: Message-ID: below... On Wed, Jan 4, 2017 at 4:24 PM, ron minnich wrote: > but another true story: I visited DEC in 2000 or so, as LANL was about to > spend about $120M on an Alpha system. The question came up about the SRM > firmware for Alpha. As it was described to me, it was written in BLISS and > the only machine left that could build it was an 11/750, "somewhere in the > basement, man, we haven't turned that thing on in years". I suspect there's > a lot of these containing oxide oersteds of interest. Cute story but not true [and I was @ DEC working Alpha at that time]. Some facts: A.) The SRM firmware was primarily in C and Assembler and used >>UNIX<< tools not VMS tools for development B.)
The GEM compiler (which still exists and is still being developed by VMSI) had front ends for at least (which I remember): BLISS, C, PL/1, Pascal, ADA, FORTRAN, Cobol, RPG and a few others (I'll try to ask if I see any of the old GEM guys in the Cafe' in the next few hours - they are dying off BTW - but that's a different story). C.) The GEM compiler has backends for Vax, Galaxy, MIPS, Alpha, x86 (32bit), ia64, INTEL*64 (post DEC/Compaq/HP) and I believe also ARM (I'll need to ask if the VMSI folks come to lunch on Friday). D.) Alphas ran UNIX before they ran VMS BTW. The HW debug was all UNIX. Clem -------------- next part -------------- An HTML attachment was scrubbed... URL: From dfawcus+lists-tuhs at employees.org Fri Jan 6 02:15:28 2017 From: dfawcus+lists-tuhs at employees.org (Derek Fawcus) Date: Thu, 5 Jan 2017 16:15:28 +0000 Subject: [TUHS] lost ports In-Reply-To: References: Message-ID: <20170105161528.GA87990@cowbell.employees.org> On Wed, Jan 04, 2017 at 09:24:37PM +0000, ron minnich wrote: > So there are a few ports I know of that I wonder if they ever made it back > into that great github repo. I don't think they did. > > gould The Gould powernode was my first experience of unix at uni. They must have had a source licence, as in grubbing around in the filesystem, I eventually stumbled across source code - certainly for user space, I can't really recall if they had kernel stuff as well. Reading the code for ed, and seeing the internal implementation of regexes was an eye opener. DF From clemc at ccc.com Fri Jan 6 02:20:02 2017 From: clemc at ccc.com (Clem Cole) Date: Thu, 5 Jan 2017 11:20:02 -0500 Subject: [TUHS] lost ports In-Reply-To: References: Message-ID: I also left out.... E.) GEM tools ran on VMS, Ultrix, Mica, OSF/1, Tru64, Mac OS X, NT/4 and later Windows versions up to and now Win10. And I was just reminded that there was a 68K back-end done for it also that terminal folks used, although I'm not sure I ever saw it.
Ron - for whatever it's worth, the whole BLISS vs C story is a different history both outside and inside of DEC [which some of us lived and I'll not repeat here]. But it is sadly misrepresented. I'm a C programmer and while I learned BLISS before C, I certainly prefer C to BLISS, as do many of my peers - even heavy, heavy BLISS hackers I know. You should know that the compiler team was definitely BLISS based, as was the VMS group, but once Streams I/O was added to VMS and the C compiler was introduced, most VMS customers left RMS I/O; and while FORTRAN continued as the primary VMS end-user language and BLISS was used less, C and Pascal quickly became more popular. Even at DEC, C took off, particularly in the HW teams, if for no other reason than you could hire C programmers from universities, whereas you had to teach people BLISS. Clem On Thu, Jan 5, 2017 at 11:01 AM, Clem Cole wrote: > below... > > On Wed, Jan 4, 2017 at 4:24 PM, ron minnich wrote: > >> but another true story: I visited DEC in 2000 or so, as LANL was about to >> spend about $120M on an Alpha system. The question came up about the SRM >> firmware for Alpha. As it was described to me, it was written in BLISS and >> the only machine left that could build it was an 11/750, "somewhere in the >> basement, man, we haven't turned that thing on in years". I suspect there's >> a lot of these containing oxide oersteds of interest. > > > Cute story but not true [and I was @ DEC working Alpha at that time]. > Some facts: > > A.) The SRM firmware was in C primary and Assembler and used >>UNIX<< > tools not VMS tools for development > B.) The GEM compiler (which still exists and still being developed by > VMSI) had front ends for at least (which I remember): BLISS, C, PL/1, > Pascal, ADA, FORTRAN, Cobol, RPG and a few others (I'll try to ask if I > see any of the old GEM guys in the Cafe' in the next few hours - they are > dying off BTW - but that's a different story). > C.)
The GEM compiler has backends for, Vax, Galaxy, MIPS, Alpha, x86 > (32bit), ia64, INTEL*64 (post DEC/Compaq/HP) and I believe also ARM (I'll > need to ask if the VMSI folks come to lunch on Friday). > D.) Alpha's ran UNIX before they ran VMS BTW. The HW debug was all UNIX. > > Clem > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From lm at mcvoy.com Fri Jan 6 02:23:22 2017 From: lm at mcvoy.com (Larry McVoy) Date: Thu, 5 Jan 2017 08:23:22 -0800 Subject: [TUHS] the guy who brought up SVr4 on Sun machines In-Reply-To: References: <20170104033512.GA22116@mcvoy.com> <20170105004353.GB6931@mcvoy.com> <4c14e37a-f959-d625-b877-f498a644415c@gmail.com> <20170105010148.GC6931@mcvoy.com> <1483585206.586db6b64f706@www.paradise.net.nz> <20170105031329.GB32104@mcvoy.com> Message-ID: <20170105162322.GA2588@mcvoy.com> On Thu, Jan 05, 2017 at 02:12:19AM -0600, Andy Kosela wrote: > > He's a good guy, a little weird, but so am I. He did some great work > > in 386BSD, it was ahead of Linux. I remember going into Fry's and > > sticking a 386BSD floppy in to see if it would boot. It usually did. > > It had tons of bugs though. That is why Jordan Hubbard and Rod Grimes > started unofficial 386BSD patchkit which transformed into FreeBSD; > "unofficial" because Bill Jolitz was very hard to work with, to say the > least... He was smart but sensitive. And pretty butt hurt over how he had been treated by the "in crowd" at Usenix. I got past that and found him a pleasure to work with, but I had to get past all that first. We were working together in person, I can imagine that working with him through email would be more than "very hard" unfortunately. 
From clemc at ccc.com Fri Jan 6 02:31:51 2017 From: clemc at ccc.com (Clem Cole) Date: Thu, 5 Jan 2017 11:31:51 -0500 Subject: [TUHS] the guy who brought up SVr4 on Sun machines In-Reply-To: <20170105162322.GA2588@mcvoy.com> References: <20170104033512.GA22116@mcvoy.com> <20170105004353.GB6931@mcvoy.com> <4c14e37a-f959-d625-b877-f498a644415c@gmail.com> <20170105010148.GC6931@mcvoy.com> <1483585206.586db6b64f706@www.paradise.net.nz> <20170105031329.GB32104@mcvoy.com> <20170105162322.GA2588@mcvoy.com> Message-ID: On Thu, Jan 5, 2017 at 11:23 AM, Larry McVoy wrote: > found him a > pleasure to work with, Same here.... I never fully understood what happened. I was friends with all of them at the time and tried to stay out of the fight. BTW: I didn't think it was USENIX as much as a couple members of the CSRG/BSDi crew. I know a couple of the folks he did battle with, but not the details, nor did I want to know. From a historical perspective, Bill & I wrote the original AT disk interface code for 386BSD - I was consulting for the chief architect of NCR, who was doing their first x86 systems at the time. For those interested in early UNIX ports, particularly to the 386, check out the DDJ articles. Clem -------------- next part -------------- An HTML attachment was scrubbed... URL: From rminnich at gmail.com Fri Jan 6 03:23:07 2017 From: rminnich at gmail.com (ron minnich) Date: Thu, 05 Jan 2017 17:23:07 +0000 Subject: [TUHS] lost ports In-Reply-To: References: Message-ID: On Thu, Jan 5, 2017 at 8:02 AM Clem Cole wrote: > > > Cute story but not true [and I was @ DEC working Alpha at that time]. > Some facts: > Clem, thanks for the correction. This leaves me wondering why I was so naive as to believe that fable from those guys we visited ...
I suspect they did not know or were just lying to me, since the question came up in the context of LinuxBIOS for the 4-socket system (2048 of them were used in ASCI Q) and I doubt they wanted to hear about non-DEC firmware on that system. The SRM was a huge pain point on that machine and we hoped to replace it with something we could live with, but it was not to be. We did get LinuxBIOS on the 1-socket pizza box thanks to Eric Biederman and Linux NetworX, which we used to build a 128-node LinuxBIOS cluster. LinuxBIOS included a PALcode implementation. Which leads to a question ... Jon Hall used to tell me that DEC used SRM in general and PALcode in particular as competitive leverage with customers (i.e. DEC-based Alpha systems always had the latest SRM and PALcode, and non-DEC-based Alpha systems were always a few revs behind). Note this implies DEC as a systems vendor was competing with DEC Alpha chip customers who were systems vendors, which was a situation we've seen in practice with many vendors that sold chips and motherboards. Anyway, the question: with LinuxBIOS, we shipped a GPL-ed PALcode implementation. It was pretty dumb, it just did 1:1 virt to phys mapping for example, but it worked. I've always believed that was the only open source or at least GPL'ed PALcode out there -- can you tell me if I got this right? Thanks, it's always good to read your histories ... sorry this is not strictly a Unix history question but I've always wondered. ron -------------- next part -------------- An HTML attachment was scrubbed... URL: From ron at ronnatalie.com Fri Jan 6 03:46:38 2017 From: ron at ronnatalie.com (Ron Natalie) Date: Thu, 5 Jan 2017 12:46:38 -0500 Subject: [TUHS] What sparked lint?
[Was: Unix stories] In-Reply-To: <730228a04039dc983eaed5f78a3d817ea80e79bd@webmail.yaccman.com> References: <730228a04039dc983eaed5f78a3d817ea80e79bd@webmail.yaccman.com> Message-ID: <024c01d2677b$aabff1b0$003fd510$@ronnatalie.com> I remember being at an early UUG meeting and the group who did the UNIX port to the IBM series lamenting that it printed NUXI on boot because of byte order issues. Don’t know if it was true, but NUXI became a synonym for UNIX byte order issues from then on. The 8/32 indeed has some 370-ish stuff starting from the fact that it numbers the bits from the MSB end. Amusingly, it has other, more minicomputerish features. One bizarre source of fun is that whereas accessing a 16 bit quantity on an odd address on the PDP-11 gives you a bus error trap, the Interdata just ignores the low order bit and returns you the 16 bit value that you are pointing into the middle of. The same thing happens on 32-bit accesses (lower 2 bits ignored). For nostalgia, here’s a scan of an old 8/32 programmers manual: http://bitsavers.trailing-edge.com/pdf/interdata/32bit/8-32/29-428_8-32_User_May78.pdf Byte ordering got worked out when networking came in. I worked on IBM’s AIX, which was a productization of the UCLA LOCUS kernel. The thing was a relatively tightly coupled multiprocessor system that allowed seamless execution of different binary types. The machines we were working with were the 370 mainframe, the i386 (in the form of IBM PS/2’s), and a four processor i860 add in card IBM built called the W4. The mainframe had the opposite byte ordering of the others. -------------- next part -------------- An HTML attachment was scrubbed...
URL: From pdagog at gmail.com Fri Jan 6 04:34:01 2017 From: pdagog at gmail.com (Pierre DAVID) Date: Thu, 5 Jan 2017 19:34:01 +0100 Subject: [TUHS] lost ports In-Reply-To: <20170105161528.GA87990@cowbell.employees.org> References: <20170105161528.GA87990@cowbell.employees.org> Message-ID: <20170105183401.GA2719@vagabond> On Thu, Jan 05, 2017 at 04:15:28PM +0000, Derek Fawcus wrote: > >The Gould powernode was my first experience of unix at uni. > A Gould powernode (hostname = gouldorak) was sitting in the computing facility at my first workplace, alongside a VAX. The Gould load average was very impressive: many huge pig processes (chemical simulations) were running all the time, but the machine crashed many times a week, keeping the system engineers very busy (fsck, restore, etc.). The day a new Cray was installed, the main pig processes were migrated to the new machine. From that day, the load average of the Gould, which was still running, went down. And the beast never crashed again. And the system engineers were no longer busy with the Gould. Therefore, many chemists confused causality: they said that without system engineers, a computer works better. Pierre From clemc at ccc.com Fri Jan 6 05:08:43 2017 From: clemc at ccc.com (Clem Cole) Date: Thu, 5 Jan 2017 14:08:43 -0500 Subject: [TUHS] lost ports In-Reply-To: References: Message-ID: On Thu, Jan 5, 2017 at 12:23 PM, ron minnich wrote: > Anyway, the question: with LinuxBIOS, we shipped a GPL-ed PALcode > implementation. It was pretty dumb, it just did 1:1 virt to phys mapping > for example, but it worked. I've always believed that was the only open > source or at least GPL'ed PALcode out there -- can you tell me if I got > this right? I have no reason to believe it is otherwise - i.e. I have no knowledge of anything else. As for your comment about PALcode being used as leverage: I never saw that in practice, and I worked with a lot of external folks.
I think it was more of a delta-T between the time DEC engineering released the code and DEC-Semi got it to the field and the integrators into their systems. I don't think DEC systems sales tried to compete with the DEC-Semi customers, although in practice I'm sure they were not very good at trying to have it both ways and certainly made mistakes. Maurice Marks was the lead techie @ DEC Semi in those days, and a colleague I used to see fairly often then. I know Maurice would have screamed pretty loud if he saw the system side of DEC mucking up his business, and I think G2-Bob would have swatted folks if they had - he wanted revenue any way he could get it. As I think you know, my last DEC project was the 1K Alpha, where we spliced an EV6 into a $799 AMD based system. The DEC guys were the ones that hated it (Compaq actually liked it because Dell could not do it). Anyway, Maurice and I were shaking our heads on that one. Clem -------------- next part -------------- An HTML attachment was scrubbed... URL: From rminnich at gmail.com Fri Jan 6 05:16:03 2017 From: rminnich at gmail.com (ron minnich) Date: Thu, 05 Jan 2017 19:16:03 +0000 Subject: [TUHS] lost ports In-Reply-To: References: Message-ID: On Thu, Jan 5, 2017 at 11:09 AM Clem Cole wrote: > > As I think you know my last DEC projects was the 1K Alpha were we spliced > an EV6 into a $799 AMD based system. > That was an awesome project. I assume it ran Tru64? How did it work out saleswise? It always seemed to me you had done a great trick there of leveraging commodity mainboard economics. ron -------------- next part -------------- An HTML attachment was scrubbed... URL: From clemc at ccc.com Fri Jan 6 06:08:32 2017 From: clemc at ccc.com (Clem Cole) Date: Thu, 5 Jan 2017 15:08:32 -0500 Subject: [TUHS] lost ports In-Reply-To: References: Message-ID: On Thu, Jan 5, 2017 at 2:16 PM, ron minnich wrote: > That was an awesome project. I assume it ran Tru64?
FreeBSD and Tru64, using a non-standard Adaptec SCSI controller; testing was not 100% complete (I ran it on my desk for a bit, but there were some rough edges). Linux was proposed and should have been fairly easy. Same for NT/Alpha. It never ran VMS due to a motherboard issue with DMA and the specific disk controller VMS used [qLogic ISP], but folks could have written a new driver for it if need be. VMS could use disks on an Adaptec controller but could not boot from one, which both UNIX and Windows could; although officially Tru64 did not support the Adaptec in the SPD (because they could not support fail-over in TruClusters). > How did it work out saleswise? Never sold - killed by a VP who I will leave nameless. All I'll say is that he was claiming that you could not make an Alpha for under $5K. From my NCR days, I knew the guy that was running consumer PC's at Compaq. And after the merger, I told Craig that the EV6 and K8 were electrical twins (not mechanical) - as DEC had licensed the Alpha memory system to AMD. He got excited and an adaptor board was built to deal with the mechanical issues. At the time, the differential cost between K8 and EV6 was a little less than $200. Craig used a 150W power supply, an all-plastic case, etc... The system was sold via Radio Shack at the time. It was all about low cost. BTW: I still have the motherboard we used in my basement and the EV6 on my desk @ Intel. > It always seemed to me you had done a great trick there of leveraging > commodity mainboard economics. Yes, but we had "raped the virgin mother" in the eyes of the DEC big iron folks because we made an Alpha on 16% gross margins, not 43% like TurboLaser (i.e. the DEC side, not the Compaq side, killed it). The point is that at 43% gross margins and built like a DEC system (steel cases, 400W power supply) et al, it did come to $5K; if you built it like Compaq and used Compaq margins, you could break $1K.
When I got shot down after I presented what we did to the sr folks, I returned the call from a start-up the next day. A month later, I was VP of Engineering at Paceline. BTW: I think that I still have the business proposal PPT somewhere in my archives (I ran into it a couple of summers ago). Like bitsavers, there needs to be an archive of cool ideas that companies never had the guts to follow thru on. Oh well. Clem -------------- next part -------------- An HTML attachment was scrubbed... URL: From mah at mhorton.net Fri Jan 6 06:55:00 2017 From: mah at mhorton.net (Mary Ann Horton) Date: Thu, 5 Jan 2017 12:55:00 -0800 Subject: [TUHS] What sparked lint? [Was: Unix stories] In-Reply-To: <024c01d2677b$aabff1b0$003fd510$@ronnatalie.com> References: <730228a04039dc983eaed5f78a3d817ea80e79bd@webmail.yaccman.com> <024c01d2677b$aabff1b0$003fd510$@ronnatalie.com> Message-ID: I recall at the Delaware Usenix conference (in 1979?) a professor from Case Western gave a talk about his port of UNIX to some Interdata or Data General or something. He said that when he booted it up, it said "NUXI". On 01/05/2017 09:46 AM, Ron Natalie wrote: > > I remember being at an early UUG meeting and the group who did the > UNIX port to the IBM series lamenting that it printed NUXI on boot > because of byte order issues. Don’t know if it was true, but NUXI > became a synonym for UNIX byte order issues from then on. > > The 8/32 indeed has some 370-ish stuff starting from the fact that it > numbers the bits from the MSB end. Amusingly, it has more > minicomputerish other features. > > One bizarre source of fun is that where as accessing a 16 bit quantity > on an odd address on the PDP-11 gives you a bus error trap, the > Interdata just ignores the low order bit and returns you the 16 bit > value that you are pointing into the middle of. Same things happen > on 32-bit access (lower 2 bits ignored).
> > For nostalgia, here’s a scan of an old 8/32 programmers manual: > http://bitsavers.trailing-edge.com/pdf/interdata/32bit/8-32/29-428_8-32_User_May78.pdf > > Byte ordering got worked out when networking came in. I worked on > IBM’s AIX which was a productization of the UCLA LOCUS kernel. The > thing was a relatively tightly coupled multiprocessor system that > allowed seamless execution of different binary types. The machines > we were working with were the 370 mainframe, the i386 (in the form of > IBM PS/2’s), and a four processor i860 add in card IBM built called > the W4. The mainframe having the opposite byte ordering of the others. > -------------- next part -------------- An HTML attachment was scrubbed... URL: From clemc at ccc.com Fri Jan 6 07:06:13 2017 From: clemc at ccc.com (Clem Cole) Date: Thu, 5 Jan 2017 16:06:13 -0500 Subject: [TUHS] What sparked lint? [Was: Unix stories] In-Reply-To: References: <730228a04039dc983eaed5f78a3d817ea80e79bd@webmail.yaccman.com> <024c01d2677b$aabff1b0$003fd510$@ronnatalie.com> Message-ID: He is right, it was the IBM Series/1 Port and it was a different school (Miami of Ohio, I think). Case was Bill Shannon and Sam Leffler's port to an Interdata. On Thu, Jan 5, 2017 at 3:55 PM, Mary Ann Horton wrote: > I recall at the Delaware Usenix conference (in 1979?) a professor from > Case Western gave a talk about his port of UNIX to some Interdata or Data > General or something. He said that when he booted it up, it said "NUXI". > > On 01/05/2017 09:46 AM, Ron Natalie wrote: > > I remember being at an early UUG meeting and the group who did the UNIX > port to the IBM series lamenting that it printed NUXI on boot because of > byte order issues. Don’t know if it was true, but NUXI became a synonym > for UNIX byte order issues from then on. > > > > > > The 8/32 indeed has some 370-ish stuff starting from the fact that it > numbers the bits from the MSB end. Amusingly, it has more minicomputerish > other features. 
> > One bizarre source of fun is that where as accessing a 16 bit quantity on > an odd address on the PDP-11 gives you a bus error trap, the Interdata just > ignores the low order bit and returns you the 16 bit value that you are > pointing into the middle of. Same things happen on 32-bit access (lower 2 > bits ignored). > > For nostalgia, here’s a scan of an old 8/32 programmers manual: > http://bitsavers.trailing-edge.com/pdf/interdata/32bit/8-32/ > 29-428_8-32_User_May78.pdf > > > > Byte ordering got worked out when networking came in. I worked on > IBM’s AIX which was a productization of the UCLA LOCUS kernel. The thing > was a relatively tightly coupled multiprocessor system that allowed > seamless execution of different binary types. The machines we were > working with were the 370 mainframe, the i386 (in the form of IBM PS/2’s), > and a four processor i860 add in card IBM built called the W4. The > mainframe having the opposite byte ordering of the others. > > > > > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From chet.ramey at case.edu Fri Jan 6 07:17:49 2017 From: chet.ramey at case.edu (Chet Ramey) Date: Thu, 5 Jan 2017 16:17:49 -0500 Subject: [TUHS] What sparked lint? [Was: Unix stories] In-Reply-To: References: <730228a04039dc983eaed5f78a3d817ea80e79bd@webmail.yaccman.com> <024c01d2677b$aabff1b0$003fd510$@ronnatalie.com> Message-ID: <1c8a320e-7b39-8734-1db5-0491ab38dd88@case.edu> On 1/5/17 3:55 PM, Mary Ann Horton wrote: > I recall at the Delaware Usenix conference (in 1979?) a professor from Case > Western gave a talk about his port of UNIX to some Interdata or Data > General or something. He said that when he booted it up, it said "NUXI". That might have been Sam Leffler and Bill Shannon's port of 7th Edition to the Harris/6. Somewhere I have both their MS theses describing it. 
-- ``The lyf so short, the craft so long to lerne.'' - Chaucer ``Ars longa, vita brevis'' - Hippocrates Chet Ramey, UTech, CWRU chet at case.edu http://cnswww.cns.cwru.edu/~chet/ From clemc at ccc.com Fri Jan 6 07:30:31 2017 From: clemc at ccc.com (Clem Cole) Date: Thu, 5 Jan 2017 16:30:31 -0500 Subject: [TUHS] What sparked lint? [Was: Unix stories] In-Reply-To: <1c8a320e-7b39-8734-1db5-0491ab38dd88@case.edu> References: <730228a04039dc983eaed5f78a3d817ea80e79bd@webmail.yaccman.com> <024c01d2677b$aabff1b0$003fd510$@ronnatalie.com> <1c8a320e-7b39-8734-1db5-0491ab38dd88@case.edu> Message-ID: I stand corrected -- I think you are right, Sam and Bill had a Harris system not Interdata. On Thu, Jan 5, 2017 at 4:17 PM, Chet Ramey wrote: > On 1/5/17 3:55 PM, Mary Ann Horton wrote: > > I recall at the Delaware Usenix conference (in 1979?) a professor from > Case > > Western gave a talk about his port of UNIX to some Interdata or Data > > General or something. He said that when he booted it up, it said "NUXI". > > That might have been Sam Leffler and Bill Shannon's port of 7th Edition > to the Harris/6. Somewhere I have both their MS theses describing it. > > -- > ``The lyf so short, the craft so long to lerne.'' - Chaucer > ``Ars longa, vita brevis'' - Hippocrates > Chet Ramey, UTech, CWRU chet at case.edu http://cnswww.cns.cwru.edu/~ > chet/ > -------------- next part -------------- An HTML attachment was scrubbed... URL: From rminnich at gmail.com Fri Jan 6 07:42:40 2017 From: rminnich at gmail.com (ron minnich) Date: Thu, 05 Jan 2017 21:42:40 +0000 Subject: [TUHS] What sparked lint? [Was: Unix stories] In-Reply-To: References: <730228a04039dc983eaed5f78a3d817ea80e79bd@webmail.yaccman.com> <024c01d2677b$aabff1b0$003fd510$@ronnatalie.com> <1c8a320e-7b39-8734-1db5-0491ab38dd88@case.edu> Message-ID: the udel usenix was 1980. 
I was support staff :-) On Thu, Jan 5, 2017 at 1:31 PM Clem Cole wrote: > I stand corrected -- I think you are right, Sam and Bill had a Harris > system not Interdata. > > On Thu, Jan 5, 2017 at 4:17 PM, Chet Ramey wrote: > > On 1/5/17 3:55 PM, Mary Ann Horton wrote: > > I recall at the Delaware Usenix conference (in 1979?) a professor from > Case > > Western gave a talk about his port of UNIX to some Interdata or Data > > General or something. He said that when he booted it up, it said "NUXI". > > That might have been Sam Leffler and Bill Shannon's port of 7th Edition > to the Harris/6. Somewhere I have both their MS theses describing it. > > -- > ``The lyf so short, the craft so long to lerne.'' - Chaucer > ``Ars longa, vita brevis'' - Hippocrates > Chet Ramey, UTech, CWRU chet at case.edu > http://cnswww.cns.cwru.edu/~chet/ > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From ron at ronnatalie.com Fri Jan 6 07:51:08 2017 From: ron at ronnatalie.com (Ron Natalie) Date: Thu, 5 Jan 2017 16:51:08 -0500 Subject: [TUHS] What sparked lint? [Was: Unix stories] In-Reply-To: References: <730228a04039dc983eaed5f78a3d817ea80e79bd@webmail.yaccman.com> <024c01d2677b$aabff1b0$003fd510$@ronnatalie.com> <1c8a320e-7b39-8734-1db5-0491ab38dd88@case.edu> Message-ID: <02b801d2679d$d34c11f0$79e435d0$@ronnatalie.com> Indeed. I went to Toronto in 1979 (I remember my boss paying me while I was gone) and was working at BRL in 1980 when we went to UDel. I remember working the AV for Mike Muuss as he was giving his BRL CAD presentation. He started off with “The Ballistic Research Laboratory is the Army’s Lead in Vulnerability and Lethality Analysis” which got a lot of hisses.
Years later I was having dinner with Mark Krieger, then president of Unipress software, and looking at him and saying “Didn’t you get booed off the stage at the UDel UUG?” I couldn’t remember the circumstances until he then told me that he was half of Whitesmith’s at the time (and talking about their Idris commercial product and the UUG had a definite non-commercial bent at the time). I told him I was always kind of amused by the Whitesmith’s C Compiler license stamp that they sent you to stick to your VAX. Like the Whitesmith’s police were going to raid your facility to make sure you had it. He said he had left Whitesmiths by then, but the stickers let him know that Plauger had really gone off the deep end. A few months later I found that someone had actually stuck the stamp on a machine at the Rutgers-Newark campus. I carefully peeled it off and gave it to Mark. -------------- next part -------------- An HTML attachment was scrubbed... URL: From chet.ramey at case.edu Fri Jan 6 08:02:15 2017 From: chet.ramey at case.edu (Chet Ramey) Date: Thu, 5 Jan 2017 17:02:15 -0500 Subject: [TUHS] What sparked lint? [Was: Unix stories] In-Reply-To: References: <730228a04039dc983eaed5f78a3d817ea80e79bd@webmail.yaccman.com> <024c01d2677b$aabff1b0$003fd510$@ronnatalie.com> <1c8a320e-7b39-8734-1db5-0491ab38dd88@case.edu> Message-ID: On 1/5/17 4:42 PM, ron minnich wrote: > the udel usenix was 1980. I was support staff :-) I think Bill's talk was at Boulder. 
-- ``The lyf so short, the craft so long to lerne.'' - Chaucer ``Ars longa, vita brevis'' - Hippocrates Chet Ramey, UTech, CWRU chet at case.edu http://cnswww.cns.cwru.edu/~chet/ From wkt at tuhs.org Fri Jan 6 09:08:53 2017 From: wkt at tuhs.org (Warren Toomey) Date: Fri, 6 Jan 2017 09:08:53 +1000 Subject: [TUHS] My Hidden Archive (was: lost ports) Message-ID: <20170105230853.GA11659@minnie.tuhs.org> All, following on from the "lost ports" thread, I might remind you all that I'm keeping a hidden archive of Unix material which cannot be made public due to copyright and other reasons. The goal is to ensure that these bits don't > /dev/null, even if we can't (yet) do anything with them. If you have anything that could be added to the archive, please let me know. My rules are: I don't divulge what's in the archive, nor who I got stuff from. There have been very few exceptions. I have sent copies of the archive to two important historical computer organisations who must abide by the same rules. I think I've had one or two individuals who were desperate to get software to run on their old kit, and I've "loaned" some bits to them. Anyway, that's it. If that seems reasonable to you, and you want an off-site backup of your bits, I'm happy to look after them for you. Cheers, Warren From stewart at serissa.com Fri Jan 6 09:40:21 2017 From: stewart at serissa.com (Lawrence Stewart) Date: Thu, 5 Jan 2017 18:40:21 -0500 Subject: [TUHS] lost ports In-Reply-To: References: Message-ID: <026BD615-AC89-4407-9CB7-5819DCAA972E@serissa.com> I left Digital in 1994, so I don’t know much about the later evolution of the Alphaservers, but 1998 would have been about right for an EV-56 (EV-5 shrink) or EV-6. There’s a Wikipedia article about all the different systems but most of the dates are missing. The white label parts are all PAL22V10-15s. The 8 square chips are cache SRAMs, and most of the SOIC jellybeans are bus transceivers to connect the CPU to RAM and I/O.
The PC derived stuff is in the back corner. There are 16 DIMM slots to make two ranks of 64 bit RAM out of 8-bit DIMMs. We usually ran with a SCSI card, an ethernet, and an 8514 graphics card plugged into the riser. -L > On 2017, Jan 5, at 5:55 PM, ron minnich wrote: > > What version of this would I have bought ca. 1998? I had 16 of some kind of Alpha nodes in AMD sockets, interconnected with SCI for encoding videos. I ended up writing and releasing what I think were the first open source drivers for SCI -- it took a long time to get Dolphin to let me release them. > > The DIPs with white labels -- are those PALs or somethin? Or are the labels just to cover up part names :-) > > On Thu, Jan 5, 2017 at 2:39 PM Lawrence Stewart > wrote: > Alphas in PC boxes! I dug around in the basement and found my Beta (photo attached). > > This was from 1992 or 1993 I think. This is an EV-3 or EV-4 in a low profile PC box using pc peripherals. Dave Conroy designed the hardware, I did the console ROMS (BIOS equivalent) and X server, and Tom Levergood ported OSF-1. A joint project of DEC Semiconductor Engineering and the DEC Cambridge Research Lab. I think about 20 were built, and the idea kickstarted a line of low end Alphaservers. > > This was a typical Conroy minimalist design, crunching the off-chip caches, PC junk I/O, ISA bus, and 64 MBytes of RAM into this little space. I think one gate array would replace about half of the chips. > > -L > > > -------------- next part -------------- An HTML attachment was scrubbed...
URL: From lm at mcvoy.com Fri Jan 6 12:02:40 2017 From: lm at mcvoy.com (Larry McVoy) Date: Thu, 5 Jan 2017 18:02:40 -0800 Subject: [TUHS] the guy who brought up SVr4 on Sun machines In-Reply-To: <586e32fa.75dTuXajWSNmdzuM%schily@schily.net> References: <586d234d.vf4JCu1Ye3gumwfc%schily@schily.net> <20170104164630.GA3405@mcvoy.com> <586d2abb.A5j4GovJtyzlD+AQ%schily@schily.net> <20170104171033.GC3405@mcvoy.com> <586d334d.XcKOxzKwrzmvL326%schily@schily.net> <20170104175227.GH3405@mcvoy.com> <586d3d90.oAzCBIUMx+CcWar6%schily@schily.net> <20170104184448.GD3006@mcvoy.com> <586e32fa.75dTuXajWSNmdzuM%schily@schily.net> Message-ID: <20170106020239.GI2588@mcvoy.com> On Thu, Jan 05, 2017 at 12:50:18PM +0100, Joerg Schilling wrote: > So it seems that you worked on code for SunOS-4.x but not on code for SunOS-5.x. That's not true, Sunbox was all 5.x code. I spent about 3 years in the 5.x source base. > I worked on code for the SunOS-4.x kernel and for the SunOS-5.x kernel and I > ported drivers from SunOS-4.x to SunOS.5-x, so I am pretty sure about what I > write and you may have gotten your impression because you did not compare the > code we are talking about now. You've been arguing with a guy who was in the kernel group and I've tried to set you straight and you just keep coming back with misinformation. > Because you worked on filesystem throughput, you should know the new memory > subsystem from SunOS-4.x well....This is a big part of the SunOS-4.x kernel and > if you check the OpenSolaris kernel sources with your knowledge of the > SunOS-4.x kernel, you should be able to confirm my statements. The VM system was ported from SunOS 4.x to System 5. Your statements that SVr4 is based on SunOS are flat out wrong. SVr4 got a lot of SunOS goodness but the starting point was ATT System V. > > * Implemented smoosh - basis for Avocet and nselite. Talk to Shannon for > > confirmation. > > Interesting: Do you mean "Bill Shannon"? Was he involved in SCCS or smoosh > as well? 
I know Bill as the author of "cstyle" and I pushed him to make it OSS > in 2001 already, before it appeared in OpenSolaris. Yup, that Shannon. > In January 2015, I talked with Glenn Skinner about SCCS and smoosh and he > pointed me to his smoosh patent: > > http://patentimages.storage.googleapis.com/pdfs/US5481722.pdf > > which expired in late 2014. The fact that Glenn didn't put me on that patent is a sore point. Yes, he wrote the lisp code that showed it could be done. I wrote the C code that did that in one pass (his stuff was N+M where N was how many deltas were on the local side and M was how many deltas were on the remote side). --lm From lm at mcvoy.com Fri Jan 6 12:09:04 2017 From: lm at mcvoy.com (Larry McVoy) Date: Thu, 5 Jan 2017 18:09:04 -0800 Subject: [TUHS] the guy who brought up SVr4 on Sun machines In-Reply-To: <586e2b95.qSZFAOk0Wy6HVTlq%schily@schily.net> References: <586d297d.1Xe1F+MbZU5jlMCH%schily@schily.net> <20170104170635.GB3405@mcvoy.com> <586d2cb8.bX/Qr+lraMBvukp9%schily@schily.net> <20170104171550.GD3405@mcvoy.com> <586d3377.O9F94JXabKYeeaLf%schily@schily.net> <20170104174217.GG3405@mcvoy.com> <586d3556.RxSZPogSIiAqCHBk%schily@schily.net> <586d3fae.uf5FiS568GsBeCKB%schily@schily.net> <586e2b95.qSZFAOk0Wy6HVTlq%schily@schily.net> Message-ID: <20170106020904.GK2588@mcvoy.com> On Thu, Jan 05, 2017 at 12:18:45PM +0100, Joerg Schilling wrote: > This is strange, I have in mind that Simon said BSD was not wanted by the > programmers and GPL was not practical because Sun then could not give away > binaries from Closed Source parts from other code owners that are in Sun > Solaris but could not be in OpenSolaris. There's just no way that there were, as you claimed, programmers who were willing to quit if it was BSD. BSD was fine, there are other good choices but BSD was fine. What the video showed is there were programmers who were willing to quit if it was the GPL, which is the opposite of what you have so stridently claimed. 
> IIRC, the only thing I did get from that video is the confirmation that Simon > was extremely unhappy with Danese claiming that Sun did like to have something > that is deliberately incompatible to the GPL. That makes sense to me, the GPL was hated inside of Sun, it was considered a virus. The idea that you used a tiny bit of GPLed code and then everything else is GPLed was viewed as highway robbery. From rminnich at gmail.com Fri Jan 6 12:32:12 2017 From: rminnich at gmail.com (ron minnich) Date: Fri, 06 Jan 2017 02:32:12 +0000 Subject: [TUHS] SunOS vs Linux In-Reply-To: References: Message-ID: On Wed, Jan 4, 2017 at 9:09 AM Clem Cole wrote: FWIW: I disagree. For details look at my discussion of rewriting Linux in RUST on quora. But a quick point is this .... Linux originally took off (and was successful) not because of GPL, but in spite of it and later the GPL would help it. Not disagreeing with you all, but: I guess it depends on where you were. In 1994-6 I worked with a friend at IBM Watson on getting netbsd going on powerpc. Linux killed that effort. It turned out that the BSD license would allow different parts of IBM to hold back code from other parts of IBM and still ship product. The GPL made such behavior much, much harder. The engineers inside IBM preferred sharing code, and the GPL made that possible. At least that's how it was explained to me. This also proved true for some Agencies in the US Gov't as early as 1993. See this: https://github.com/torvalds/linux/blob/master/drivers/net/LICENSE.SRC. I was there for the internal discussion which began in 1992. Weirdly enough, though, sometimes lawyers prefer the GPL. On our third foray into getting a sane license for Plan 9 in 2013, it turned out Lucent legal preferred GPL to BSD. Go figure. I don't understand lawyers most times. -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From usotsuki at buric.co Fri Jan 6 13:07:52 2017 From: usotsuki at buric.co (Steve Nickolas) Date: Thu, 5 Jan 2017 22:07:52 -0500 (EST) Subject: [TUHS] the guy who brought up SVr4 on Sun machines In-Reply-To: <20170106020904.GK2588@mcvoy.com> References: <586d297d.1Xe1F+MbZU5jlMCH%schily@schily.net> <20170104170635.GB3405@mcvoy.com> <586d2cb8.bX/Qr+lraMBvukp9%schily@schily.net> <20170104171550.GD3405@mcvoy.com> <586d3377.O9F94JXabKYeeaLf%schily@schily.net> <20170104174217.GG3405@mcvoy.com> <586d3556.RxSZPogSIiAqCHBk%schily@schily.net> <586d3fae.uf5FiS568GsBeCKB%schily@schily.net> <586e2b95.qSZFAOk0Wy6HVTlq%schily@schily.net> <20170106020904.GK2588@mcvoy.com> Message-ID: On Thu, 5 Jan 2017, Larry McVoy wrote: > That makes sense to me, the GPL was hated inside of Sun, it was > considered a virus. The idea that you used a tiny bit of GPLed code and > then everything else is GPLed was viewed as highway robbery. "GPL: Free like VD" Personally, I'm fine with LGPL 2.1, but I rather quickly soured on the GPL proper, and what little code I've done recently has been under a modified BSD licence (UIUC licence). -uso. From crossd at gmail.com Fri Jan 6 13:56:08 2017 From: crossd at gmail.com (Dan Cross) Date: Thu, 5 Jan 2017 22:56:08 -0500 Subject: [TUHS] SunOS vs Linux In-Reply-To: References: Message-ID: On Wed, Jan 4, 2017 at 12:08 PM, Clem Cole wrote: > > On Wed, Jan 4, 2017 at 11:17 AM, ron minnich > > wrote: > >> Larry, had Sun open sourced SunOS, as you fought so hard to make happen, >> Linux might not have happened as it did. SunOS was really good. Chalk up >> another win for ATT! >> > > FWIW: I disagree. For details look at my discussion of rewriting > Linux in RUST > > on quora. But a quick point is this .... Linux originally took off (and was > successful) not because of GPL, but in spite of it and later the GPL would > help it. But it was not the GPL per se that made Linux vs BSD vs SunOS et > al. > > What made Linux happen was the BSDi/UCB vs AT&T case. 
At the time, a > lot of hackers (myself included) thought the case was about *copyright*. > It was not, it was about *trade secret* and the ideas around UNIX. > *i.e.* folks like us were "mentally contaminated" with the AT&T Intellectual > Property. > > When the case came, folks like me that were running 386BSD which would > later beget FreeBSD et al, got scared. At that time, *BSD (and SunOS) > were much farther along in the development and stability. But .... many of > us thought Linux would insulate us from losing UNIX on cheap HW because > there was no AT&T copyrighted code in it. Sadly, the truth is that if > AT&T had won the case, *all UNIX-like systems* would have had to be > removed from the market in the USA and EU [NATO-allies for sure]. > > That said, the fact that *BSD and Linux were in the wild would have made > it hard to enforce and at a "Free" (as in beer) price it may have been hard > to make it stick. But it was a misunderstanding of a legal thing that > made Linux "valuable" to us, not the implementation. > > If SunOS had been available, it would not have been any different. It > would have been thought of based on the AT&T IP, but trade secret and > original copyright. > Yes, it seems in retrospect that USL v BSDi basically killed Unix (in the sense that Linux is not a blood-relative of Unix). I remember someone quipping towards the late 90s, "the Unix wars are over. Linux won." Perhaps an interesting area of speculation is, "what would the world have looked like if USL v BSDi hadn't happened *and* SunOS was opened to the world?" I think in that parallel universe, Linux wouldn't have made it particularly far: absent the legal angle, what would the incentive have been to work on something that was striving to basically be Unix, when really good Unix was already available? Ah well. - Dan C. -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From lm at mcvoy.com Fri Jan 6 13:58:26 2017 From: lm at mcvoy.com (Larry McVoy) Date: Thu, 5 Jan 2017 19:58:26 -0800 Subject: [TUHS] SunOS vs Linux In-Reply-To: References: Message-ID: <20170106035826.GA14549@mcvoy.com> On Thu, Jan 05, 2017 at 10:56:08PM -0500, Dan Cross wrote: > Perhaps an interesting area of speculation is, "what would the world have > looked like if USL v BSDi hadn't happened *and* SunOS was opened to the > world?" I think in that parallel universe, Linux wouldn't have made it > particularly far: absent the legal angle, what would the incentive had been > to work on something that was striving to basically be Unix, when really > good Unix was already available? Yeah, that was what I was trying to say, you said it better. > Ah well. Indeed. -- --- Larry McVoy lm at mcvoy.com http://www.mcvoy.com/lm From schily at schily.net Fri Jan 6 22:56:42 2017 From: schily at schily.net (Joerg Schilling) Date: Fri, 06 Jan 2017 13:56:42 +0100 Subject: [TUHS] MacOS X is Unix (tm) In-Reply-To: <20170103182054.GB12264@mcvoy.com> References: <3564F094-9B31-4492-8FDD-716160F45E84@tfeb.org> <02d201d2642f$2bcfe0d0$836fa270$@ronnatalie.com> <95D6B274-6D3F-4610-873A-76F4707AE89B@tfeb.org> <20170101202850.GF17848@wopr> <20170101203813.GV5983@mcvoy.com> <586ba44c.dnHd1Caeq6INr3FG%schily@schily.net> <20170103182054.GB12264@mcvoy.com> Message-ID: <586f940a.7BBGQZHd8hoS/Iy2%schily@schily.net> Larry McVoy wrote: > First, it was two claims, fast file system, and fast processes. You > seem to have ignored the second one. That second one is a big deal > for multi process/multi processor jobs. Both are not aligned with my experiences. Note that I replaced Linux from the central web server for berlios.de by Solaris around 2005 and that resulted in a noticeable increased overall performance which includes fast processes. > If you have access to solaris and linux running on the same hardware, > get a copy of lmbench and run it. 
I can walk you through the results > and if LMbench has bit rotted I'll fix it. > > http://mcvoy.com/lm/bitmover/lmbench/lmbench2.tar.gz I contacted a friend and we are going to set up such a machine and do tests. This, however, will take a few weeks. BTW: I am not using "tar" for my tests but rather "star", as "tar" is too unspecific to be used for comparisons and as known implementations (including "gtar") are too slow to use them for performance metering. Since SEEK_HOLE/SEEK_DATA is available, "star -copy ..." is e.g. at least 10% faster than any other copying tool, including ufsdump/ufsrestore. Jörg -- EMail:joerg at schily.net (home) Jörg Schilling D-13353 Berlin joerg.schilling at fokus.fraunhofer.de (work) Blog: http://schily.blogspot.com/ URL: http://cdrecord.org/private/ http://sourceforge.net/projects/schilytools/files/ From clemc at ccc.com Sat Jan 7 00:24:46 2017 From: clemc at ccc.com (Clem Cole) Date: Fri, 6 Jan 2017 09:24:46 -0500 Subject: [TUHS] GPL or not (was SVR4 vs Sun disagreement) Message-ID: This is a history list and I'm going to try to answer this to give some historical context and hopefully end what is otherwise a thread that I'm not sure adds much to the history of UNIX one way or the other. Some people love GPL, some do not. I'll gladly take some of this off list. But I would like to see us not devolve TUHS into my favorite license or favorite unix discussion. On Thu, Jan 5, 2017 at 9:09 PM, Larry McVoy wrote: > That makes sense to me, the GPL was hated inside of Sun, it was considered > a virus. The idea that you used a tiny bit of GPLed code and then > everything > else is GPLed was viewed as highway robbery. > I'm not a lawyer, nor do I play one. I am speaking for myself not Intel here so take what I have to say with that in mind. Note I do teach the required "GPL and Copyright Course" for all Intel SW folks so I have had some training and I do have some opinions. 
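Schilling's aside about SEEK_HOLE/SEEK_DATA is worth a concrete illustration: a copying tool can ask the filesystem where the data actually is and skip the holes entirely, which is presumably where star's speedup on sparse files comes from. A minimal sketch in Python, whose os.lseek wraps the same lseek(2) interface (assumes a platform that exposes os.SEEK_DATA/os.SEEK_HOLE, e.g. Linux or Solaris; on filesystems that do not report holes, the whole file comes back as a single data range):

```python
import os
import tempfile

def data_ranges(path):
    """Return a list of (start, end) byte ranges that hold data,
    skipping holes, via lseek(SEEK_DATA)/lseek(SEEK_HOLE)."""
    size = os.path.getsize(path)
    ranges = []
    fd = os.open(path, os.O_RDONLY)
    try:
        offset = 0
        while offset < size:
            try:
                start = os.lseek(fd, offset, os.SEEK_DATA)
            except OSError:          # ENXIO: only hole from here to EOF
                break
            end = os.lseek(fd, start, os.SEEK_HOLE)
            ranges.append((start, end))
            offset = end
    finally:
        os.close(fd)
    return ranges

# Demo: data at both ends, and (on sparse-capable filesystems)
# a roughly megabyte-sized hole in the middle.
fd, path = tempfile.mkstemp()
os.write(fd, b"x" * 4096)
os.lseek(fd, 1 << 20, os.SEEK_SET)
os.write(fd, b"y" * 4096)
os.close(fd)

size = os.path.getsize(path)
ranges = data_ranges(path)
os.unlink(path)
```

A copier built on this loop reads and writes only the reported ranges and recreates the holes with ftruncate/lseek on the output side, instead of streaming megabytes of zeros.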
I have also lived this through 5 start-ups, and a number of large firms, both inside and as a consultant. Basically, history has shown that both viral and non-viral licenses have their place. Before I worked at Intel I admit I was pretty much negative on the GPL "virus" and I >>mostly<< still am. IMHO, it's done more damage than it has helped and the CMU/MIT/BSD style "Dead Fish" license has done far more positive *for the industry at large* than the GPL in the long run. But I admit, I'm a capitalist and I see the value in letting someone make some profit for their work. All I have seen the virus do in the long run is that firms have lawyers to figure out how to deal with it. There is a lot of misinformation about the term "open source" .... open source does not mean "free" as in beer. It means available and "open" to be read and modified. Unix has >>always<< been open and available - which is why there are so many versions of Unix (and Linux). The question was the *price* of the license and who had it. Most hackers actually did have access, as this list shows -- we had it from our universities for little money or from our employers for much more. GPL and the virus it has does not protect anyone from this diversity. In fact, in some ways it makes it harder. The diversity comes from the market place. The problem is that in the computer business, the diversity can be bad and keeping things "my way" is better for the owner of the gold (be it a firm like IBM, DEC, or Microsoft) or a technology like Linux. What GPL is >>supposed<< to do is ensure that secrets are not locked up and ensure that all can see and share in the ideas. This is a great idea in theory; the risk is that if you have IP that you want to somehow protect, as Larry suggests, the virus can put your IP in danger. 
To the credit of firms like Intel, GE, IBM et al, they have learned how to try to firewall their >>important<< IP with processes and procedures to protect it (which is exactly what rms did not want to have happen BTW). [In my experience, it made the locks even tighter than before], although it has made some things more available. I know this rankles some folks. There are positives and negatives to each way of doing things. IMO, history has shown that it has been the economics of >>Clay Christensen style disruption<<, not a license, that changed things in our industry. When UNIX of some version (Linux, *BSD, SunOS, Minix, etc...) and the low-cost HW came to be cheap enough, hackers did something. Different legal events pushed one version ahead of others, and things had to be technology "good enough" -- but it was economics, not license, that made the difference. License played into the economics for sure, but in the end, it was free (as in beer) vs $s that made it all work. Having lived through the completely open, completely closed, GPLed and dead-fish worlds of the computer industry, I'm not sure if we are really any farther ahead in practice. We just have to be careful and more lawyers make more money - but that's me being a cynic. Anyway, I hope we can keep from devolving away from the real history. Clem -------------- next part -------------- An HTML attachment was scrubbed... URL: From clemc at ccc.com Sat Jan 7 00:27:36 2017 From: clemc at ccc.com (Clem Cole) Date: Fri, 6 Jan 2017 09:27:36 -0500 Subject: [TUHS] SunOS vs Linux In-Reply-To: References: Message-ID: On Thu, Jan 5, 2017 at 10:56 PM, Dan Cross wrote: > Perhaps an interesting area of speculation is, "what would the world have > looked like if USL v BSDi hadn't happened *and* SunOS was opened to the > world?" 
I think in that parallel universe, Linux wouldn't have made it > particularly far: absent the legal angle, what would the incentive had been > to work on something that was striving to basically be Unix, when really > good Unix was already available? I agree. -------------- next part -------------- An HTML attachment was scrubbed... URL: From imp at bsdimp.com Sat Jan 7 03:38:15 2017 From: imp at bsdimp.com (Warner Losh) Date: Fri, 6 Jan 2017 10:38:15 -0700 Subject: [TUHS] the guy who brought up SVr4 on Sun machines In-Reply-To: <20170106020904.GK2588@mcvoy.com> References: <586d297d.1Xe1F+MbZU5jlMCH%schily@schily.net> <20170104170635.GB3405@mcvoy.com> <586d2cb8.bX/Qr+lraMBvukp9%schily@schily.net> <20170104171550.GD3405@mcvoy.com> <586d3377.O9F94JXabKYeeaLf%schily@schily.net> <20170104174217.GG3405@mcvoy.com> <586d3556.RxSZPogSIiAqCHBk%schily@schily.net> <586d3fae.uf5FiS568GsBeCKB%schily@schily.net> <586e2b95.qSZFAOk0Wy6HVTlq%schily@schily.net> <20170106020904.GK2588@mcvoy.com> Message-ID: On Thu, Jan 5, 2017 at 7:09 PM, Larry McVoy wrote: > On Thu, Jan 05, 2017 at 12:18:45PM +0100, Joerg Schilling wrote: >> This is strange, I have in mind that Simon said BSD was not wanted by the >> programmers and GPL was not practical because Sun then could not give away >> binaries from Closed Source parts from other code owners that are in Sun >> Solaris but could not be in OpenSolaris. > > There's just no way that there were, as you claimed, programmers who were > willing to quit if it was BSD. BSD was fine, there are other good choices > but BSD was fine. What the video showed is there were programmers who were > willing to quit if was the GPL, which is the opposite of what you have so > stridently claimed. > >> IIRC, the only thing I did get from that video is the confirmation that Simon >> was extremely unhappy with Danese claiming that Sun did like to have something >> that is deliberately incompatible to the GPL. 
> > That makes sense to me, the GPL was hated inside of Sun, it was considered > a virus. The idea that you used a tiny bit of GPLed code and then everything > else is GPLed was viewed as highway robbery. Not to get in the middle of this, but I've known several Sun kernel engineers personally over the years (mostly in the 1990's when this is relevant). The overwhelming majority view is what Larry has said. Granted, these were engineers that were at the Sun office in Broomfield, Colorado for the most part, but I think they were representative. I've also talked with several people in senior management at Sun who helped make sure that things got released under CDDL and they've told me they'd rather have gone with BSD, but it had issues the CDDL addressed. So can we please just get on with things and stop this silly back and forth. Warner From grog at lemis.com Sat Jan 7 12:58:29 2017 From: grog at lemis.com (Greg 'groggy' Lehey) Date: Sat, 7 Jan 2017 13:58:29 +1100 Subject: [TUHS] SunOS vs Linux In-Reply-To: References: Message-ID: <20170107025829.GH99823@eureka.lemis.com> On Friday, 6 January 2017 at 9:27:36 -0500, Clem Cole wrote: > On Thu, Jan 5, 2017 at 10:56 PM, Dan Cross wrote: > >> Perhaps an interesting area of speculation is, "what would the world have >> looked like if USL v BSDi hadn't happened *and* SunOS was opened to the >> world?" I think in that parallel universe, Linux wouldn't have made it >> particularly far: absent the legal angle, what would the incentive have been >> to work on something that was striving to basically be Unix, when really >> good Unix was already available? > >> I agree. > > I think that if SunOS 4 had been released to the world at the right > time, the free BSDs wouldn't have happened in the way they did either; > they would have evolved intimately coupled with SunOS. Greg -- Sent from my desktop computer. Finger grog at FreeBSD.org for PGP public key. See complete headers for address and phone numbers. This message is digitally signed. 
If your Microsoft mail program reports problems, please read http://lemis.com/broken-MUA -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 163 bytes Desc: not available URL: From imp at bsdimp.com Sat Jan 7 13:09:18 2017 From: imp at bsdimp.com (Warner Losh) Date: Fri, 6 Jan 2017 20:09:18 -0700 Subject: [TUHS] SunOS vs Linux In-Reply-To: <20170107025829.GH99823@eureka.lemis.com> References: <20170107025829.GH99823@eureka.lemis.com> Message-ID: On Fri, Jan 6, 2017 at 7:58 PM, Greg 'groggy' Lehey wrote: > On Friday, 6 January 2017 at 9:27:36 -0500, Clem Cole wrote: >> On Thu, Jan 5, 2017 at 10:56 PM, Dan Cross wrote: >> >>> Perhaps an interesting area of speculation is, "what would the world have >>> looked like if USL v BSDi hadn't happened *and* SunOS was opened to the >>> world?" I think in that parallel universe, Linux wouldn't have made it >>> particularly far: absent the legal angle, what would the incentive had been >>> to work on something that was striving to basically be Unix, when really >>> good Unix was already available? >> >>> I agree. > > I think that if SunOS 4 had been released to the world at the right > time, the free BSDs wouldn't have happened in the way they did either; > they would have evolved intimately coupled with SunOS. With the right license (BSD), I'd go so far as to saying there'd be no BSD 4.4, or if there was, it would have been rebased from the SunOS base... There were discussions between CSRG and Sun about Sun donating it's reworked VM and VFS to Berkeley to replace the Mach VM that was in there... Don't know the scope of these talks, or if they included any of the dozens of other areas that Sun improved from its BSD 4.3 base... The talks fell apart over the value of the code, if the rumors I've heard are correct. 
Warner From lm at mcvoy.com Sat Jan 7 13:12:33 2017 From: lm at mcvoy.com (Larry McVoy) Date: Fri, 6 Jan 2017 19:12:33 -0800 Subject: [TUHS] SunOS vs Linux In-Reply-To: <20170107025829.GH99823@eureka.lemis.com> References: <20170107025829.GH99823@eureka.lemis.com> Message-ID: <20170107031233.GI16253@mcvoy.com> On Sat, Jan 07, 2017 at 01:58:29PM +1100, Greg 'groggy' Lehey wrote: > On Friday, 6 January 2017 at 9:27:36 -0500, Clem Cole wrote: > > On Thu, Jan 5, 2017 at 10:56 PM, Dan Cross wrote: > > > >> Perhaps an interesting area of speculation is, "what would the world have > >> looked like if USL v BSDi hadn't happened *and* SunOS was opened to the > >> world?" I think in that parallel universe, Linux wouldn't have made it > >> particularly far: absent the legal angle, what would the incentive had been > >> to work on something that was striving to basically be Unix, when really > >> good Unix was already available? > > > >> I agree. > > I think that if SunOS 4 had been released to the world at the right > time, the free BSDs wouldn't have happened in the way they did either; > they would have evolved intimately coupled with SunOS. Yup. Instead of the splintering we have had with *BSD, I think it would have drawn everyone in to work on that OS. I have regrets in my life. Not getting SunOS out there as open source is one of the big ones. I fought for it, perhaps harder than anyone else. Which perhaps makes me the bigger loser since I didn't win. The world would be a better place if that had happened. Linux is fine but it lacks what SunOS had. 
From lm at mcvoy.com Sat Jan 7 13:13:28 2017 From: lm at mcvoy.com (Larry McVoy) Date: Fri, 6 Jan 2017 19:13:28 -0800 Subject: [TUHS] SunOS vs Linux In-Reply-To: References: <20170107025829.GH99823@eureka.lemis.com> Message-ID: <20170107031328.GJ16253@mcvoy.com> On Fri, Jan 06, 2017 at 08:09:18PM -0700, Warner Losh wrote: > On Fri, Jan 6, 2017 at 7:58 PM, Greg 'groggy' Lehey wrote: > > On Friday, 6 January 2017 at 9:27:36 -0500, Clem Cole wrote: > >> On Thu, Jan 5, 2017 at 10:56 PM, Dan Cross wrote: > >> > >>> Perhaps an interesting area of speculation is, "what would the world have > >>> looked like if USL v BSDi hadn't happened *and* SunOS was opened to the > >>> world?" I think in that parallel universe, Linux wouldn't have made it > >>> particularly far: absent the legal angle, what would the incentive had been > >>> to work on something that was striving to basically be Unix, when really > >>> good Unix was already available? > >> > >>> I agree. > > > > I think that if SunOS 4 had been released to the world at the right > > time, the free BSDs wouldn't have happened in the way they did either; > > they would have evolved intimately coupled with SunOS. > > With the right license (BSD), I'd go so far as to saying there'd be no > BSD 4.4, or if there was, it would have been rebased from the SunOS > base... There were discussions between CSRG and Sun about Sun donating > it's reworked VM and VFS to Berkeley to replace the Mach VM that was > in there... Don't know the scope of these talks, or if they included > any of the dozens of other areas that Sun improved from its BSD 4.3 > base... The talks fell apart over the value of the code, if the rumors > I've heard are correct. So as much as I know, I was not privy to these talks. I didn't even know they were happening. 
From lm at mcvoy.com Sun Jan 8 11:37:15 2017 From: lm at mcvoy.com (Larry McVoy) Date: Sat, 7 Jan 2017 17:37:15 -0800 Subject: [TUHS] the guy who brought up SVr4 on Sun machines In-Reply-To: <586d3d90.oAzCBIUMx+CcWar6%schily@schily.net> References: <20170104033512.GA22116@mcvoy.com> <586d234d.vf4JCu1Ye3gumwfc%schily@schily.net> <20170104164630.GA3405@mcvoy.com> <586d2abb.A5j4GovJtyzlD+AQ%schily@schily.net> <20170104171033.GC3405@mcvoy.com> <586d334d.XcKOxzKwrzmvL326%schily@schily.net> <20170104175227.GH3405@mcvoy.com> <586d3d90.oAzCBIUMx+CcWar6%schily@schily.net> Message-ID: <20170108013715.GV16253@mcvoy.com> On Wed, Jan 04, 2017 at 07:23:12PM +0100, Joerg Schilling wrote: > Larry McVoy wrote: > > > Hint: I have been told > > > from Sun employees that the Sun ZFS group did read my diploma thesis before > > > they started with ZFS even though it is written in German ;-) > > > > Huh, interesting. I'll check that out. Both Jeff Bonwick and Bill Moore > > have worked for me. Bonwick was one of my students at Stanford and I > > hired him into the kernel group. Bill worked for me on BitKeeper. > > I'll let you know what they say. So I've asked around and I can't find anyone who has read that thesis. The ZFS guys started on ZFS long before any of them had heard of you. As for your claims that SVr4 is based on SunOS, here's what the guy who did the bring up had to say (spoiler, it's exactly what I told you): SVr4 was not based on SunOS, although it incorporated many of the best features of SunOS 4.x (VM management, filesystem architecture, shared libraries, etc). Those features and interfaces were merged (after extensive discussions, involving, on the Sun side, Bill Shannon, Rob Gingell, Don Cragun and others) into a pre-release version of System V by AT&T. The reference hardware platform was AT&T's 3b2. Sun would receive periodic "loads" from AT&T of that 3b2 based code, which we then merged on top of the machine-dependent code from SunOS 4.x. 
Let's just say it was an adventure. After the first port, I think, Joe Kowalski came on to head the userland effort, and the team gradually built up from there. That merged code was Sun proprietary stuff; AFAIK it never went back to AT&T. I could go into your comments about Bill Joy implementing mmap but I cracked open the 4.2BSD release notes and it said that it was unimplemented. Let's move on to more productive conversations. I love the history that is in this group. From mckusick at mckusick.com Sun Jan 8 16:10:22 2017 From: mckusick at mckusick.com (Kirk McKusick) Date: Sat, 07 Jan 2017 22:10:22 -0800 Subject: [TUHS] SunOS vs Linux Message-ID: <201701080610.v086AMr7084906@chez.mckusick.com> > Date: Fri, 6 Jan 2017 20:09:18 -0700 > From: Warner Losh > To: "Greg 'groggy' Lehey" > Cc: Clem Cole , The Eunuchs Hysterical Society > > Subject: Re: [TUHS] SunOS vs Linux > >> On Friday, 6 January 2017 at 9:27:36 -0500, Clem Cole wrote: >> >> I think that if SunOS 4 had been released to the world at the right >> time, the free BSDs wouldn't have happened in the way they did either; >> they would have evolved intimately coupled with SunOS. > > With the right license (BSD), I'd go so far as to saying there'd be no > BSD 4.4, or if there was, it would have been rebased from the SunOS > base... There were discussions between CSRG and Sun about Sun donating > it's reworked VM and VFS to Berkeley to replace the Mach VM that was > in there... Don't know the scope of these talks, or if they included > any of the dozens of other areas that Sun improved from its BSD 4.3 > base... The talks fell apart over the value of the code, if the rumors > I've heard are correct. > > Warner Since I was involved with the negotiations with Sun, I can speak directly to this discussion. The 4.2BSD VM was based on the implementation done by Ozalp Babaoglu that was incorporated into the BSD kernel by Bill Joy. It was very VAX centric and was not able to handle shared read-write mappings. 
Before Bill Joy left Berkeley for Sun, he wrote up the API specification for the mmap interface but did not finish an implementation. At Sun, he was very involved in the implementation though did not write much (if any) of the code for the SunOS VM. The original plan was to ship 4.2BSD with an mmap implementation, but with Bill's departure that did not happen. So, it fell to me to sort out how to get it into 4.3BSD. CSRG did not have the resources to do it from scratch (there were only three of us). So, I researched existing implementations and it came down to the SunOS and MACH implementations. The obvious choice was SunOS, so I approached Sun about contributing their implementation to Berkeley. We had had a lot of cooperation about exchanging bug fixes, so this is not as crazy as it seems. The Sun engineers were all for it, and convinced their managers to push my request up the hierarchy. Skipping over lots of drama it eventually got to Scott McNealy who was dubious, but eventually bought into the idea and cleared it. At that point it went to the Sun lawyers to draw up the paperwork. The lawyers came back and said that "giving away SunOS technology could lead to a stockholder lawsuit concerning the giving away of stockholder assets." End of discussion. We had to go with MACH. Kirk McKusick From ron at ronnatalie.com Mon Jan 9 00:52:34 2017 From: ron at ronnatalie.com (Ron Natalie) Date: Sun, 8 Jan 2017 09:52:34 -0500 Subject: [TUHS] SunOS vs Linux In-Reply-To: <201701080610.v086AMr7084906@chez.mckusick.com> References: <201701080610.v086AMr7084906@chez.mckusick.com> Message-ID: <03f001d269be$d930ea50$8b92bef0$@ronnatalie.com> > The lawyers came back and said that "giving away SunOS technology could lead to a stockholder lawsuit concerning the giving away of stockholder assets." End of discussion. We had to go with MACH. Gosh, this strikes a nerve. The engineers at our company all had access to the license generator (which we wrote). 
The thing had an easy way to kick out a 30-day demo license, so we always used that. When we got bought by a publicly traded company, they determined that the license keys were an essential stockholder asset and took the license generator away from us. We all just edited out the code that checked the license out of the program. In fact, I believe at least one major release went out with an undocumented environment variable that disabled the licensing system which probably was a much bigger risk to stockholder assets than letting the engineers issue themselves demo licenses. From angus at fairhaven.za.net Mon Jan 9 02:28:02 2017 From: angus at fairhaven.za.net (Angus Robinson) Date: Sun, 8 Jan 2017 18:28:02 +0200 Subject: [TUHS] SunOS vs Linux In-Reply-To: References: Message-ID: I think at one point Linus said that if he had known or if 386bsd was available he would not have started Linux (If I remember correctly) On 6 Jan 2017 05:57, "Dan Cross" wrote: > On Wed, Jan 4, 2017 at 12:08 PM, Clem Cole wrote: >> >> On Wed, Jan 4, 2017 at 11:17 AM, ron minnich > > >> wrote: >> >>> Larry, had Sun open sourced SunOS, as you fought so hard to make happen, >>> Linux might not have happened as it did. SunOS was really good. Chalk up >>> another win for ATT! >>> >> >> ​FWIW: I disagree​. For details look at my discussion of rewriting >> Linux in RUST >> >> on quora. But a quick point is this .... Linux original took off (and was >> successful) not because of GPL, but in spite of it and later the GPL would >> help it. But it was not the GPL per say that made Linux vs BSD vs SunOS et >> al. >> >> What made Linux happen was the BSDi/UCB vs AT&T case. At the time, a >> lot of hackers (myself included) thought the case was about *copyright*. >> It was not, it was about *trade secret* and the ideas around UNIX. * >> i.e.* folks like, we "mentally contaminated" with the AT&T Intellectual >> Property. 
>> >> When the case came, folks like me that were running 386BSD which would >> later begat FreeBSD et al, got scared. At that time, *BSD (and SunOS) >> were much farther along in the development and stability. But .... may of >> us hought Linux would insulate us from losing UNIX on cheap HW because >> their was not AT&T copyrighted code in it. Sadly, the truth is that if >> AT&T had won the case, *all UNIX-like systems* would have had to be >> removed from the market in the USA and EU [NATO-allies for sure]. >> >> That said, the fact the *BSD and Linux were in the wild, would have made >> it hard to enforce and at a "Free" (as in beer) price it may have been hard >> to make it stick. But that it was a misunderstanding of legal thing that >> made Linux "valuable" to us, not the implementation. >> >> If SunOS has been available, it would not have been any different. It >> would have been thought of based on the AT&T IP, but trade secret and >> original copyright. >> > > Yes, it seems in retrospect that USL v BSDi basically killed Unix (in the > sense that Linux is not a blood-relative of Unix). I remember someone > quipping towards the late 90s, "the Unix wars are over. Linux won." > > Perhaps an interesting area of speculation is, "what would the world have > looked like if USL v BSDi hadn't happened *and* SunOS was opened to the > world?" I think in that parallel universe, Linux wouldn't have made it > particularly far: absent the legal angle, what would the incentive had been > to work on something that was striving to basically be Unix, when really > good Unix was already available? > > Ah well. > > - Dan C. > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From beebe at math.utah.edu Mon Jan 9 02:00:03 2017 From: beebe at math.utah.edu (Nelson H. F. 
Beebe) Date: Sun, 8 Jan 2017 09:00:03 -0700 Subject: [TUHS] Resurrecting the B (which came before C) programming language Message-ID: I was amused this morning to see a post on the tack-devel at lists.sourceforge.net mailing list (TACK = The Amsterdam Compiler Kit) today from David Given, who writes: >> ... >> ... I took some time off from thinking about register allocation (ugh) >> and ported the ABC B compiler to the ACK. It's now integrated into the >> system and everything. >> >> B is Ken Thompson and Dennis Ritchie's untyped programming language >> which later acquired types and turned into K&R C. Everything's a machine >> word, and pointers are *word* address, not byte addresses. >> >> The port's a bit clunky and doesn't generate good code, but it works and >> it passes its own tests. It runs on all supported backends. There's not >> much standard library, though. >> >> Example: >> >> https://github.com/davidgiven/ack/blob/default/examples/hilo.b >> >> (Also, in the process it found lots of bugs in the PowerPC mcg backend, >> now fixed, as well as several subtle bugs in the PowerPC ncg backend; so >> that's good. I'm pretty sure that this is the only B compiler for the >> PowerPC in existence.) >> ... ------------------------------------------------------------------------------- - Nelson H. F. 
Beebe Tel: +1 801 581 5254 - - University of Utah FAX: +1 801 581 4148 - - Department of Mathematics, 110 LCB Internet e-mail: beebe at math.utah.edu - - 155 S 1400 E RM 233 beebe at acm.org beebe at computer.org - - Salt Lake City, UT 84112-0090, USA URL: http://www.math.utah.edu/~beebe/ - ------------------------------------------------------------------------------- From kayparker at mailite.com Mon Jan 9 04:02:17 2017 From: kayparker at mailite.com (=?utf-8?Q?Kay=20Parker=20=09=20?=) Date: Sun, 08 Jan 2017 10:02:17 -0800 Subject: [TUHS] SunOS vs Linux In-Reply-To: References: Message-ID: <1483898537.3109544.841035897.5E751FC9@webmail.messagingengine.com> You remember correctly: 'If 386BSD had been available when I started on Linux, Linux would probably never had happened.' http://gondwanaland.com/meta/history/interview.html On Sun, Jan 8, 2017, at 08:28 AM, Angus Robinson wrote: > I think at one point Linus said that if he had known or if 386bsd was > available he would not have started Linux > > (If I remember correctly) > > On 6 Jan 2017 05:57, "Dan Cross" wrote: >> On Wed, Jan 4, 2017 at 12:08 PM, Clem Cole wrote: >>> On Wed, Jan 4, 2017 at 11:17 AM, ron minnich >>> wrote: >>>> Larry, had Sun open sourced SunOS, as you fought so hard to make >>>> happen, Linux might not have happened as it did. SunOS was really >>>> good. Chalk up another win for ATT! >>>> >>> >>> FWIW: I disagree. For details look at my discussion of rewriting >>> Linux in RUST[2] on quora. But a quick point is this .... Linux >>> original took off (and was successful) not because of GPL, but in >>> spite of it and later the GPL would help it. But it was not the GPL >>> per say that made Linux vs BSD vs SunOS et al. >>> >>> What made Linux happen was the BSDi/UCB vs AT&T case. At the >>> time, a lot of hackers (myself included) thought the case was about >>> *copyright*. It was not, it was about *trade secret* and the ideas >>> around UNIX. 
* i.e.* folks like, we "mentally contaminated" with >>> the AT&T Intellectual Property. >>> >>> When the case came, folks like me that were running 386BSD which >>> would later begat FreeBSD et al, got scared. At that time, *BSD >>> (and SunOS) were much farther along in the development and >>> stability. But .... may of us hought Linux would insulate us from >>> losing UNIX on cheap HW because their was not AT&T copyrighted code >>> in it. Sadly, the truth is that if AT&T had won the case, _*all >>> UNIX-like systems*_ would have had to be removed from the market in >>> the USA and EU [NATO-allies for sure]. >>> >>> That said, the fact the *BSD and Linux were in the wild, would have >>> made it hard to enforce and at a "Free" (as in beer) price it may >>> have been hard to make it stick. But that it was a >>> misunderstanding of legal thing that made Linux "valuable" to us, >>> not the implementation. >>> >>> If SunOS has been available, it would not have been any different. >>> It would have been thought of based on the AT&T IP, but trade secret >>> and original copyright. >> >> Yes, it seems in retrospect that USL v BSDi basically killed Unix >> (in the sense that Linux is not a blood-relative of Unix). I >> remember someone quipping towards the late 90s, "the Unix wars are >> over. Linux won." >> >> Perhaps an interesting area of speculation is, "what would the world >> have looked like if USL v BSDi hadn't happened *and* SunOS was opened >> to the world?" I think in that parallel universe, Linux wouldn't have >> made it particularly far: absent the legal angle, what would the >> incentive had been to work on something that was striving to >> basically be Unix, when really good Unix was already available? >> >> Ah well. >> >> - Dan C. >> -- Kay Parker kayparker at mailite.com Links: 1. https://mail.google.com/mail/?view=cm&fs=1&tf=1&to=rminnich at gmail.com 2. 
https://www.quora.com/Would-it-be-possible-advantageous-to-rewrite-the-Linux-kernel-in-Rust-when-the-language-is-stable -- http://www.fastmail.com - IMAP accessible web-mail -------------- next part -------------- An HTML attachment was scrubbed... URL: From aap at papnet.eu Mon Jan 9 05:09:19 2017 From: aap at papnet.eu (Angelo Papenhoff) Date: Sun, 8 Jan 2017 20:09:19 +0100 Subject: [TUHS] Resurrecting the B (which came before C) programming language In-Reply-To: References: Message-ID: <20170108190919.GA67926@indra.papnet.eu> On 08/01/17, Nelson H. F. Beebe wrote: > I was amused this morning to see a post on the tack-devel at lists.sourceforge.net > mailing list (TACK = The Amsterdam Compiler Kit) today from David Given, > who writes: > > >> ... > >> ... I took some time off from thinking about register allocation (ugh) > >> and ported the ABC B compiler to the ACK. It's now integrated into the > >> system and everything. > >> > >> B is Ken Thompson and Dennis Ritchie's untyped programming language > >> which later acquired types and turned into K&R C. Everything's a machine > >> word, and pointers are *word* address, not byte addresses. > >> > >> The port's a bit clunky and doesn't generate good code, but it works and > >> it passes its own tests. It runs on all supported backends. There's not > >> much standard library, though. > >> > >> Example: > >> > >> https://github.com/davidgiven/ack/blob/default/examples/hilo.b > >> > >> (Also, in the process it found lots of bugs in the PowerPC mcg backend, > >> now fixed, as well as several subtle bugs in the PowerPC ncg backend; so > >> that's good. I'm pretty sure that this is the only B compiler for the > >> PowerPC in existence.) > >> ... Some explanation: I wrote abc (https://github.com/aap/abc/) a few years ago as a toy and because I was annoyed there was no B compiler around. David and I met at VCFE Zürich in November where I told him about it. He then proceeded to port it to the ACK. 
From clemc at ccc.com Mon Jan 9 06:51:06 2017 From: clemc at ccc.com (Clem cole) Date: Sun, 8 Jan 2017 15:51:06 -0500 Subject: [TUHS] SunOS vs Linux In-Reply-To: <1483898537.3109544.841035897.5E751FC9@webmail.messagingengine.com> References: <1483898537.3109544.841035897.5E751FC9@webmail.messagingengine.com> Message-ID: <61DAEC90-BE06-41FF-9B2B-D401CC8FC9F9@ccc.com> But It was (check the dates listed in the DDJ articles and the dates of Linus's first email). He just did not know the FTP path to down load it. Which is sort of funny because it was not particularly secret between most BSD users. Jordan was pretty liberal at giving it to people if he believed they had access to a BSD license which just about anyone at a university (like Linus was at the time). Sent from my PDP-7 Running UNIX V0 expect things to be almost but not quite. > On Jan 8, 2017, at 1:02 PM, Kay Parker wrote: > > You remember correctly: > > 'If 386BSD had been available when I started on Linux, Linux would probably never had happened.' > http://gondwanaland.com/meta/history/interview.html > > >> On Sun, Jan 8, 2017, at 08:28 AM, Angus Robinson wrote: >> I think at one point Linus said that if he had known or if 386bsd was available he would not have started Linux >> >> (If I remember correctly) >> >> On 6 Jan 2017 05:57, "Dan Cross" wrote: >> On Wed, Jan 4, 2017 at 12:08 PM, Clem Cole wrote: >> On Wed, Jan 4, 2017 at 11:17 AM, ron minnich wrote: >> Larry, had Sun open sourced SunOS, as you fought so hard to make happen, Linux might not have happened as it did. SunOS was really good. Chalk up another win for ATT! >> >> >> FWIW: I disagree. For details look at my discussion of rewriting Linux in RUST on quora. But a quick point is this .... Linux original took off (and was successful) not because of GPL, but in spite of it and later the GPL would help it. But it was not the GPL per say that made Linux vs BSD vs SunOS et al. >> >> What made Linux happen was the BSDi/UCB vs AT&T case. 
At the time, a lot of hackers (myself included) thought the case was about copyright. It was not, it was about trade secret and the ideas around UNIX. i.e. folks like, we "mentally contaminated" with the AT&T Intellectual Property. >> >> When the case came, folks like me that were running 386BSD which would later begat FreeBSD et al, got scared. At that time, *BSD (and SunOS) were much farther along in the development and stability. But .... may of us hought Linux would insulate us from losing UNIX on cheap HW because their was not AT&T copyrighted code in it. Sadly, the truth is that if AT&T had won the case, all UNIX-like systems would have had to be removed from the market in the USA and EU [NATO-allies for sure]. >> >> That said, the fact the *BSD and Linux were in the wild, would have made it hard to enforce and at a "Free" (as in beer) price it may have been hard to make it stick. But that it was a misunderstanding of legal thing that made Linux "valuable" to us, not the implementation. >> >> If SunOS has been available, it would not have been any different. It would have been thought of based on the AT&T IP, but trade secret and original copyright. >> >> Yes, it seems in retrospect that USL v BSDi basically killed Unix (in the sense that Linux is not a blood-relative of Unix). I remember someone quipping towards the late 90s, "the Unix wars are over. Linux won." >> >> Perhaps an interesting area of speculation is, "what would the world have looked like if USL v BSDi hadn't happened *and* SunOS was opened to the world?" I think in that parallel universe, Linux wouldn't have made it particularly far: absent the legal angle, what would the incentive had been to work on something that was striving to basically be Unix, when really good Unix was already available? >> >> Ah well. >> >> - Dan C. 
>> > > -- > Kay Parker > kayparker at mailite.com > > > -- > http://www.fastmail.com - IMAP accessible web-mail -------------- next part -------------- An HTML attachment was scrubbed... URL: From w.f.j.mueller at retro11.de Mon Jan 9 06:37:41 2017 From: w.f.j.mueller at retro11.de (Walter F.J. Mueller) Date: Sun, 8 Jan 2017 21:37:41 +0100 Subject: [TUHS] Unix stories, Stephen Bourne and IF-FI in C code Message-ID: <22ec379e-4985-2ab5-9fa5-f932fa4de653@retro11.de> There was a thread 'Unix stories' where Stephen Bourne played a role. Here is another story about Stephen Bourne. He worked first on Algol 68, then joined the Unix team at Bell Labs and wrote sh and adb. It's well known that the if-fi and case-esac notation from Algol came to shell syntax this way. Maybe less known is that Bourne tried as hard as he could to make the C code of sh and adb look like Algol, with the help of preprocessor macros. I stumbled across this when I looked into the 2.11BSD code base some time ago. Look at http://www.retro11.de/ouxr/211bsd/usr/src/bin/sh/main.c.html http://www.retro11.de/ouxr/211bsd/usr/src/bin/adb/main.c.html to enjoy C with an Algol-look. The definitions are in http://www.retro11.de/ouxr/211bsd/usr/src/bin/sh/mac.h.html http://www.retro11.de/ouxr/211bsd/usr/src/bin/adb/defs.h.html Cheers, Walter From lm at mcvoy.com Mon Jan 9 07:12:53 2017 From: lm at mcvoy.com (Larry McVoy) Date: Sun, 8 Jan 2017 13:12:53 -0800 Subject: [TUHS] Unix stories, Stephen Bourne and IF-FI in C code In-Reply-To: <22ec379e-4985-2ab5-9fa5-f932fa4de653@retro11.de> References: <22ec379e-4985-2ab5-9fa5-f932fa4de653@retro11.de> Message-ID: <20170108211253.GV25228@mcvoy.com> On Sun, Jan 08, 2017 at 09:37:41PM +0100, Walter F.J. Mueller wrote: > to enjoy C with an Algol-look. 
"enjoy" :) From dave at horsfall.org Mon Jan 9 07:26:14 2017 From: dave at horsfall.org (Dave Horsfall) Date: Mon, 9 Jan 2017 08:26:14 +1100 (EST) Subject: [TUHS] Unix stories, Stephen Bourne and IF-FI in C code In-Reply-To: <20170108211253.GV25228@mcvoy.com> References: <22ec379e-4985-2ab5-9fa5-f932fa4de653@retro11.de> <20170108211253.GV25228@mcvoy.com> Message-ID: On Sun, 8 Jan 2017, Larry McVoy wrote: > > to enjoy C with an Algol-look. > > "enjoy" :) Good grief... I can actually read that, having learned Algol back in my Computer Science days. -- Dave Horsfall DTM (VK2KFU) "Those who don't understand security will suffer." From kayparker at mailite.com Mon Jan 9 07:54:52 2017 From: kayparker at mailite.com (=?utf-8?Q?Kay=20Parker=20=09=20?=) Date: Sun, 08 Jan 2017 13:54:52 -0800 Subject: [TUHS] Unix stories, Stephen Bourne and IF-FI in C code In-Reply-To: <22ec379e-4985-2ab5-9fa5-f932fa4de653@retro11.de> References: <22ec379e-4985-2ab5-9fa5-f932fa4de653@retro11.de> Message-ID: <1483912492.3148012.841178505.57CBD486@webmail.messagingengine.com> Thanks Walter! I had already read about Algol-like C in the Bourne era and now know what it means. I also read elsewhere that it was an act of freedom when the Bell Labs boys finally freed themselves from the Bourne 'Algol' influence. On Sun, Jan 8, 2017, at 12:37 PM, Walter F.J. Mueller wrote: > There was thread 'Unix stories' were Stephen Bourne played role. > > Here another story about Stephen Bourne. He worked first on Algol 68, > than joined the Unix team at Bell labs and wrote sh and adb. It's well > known that the if-fi and case-esac notation from Algol came to shell > syntax this way. > > Maybe less know is that Bourne tried as hard as he could to make the > C code of sh and adb look like Algol, with the help of preprocessor > macros. I stumbled across this when I looked into the 2.11BSD code > base some time ago. 
Look at > > http://www.retro11.de/ouxr/211bsd/usr/src/bin/sh/main.c.html > http://www.retro11.de/ouxr/211bsd/usr/src/bin/adb/main.c.html > > to enjoy C with an Algol-look. The definitions are in > > http://www.retro11.de/ouxr/211bsd/usr/src/bin/sh/mac.h.html > http://www.retro11.de/ouxr/211bsd/usr/src/bin/adb/defs.h.html > > > Cheers, Walter -- Kay Parker kayparker at mailite.com -- http://www.fastmail.com - Send your email first class From wes.parish at paradise.net.nz Mon Jan 9 08:52:55 2017 From: wes.parish at paradise.net.nz (Wesley Parish) Date: Mon, 09 Jan 2017 11:52:55 +1300 (NZDT) Subject: [TUHS] SunOS vs Linux In-Reply-To: References: Message-ID: <1483915975.5872c2c7af400@www.paradise.net.nz> I remember reading the same. I just can't remember where I read it. I'll try to track it down. Wesley Parish Quoting Angus Robinson : > I think at one point Linus said that if he had known or if 386bsd was > available he would not have started Linux > > (If I remember correctly) > > On 6 Jan 2017 05:57, "Dan Cross" wrote: > > > On Wed, Jan 4, 2017 at 12:08 PM, Clem Cole wrote: > >> > >> On Wed, Jan 4, 2017 at 11:17 AM, ron minnich >> > > > > >> wrote: > >> > >>> Larry, had Sun open sourced SunOS, as you fought so hard to make > happen, > >>> Linux might not have happened as it did. SunOS was really good. > Chalk up > >>> another win for ATT! > >>> > >> > >> ​FWIW: I disagree​. For details look at my discussion of > rewriting > >> Linux in RUST > >> > Linux-kernel-in-Rust-when-the-language-is-stable> > >> on quora. But a quick point is this .... Linux original took off (and > was > >> successful) not because of GPL, but in spite of it and later the GPL > would > >> help it. But it was not the GPL per say that made Linux vs BSD vs > SunOS et > >> al. > >> > >> What made Linux happen was the BSDi/UCB vs AT&T case. At the time, a > >> lot of hackers (myself included) thought the case was about > *copyright*. 
> >> It was not, it was about *trade secret* and the ideas around UNIX. * > >> i.e.* folks like, we "mentally contaminated" with the AT&T > Intellectual > >> Property. > >> > >> When the case came, folks like me that were running 386BSD which > would > >> later begat FreeBSD et al, got scared. At that time, *BSD (and > SunOS) > >> were much farther along in the development and stability. But .... > may of > >> us hought Linux would insulate us from losing UNIX on cheap HW > because > >> their was not AT&T copyrighted code in it. Sadly, the truth is that > if > >> AT&T had won the case, *all UNIX-like systems* would have had to be > >> removed from the market in the USA and EU [NATO-allies for sure]. > >> > >> That said, the fact the *BSD and Linux were in the wild, would have > made > >> it hard to enforce and at a "Free" (as in beer) price it may have > been hard > >> to make it stick. But that it was a misunderstanding of legal thing > that > >> made Linux "valuable" to us, not the implementation. > >> > >> If SunOS has been available, it would not have been any different. > It > >> would have been thought of based on the AT&T IP, but trade secret > and > >> original copyright. > >> > > > > Yes, it seems in retrospect that USL v BSDi basically killed Unix (in > the > > sense that Linux is not a blood-relative of Unix). I remember someone > > quipping towards the late 90s, "the Unix wars are over. Linux won." > > > > Perhaps an interesting area of speculation is, "what would the world > have > > looked like if USL v BSDi hadn't happened *and* SunOS was opened to > the > > world?" I think in that parallel universe, Linux wouldn't have made > it > > particularly far: absent the legal angle, what would the incentive had > been > > to work on something that was striving to basically be Unix, when > really > > good Unix was already available? > > > > Ah well. > > > > - Dan C. > > > > > "I have supposed that he who buys a Method means to learn it." 
- Ferdinand Sor, Method for Guitar "A verbal contract isn't worth the paper it's written on." -- Samuel Goldwyn From wkt at tuhs.org Mon Jan 9 09:21:05 2017 From: wkt at tuhs.org (Warren Toomey) Date: Mon, 9 Jan 2017 09:21:05 +1000 Subject: [TUHS] 2.11BSD source code cross-reference Message-ID: <20170108232105.GC29926@minnie.tuhs.org> All, I'm not sure if you know of Walter Müller's work at implementing a PDP-11 on FPGAs: https://wfjm.github.io/home/w11/. He sent me this e-mail with an excellent source code cross-reference of the 2.11BSD kernel: P.S.: long time ago I wrote a source code viewer for 2.11BSD and OS with a similar file and directory layout. I made a few tune-ups lately and wrote some sort of introduction, see https://wfjm.github.io/home/ouxr/ Might be helpful for you in case you inspect 2.11BSD source code. Cheers all, Warren From ats at offog.org Mon Jan 9 09:14:17 2017 From: ats at offog.org (Adam Sampson) Date: Sun, 08 Jan 2017 23:14:17 +0000 Subject: [TUHS] Unix stories, Stephen Bourne and IF-FI in C code In-Reply-To: <22ec379e-4985-2ab5-9fa5-f932fa4de653@retro11.de> (Walter F. J. Mueller's message of "Sun, 8 Jan 2017 21:37:41 +0100") References: <22ec379e-4985-2ab5-9fa5-f932fa4de653@retro11.de> Message-ID: "Walter F.J. Mueller" writes: > [...] to enjoy C with an Algol-look. For those who enjoy Bournegol, can I also recommend the source code to David Turner's KRC, which was ported from (EMAS) BCPL to C using a similar approach. A sample from main.c: STATIC VOID DISPLAYCOM() { TEST HAVEID() THEN TEST HAVE(EOL) THEN DISPLAY(THE_ID,TRUE,FALSE); OR TEST HAVE((TOKEN)DOTDOT_SY) THEN { ATOM A = THE_ID; LIST X=NIL; ATOM B = HAVE(EOL) ? (ATOM)EOL :> // BUG? HAVEID() && HAVE(EOL) ? 
THE_ID : 0; TEST B==0 THEN SYNTAX(); OR X=EXTRACT(A,B); UNTIL X==NIL DO { DISPLAY((ATOM)HD(X),FALSE,FALSE); X=TL(X); } } //could insert extra line here between groups OR SYNTAX(); OR SYNTAX(); } http://krc-lang.org/ -- Adam Sampson From doug at cs.dartmouth.edu Mon Jan 9 11:06:50 2017 From: doug at cs.dartmouth.edu (Doug McIlroy) Date: Sun, 08 Jan 2017 20:06:50 -0500 Subject: [TUHS] Unix stories, Stephen Bourne and IF-FI in C code Message-ID: <201701090106.v0916oha004909@coolidge.cs.Dartmouth.EDU> > if-fi and case-esac notation from Algol came to shell [via Steve Bourne] There was some pushback which resulted in the strange compromise of if-fi, case-esac, do-done. Alas, the details have slipped from memory. Help, scj? doug From random832 at fastmail.com Mon Jan 9 12:18:08 2017 From: random832 at fastmail.com (Random832) Date: Sun, 08 Jan 2017 21:18:08 -0500 Subject: [TUHS] Unix stories, Stephen Bourne and IF-FI in C code In-Reply-To: <201701090106.v0916oha004909@coolidge.cs.Dartmouth.EDU> References: <201701090106.v0916oha004909@coolidge.cs.Dartmouth.EDU> Message-ID: <1483928288.44688.841335609.6DFCF647@webmail.messagingengine.com> On Sun, Jan 8, 2017, at 20:06, Doug McIlroy wrote: > > if-fi and case-esac notation from Algol came to shell [via Steve Bourne] > > There was some pushback which resulted in the strange compromise > of if-fi, case-esac, do-done. Alas, the details have slipped from > memory. Help, scj? My guess would be that it was because od already existed as the octal dump tool. I think I heard it from someone else who was guessing on the same basis actually. From norman at oclsc.org Mon Jan 9 12:30:03 2017 From: norman at oclsc.org (Norman Wilson) Date: Sun, 08 Jan 2017 21:30:03 -0500 Subject: [TUHS] Unix stories, Stephen Bourne and IF-FI in C code Message-ID: <1483929007.6355.for-standards-violators@oclsc.org> Doug McIlroy: There was some pushback which resulted in the strange compromise of if-fi, case-esac, do-done. 
Alas, the details have slipped from memory. Help, scj? ==== do-od would have required renaming the long-tenured od(1). I remember a tale--possibly chat in the UNIX Room at one point in the latter 1980s--that Steve tried and tried and tried to convince Ken to rename od, in the name of symmetry and elegance. Ken simply said no, as many times as it took. I don't remember who I heard this from; anyone still in touch with Ken who can ask him? Norman Wilson Toronto ON From wkt at tuhs.org Mon Jan 9 12:35:02 2017 From: wkt at tuhs.org (Warren Toomey) Date: Mon, 9 Jan 2017 12:35:02 +1000 Subject: [TUHS] History of select(2) Message-ID: <20170109023502.GA8507@minnie.tuhs.org> Also, I came across this history of select(2) a while back: https://idea.popcount.org/2016-11-01-a-brief-history-of-select2/ Cheers, Warren From grog at lemis.com Mon Jan 9 13:00:22 2017 From: grog at lemis.com (Greg 'groggy' Lehey) Date: Mon, 9 Jan 2017 14:00:22 +1100 Subject: [TUHS] SunOS vs Linux In-Reply-To: <61DAEC90-BE06-41FF-9B2B-D401CC8FC9F9@ccc.com> References: <1483898537.3109544.841035897.5E751FC9@webmail.messagingengine.com> <61DAEC90-BE06-41FF-9B2B-D401CC8FC9F9@ccc.com> Message-ID: <20170109030022.GE66746@eureka.lemis.com> On Sunday, 8 January 2017 at 15:51:06 -0500, Clem cole wrote: > > But It was (check the dates listed in the DDJ articles and the dates > of Linus's first email). He just did not know the FTP path to down > load it. Which is sort of funny because it was not particularly > secret between most BSD users. Given that the first person he mentions in the article is Bruce Evans, it's difficult to understand how he hadn't heard of it. Greg -- Sent from my desktop computer. Finger grog at FreeBSD.org for PGP public key. See complete headers for address and phone numbers. This message is digitally signed. If your Microsoft mail program reports problems, please read http://lemis.com/broken-MUA -------------- next part -------------- A non-text attachment was scrubbed... 
Name: signature.asc Type: application/pgp-signature Size: 163 bytes Desc: not available URL: From scj at yaccman.com Mon Jan 9 13:31:18 2017 From: scj at yaccman.com (Steve Johnson) Date: Sun, 08 Jan 2017 19:31:18 -0800 Subject: [TUHS] Unix stories, Stephen Bourne and IF-FI in C code In-Reply-To: <1483929007.6355.for-standards-violators@oclsc.org> Message-ID: <934ec8dea21ce728b7c4e70a6ee2deb86af39d27@webmail.yaccman.com> I wasn't directly involved in this, but I do remember Dennis telling me essentially the same story. I don't recall him mentioning Ken's name, just that "we couldn't use _od_ because that was already taken". Steve B and I had adjacent offices, so I overheard a lot of the discussions about the Bourne shell. The quoting mechanisms, in particular, got a lot of attention, I think to good end. There was a lot more thought there than is evident from the surface... Steve (not Bourne) ----- Original Message ----- From: "Norman Wilson" To: Cc: Sent: Sun, 08 Jan 2017 21:30:03 -0500 Subject: Re: [TUHS] Unix stories, Stephen Bourne and IF-FI in C code Doug McIlroy: There was some pushback which resulted in the strange compromise of if-fi, case-esac, do-done. Alas, the details have slipped from memory. Help, scj? ==== do-od would have required renaming the long-tenured od(1). I remember a tale--possibly chat in the UNIX Room at one point in the latter 1980s--that Steve tried and tried and tried to convince Ken to rename od, in the name of symmetry and elegance. Ken simply said no, as many times as it took. I don't remember who I heard this from; anyone still in touch with Ken who can ask him? Norman Wilson Toronto ON -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From arno.griffioen at ieee.org Mon Jan 9 16:32:25 2017 From: arno.griffioen at ieee.org (Arno Griffioen) Date: Mon, 9 Jan 2017 07:32:25 +0100 Subject: [TUHS] SunOS vs Linux In-Reply-To: <20170109030022.GE66746@eureka.lemis.com> References: <1483898537.3109544.841035897.5E751FC9@webmail.messagingengine.com> <61DAEC90-BE06-41FF-9B2B-D401CC8FC9F9@ccc.com> <20170109030022.GE66746@eureka.lemis.com> Message-ID: <20170109063225.7imw2tuiomsekvtn@ancienthardware.org> On Mon, Jan 09, 2017 at 02:00:22PM +1100, Greg 'groggy' Lehey wrote: > > load it. Which is sort of funny because it was not particularly > > secret between most BSD users. > > Given that the first person he mentions in the article is Bruce Evans, > it's difficult to understand how he hadn't heard of it. Have to keep in mind that Linus was at the time of course a student in Finland, so outside the USA. Outside the USA such BSD (or other *IX) source-code access at universities and technical schools was not common, in my personal experience. At that time I was a student too and apart from MINIX there really was little to no *IX source access available to anyone (BSD or otherwise) unless for very specific research applications and needing to sign all sorts of NDA stuff. Buying a BSD license was way outside a student's budget at that time and universities were not very forthcoming in giving them access. As a result MINIX was actually making quite a few strides to get more complex, but Andrew Tanenbaum always actively resisted turning it into a 'production' system as he wanted to retain it as an educational tool (and the license agreement was quite limited to this purpose), pushing a lot of European hackers towards this initially very rudimentary MINIX userland-compatible new little kernel made by some Finnish dude ;) Quite a few strong discussions between Linus and Andrew at the time on Usenet in comp.os.minix about the monolithic vs. microkernel ideas. Bye, Arno. 
From wes.parish at paradise.net.nz Mon Jan 9 18:27:21 2017 From: wes.parish at paradise.net.nz (Wesley Parish) Date: Mon, 09 Jan 2017 21:27:21 +1300 (NZDT) Subject: [TUHS] SunOS vs Linux In-Reply-To: <20170109063225.7imw2tuiomsekvtn@ancienthardware.org> References: <1483898537.3109544.841035897.5E751FC9@webmail.messagingengine.com> <61DAEC90-BE06-41FF-9B2B-D401CC8FC9F9@ccc.com> <20170109030022.GE66746@eureka.lemis.com> <20170109063225.7imw2tuiomsekvtn@ancienthardware.org> Message-ID: <1483950441.58734969d08cd@www.paradise.net.nz> I can second that. At the time I was asking about an OS suitable for my brand-new 486 while I was at the U of Canterbury NZ, I was told I'd need to get an AT&T license for BSD, and those cost a king's ransom. Does anybody have copies of the kind of legal guff AT&T put these universities through? It would make interesting reading. Wesley Parish Quoting Arno Griffioen : > On Mon, Jan 09, 2017 at 02:00:22PM +1100, Greg 'groggy' Lehey wrote: > > > load it. Which is sort of funny because it was not particularly > > > secret between most BSD users. > > > > Given that the first person he mentions in the article is Bruce > Evans, > > it's difficult to understand how he hadn't heard of it. > > Have to keep in mind that Linus was at the time of course a student in > Finland, so outside the USA. > > Outside the USA such BSD (or other *IX) source-code access on > universities > and technical schools was not common is my personal experience. > > At that time I was a student too and apart from MINIX there really was > little to no *IX source access available to anyone (BSD or otherwise) > unless > for very specific research applications and needing to sign all sorts of > NDA > stuff. > > Buying a BSD license was way outside a student's budget at that time > and universities were not very forthcoming in giving them access. 
> > As a result MINIX was actually making quite a few strides to get more > complex, but Andrew Tanenbaum always actively resisted turning it into a > > 'production' system as he wanted to retain it as an educational tool > (and the license agreement was quite limited to this purpose) pushing a > > lot of european hackers towards this initially very rudimentary minix > userland-compatible new little kernel made by some finnish dude ;) > > Quite a few strong discussions between Linus and Andrew at the time > on Usenet in comp.os.minix about the monolithic vs. microkernel > ideas. > > Bye, Arno. > "I have supposed that he who buys a Method means to learn it." - Ferdinand Sor, Method for Guitar "A verbal contract isn't worth the paper it's written on." -- Samuel Goldwyn From pnr at planet.nl Mon Jan 9 20:36:20 2017 From: pnr at planet.nl (Paul Ruizendaal) Date: Mon, 9 Jan 2017 11:36:20 +0100 Subject: [TUHS] History of select(2) In-Reply-To: <20170109023502.GA8507@minnie.tuhs.org> References: <20170109023502.GA8507@minnie.tuhs.org> Message-ID: On 9 Jan 2017, at 3:35 , Warren Toomey wrote: > Also, I came across this history of select(2) a while back: > > https://idea.popcount.org/2016-11-01-a-brief-history-of-select2/ > > Cheers, Warren That is an interesting blog post, but I think it is a bit short on the history of things before 4.2BSD. Below my current understanding of what came before select(). In March 1975 the first networked Unix was created at the University of Illinois, initially based on 5th edition, but soon ported to 6th edition. It is described in RFC681 and a paper by Greg Chesson. Note that UoI was the very first Unix licensee. Its primary authors were Steve Holmgren, Steve Bunch and Gary Grossman. Greg Chesson was also involved. Grossman had already done two earlier Arpanet implementations (the ANTS and ANTS II systems) on bare metal and had a deep understanding of what a good implementation needed. 
Their implementation was compact (about a thousand lines added to the kernel, and another thousand in the connection daemon) and - in my opinion at least - conceptually well integrated into the existing file API. It became the leading Unix Arpanet implementation, with wide use from 1975 to 1981.

Two things stand out: (i) no accept(); and (ii) no select(). The original authors are still with us, with the exception of Greg, and I asked for their input as well.

(i) no accept()

Listening sockets worked a bit differently from today. If one opened a listening socket it would not return a descriptor but block instead; when a connection was made it would return with the listening socket now bound to the new connection. Server applications would open a listening socket and do a double fork for the client connection (i.e. getting process 1 as its parent); the main process would loop around and open a new listening socket (this can all be verified in surviving application sources).

According to Steve Holmgren this was not perceived as a big problem at the time. Network speeds were still so low that the brief gap in listening did not matter much, and the double fork was just a few lines of code. This changed when the CSRG team moved from a long-haul, Arpanet, 56Kb/s context to a local, Ethernet, 3Mb/s context and Sam Leffler came up with the concept of accept().

In 4.1a BSD and 2.9BSD the queue of pending connections was fixed (possibly 1, I have to check). In 4.1c BSD listen() was introduced; before then, whether a socket was active or listening was a flag when opening the socket. The second parameter to listen() specified the maximum number of pending connections [as an aside, note that I'm using 'socket' in the BSD sense; the term socket changed meaning several times between 1973 and 1983].

(ii) no select()

This was the real pain (Holmgren reconfirmed that). This is what Dennis must have referred to in his retrospective paper.
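Going back to (i) for a moment: the accept-less server loop might have looked roughly like the sketch below. To be clear, this is a hypothetical reconstruction from the description above, not surviving UoI code - the special-file name and the serve() helper are invented, and the fragment is not meant to be runnable.

```
/* Hypothetical sketch of a pre-accept() Network Unix server loop.
 * The device name and serve() are invented for illustration. */
for (;;) {
    /* Opening a listening connection blocks until a client connects;
     * the returned descriptor is then bound to that new connection. */
    int conn = open("/dev/net/listen", 2);

    if (fork() == 0) {          /* first fork: intermediate child */
        if (fork() == 0) {      /* second fork: grandchild serves client */
            serve(conn);        /* ...and is inherited by process 1   */
            exit(0);
        }
        exit(0);                /* intermediate child exits at once */
    }
    wait(0);                    /* reap the intermediate child */
    close(conn);                /* parent loops; opens a fresh listener */
}
```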
Various solutions were thought of, but in Network Unix the model remained using separate processes for simultaneous reading and writing. Progress in this area came from two other places involved in Unix and Arpanet: Rand and BBN.

In 1977 Rand was taking on this problem (see http://www.dtic.mil/dtic/tr/fulltext/u2/a044200.pdf and http://www.dtic.mil/dtic/tr/fulltext/u2/a044201.pdf). They considered a solution with a new system call 'empty()' that would tell if there was any data available on a file descriptor, a crude form of non-blocking I/O if you like. As this would consume precious CPU cycles it proved inadequate. Instead they came up with "ports". A port was a (possibly named) pipe with multiple writers and a single reader, and it was created with a 'port()' system call. The reader would see each write preceded by a header block identifying the writer. The implementation (see second PDF) was simple, apparently only taking 200 words of kernel code.

Rand ports are a simplistic version of the 'mpx' facility done by Greg Chesson at Bell Labs (in 1978?). I am not sure whether this was independent invention or whether Greg was aware of Rand ports. Unfortunately we cannot ask him anymore.

Later in 1977, over at BBN, Jack Haverty was doing an experimental TCP/IP stack for Unix (this was TCP 2.5, not TCP 4). He had a working stack written in PDP11 assembler for a different OS and was making this run on Unix. He was using Rand ports to connect clients to the network stack, but still lacked the required primitives to make this work properly. So he came up with the await() system call, a direct precursor to select(). It is documented in BBN report 3911 (http://bit.ly/2iU1TNK), including man pages. With the awtenb() and awtdis() calls one would manage the monitored descriptors (like the bit vectors going into a select), and await() would then wait for an event or time out.

Related to this was the capac() system call, to get the 'capacity' of a descriptor.
This returns the amount of data that can safely be written to or read from a descriptor. I suppose it is an improved version of empty(). There is no equivalent of that in the later BSD sockets, perhaps because non-blocking I/O in the current sense was about to arrive.

With port(), await() and capac() it becomes possible to write single-threaded network programs. An example may be found here, the first TCP/IP (version 4) stack in C for Unix, from early 1979: http://digital2.library.ucla.edu/viewItem.do?ark=21198/zz002gvzqg (scroll down past IMP stuff). It's documented in IEN98 (https://www.rfc-editor.org/ien/ien98.txt). I'm currently retyping this source so that it can be better studied.

The await() call is not in the TCP/IP code done for 4.1 BSD by BBN. I'm puzzled by this as it is evidently useful, and Jack Haverty and Rob Gurwitz worked in the same corridor at BBN at the time. In 4.1a the select() call appears and it seems to be an improved version of await(), with the need for awtenb() and awtdis() replaced by the use of bit vectors. I am not sure if Bill Joy was aware of await() or whether it was independent invention. Here we can ask, but I have no contact details.

Hope the above is of interest. I'm still learning new things about these topics every day, so please advise if my above understanding is wrong.

As a side note, I am still looking for:
- surviving copies of UoI "Network Unix" (I'm currently no further than papers and bits of source that lingered in other code bases)
- surviving copies of the 4.1a BSD distribution tape (Kirk McKusick's tape was damaged)
- surviving source of the kernel code of port(), await() and capac() (could possibly be recreated from documentation)

Any and all help very much appreciated.
Paul

From wkt at tuhs.org  Mon Jan  9 20:42:56 2017
From: wkt at tuhs.org (Warren Toomey)
Date: Mon, 9 Jan 2017 20:42:56 +1000
Subject: [TUHS] History of select(2)
In-Reply-To: 
References: <20170109023502.GA8507@minnie.tuhs.org>
Message-ID: <20170109104256.GA27166@minnie.tuhs.org>

> On 9 Jan 2017, at 3:35 , Warren Toomey wrote:
> > Also, I came across this history of select(2) a while back:
> > https://idea.popcount.org/2016-11-01-a-brief-history-of-select2/

On Mon, Jan 09, 2017 at 11:36:20AM +0100, Paul Ruizendaal wrote:
> That is an interesting blog post, but I think it is a bit short on the
> history of things before 4.2BSD. Below my current understanding of what
> came before select().

Wow, that's an excellent writeup of the early history of select(2). Thanks Paul!

	Warren

From schily at schily.net  Mon Jan  9 23:07:30 2017
From: schily at schily.net (Joerg Schilling)
Date: Mon, 09 Jan 2017 14:07:30 +0100
Subject: [TUHS] SunOS vs Linux
In-Reply-To: <20170109063225.7imw2tuiomsekvtn@ancienthardware.org>
References: <1483898537.3109544.841035897.5E751FC9@webmail.messagingengine.com> <61DAEC90-BE06-41FF-9B2B-D401CC8FC9F9@ccc.com> <20170109030022.GE66746@eureka.lemis.com> <20170109063225.7imw2tuiomsekvtn@ancienthardware.org>
Message-ID: <58738b12.0ppPxW/XSy82LkMZ%schily@schily.net>

Arno Griffioen wrote:

> Outside the USA such BSD (or other *IX) source-code access on universities
> and technical schools was not common is my personal experience.

Regardless of where you look, it depended on whether the responsible people did the bureaucratic work. It has been available inside TU-Berlin.
Jörg

-- 
EMail: joerg at schily.net                  (home) Jörg Schilling D-13353 Berlin
       joerg.schilling at fokus.fraunhofer.de (work)
Blog:  http://schily.blogspot.com/
URL:   http://cdrecord.org/private/ http://sourceforge.net/projects/schilytools/files/

From schily at schily.net  Mon Jan  9 23:40:08 2017
From: schily at schily.net (Joerg Schilling)
Date: Mon, 09 Jan 2017 14:40:08 +0100
Subject: [TUHS] the guy who brought up SVr4 on Sun machines
In-Reply-To: <20170106020239.GI2588@mcvoy.com>
References: <586d234d.vf4JCu1Ye3gumwfc%schily@schily.net> <20170104164630.GA3405@mcvoy.com> <586d2abb.A5j4GovJtyzlD+AQ%schily@schily.net> <20170104171033.GC3405@mcvoy.com> <586d334d.XcKOxzKwrzmvL326%schily@schily.net> <20170104175227.GH3405@mcvoy.com> <586d3d90.oAzCBIUMx+CcWar6%schily@schily.net> <20170104184448.GD3006@mcvoy.com> <586e32fa.75dTuXajWSNmdzuM%schily@schily.net> <20170106020239.GI2588@mcvoy.com>
Message-ID: <587392b8.YVBYtdtMbIwoTw24%schily@schily.net>

Larry McVoy wrote:

> The VM system was ported from SunOS 4.x to System 5. Your statements that
> SVr4 is based on SunOS are flat out wrong. SVr4 got a lot of SunOS goodness
> but the starting point was ATT System V.

This looks like a response made from gut.

Let us check things that are verifiable, by looking at the basic elements of the OS kernel that together cover the majority of the kernel.

                    SVr3            SunOS-4         SVr4
    =======================================================
    TTY driver      V7/Svr0         STREAMS         STREAMS
    Networking if:  BSD             BSD             STREAMS
    Networktap      New code        written         allows STREAMS
                    by Lachman      testing
    Driver
     interface      Svr0            BSD based       SunOS-4 based
    Kernel virtual
     memory         V7/Svr0         SunOS-4         SunOS-4
    VFS             -               SunOS-4         SunOS-4 based

There are few things in SVr4 that are not the same as in SunOS-4, e.g. the way parameters to ioctl() are copied to/from the kernel, but these are minor parts.

So why should someone start with the AT&T sources when the expected result is > 70% identical with SunOS-4?
Even the TTY STREAMS implementation is the one seen in SunOS-4, and not a possible AT&T-internal implementation, as the latter contained a design flaw that caused switching from/to cooked mode to lose data. This is not permitted by the UNIX definitions for the TTY driver, and it has been fixed by an enhancement that occurred in SunOS-4 and is in Svr4 as well.

BTW: My statements are from a talk by Bill Joy at the Sun User Group meetings. Bill claimed that he was responsible for the Svr4 kernel and did this at a new location in Denver, Colorado that was a joint venture of Sun and AT&T.

The SunOS-5 kernel and the Svr4 kernel still differ, but they are closer together than Svr4 and Svr3.

Note that I did not only write Joliet and ISO9660:1999 support code for SunOS and SCO UnixWare but also for SCO OpenServer, which is based on Svr3. So I had legal access to SunOS, Svr4 and Svr3 based code. Did you have access to this code? Did you compare?

> > Interesting: Do you mean "Bill Shannon"? Was he involved in SCCS or smoosh
> > as well? I know Bill as the author of "cstyle" and I pushed him to make it OSS
> > in 2001 already, before it appeared in OpenSolaris.
>
> Yup, that Shannon.

OK, so he was also working on SCCS?

> > In January 2015, I talked with Glenn Skinner about SCCS and smoosh and he
> > pointed me to his smoosh patent:
> >
> > http://patentimages.storage.googleapis.com/pdfs/US5481722.pdf
> >
> > that has been expired in late 2014.
>
> The fact that Glenn didn't put me on that patent is a sore point. Yes,
> he wrote the lisp code that showed it could be done. I wrote the C code
> that did that in one pass (his stuff was N+M where N was how many deltas
> were on the local side and M was how deltas were on the remote side).

I have been told that a patent can be void if it does not list the right inventors. I would guess that this was a decision made by the lawyer that helped to file the patent. Are you responsible for the original idea?
Jörg -- EMail:joerg at schily.net (home) Jörg Schilling D-13353 Berlin joerg.schilling at fokus.fraunhofer.de (work) Blog: http://schily.blogspot.com/ URL: http://cdrecord.org/private/ http://sourceforge.net/projects/schilytools/files/ From rochkind at basepath.com Tue Jan 10 01:30:58 2017 From: rochkind at basepath.com (Marc Rochkind) Date: Mon, 9 Jan 2017 08:30:58 -0700 Subject: [TUHS] Unix stories, Stephen Bourne and IF-FI in C code In-Reply-To: <934ec8dea21ce728b7c4e70a6ee2deb86af39d27@webmail.yaccman.com> References: <1483929007.6355.for-standards-violators@oclsc.org> <934ec8dea21ce728b7c4e70a6ee2deb86af39d27@webmail.yaccman.com> Message-ID: Just a quick note about Algol vs. Algol 68: The two are used interchangeably (it seems) in this thread, but they're very different languages, with very different control structures. Someone mentioned he had studied Algol in school, which is plausible. If he in fact studied Algol 68, that's worth a story in its own right! [Whoops... forgot to properly terminate that last sentence.] fi --Marc On Sun, Jan 8, 2017 at 8:31 PM, Steve Johnson wrote: > I wasn't directly involved in this, but I do remember Dennis telling me > essentially the same story. I don't recall him mentioning Ken's name, just > that "we couldn't use *od* because that was already taken". > > Steve B and I had adjacent offices, so I overheard a lot of the > discussions about the Bourne shell. The quoting mechanisms, in particular, > got a lot of attention, I think to good end. There was a lot more thought > there than is evident from the surface... > > Steve (not Bourne) > > > ----- Original Message ----- > From: > "Norman Wilson" > > To: > > Cc: > > Sent: > Sun, 08 Jan 2017 21:30:03 -0500 > Subject: > Re: [TUHS] Unix stories, Stephen Bourne and IF-FI in C code > > > > Doug McIlroy: > > There was some pushback which resulted in the strange compromise > of if-fi, case-esac, do-done. Alas, the details have slipped from > memory. Help, scj? 
> > ====
> >
> > do-od would have required renaming the long-tenured od(1).
> >
> > I remember a tale--possibly chat in the UNIX Room at one point in
> > the latter 1980s--that Steve tried and tried and tried to convince
> > Ken to rename od, in the name of symmetry and elegance. Ken simply
> > said no, as many times as it took. I don't remember who I heard this
> > from; anyone still in touch with Ken who can ask him?
> >
> > Norman Wilson
> > Toronto ON

-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From arnold at skeeve.com  Tue Jan 10 01:45:47 2017
From: arnold at skeeve.com (arnold at skeeve.com)
Date: Mon, 09 Jan 2017 08:45:47 -0700
Subject: [TUHS] Unix stories, Stephen Bourne and IF-FI in C code
In-Reply-To: 
References: <1483929007.6355.for-standards-violators@oclsc.org> <934ec8dea21ce728b7c4e70a6ee2deb86af39d27@webmail.yaccman.com>
Message-ID: <201701091545.v09FjlXE027448@freefriends.org>

I remember the Bournegol well; I did some hacking on the BSD shell.

In general, it wasn't too unusual for people from Pascal backgrounds to do similar things, e.g.

	#define repeat do {
	#define until(cond) } while (! (cond))

(I remember for me personally that do...while sure looked weird for my first few years of C programming. :-)

(Also, I would not recommend doing that; I'm just noting that people often did do stuff like that.)

FWIW, it was the USG guys who de-Algolized the sh code, at SVR2, I believe. I think it was also done by the Research guys at a later point, but without V8/V9/V10 to look at it, it's hard to know.

If we're talking about language design, the Ada guys borrowed a page from Algol 68's book and let the keywords do the grouping instead of requiring begin-end. I personally find that somewhat more elegant.

Arnold

Marc Rochkind wrote:

> Just a quick note about Algol vs. Algol 68: The two are used
> interchangeably (it seems) in this thread, but they're very different
> languages, with very different control structures.
Someone mentioned he had > studied Algol in school, which is plausible. If he in fact studied Algol > 68, that's worth a story in its own right! > > [Whoops... forgot to properly terminate that last sentence.] > > fi > > --Marc > > On Sun, Jan 8, 2017 at 8:31 PM, Steve Johnson wrote: > > > I wasn't directly involved in this, but I do remember Dennis telling me > > essentially the same story. I don't recall him mentioning Ken's name, just > > that "we couldn't use *od* because that was already taken". > > > > Steve B and I had adjacent offices, so I overheard a lot of the > > discussions about the Bourne shell. The quoting mechanisms, in particular, > > got a lot of attention, I think to good end. There was a lot more thought > > there than is evident from the surface... > > > > Steve (not Bourne) > > > > > > ----- Original Message ----- > > From: > > "Norman Wilson" > > > > To: > > > > Cc: > > > > Sent: > > Sun, 08 Jan 2017 21:30:03 -0500 > > Subject: > > Re: [TUHS] Unix stories, Stephen Bourne and IF-FI in C code > > > > > > > > Doug McIlroy: > > > > There was some pushback which resulted in the strange compromise > > of if-fi, case-esac, do-done. Alas, the details have slipped from > > memory. Help, scj? > > > > ==== > > > > do-od would have required renaming the long-tenured od(1). > > > > I remember a tale--possibly chat in the UNIX Room at one point in > > the latter 1980s--that Steve tried and tried and tried to convince > > Ken to rename od, in the name of symmetry and elegance. Ken simply > > said no, as many times as it took. I don't remember who I heard this > > from; anyone still in touch with Ken who can ask him? 
> > Norman Wilson
> > Toronto ON

From clemc at ccc.com  Tue Jan 10 01:57:59 2017
From: clemc at ccc.com (Clem Cole)
Date: Mon, 9 Jan 2017 10:57:59 -0500
Subject: [TUHS] SunOS vs Linux
In-Reply-To: <20170109063225.7imw2tuiomsekvtn@ancienthardware.org>
References: <1483898537.3109544.841035897.5E751FC9@webmail.messagingengine.com> <61DAEC90-BE06-41FF-9B2B-D401CC8FC9F9@ccc.com> <20170109030022.GE66746@eureka.lemis.com> <20170109063225.7imw2tuiomsekvtn@ancienthardware.org>
Message-ID: 

On Mon, Jan 9, 2017 at 1:32 AM, Arno Griffioen wrote:

> Buying a BSD license was way outside a student's budget at that time
> and universities were not very forthcoming in giving them access.

A little strange statement... students did not have to buy it, and universities got it for a $100 tape-copying fee (and were free to do with it as they wanted - i.e. a "dead-fish license").

FYI: CMU made us sign a document of some type stating it was AT&T's IP to take the undergrad OS course in 1976, but they certainly did not try to keep the code under lock and key with a guard on the door.

Also, the whole idea of the 1956 AT&T consent decree was that AT&T >>had<< to make the IP available -- by law (so they did - which is why they later lost the BSDi/UCB case). They were given a monopoly in exchange for giving the world access to their patents.

Also, by the late 1980s / early 1990s - i.e. Linus' time for early Linux - most universities worldwide were using Vaxen and Sun systems. The Vaxen tended to have BSD, which is what the 386 code was based on. To get a copy of it you needed a BSD license, and almost all universities had them by that point in the USA and the EU. We were having USENIX conferences hosted in the EU pretty regularly, and lots of development.

Your comment about not being "forthcoming" I get, as people in that position often take a conservative approach if they are not sure. But the US Gov's deal with AT&T was certainly not supposed to have been that way.
I just don't buy it that the code was not available to Linus. Linus' school had access to the code base. He has gone on record as saying he had used Sun systems there before he started hacking, and they had BSD-based Vaxen. I think it was purely a situation of not knowing who or how to ask.

Not that it matters now. But it certainly made for a large fork, confusion and much unnecessary churn.

Clem
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From rminnich at gmail.com  Tue Jan 10 01:59:00 2017
From: rminnich at gmail.com (ron minnich)
Date: Mon, 09 Jan 2017 15:59:00 +0000
Subject: [TUHS] Unix stories, Stephen Bourne and IF-FI in C code
In-Reply-To: <201701091545.v09FjlXE027448@freefriends.org>
References: <1483929007.6355.for-standards-violators@oclsc.org> <934ec8dea21ce728b7c4e70a6ee2deb86af39d27@webmail.yaccman.com> <201701091545.v09FjlXE027448@freefriends.org>
Message-ID: 

On Mon, Jan 9, 2017 at 7:47 AM wrote:

> I remember the Bournegol well; I did some hacking on the BSD shell.
>
> In general, it wasn't too unusual for people from Pascal backgrounds to
> do similar things, e.g.
>
> #define repeat do {
> #define until(cond) } while (! (cond))

*was* not unusual? This kind of stuff is still everywhere. In fact there's probably more of it each month. It seems to be especially popular in "high level" languages like C++, but you see tons of it in kernels too. Some of the worst cpp abuse I've seen is in C++ in fact. One reason I'm glad Go has no preprocessor.

-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From clemc at ccc.com  Tue Jan 10 02:03:27 2017
From: clemc at ccc.com (Clem Cole)
Date: Mon, 9 Jan 2017 11:03:27 -0500
Subject: [TUHS] Unix stories, Stephen Bourne and IF-FI in C code
In-Reply-To: <201701091545.v09FjlXE027448@freefriends.org>
References: <1483929007.6355.for-standards-violators@oclsc.org> <934ec8dea21ce728b7c4e70a6ee2deb86af39d27@webmail.yaccman.com> <201701091545.v09FjlXE027448@freefriends.org>
Message-ID: 

On Mon, Jan 9, 2017 at 10:45 AM, wrote:

> I remember the Bournegol well; I did some hacking on the BSD shell.

Yep - lots of strange things in source debuggers.

> In general, it wasn't too unusual for people from Pascal backgrounds to
> do similar things,

When we did Magnolia at Tektronix, the ex-Xerox/Alto guys lusted for Cedar/Mesa et al - and quickly discovered the Bournegol idea. I shook my head/shrugged my shoulders, but it made them happy and they quickly wrote some pretty cool tools, like an ECAD system.

-------------- next part --------------
An HTML attachment was scrubbed...
URL: From rp at servium.ch Tue Jan 10 03:32:21 2017 From: rp at servium.ch (Rico Pajarola) Date: Mon, 9 Jan 2017 18:32:21 +0100 Subject: [TUHS] SunOS vs Linux In-Reply-To: References: <1483898537.3109544.841035897.5E751FC9@webmail.messagingengine.com> <61DAEC90-BE06-41FF-9B2B-D401CC8FC9F9@ccc.com> <20170109030022.GE66746@eureka.lemis.com> <20170109063225.7imw2tuiomsekvtn@ancienthardware.org> Message-ID: On Mon, Jan 9, 2017 at 4:57 PM, Clem Cole wrote: > > On Mon, Jan 9, 2017 at 1:32 AM, Arno Griffioen > wrote: > >> Buying a BSD license was way outside a student's budget at that time >> and universities were not very forthcoming in giving them access. >> > > ​A little strange statement... student did not have to buy it and > Universities got it for $100 tape copying fee ( and were free to do with it > at they wanted - i.e. "dead-fish license"). > Now stop picking on Joerg already. Not every university was invested in Unix. In practice Unix source was pretty much unobtainable if you happened to live outside of the "Unix bubble". I grew up and went to school/university in Switzerland, and getting access to UNIX source was nothing but a crazy pipe dream at the time. I don't even know if my university had a source license (I can't imagine they didn't), but in any case it wasn't something that they would let you use as a normal student. None of my inquiries at the time resulted in anything that would allow me to get access to Unix source. If the university had it, this wasn't public information, and they didn't share. I couldn't prove that my university had a license, and I had no way to get the actual bits. This was the 90ies btw. We had Sun workstations (Solaris, without source), Windows (blech, but funnily enough there were source kits. No, you couldn't get access to that either), and of course the locally developed Oberon machines (Lilith) and later Bluebottle. I also saw some VAXen running VMS (on their way out). 
Some departments had RS/6000s, Alphas and SGIs and other random stuff (do I need to mention that they came without source?). I've never seen any trace of Unix source or even BSD. We all longed for some Unix that was available for personal use, and Linux absolutely filled that gap. While 386BSD was theoretically available, it came out almost a year after Linus announced his first version of Linux. 386BSD seemed to have a lot of strings attached, and it wasn't really usable until FreeBSD/NetBSD. By that time, Linux had gained a lot of momentum already. Rico -------------- next part -------------- An HTML attachment was scrubbed... URL: From crossd at gmail.com Tue Jan 10 03:40:47 2017 From: crossd at gmail.com (Dan Cross) Date: Mon, 9 Jan 2017 12:40:47 -0500 Subject: [TUHS] SunOS vs Linux In-Reply-To: References: <1483898537.3109544.841035897.5E751FC9@webmail.messagingengine.com> <61DAEC90-BE06-41FF-9B2B-D401CC8FC9F9@ccc.com> <20170109030022.GE66746@eureka.lemis.com> <20170109063225.7imw2tuiomsekvtn@ancienthardware.org> Message-ID: On Mon, Jan 9, 2017 at 11:08 AM, ron minnich wrote: > At the same time, I think some sort of GPL'ed kernel was inevitable for > any number of reasons. > > Also, I worked closely with one of the principals in Linux back then (i.e. > 1991) and his experience was that the linux community was way more open to > his contributions than the bsd community. Not surprising, linux was pretty > much a clean sheet. I expect that was a factor as well. > I'll second this. Larry mentioned earlier the USENIX "in-crowd" and I think that was a real thing (USENET cabal, anyone?). I was near a major American university at the time, kind of a student, and I couldn't easily get access to Unix source code (nor could any of the undergrad or grad students I knew). 
As I recall, no one had copies of the old stuff anymore (32/V and prior) and access to BSD source code was tightly controlled; we had an academic site license for SunOS source, but it was strictly on a "need-to-know" basis. You had to be part of the local "in-crowd" to get access to that code, and students weren't members of the "in-crowd." It wasn't particularly easy to build up sufficient credibility to get into the club without access to source either, and they certainly weren't handing it out to everyone who asked.

Further, my sense was that system administrators in big institutions were often hawks about things like that. There could be real academic consequences for trying to buck the system in this area, particularly for undergrads (or in my case, high school students taking courses).

The ever-accurate Wikipedia says that 386BSD wasn't available until 1992 (and then not really usable until July of that year). But Torvalds had already announced his Linux project (by which point he had a running kernel and had ported a significant number of programs over) in August of 1991 and put it on an FTP server by September; nearly a full year before a usable version of 386BSD was available.

The thing I wonder is why Linux didn't die off due to lack of networking once 386BSD came onto the scene: Linux didn't get TCP/IP until September of 1992 and then it was under heavy development until December, by which time 386BSD 0.1 was generally available (and would of course already have had networking). I suspect by that point two factors were at play: a) Linux had gathered significant momentum, and b) USL v BSDi caused people to shy away from the BSD source base and embrace Linux as an unencumbered alternative.
By '93ish, when NetBSD and FreeBSD were both real, there wouldn't have been a need for Linux, but by that time, it had had two years of exciting activity for a number of people: it's unlikely anyone just walked away from it because a technically better alternative came along. - Dan C. -------------- next part -------------- An HTML attachment was scrubbed... URL: From lm at mcvoy.com Tue Jan 10 03:48:44 2017 From: lm at mcvoy.com (Larry McVoy) Date: Mon, 9 Jan 2017 09:48:44 -0800 Subject: [TUHS] the guy who brought up SVr4 on Sun machines In-Reply-To: <587392b8.YVBYtdtMbIwoTw24%schily@schily.net> References: <20170104164630.GA3405@mcvoy.com> <586d2abb.A5j4GovJtyzlD+AQ%schily@schily.net> <20170104171033.GC3405@mcvoy.com> <586d334d.XcKOxzKwrzmvL326%schily@schily.net> <20170104175227.GH3405@mcvoy.com> <586d3d90.oAzCBIUMx+CcWar6%schily@schily.net> <20170104184448.GD3006@mcvoy.com> <586e32fa.75dTuXajWSNmdzuM%schily@schily.net> <20170106020239.GI2588@mcvoy.com> <587392b8.YVBYtdtMbIwoTw24%schily@schily.net> Message-ID: <20170109174844.GD3143@mcvoy.com> On Mon, Jan 09, 2017 at 02:40:08PM +0100, Joerg Schilling wrote: > Larry McVoy wrote: > > > The VM system was ported from SunOS 4.x to System 5. Your statements that > > SVr4 is based on SunOS are flat out wrong. SVr4 got a lot of SunOS goodness > > but the starting point was ATT System V. > > This looks like a response made from gut. Oh, brother. Let it go. I talked to the guy who did the bringup, did I not post that? It starts "SVr4 was not based on SunOS, although it incorporated many of the best features of SunOS 4.x". I'm dealing with trees down and landslides, I'll post the rest later, but Joerg you are just wrong about this. 
From dfawcus+lists-tuhs at employees.org Tue Jan 10 04:48:09 2017 From: dfawcus+lists-tuhs at employees.org (Derek Fawcus) Date: Mon, 9 Jan 2017 18:48:09 +0000 Subject: [TUHS] Unix stories, Stephen Bourne and IF-FI in C code In-Reply-To: References: <1483929007.6355.for-standards-violators@oclsc.org> <934ec8dea21ce728b7c4e70a6ee2deb86af39d27@webmail.yaccman.com> Message-ID: <20170109184809.GA57219@cowbell.employees.org> On Mon, Jan 09, 2017 at 08:30:58am -0700, Marc Rochkind wrote: > If he in fact studied Algol 68, that's worth a story in its own right! Surrey University, 1986-1990; reading EE, we had a set of programming lectures where we were required to learn it (to some extent) as we had to submit course work written in it. I tended to do the work in C on the dept Gould running unix, then transliterate to Algol 68 for the final submission. This was because the compiler was only available as a batch submission process, and we'd have long turn arounds just to get past syntax checks. Especially when 100 of us were trying to submit jobs at the same time. I still recall the frustration of a few hours turn around just to get an error about 'semi-colon not required here', which would then complain 'semi-colon required once one removed it'. I actually quite liked it, just it was a bit verbose, and I never figured out how to do pointers or dynamic allocation in it at the time, so some conversions ended up being less than optimal. As I recall one could convert the ternary op to an IF expression, and the loop construct combined all of 'while' 'for' and 'do while' depending upon which syntax elements one used. 
DF From dugo at xs4all.nl Tue Jan 10 05:45:18 2017 From: dugo at xs4all.nl (Jacob Goense) Date: Mon, 09 Jan 2017 14:45:18 -0500 Subject: [TUHS] SunOS vs Linux In-Reply-To: References: Message-ID: On 2017-01-05 22:56, Dan Cross wrote: > Perhaps an interesting area of speculation is, "what would the world > have looked like if USL v BSDi hadn't happened *and* SunOS was opened > to the world?" Drawing a bit of a blank here, but I just can't shake this image of a raving Richard Stallman demanding it should be called GNU/SunOS. Then again, it might have freed up enough resources for an x86 as/cc with a BSD licence in the early 90s. From arnold at skeeve.com Tue Jan 10 06:10:44 2017 From: arnold at skeeve.com (Arnold Robbins) Date: Mon, 09 Jan 2017 22:10:44 +0200 Subject: [TUHS] nice interview with Steve Bourne Message-ID: <201701092010.v09KAirD003815@skeeve.com> Just came across this: http://www.computerworld.com.au/article/279011/-z_programming_languages_bourne_shell_sh >From 2009. There are links to other interviews about Unix stuff as well. Arnold From scj at yaccman.com Tue Jan 10 06:35:38 2017 From: scj at yaccman.com (Steve Johnson) Date: Mon, 09 Jan 2017 12:35:38 -0800 Subject: [TUHS] Unix stories, Stephen Bourne and IF-FI in C code In-Reply-To: Message-ID: <7e8fd8694d992170208e854db5dd0f72c552016b@webmail.yaccman.com> I can certainly confirm that Steve Bourne not only knew Algol 68, he was quite an evangelist for it.  When he came to the labs, he got a number of people, including me, to plough through the Algol 68 report, probably the worst-written introduction to anything I've ever read.  They were firmly convinced they were breaking new ground and consequently invented new terms for all kinds of otherwise familiar ideas.  It was as if the report had been written in Esperanto...   There were some very cool ideas, particularly the way the type system worked.  After the simplicity of C, though, we mostly found the syntax to be off-putting.
Also, as I recall, there really was no portability strategy for the language, and I think that held it back, since there were so many different architectures in play at that time.  Languages like C and Pascal, that had implementations that could be easily ported, quickly left non-portable languages like Algol 68 and Bliss in the dust... (Lest I sound like I'm tooting my own horn here, Dennis' PDP-11 C was based on an intermediate language somewhat like P-code, and was in fact ported to a couple of different machines before PCC came along... I learned from the master...) Steve  ----- Original Message ----- From: "Marc Rochkind" To: Cc: "The UNIX Historical Society" Sent: Mon, 9 Jan 2017 08:30:58 -0700 Subject: Re: [TUHS] Unix stories, Stephen Bourne and IF-FI in C code Just a quick note about Algol vs. Algol 68: The two are used interchangeably (it seems) in this thread, but they're very different languages, with very different control structures. Someone mentioned he had studied Algol in school, which is plausible. If he in fact studied Algol 68, that's worth a story in its own right! [Whoops... forgot to properly terminate that last sentence.] fi --Marc On Sun, Jan 8, 2017 at 8:31 PM, Steve Johnson wrote: I wasn't directly involved in this, but I do remember Dennis telling me essentially the same story.  I don't recall him mentioning Ken's name, just that "we couldn't use _od_ because that was already taken". Steve B and I had adjacent offices, so I overheard a lot of the discussions about the Bourne shell.  The quoting mechanisms, in particular, got a lot of attention, I think to good end.  There was a lot more thought there than is evident from the surface... Steve (not Bourne) ----- Original Message ----- From: "Norman Wilson" To: Cc: Sent: Sun, 08 Jan 2017 21:30:03 -0500 Subject: Re: [TUHS] Unix stories, Stephen Bourne and IF-FI in C code Doug McIlroy: There was some pushback which resulted in the strange compromise of if-fi, case-esac, do-done. 
Alas, the details have slipped from memory. Help, scj? ==== do-od would have required renaming the long-tenured od(1). I remember a tale--possibly chat in the UNIX Room at one point in the latter 1980s--that Steve tried and tried and tried to convince Ken to rename od, in the name of symmetry and elegance. Ken simply said no, as many times as it took. I don't remember who I heard this from; anyone still in touch with Ken who can ask him? Norman Wilson Toronto ON Links: ------ [1] mailto:scj at yaccman.com [2] mailto:norman at oclsc.org [3] mailto:tuhs at tuhs.org -------------- next part -------------- An HTML attachment was scrubbed... URL: From lm at mcvoy.com Tue Jan 10 13:58:23 2017 From: lm at mcvoy.com (Larry McVoy) Date: Mon, 9 Jan 2017 19:58:23 -0800 Subject: [TUHS] the guy who brought up SVr4 on Sun machines In-Reply-To: <587392b8.YVBYtdtMbIwoTw24%schily@schily.net> References: <20170104164630.GA3405@mcvoy.com> <586d2abb.A5j4GovJtyzlD+AQ%schily@schily.net> <20170104171033.GC3405@mcvoy.com> <586d334d.XcKOxzKwrzmvL326%schily@schily.net> <20170104175227.GH3405@mcvoy.com> <586d3d90.oAzCBIUMx+CcWar6%schily@schily.net> <20170104184448.GD3006@mcvoy.com> <586e32fa.75dTuXajWSNmdzuM%schily@schily.net> <20170106020239.GI2588@mcvoy.com> <587392b8.YVBYtdtMbIwoTw24%schily@schily.net> Message-ID: <20170110035823.GI8099@mcvoy.com> On Mon, Jan 09, 2017 at 02:40:08PM +0100, Joerg Schilling wrote: > > The VM system was ported from SunOS 4.x to System 5. Your statements that > > SVr4 is based on SunOS are flat out wrong. SVr4 got a lot of SunOS goodness > > but the starting point was ATT System V. > > This looks like a response made from gut. OK, I'll try again. Before I start let me tell you that I've ported the Lachman TCP/IP stack into the ETA 10 Unix OS as well as the SCO OS (which was about as pure an ATT Unix as you can find). I worked for Lachman as my first job out of school. BTW, they didn't write that stack, they bought it, I think from Convergent.
> Let us check things that are verifiable, by looking at the basic elements > of the OS kernel that together cover the majority of the kernel. > > SVr3 SunOS-4 SVr4 > ======================================= > TTY driver V7/Svr0 STREAMS STREAMS SunOS supported STREAMS but their tty drivers were not STREAMS based except where they had to be for some contract. STREAMS sucks ass and that's what dmr said. STREAMS != streams. Miserable system. > So why should someone start with the AT&T sources when the expected result > is > 70% identical with SunOS-4? Um because data? I dunno what will convince you, dmr rising from the grave and saying so? How about this from the guy that did the bring up, a guy that is a close friend, we were car pooling to mountain view daily during this time, his office was across the hall from me. SVr4 was not based on SunOS, although it incorporated many of the best features of SunOS 4.x (VM management, filesystem architecture, shared libraries, etc). Those features and interfaces were merged (after extensive discussions, involving, on the Sun side, Bill Shannon, Rob Gingell, Don Cragun and others) into a pre-release version of System V by AT&T. The reference hardware platform was AT&T's 3b2. Sun would receive periodic "loads" from AT&T of that 3b2 based code, which we then merged on top of the machine-dependent code from SunOS 4.x. Let's just say it was an adventure. After the first port, I think, Joe Kowalski came on to head the userland effort, and the team gradually built up from there. That merged code was Sun proprietary stuff; AFAIK it never went back to AT&T. > BTW: My statements are from a talk from Bill Joy from the Sun User Group > meetings. Bill claimed that he was responsible for the Svr4 kernel and did this > on a new location in Denver Colorado that was as a joint venture from Sun and > AT&T. So I have Bill's home number on my cell phone.
We're not close and I'm not going to use up a silver bullet to win this argument but your claim here is complete nonsense. I know the people in the Colorado site and they didn't have anything to do with SVr4. The Rocky Mountain group was working on file systems (and produced CVS). Bill was checked out at this point, he was living in Aspen and had very little to do with Sun. The idea that he, Mr BSD, would have been "responsible for the Svr4 kernel" is a joke. BSD and SunOS were so much better, there is no way he would have been for SVr4. > The SunOS-5 kernel and the Svr4 kernel still differ, but they are more close > together than Svr4 and Svr3. Note that I did not only write Joliet and > ISO9660:1999 support code for SunOS and SCO UnixWare but also for SCO > OpenServer that is based on Svr3. So I had legal access to SunOS, Svr4 and Svr3 > based code. Did you have access to this code? Did you compare? I've had legal access to SunOS 4/5, SCO, SVr3, SVr4, and a pile of other variants. And I've been a kernel engineer in all of those. As in spent years as a paid engineer in all of them. Yeah, I know which is which. I've run diff on tons of that code. > > > Interesting: Do you mean "Bill Shannon"? Was he involved in SCCS or smoosh > > > as well? I know Bill as the author of "cstyle" and I pushed him to make it OSS > > > in 2001 already, before it appeared in OpenSolaris. > > > > Yup, that Shannon. > > OK, so he was also working on SCCS? No, he was a DE and as such he oversaw pretty much everything, including my work. We worked closely together. > > > In January 2015, I talked with Glenn Skinner about SCCS and smoosh and he > > > pointed me to his smoosh patent: > > > > > > http://patentimages.storage.googleapis.com/pdfs/US5481722.pdf > > > > > > that has been expired in late 2014. > > > > The fact that Glenn didn't put me on that patent is a sore point. Yes, > > he wrote the lisp code that showed it could be done. 
I wrote the C code > > that did that in one pass (his stuff was N+M where N was how many deltas > > were on the local side and M was how many deltas were on the remote side). > > I have been told that a patent can be void if it does not list the right > inventors. > > I would guess that this was a decision made by the lawyer that helped to file > the patent. > > Are you responsible for the original idea? Nope, I give credit to Glenn for that. It was his idea. The one pass version of it, which is what gave Sun Teamware, that's 100% me. That's not quite fair, Glenn and I talked about it a lot, we knew we needed that for performance, so the desire for a one pass version includes Glenn. But the code that did it? That's 100% me. I designed it, I wrote it. From imp at bsdimp.com Tue Jan 10 14:16:43 2017 From: imp at bsdimp.com (Warner Losh) Date: Mon, 9 Jan 2017 20:16:43 -0800 Subject: [TUHS] the guy who brought up SVr4 on Sun machines In-Reply-To: <20170110035823.GI8099@mcvoy.com> References: <20170104164630.GA3405@mcvoy.com> <586d2abb.A5j4GovJtyzlD+AQ%schily@schily.net> <20170104171033.GC3405@mcvoy.com> <586d334d.XcKOxzKwrzmvL326%schily@schily.net> <20170104175227.GH3405@mcvoy.com> <586d3d90.oAzCBIUMx+CcWar6%schily@schily.net> <20170104184448.GD3006@mcvoy.com> <586e32fa.75dTuXajWSNmdzuM%schily@schily.net> <20170106020239.GI2588@mcvoy.com> <587392b8.YVBYtdtMbIwoTw24%schily@schily.net> <20170110035823.GI8099@mcvoy.com> Message-ID: On Mon, Jan 9, 2017 at 7:58 PM, Larry McVoy wrote: > On Mon, Jan 09, 2017 at 02:40:08PM +0100, Joerg Schilling wrote: >> BTW: My statements are from a talk from Bill Joy from the Sun User Group >> meetings. Bill claimed that he was responsible for the Svr4 kernel and did this >> on a new location in Denver Colorado that was as a joint venture from Sun and >> AT&T. > > So I have Bill's home number on my cell phone. We're not close and I'm not > going to use up a silver bullet to win this argument but your claim here is > complete nonsense.
I know the people in the Colorado site and they didn't > have anything to do with SVr4. The Rocky Mountain group was working on > file systems (and produced CVS). Bill was checked out at this point, he > was living in Aspen and had very little to do with Sun. The idea that he, > Mr BSD, would have been "responsible for the Svr4 kernel" is a joke. BSD > and SunOS were so much better, there is no way he would have been for SVr4. All the Sun guys I knew at Solbourne went on to work at the Broomfield location at Interlocken. They mostly worked on file system things. There was a Boulder office that did a port to the Sun Roadrunner, a 386 box that ran SunOS 4.0 and friends. They did other stuff there too before they were moved to the Broomfield office in a consolidation of different groups at Sun. I'm pretty sure that it would be a huge stretch to say that the sysvr4 porting was done here, but it was a big office and maybe a few people were tangentially related to that effort. There were also some guys in Colorado Springs that moved up to Broomfield. The only big AT&T presence in town at the time was the buildings that would be spun off into Lucent a few years later... I'd have to agree with Larry on this point: I doubt that it was done in the Denver metro area. Btw, I've lived in Denver since I graduated college and went to work for Solbourne Computer in 1990. For a while, Sun was making Solbourne compatible computers...
As in spent > years as a paid engineer in all of them. Yeah, I know which is which. > I've run diff on tons of that code. That's got me beat. I just had access to SunOS and Solaris... Plus various Solaris driver work over the years. And it was clear that Solaris had a huge influx of code from SunOS, but it was equally clear that it was still old-school AT&T Unix from the layout of the sources. It didn't adopt many of the BSDish things, but opted for the older file placement and such. SunOS was quite a bit similar to 4.3 net2 BSD that FreeBSD started out on in lots of ways. Not so Solaris. But I've only been a FreeBSD committer for the past 25 years or so, so what do I know. I will admit that the last time I looked at the SunOS sources was in the early 1990s though. ZFS adapted to the SysVr4 layout, and so its port to FreeBSD also has the funky layout that had to be preserved and carefully hacked into place in FreeBSD... Warner From paul at mcjones.org Tue Jan 10 13:11:02 2017 From: paul at mcjones.org (Paul McJones) Date: Mon, 9 Jan 2017 19:11:02 -0800 Subject: [TUHS] Unix stories, Stephen Bourne and IF-FI in C code In-Reply-To: References: Message-ID: > On Jan 9, 2017, at 6:00 PM,"Steve Johnson" wrote: > > I can certainly confirm that Steve Bourne not only knew Algol 68, he > was quite an evangelist for it. Bourne had led the Algol68C development team at Cambridge until 1975. See http://www.softwarepreservation.org/projects/ALGOL/algol68impl/#Algol68C .
From rudi.j.blom at gmail.com Tue Jan 10 14:40:47 2017 From: rudi.j.blom at gmail.com (Rudi Blom) Date: Tue, 10 Jan 2017 11:40:47 +0700 Subject: [TUHS] Unix stories, Stephen Bourne and IF-FI in C code Message-ID: >Date: Mon, 09 Jan 2017 08:45:47 -0700 >From: arnold at skeeve.com >To: rochkind at basepath.com >Cc: tuhs at tuhs.org >Subject: Re: [TUHS] Unix stories, Stephen Bourne and IF-FI in C code >Message-ID: <201701091545.v09FjlXE027448 at freefriends.org> >Content-Type: text/plain; charset=us-ascii > >I remember the Bournegol well; I did some hacking on the BSD shell. > >In general, it wasn't too unusual for people from Pascal backgrounds to >do similar things, e.g. > > #define repeat do { > #define until(cond) } while (! (cond)) > >(I remember for me personally that do...while sure looked weird for >my first few years of C programming. :-) > >(Also, I would not recommend doing that; I'm just noting that >people often did do stuff like that.) When the Philips computer division worked on MPX (multi-processor UNIX) in the late 80s they had an include file 'syntax.h' which did a lot of that Pascal-like mapping. Here part of it: /* For a full explanation see the file syntax.help */ #define IF if( #define THEN ){ #define ELSIF }else if( #define ELSE }else{ #define ENDIF } #define NOT !
#define AND && #define OR || #define CASE switch( #define OF ){ #define ENDCASE break;} #define WHEN break;case #define CWHEN case #define IMPL : #define COR :case #define BREAK break #define WHENOTHERS break;default #define CWHENOTHERS default #define SELECT do{{ #define SWHEN }if( #define SIMPL ){ #define ENDSELECT }}while(0) #define SCOPE { #define ENDSCOPE } #define BLOCK { #define ENDBLOCK } #define FOREVER for(;; #define FOR for( #define SKIP #define COND ; #define STEP ; #define LOOP ){ #define ENDLOOP } #define NULLOOP ){} #define WHILE while( #define DO do{ #define UNTIL }while(!( #define ENDDO )) #define EXITWHEN(e) if(e)break #define CONTINUE continue #define RETURN return #define GOTO goto From wkt at tuhs.org Tue Jan 10 16:00:51 2017 From: wkt at tuhs.org (Warren Toomey) Date: Tue, 10 Jan 2017 16:00:51 +1000 Subject: [TUHS] Old Unix man pages, was Re: nice interview with Steve Bourne In-Reply-To: <201701092010.v09KAirD003815@skeeve.com> References: <201701092010.v09KAirD003815@skeeve.com> Message-ID: <20170110060051.GA14266@minnie.tuhs.org> On Mon, Jan 09, 2017 at 10:10:44PM +0200, Arnold Robbins wrote: > Just came across this: http://www.computerworld.com.au/article/279011/-z_programming_languages_bourne_shell_sh > > >From 2009. There are links to other interviews about Unix stuff as well. > > Arnold Good find, Arnold. Doug McIlroy also recommended this link: http://man.cat-v.org to a bunch of historical Unix man pages. Cheers, Warren From schily at schily.net Tue Jan 10 21:02:45 2017 From: schily at schily.net (Joerg Schilling) Date: Tue, 10 Jan 2017 12:02:45 +0100 Subject: [TUHS] SunOS vs Linux In-Reply-To: References: <1483898537.3109544.841035897.5E751FC9@webmail.messagingengine.com> <61DAEC90-BE06-41FF-9B2B-D401CC8FC9F9@ccc.com> <20170109030022.GE66746@eureka.lemis.com> <20170109063225.7imw2tuiomsekvtn@ancienthardware.org> Message-ID: <5874bf55.S65Cf/vLEnr+muVU%schily@schily.net> Rico Pajarola wrote: > Now stop picking on Joerg already. 
Not every university was invested in > Unix. In practice Unix source was pretty much unobtainable if you happened > to live outside of the "Unix bubble". > > I grew up and went to school/university in Switzerland, and getting access > to UNIX source was nothing but a crazy pipe dream at the time. I don't even > know if my university had a source license (I can't imagine they didn't), > but in any case it wasn't something that they would let you use as a normal > student. None of my inquiries at the time resulted in anything that would > allow me to get access to Unix source. If the university had it, this > wasn't public information, and they didn't share. I couldn't prove that my > university had a license, and I had no way to get the actual bits. This was > the 90ies btw. Well, I did not say that it was easy and that every university did have source access, but universities that had people who have been interested in UNIX did usually try to get source access. It did take time to get it and I remember that TU-Berlin received the Svr2 sources when AT&T launched Svr3. In order to get SunOS source code, you needed to have a AT&T source license and another from Sun. This was close to impossible for a smaller company...... On the other side, Sun did give away parts of the SunOS source that was not based on AT&T code. If you have been a big OEM (and H.Berthold AG was a big OEM) you received what Sun believed was helpful for business. I e.g. received the keyboard driver in spring 1986 and I wrote the enhancements to support 155 keys from the Berthold keyboard and to switch layouts for different languages. In January 1986, I received a one sheet of paper description for a SCSI VME board that was made of a DMA chip and a few PALs. I wrote a SCSI driver and we demonstrated a SCSI interface to our high resolution scanner at the "Drupa" fair in April 1986 in Düsseldorf. 
The demo used a diskless client machine as I could either bind the Sun SCSI framework into the kernel or mine and then we could no longer access disks. Sun managers attended that fair and a few weeks later, I had access to the Sun SCSI driver framework and to Matthew Jacobs - the architect of that code. This resulted in my "scg" driver, the first SCSI pass through driver that I used to e.g. format disks while the kernel was running. Sun at that time had to boot a standalone program for disk formatting, but Sun did take my idea after I explained it to Matthew Jacobs. Even with these connections, I was not able to get an AT&T source license for a complete SunOS kernel source. This was because IIRC the AT&T license did cost 200000 $ for non-university entities and H.Berthold AG would not spend that much money for a source license. Here Horst Winterhoff helped and asked Bill Joy whether he could give me the sources for my diploma thesis. So you are right, you had to be somehow connected to the right people to get source access. But people who have been interested usually have been connected...even though the world was harder to explore in those days. Jörg -- EMail:joerg at schily.net (home) Jörg Schilling D-13353 Berlin joerg.schilling at fokus.fraunhofer.de (work) Blog: http://schily.blogspot.com/ URL: http://cdrecord.org/private/ http://sourceforge.net/projects/schilytools/files/ From dot at dotat.at Tue Jan 10 22:24:58 2017 From: dot at dotat.at (Tony Finch) Date: Tue, 10 Jan 2017 12:24:58 +0000 Subject: [TUHS] Unix stories, Stephen Bourne and IF-FI in C code In-Reply-To: <7e8fd8694d992170208e854db5dd0f72c552016b@webmail.yaccman.com> References: <7e8fd8694d992170208e854db5dd0f72c552016b@webmail.yaccman.com> Message-ID: Steve Johnson wrote: > I can certainly confirm that Steve Bourne not only knew Algol 68, he > was quite an evangelist for it. 
When he came to the labs, he got a > number of people, including me, to plough through the Algol 68 report, > probably the worst-written introduction to anything I've ever read.  This is a bit off topic, sorry, but a couple more Algol 68 observations... > They were firmly convinced they were breaking new ground and > consequently invented new terms for all kinds of otherwise familiar > ideas.  It was as if the report had been written in Esperanto...   Or maybe Latin with the way it inflects words e.g. the -ETY suffix being short for "or empty", i.e. an optional thing. Though I don't know what would be a good comparison for all the elision, e.g. MOID = MODE or VOID, MODINE = MODE or ROUTINE. (a MODE is what they call a type...) There's a nicely re-typeset version at http://www.eah-jena.de/~kleine/history/languages/Algol68-RevisedReport.pdf The other classic of Algol 68 literature was the Informal Introduction, in which the structure of the book was arranged in two orthogonal dimensions. The table of contents is a sight to behold. http://www.softwarepreservation.org/projects/ALGOL/book/Lindsey_van_der_Meulen-IItA68-Revised-ContentsOnly.pdf One of my ex-colleagues (now retired) was Chris Cheney, who worked with Steve Bourne on the Algol 68C project. I think it was on that project where he invented his beautiful compacting copying garbage collector algorithm. Tony. -- f.anthony.n.finch http://dotat.at/ - I xn--zr8h punycode Trafalgar: North 5 to 7, occasionally 4 at first. Slight or moderate, becoming rough or very rough except in far southeast. Occasional rain. Moderate or good.
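[Editor's aside: Cheney's collector, mentioned above, is worth a sketch for readers who haven't seen it. Live objects are copied from one semispace to the other using just two pointers into to-space: a "free" pointer where the next evacuated object lands, and a "scan" pointer trailing behind it. The region between them serves as the breadth-first work queue, so no recursion or mark stack is needed. The C sketch below is illustrative only; the object layout and all names are invented for the example, and it is not the Algol 68C collector.]

```c
#include <stddef.h>
#include <stdalign.h>
#include <string.h>

/* Invented object layout: a field count, a forwarding pointer (set once
 * the object has been evacuated), then that many pointer fields. */
typedef struct Obj {
    size_t nfields;
    struct Obj *fwd;
    struct Obj *fields[];
} Obj;

enum { SPACE_BYTES = 1 << 16 };
static alignas(max_align_t) char space_a[SPACE_BYTES], space_b[SPACE_BYTES];
static char *from_base = space_a, *to_base = space_b;
static char *alloc_ptr = space_a;

static size_t obj_bytes(size_t nfields) {
    return sizeof(Obj) + nfields * sizeof(Obj *);
}

Obj *gc_alloc(size_t nfields) {                 /* bump-pointer allocation */
    Obj *o = (Obj *)alloc_ptr;
    alloc_ptr += obj_bytes(nfields);
    o->nfields = nfields;
    o->fwd = NULL;
    for (size_t i = 0; i < nfields; i++) o->fields[i] = NULL;
    return o;
}

/* Evacuate one object to to-space (unless already moved); return new address. */
static Obj *forward(Obj *o, char **free_ptr) {
    if (o == NULL) return NULL;
    if (o->fwd != NULL) return o->fwd;          /* already copied */
    Obj *copy = (Obj *)*free_ptr;
    memcpy(copy, o, obj_bytes(o->nfields));
    *free_ptr += obj_bytes(o->nfields);
    o->fwd = copy;                              /* leave forwarding address */
    return copy;
}

void gc_collect(Obj **roots, size_t nroots) {
    char *scan = to_base, *free_ptr = to_base;
    for (size_t i = 0; i < nroots; i++)         /* copy the roots first */
        roots[i] = forward(roots[i], &free_ptr);
    while (scan < free_ptr) {                   /* breadth-first scan */
        Obj *o = (Obj *)scan;
        for (size_t i = 0; i < o->nfields; i++)
            o->fields[i] = forward(o->fields[i], &free_ptr);
        scan += obj_bytes(o->nfields);
    }
    char *t = from_base;                        /* flip the semispaces */
    from_base = to_base;
    to_base = t;
    alloc_ptr = free_ptr;
}
```

Each from-space object is copied at most once; its header is overwritten with a forwarding pointer so later references find the copy. When scan catches up with free, everything reachable has moved and the spaces flip.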
From jnc at mercury.lcs.mit.edu Tue Jan 10 23:50:15 2017 From: jnc at mercury.lcs.mit.edu (Noel Chiappa) Date: Tue, 10 Jan 2017 08:50:15 -0500 (EST) Subject: [TUHS] Unix stories, Stephen Bourne and IF-FI in C code Message-ID: <20170110135015.566E318C0A6@mercury.lcs.mit.edu> > From: Tony Finch > The other classic of Algol 68 literature No roundup of classic Algol 68 literature would be complete without Hoare's "The Emperor's Old Clothes". I assume everyone here has read it, but on the off-chance there is someone who hasn't, a copy is here: http://zoo.cs.yale.edu/classes/cs422/2014/bib/hoare81emperor.pdf and I cannot recommend it more highly. Noel From berny at berwynlodge.com Wed Jan 11 01:12:19 2017 From: berny at berwynlodge.com (Berny Goodheart) Date: Tue, 10 Jan 2017 15:12:19 +0000 Subject: [TUHS] the guy who brought up SVr4 on Sun machines Message-ID: <79091EE2-D7F8-4BE2-9422-47C365780367@berwynlodge.com> I have been trolling these many threads lately of interest. So thought I should chip in. "SVr4 was not based on SunOS, although it incorporated many of the best features of SunOS 4.x”. IMHO this statement is almost true (there were many great features from BSD too!). SunOS 5.0 was ported from SVR4 in early 1991 and released as Solaris 2.0 in 1992 for desktop only. Back in the late 80s, Sun and AT&T partnered development efforts so it’s no surprise that SunOS morphed into SVR4. Indeed it was Sun and AT&T who were the founding members of Unix International…with an aim to provide direction and unification of SVR4. I remember when I went to work for Sun (much later in 2003), and found that the code base was remarkably similar to the SVR4 code (if not exact in many areas). Here’s the breakdown of SVR4 kernel lineage as I recall it. I am pretty sure this is correct. 
But I am sure many of you will put me right if I am wrong ;)

From BSD:
TCP/IP
C Shell
Sockets
Process groups and job Control
Some signals
FFS in UFS guise
Multi groups/file ownership
Some system calls
COFF

From SunOS:
vnodes
VFS
VM
mmap
LWP and kernel threads
/proc
Dynamic linking extensions
NFS
RPC
XDR

From SVR3:
.so libs
revamped signals and trampoline code
VFSSW
RFS
STREAMS and TLI
IPC (Shared memory, Message queues, semaphores)

Additional features in SVR4 from USL:
new boot process
ksh
real time extensions
Service access facility
Enhancements to STREAMS
ELF

From jnc at mercury.lcs.mit.edu Wed Jan 11 01:38:59 2017 From: jnc at mercury.lcs.mit.edu (Noel Chiappa) Date: Tue, 10 Jan 2017 10:38:59 -0500 (EST) Subject: [TUHS] the guy who brought up SVr4 on Sun machines Message-ID: <20170110153859.BCC3618C08D@mercury.lcs.mit.edu> > From: Berny Goodheart > From BSD: > Process groups and job Control The intermediate between V6 and V7 which ran on several MIT machines (I think it was an early PWB - I should retrieve it and make it available to the Unix archive, it's an interesting system) had 'process groups', but I don't know if the concept was the same as BSD process groups. Noel From arnold at skeeve.com Wed Jan 11 02:03:40 2017 From: arnold at skeeve.com (arnold at skeeve.com) Date: Tue, 10 Jan 2017 09:03:40 -0700 Subject: [TUHS] the guy who brought up SVr4 on Sun machines In-Reply-To: <79091EE2-D7F8-4BE2-9422-47C365780367@berwynlodge.com> References: <79091EE2-D7F8-4BE2-9422-47C365780367@berwynlodge.com> Message-ID: <201701101603.v0AG3eW9021237@freefriends.org> Berny Goodheart wrote: > From BSD: > COFF COFF came from System V. IIRC SVR2, but maybe even SVR1. Everything else looks correct to me. :-) Thanks, Arnold From pechter at gmail.com Wed Jan 11 02:19:13 2017 From: pechter at gmail.com (pechter at gmail.com) Date: Tue, 10 Jan 2017 11:19:13 -0500 Subject: [TUHS] Ksh and SVR4 Message-ID: <7d852156-56aa-432d-a235-a3c5fb5002df.maildroid@localhost> Wasn't ksh SVR4...
It was in the Xelos sources @Concurrent Computer which was an SVR2 port. Xelos didn't do paging but the source in 87 or 88 or so had ksh in it. I built it for SVR4 on my Xelos 3230 back in the day. Bill Sent from my android device. -------------- next part -------------- An HTML attachment was scrubbed... URL: From lm at mcvoy.com Wed Jan 11 02:20:24 2017 From: lm at mcvoy.com (Larry McVoy) Date: Tue, 10 Jan 2017 08:20:24 -0800 Subject: [TUHS] the guy who brought up SVr4 on Sun machines In-Reply-To: <79091EE2-D7F8-4BE2-9422-47C365780367@berwynlodge.com> References: <79091EE2-D7F8-4BE2-9422-47C365780367@berwynlodge.com> Message-ID: <20170110162024.GP8099@mcvoy.com> On Tue, Jan 10, 2017 at 03:12:19PM +0000, Berny Goodheart wrote: > From SunOS: > /proc Pretty sure /proc was not a SunOS thing. > From SVR3: > .so libs If you mean shared libraries, SunOS had those. If it's more nuanced than that, I'd defer to Gingell. --lm From schily at schily.net Wed Jan 11 02:20:49 2017 From: schily at schily.net (Joerg Schilling) Date: Tue, 10 Jan 2017 17:20:49 +0100 Subject: [TUHS] the guy who brought up SVr4 on Sun machines In-Reply-To: <79091EE2-D7F8-4BE2-9422-47C365780367@berwynlodge.com> References: <79091EE2-D7F8-4BE2-9422-47C365780367@berwynlodge.com> Message-ID: <587509e1.gGhkbfCz1YmUYkqT%schily@schily.net> Berny Goodheart wrote: > Here's the breakdown of SVR4 kernel lineage as I recall it. I am pretty sure this is correct.
But I am sure many of you will put me right if I am wrong ;) > > From BSD: > TCP/IP <=== NO, Svr4 uses a STREAMS based TCP/IP stack > C Shell > Sockets <=== NO, BSD has sockets in kernel, SVr4 in userland > Process groups and job Control > Some signals > FFS in UFS guise <=== NO, rather taken from SunOS-4 > Multi groups/file ownership > Some system calls > COFF <=== NO, COFF was from SysV and deprecated in Svr4 > > From SunOS: > vnodes > VFS > VM > mmap > LWP and kernel threads > /proc <=== NO, /proc did not exist in SunOS-4 > Dynamic linking extensions > NFS > RPC > XDR > > From SVR3: > .so libs <=== What should this be? I am not even sure whether SVr4 included backwards compatibility for the SVr3 "installed" shared libraries. > revamped signals and trampoline code +++++sigset() was not in SVr2, I believe it was not available in svr3 as well and rather invented for Svr4 > VFSSW <=== NO, this is from SunOS-4 > RFS > STREAMS and TLI <=== SVr3 did not have STREAMS > IPC (Shared memory, Message queues, semaphores) <=== Already in SunOS-4 Jörg -- EMail:joerg at schily.net (home) Jörg Schilling D-13353 Berlin joerg.schilling at fokus.fraunhofer.de (work) Blog: http://schily.blogspot.com/ URL: http://cdrecord.org/private/ http://sourceforge.net/projects/schilytools/files/ From schily at schily.net Wed Jan 11 02:24:38 2017 From: schily at schily.net (Joerg Schilling) Date: Tue, 10 Jan 2017 17:24:38 +0100 Subject: [TUHS] the guy who brought up SVr4 on Sun machines In-Reply-To: <20170110162024.GP8099@mcvoy.com> References: <79091EE2-D7F8-4BE2-9422-47C365780367@berwynlodge.com> <20170110162024.GP8099@mcvoy.com> Message-ID: <58750ac6.xDyt5KJHPLKiEQfS%schily@schily.net> Larry McVoy wrote: > On Tue, Jan 10, 2017 at 03:12:19PM +0000, Berny Goodheart wrote: > > From SunOS: > > /proc > > Pretty sure /proc was not a SunOS thing. and I believe that Roger Faulkner did come from AT&T > > From SVR3: > > .so libs > > If you mean shared libraries, SunOS had those. 
If it's more nuanced > than that, I'd defer to Gingell. .so is a name introduced by SunOS-4 What Svr3 had, was shared libraries that have been installed in the kernel during boot up into multi-user mode using a special program. In order to manage this, you needed to have a global library manager that defined start addresses for the load addresses of the libraries. Jörg -- EMail:joerg at schily.net (home) Jörg Schilling D-13353 Berlin joerg.schilling at fokus.fraunhofer.de (work) Blog: http://schily.blogspot.com/ URL: http://cdrecord.org/private/ http://sourceforge.net/projects/schilytools/files/ From berny at berwynlodge.com Wed Jan 11 02:32:18 2017 From: berny at berwynlodge.com (Berny Goodheart) Date: Tue, 10 Jan 2017 16:32:18 +0000 Subject: [TUHS] the guy who brought up SVr4 on Sun machines In-Reply-To: <58750ac6.xDyt5KJHPLKiEQfS%schily@schily.net> References: <79091EE2-D7F8-4BE2-9422-47C365780367@berwynlodge.com> <20170110162024.GP8099@mcvoy.com> <58750ac6.xDyt5KJHPLKiEQfS%schily@schily.net> Message-ID: > On 10 Jan 2017, at 16:24, Joerg Schilling wrote: > > Larry McVoy wrote: > >> On Tue, Jan 10, 2017 at 03:12:19PM +0000, Berny Goodheart wrote: >>> From SunOS: >>> /proc >> >> Pretty sure /proc was not a SunOS thing. > > and I believe that Roger Faulkner did come from AT&T Yes. And I should know this as I communicated and met with him on several occasions when I was developing /proc for Janus (Linux binary emulation) on Solaris x86. So, I was defo wrong on this. /proc was done by Roger at AT&T (maybe USL). I recall him telling me that he was not the original author though and that it came from PWB. > >>> From SVR3: >>> .so libs >> >> If you mean shared libraries, SunOS had those. If it's more nuanced >> than that, I'd defer to Gingell. > > .so is a name introduced by SunOS-4 > > What Svr3 had, was shared libraries that have been installed in the kernel > during boot up into multi-user mode using a special program. 
In order to > manage this, you needed to have a global library manager that defined start > addresses for the load addresses of the libraries. My bad again….there will probably be many more. It’s old age you see ;) I meant shared libraries. > > Jörg > > -- > EMail:joerg at schily.net (home) Jörg Schilling D-13353 Berlin > joerg.schilling at fokus.fraunhofer.de (work) Blog: http://schily.blogspot.com/ > URL: http://cdrecord.org/private/ http://sourceforge.net/projects/schilytools/files/ From ron at ronnatalie.com Wed Jan 11 02:34:09 2017 From: ron at ronnatalie.com (Ron Natalie) Date: Tue, 10 Jan 2017 11:34:09 -0500 Subject: [TUHS] Ksh and SVR4 In-Reply-To: <7d852156-56aa-432d-a235-a3c5fb5002df.maildroid@localhost> References: <7d852156-56aa-432d-a235-a3c5fb5002df.maildroid@localhost> Message-ID: <008401d26b5f$5e927f50$1bb77df0$@ronnatalie.com> I believe that was indeed the first UNIX release to have it included. From: TUHS [mailto:tuhs-bounces at minnie.tuhs.org] On Behalf Of pechter at gmail.com Sent: Tuesday, January 10, 2017 11:19 AM To: Tuhs Subject: [TUHS] Ksh and SVR4 Wasn't ksh SVR4... It was in the Xelos sources @Concurrent Computer which was an SVR2 port. Xelos didn't do paging but the source in 87 or 88 or so had ksh in it. I. built it for SVR4 on my Xelos 3230 back in the day. Bill Sent from my android device. -------------- next part -------------- An HTML attachment was scrubbed... URL: From berny at berwynlodge.com Wed Jan 11 02:34:10 2017 From: berny at berwynlodge.com (Berny Goodheart) Date: Tue, 10 Jan 2017 16:34:10 +0000 Subject: [TUHS] the guy who brought up SVr4 on Sun machines In-Reply-To: <1154c8d8-2051-455e-a3f2-45415d901232.maildroid@localhost> References: <79091EE2-D7F8-4BE2-9422-47C365780367@berwynlodge.com> <1154c8d8-2051-455e-a3f2-45415d901232.maildroid@localhost> Message-ID: > On 10 Jan 2017, at 16:16, pechter at gmail.com wrote: > > Wasn't msg SVR4... It was in the Xelos sources @Concurrent Computer which was an SVR2 port. 
Xelos didn't do paging but the source in 87 or 88 or so had ksh in it. > > I. built it for SVR4 on my Xelos 3230 back in the day. msgs goes back as far as SVR2. > > Bill > > Sent from my android device. > > -----Original Message----- > From: Berny Goodheart > To: tuhs at minnie.tuhs.org > Sent: Tue, 10 Jan 2017 10:12 > Subject: [TUHS] the guy who brought up SVr4 on Sun machines > > I have been trolling these many threads lately of interest. So thought I should chip in. > > "SVr4 was not based on SunOS, although it incorporated > many of the best features of SunOS 4.x”. > > IMHO this statement is almost true (there were many great features from BSD too!). > SunOS 5.0 was ported from SVR4 in early 1991 and released as Solaris 2.0 in 1992 for desktop only. > Back in the late 80s, Sun and AT&T partnered development efforts so it’s no surprise that SunOS morphed into SVR4. Indeed it was Sun and AT&T who were the founding members of Unix International…with an aim to provide direction and unification of SVR4. > I remember when I went to work for Sun (much later in 2003), and found that the code base was remarkably similar to the SVR4 code (if not exact in many areas). > > Here’s the breakdown of SVR4 kernel lineage as I recall it. I am pretty sure this is correct. But I am sure many of you will put me right if I am wrong ;) > > From BSD: > TCP/IP > C Shell > Sockets > Process groups and job Control > Some signals > FFS in UFS guise > Multi groups/file ownership > Some system calls > COFF > > From SunOS: > vnodes > VFS > VM > mmap > LWP and kernel threads > /proc > Dynamic linking extensions > NFS > RPC > XDR > > From SVR3: > .so libs > revamped signals and trampoline code > VFSSW > RFS > STREAMS and TLI > IPC (Shared memory, Message queues, semaphores) > > Additional features in SVR4 from USL: > new boot process. 
> ksh > real time extensions > Service access facility > Enhancements to STREAMS > ELF > > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From clemc at ccc.com Wed Jan 11 02:34:38 2017 From: clemc at ccc.com (Clem cole) Date: Tue, 10 Jan 2017 08:34:38 -0800 Subject: [TUHS] the guy who brought up SVr4 on Sun machines In-Reply-To: <587509e1.gGhkbfCz1YmUYkqT%schily@schily.net> References: <79091EE2-D7F8-4BE2-9422-47C365780367@berwynlodge.com> <587509e1.gGhkbfCz1YmUYkqT%schily@schily.net> Message-ID: The old saw that I wish I had said "great men stand on the shoulders of greater men, computer scientists like to step on their toes." The problem I have with this sort of accounting is it leaves out where different groups took these ideas and integrated them. Others that come later lose that history. For instance ip/tcp came from bbn, /proc came from research, job control came from MIT, fsck from CMU etc. Sent from my PDP-7 Running UNIX V0 expect things to be almost but not quite. > On Jan 10, 2017, at 8:20 AM, Joerg Schilling wrote: > > Berny Goodheart wrote: > >> Here’s the breakdown of SVR4 kernel lineage as I recall it. I am pretty sure this is correct. But I am sure many of you will put me right if I am wrong ;) >> >> From BSD: >> TCP/IP <=== NO, Svr4 uses a STREAMS based TCP/IP stack >> C Shell >> Sockets <=== NO, BSD has sockets in kernel, SVr4 in > userland >> Process groups and job Control >> Some signals >> FFS in UFS guise <=== NO, rather taken from SunOS-4 >> Multi groups/file ownership >> Some system calls >> COFF <=== NO, COFF was from SysV and deprecated in Svr4 >> >> From SunOS: >> vnodes >> VFS >> VM >> mmap >> LWP and kernel threads >> /proc <=== NO, /proc did not exist in SunOS-4 >> Dynamic linking extensions >> NFS >> RPC >> XDR >> >> From SVR3: >> .so libs <=== What should this be? > I am not even sure whether SVr4 included > backwards compatibility for the SVr3 > "installed" shared libraries.
> >> revamped signals and trampoline code +++++sigset() was not in SVr2, I believe > it was not available in svr3 as > well and rather invented for > Svr4 >> VFSSW <=== NO, this is from SunOS-4 >> RFS >> STREAMS and TLI <=== SVr3 did not have STREAMS >> IPC (Shared memory, Message queues, semaphores) <=== Already in SunOS-4 > > Jörg > > -- > EMail:joerg at schily.net (home) Jörg Schilling D-13353 Berlin > joerg.schilling at fokus.fraunhofer.de (work) Blog: http://schily.blogspot.com/ > URL: http://cdrecord.org/private/ http://sourceforge.net/projects/schilytools/files/ From chet.ramey at case.edu Wed Jan 11 02:38:16 2017 From: chet.ramey at case.edu (Chet Ramey) Date: Tue, 10 Jan 2017 11:38:16 -0500 Subject: [TUHS] the guy who brought up SVr4 on Sun machines In-Reply-To: References: <79091EE2-D7F8-4BE2-9422-47C365780367@berwynlodge.com> <587509e1.gGhkbfCz1YmUYkqT%schily@schily.net> Message-ID: <36e7f8a3-6964-efc4-1d75-1a9870010768@case.edu> On 1/10/17 11:34 AM, Clem cole wrote: > job control came from MIT The original concepts might have, but the BSD implementation was done by Jim Kulp at IIASA before it was folded into 4.1 BSD. -- ``The lyf so short, the craft so long to lerne.'' - Chaucer ``Ars longa, vita brevis'' - Hippocrates Chet Ramey, UTech, CWRU chet at case.edu http://cnswww.cns.cwru.edu/~chet/ From chet.ramey at case.edu Wed Jan 11 02:40:59 2017 From: chet.ramey at case.edu (Chet Ramey) Date: Tue, 10 Jan 2017 11:40:59 -0500 Subject: [TUHS] the guy who brought up SVr4 on Sun machines In-Reply-To: References: <79091EE2-D7F8-4BE2-9422-47C365780367@berwynlodge.com> <20170110162024.GP8099@mcvoy.com> <58750ac6.xDyt5KJHPLKiEQfS%schily@schily.net> Message-ID: On 1/10/17 11:32 AM, Berny Goodheart wrote: > >> On 10 Jan 2017, at 16:24, Joerg Schilling wrote: >> >> Larry McVoy wrote: >> >>> On Tue, Jan 10, 2017 at 03:12:19PM +0000, Berny Goodheart wrote: >>>> From SunOS: >>>> /proc >>> >>> Pretty sure /proc was not a SunOS thing. 
>> >> and I believe that Roger Faulkner did come from AT&T > > Yes. And I should know this as I communicated and met with him on several occasions when I was developing /proc for Janus (Linux binary emulation) on Solaris x86. > So, I was defo wrong on this. /proc was done by Roger at AT&T (maybe USL). I recall him telling me that he was not the original author though and that it came from PWB. The original implementation was done by Tom Killian for 8th Edition. I think Roger used that as inspiration, not sure about the code. -- ``The lyf so short, the craft so long to lerne.'' - Chaucer ``Ars longa, vita brevis'' - Hippocrates Chet Ramey, UTech, CWRU chet at case.edu http://cnswww.cns.cwru.edu/~chet/ From schily at schily.net Wed Jan 11 02:41:45 2017 From: schily at schily.net (Joerg Schilling) Date: Tue, 10 Jan 2017 17:41:45 +0100 Subject: [TUHS] the guy who brought up SVr4 on Sun machines In-Reply-To: References: <79091EE2-D7F8-4BE2-9422-47C365780367@berwynlodge.com> <20170110162024.GP8099@mcvoy.com> <58750ac6.xDyt5KJHPLKiEQfS%schily@schily.net> Message-ID: <58750ec9.2evCIrgfc9yC6OV7%schily@schily.net> Berny Goodheart wrote: > >> Pretty sure /proc was not a SunOS thing. > > > > and I believe that Roger Faulkner did come from AT&T > > Yes. And I should know this as I communicated and met with him on several occasions when I was developing /proc for Janus (Linux binary emulation) on Solaris x86. > So, I was defo wrong on this. /proc was done by Roger at AT&T (maybe USL). I recall him telling me that he was not the original author though and that it came from PWB. AFAIK, Roger was the original author for ProcFS-II that was introduced later in SunOS-5 (I believe it was Solaris-2.6) and introduced subdirectories. 
Jörg -- EMail:joerg at schily.net (home) Jörg Schilling D-13353 Berlin joerg.schilling at fokus.fraunhofer.de (work) Blog: http://schily.blogspot.com/ URL: http://cdrecord.org/private/ http://sourceforge.net/projects/schilytools/files/ From pechter at gmail.com Wed Jan 11 02:41:45 2017 From: pechter at gmail.com (pechter at gmail.com) Date: Tue, 10 Jan 2017 11:41:45 -0500 Subject: [TUHS] Ksh and SVR4 In-Reply-To: <008401d26b5f$5e927f50$1bb77df0$@ronnatalie.com> References: <7d852156-56aa-432d-a235-a3c5fb5002df.maildroid@localhost> <008401d26b5f$5e927f50$1bb77df0$@ronnatalie.com> Message-ID: <15c79a0e-89cf-4f98-aecf-7303fbb6af7d.maildroid@localhost> Freakin autocorrect. I built ksh for SVR2 to have it on the MIS dept. Xelos 3230s. CCUR never made it to SVR4 on the 3200 series descendants of the 7/32 and 8/32 machines. Bill Sent from my android device. -----Original Message----- From: Ron Natalie To: pechter at gmail.com, 'Tuhs' Sent: Tue, 10 Jan 2017 11:34 Subject: RE: [TUHS] Ksh and SVR4 I believe that was indeed the first UNIX release to have it included. From: TUHS [mailto:tuhs-bounces at minnie.tuhs.org] On Behalf Of pechter at gmail.com Sent: Tuesday, January 10, 2017 11:19 AM To: Tuhs Subject: [TUHS] Ksh and SVR4 Wasn't ksh SVR4... It was in the Xelos sources @Concurrent Computer which was an SVR2 port. Xelos didn't do paging but the source in 87 or 88 or so had ksh in it. I. built it for SVR4 on my Xelos 3230 back in the day. Bill Sent from my android device. -------------- next part -------------- An HTML attachment was scrubbed...
URL: From berny at berwynlodge.com Wed Jan 11 02:57:55 2017 From: berny at berwynlodge.com (Berny Goodheart) Date: Tue, 10 Jan 2017 16:57:55 +0000 Subject: [TUHS] the guy who brought up SVr4 on Sun machines In-Reply-To: <587509e1.gGhkbfCz1YmUYkqT%schily@schily.net> References: <79091EE2-D7F8-4BE2-9422-47C365780367@berwynlodge.com> <587509e1.gGhkbfCz1YmUYkqT%schily@schily.net> Message-ID: > On 10 Jan 2017, at 16:20, Joerg Schilling wrote: > > Berny Goodheart wrote: > >> Here’s the breakdown of SVR4 kernel lineage as I recall it. I am pretty sure this is correct. But I am sure many of you will put me right if I am wrong ;) >> >> From BSD: >> TCP/IP <=== NO, Svr4 uses a STREAMS based TCP/IP stack Correct. Sort of. In SVR4 it was defo STREAMS based. But in earlier SVRx versions it was the BSD stuff. >> C Shell >> Sockets <=== NO, BSD has sockets in kernel, SVr4 in > userland But I don't recall seeing sockets in SVRx until SVR4. >> Process groups and job Control >> Some signals >> FFS in UFS guise <=== NO, rather taken from SunOS-4 I am not sure on this so I will agree ;) >> Multi groups/file ownership >> Some system calls >> COFF <=== NO, COFF was from SysV and deprecated in Svr4 I defo have this one wrong. You are correct. >> >> From SunOS: >> vnodes >> VFS >> VM >> mmap >> LWP and kernel threads >> /proc <=== NO, /proc did not exist in SunOS-4 Yes, it came from Roger at AT&T >> Dynamic linking extensions >> NFS >> RPC >> XDR >> >> From SVR3: >> .so libs <=== What should this be? > I am not even sure whether SVr4 included > backwards compatibility for the SVr3 > "installed" shared libraries. > >> revamped signals and trampoline code +++++sigset() was not in SVr2, I believe > it was not available in svr3 as > well and rather invented for > Svr4 Hmm… I am not sure this is totally correct.
I was working on the trampoline code myself in SVR2 which was released as SVR3 but sigset and relatives didn't come until SVR4 and the trampoline code was again revamped in SVR4 by guess who….Sun. >> VFSSW <=== NO, this is from SunOS-4 I am pretty sure this was from AT&T, albeit it was probably Sun developed. >> RFS >> STREAMS and TLI <=== SVr3 did not have STREAMS SVR3 defo had STREAMS. >> IPC (Shared memory, Message queues, semaphores) <=== Already in SunOS-4 But it was in SysV before SunOS-4. Much of Solaris IPC came from the SysV code together with POSIX IPC and some extras….doors. > > Jörg > > -- > EMail:joerg at schily.net (home) Jörg Schilling D-13353 Berlin > joerg.schilling at fokus.fraunhofer.de (work) Blog: http://schily.blogspot.com/ > URL: http://cdrecord.org/private/ http://sourceforge.net/projects/schilytools/files/ From schily at schily.net Wed Jan 11 03:10:37 2017 From: schily at schily.net (Joerg Schilling) Date: Tue, 10 Jan 2017 18:10:37 +0100 Subject: [TUHS] the guy who brought up SVr4 on Sun machines In-Reply-To: References: <79091EE2-D7F8-4BE2-9422-47C365780367@berwynlodge.com> <587509e1.gGhkbfCz1YmUYkqT%schily@schily.net> Message-ID: <5875158d.TJ0+CmCdKdVIM2jd%schily@schily.net> Berny Goodheart wrote: > >> IPC (Shared memory, Message queues, semaphores) <=== Already in SunOS-4 > But it was in SysV before SunOS-4. Much of Solaris IPC came from the SysV code together with POSIX IPC and some extras….doors. Doors are a backport from the "Spring" operating system from Bill Joy that he developed in Colorado after the SVr4 task was finished around 1992. Spring was sent to universities and research institutes in 1996 and then shut down, but doors were backported to Solaris for "nscd", the name service cache daemon.
Jörg -- EMail:joerg at schily.net (home) Jörg Schilling D-13353 Berlin joerg.schilling at fokus.fraunhofer.de (work) Blog: http://schily.blogspot.com/ URL: http://cdrecord.org/private/ http://sourceforge.net/projects/schilytools/files/ From imp at bsdimp.com Wed Jan 11 03:47:28 2017 From: imp at bsdimp.com (Warner Losh) Date: Tue, 10 Jan 2017 09:47:28 -0800 Subject: [TUHS] the guy who brought up SVr4 on Sun machines In-Reply-To: <587509e1.gGhkbfCz1YmUYkqT%schily@schily.net> References: <79091EE2-D7F8-4BE2-9422-47C365780367@berwynlodge.com> <587509e1.gGhkbfCz1YmUYkqT%schily@schily.net> Message-ID: On Tue, Jan 10, 2017 at 8:20 AM, Joerg Schilling wrote: > Berny Goodheart wrote: > >> Here’s the breakdown of SVR4 kernel lineage as I recall it. I am pretty sure this is correct. But I am sure many of you will put me right if I am wrong ;) >> >> From BSD: >> TCP/IP <=== NO, Svr4 uses a STREAMS based TCP/IP stack svr4's stack is derived from BSD with a STREAMS packaging. These files were listed as "in AT&T's code w/o BSD headers" in the countersuit for the infamous AT&T lawsuit. Warner From lm at mcvoy.com Wed Jan 11 04:28:53 2017 From: lm at mcvoy.com (Larry McVoy) Date: Tue, 10 Jan 2017 10:28:53 -0800 Subject: [TUHS] the guy who brought up SVr4 on Sun machines In-Reply-To: References: <79091EE2-D7F8-4BE2-9422-47C365780367@berwynlodge.com> <587509e1.gGhkbfCz1YmUYkqT%schily@schily.net> Message-ID: <20170110182853.GR8099@mcvoy.com> On Tue, Jan 10, 2017 at 09:47:28AM -0800, Warner Losh wrote: > On Tue, Jan 10, 2017 at 8:20 AM, Joerg Schilling wrote: > > Berny Goodheart wrote: > > > >> Here’s the breakdown of SVR4 kernel lineage as I recall it. I am pretty sure this is correct. But I am sure many of you will put me right if I am wrong ;) > >> > >> From BSD: > >> TCP/IP <=== NO, Svr4 uses a STREAMS based TCP/IP stack > > svr4's stack is derived from BSD with a STREAMS packaging.
These files > were listed as "in AT&T's code w/o BSD headers" in the countersuit for > the infamous AT&T lawsuit. Yeah, I think Convergent did the STREAMS packaging, then Lachman bought the stack, I ported it twice (ETA & SCO), then I believe it was Bill Coleman (not positive on the name, it was the VP of networking) at Sun that bought rights to the stack from Lachman under pretty unfavorable terms, then Sun got unhappy with the terms (and the performance), contracted with Mentat to do a new stack and I think that stack is what remains in Solaris. From imp at bsdimp.com Wed Jan 11 04:33:59 2017 From: imp at bsdimp.com (Warner Losh) Date: Tue, 10 Jan 2017 10:33:59 -0800 Subject: [TUHS] the guy who brought up SVr4 on Sun machines In-Reply-To: <20170110182853.GR8099@mcvoy.com> References: <79091EE2-D7F8-4BE2-9422-47C365780367@berwynlodge.com> <587509e1.gGhkbfCz1YmUYkqT%schily@schily.net> <20170110182853.GR8099@mcvoy.com> Message-ID: On Tue, Jan 10, 2017 at 10:28 AM, Larry McVoy wrote: > On Tue, Jan 10, 2017 at 09:47:28AM -0800, Warner Losh wrote: >> On Tue, Jan 10, 2017 at 8:20 AM, Joerg Schilling wrote: >> > Berny Goodheart wrote: >> > >> >> Here’s the breakdown of SVR4 kernel lineage as I recall it. I am pretty sure this is correct. But I am sure many of you will put me right if I am wrong ;) >> >> >> >> From BSD: >> >> TCP/IP <=== NO, Svr4 uses a STREAMS based TCP/IP stack >> >> svr4's stack is derived from BSD with a STREAMS packaging. These files >> were listed as "in AT&T's code w/o BSD headers" in the countersuit for >> the infamous AT&T lawsuit.
> > Yeah, I think Convergent did the STREAMS packaging, then Lachman bought > the stack, I ported it twice (ETA & SCO), then I believe it was Bill > Coleman (not positive on the name, it was the VP of networking) at Sun > that bought rights to the stack from Lachman under pretty unfavorable > terms, then Sun got unhappy with the terms (and the performance), > contracted with Mentat to do a new stack and I think that stack is what > remains in Solaris. I did some work on the Lachman stack for sysvr4 machines at Wollongong in 89 or so as well... It was very BSDish code that had been involved in a horrific traffic accident and rebuilt in a STREAMS framework. I'm not at all surprised that it didn't scale, because at the time it barely worked... Warner From lm at mcvoy.com Wed Jan 11 04:42:03 2017 From: lm at mcvoy.com (Larry McVoy) Date: Tue, 10 Jan 2017 10:42:03 -0800 Subject: [TUHS] the guy who brought up SVr4 on Sun machines In-Reply-To: References: <79091EE2-D7F8-4BE2-9422-47C365780367@berwynlodge.com> <587509e1.gGhkbfCz1YmUYkqT%schily@schily.net> <20170110182853.GR8099@mcvoy.com> Message-ID: <20170110184203.GS8099@mcvoy.com> On Tue, Jan 10, 2017 at 10:33:59AM -0800, Warner Losh wrote: > On Tue, Jan 10, 2017 at 10:28 AM, Larry McVoy wrote: > > On Tue, Jan 10, 2017 at 09:47:28AM -0800, Warner Losh wrote: > >> On Tue, Jan 10, 2017 at 8:20 AM, Joerg Schilling wrote: > >> > Berny Goodheart wrote: > >> > > >> >> Here’s the breakdown of SVR4 kernel lineage as I recall it. I am pretty sure this is correct. But I am sure many of you will put me right if I am wrong ;) > >> >> > >> >> From BSD: > >> >> TCP/IP <=== NO, Svr4 uses a STREAMS based TCP/IP stack > >> > >> svr4's stack is derived from BSD with a STREAMS packaging. These files > >> were listed as "in AT&T's code w/o BSD headers" in the countersuit for > >> the infamous AT&T lawsuit.
> > > > Yeah, I think Convergent did the STREAMS packaging, then Lachman bought > > the stack, I ported it twice (ETA & SCO), then I believe it was Bill > > Coleman (not positive on the name, it was the VP of networking) at Sun > > that bought rights to the stack from Lachman under pretty unfavorable > > terms, then Sun got unhappy with the terms (and the performance), > > contracted with Mentat to do a new stack and I think that stack is what > > remains in Solaris. > > I did some work on the Lachman stack for sysvr4 machines at Wollongong > in 89 or so as well... It was very BSDish code that had been involved > in a horrific traffic accident and rebuilt in a STREAMS framework. I'm > not at all surprised that it didn't scale, because at the time it > barely worked... Yup, been there, lived that. Until Mentat came along it was the only game in town. I don't normally tell people I'm the guy that gave SCO networking because it "barely worked" as you say. I did get SCO to ship sw (STREAMS watch) that was sort of like a top for STREAMS - it was useful to run this while beating on the stack and then go tune the internal limits for better performance. I can't imagine anyone wants this any more, or if it even runs, but it's my copyright and I stuck a copy in http://mcvoy.com/lm/sw.shar From stewart at serissa.com Wed Jan 11 04:44:10 2017 From: stewart at serissa.com (Lawrence Stewart) Date: Tue, 10 Jan 2017 13:44:10 -0500 Subject: [TUHS] Unix stories, Stephen Bourne and IF-FI in C code In-Reply-To: References: <1483929007.6355.for-standards-violators@oclsc.org> <934ec8dea21ce728b7c4e70a6ee2deb86af39d27@webmail.yaccman.com> <201701091545.v09FjlXE027448@freefriends.org> Message-ID: > On 2017, Jan 9, at 11:03 AM, Clem Cole wrote: > > > On Mon, Jan 9, 2017 at 10:45 AM, > wrote: > I remember the Bournegol well; I did some hacking on the BSD shell. 
> ​Yep - lots of strange things in source debuggers.​ > > > > In general, it wasn't too unusual for people from Pascal backgrounds to > do similar things, > ​When we did Magnolia & Tektronix the ex-Xerox/Alta guys lusted for Cedar/Mesa et al - and quickly discovered the Bournegol idea. ​ > > I shook my head/shrugged my shoulders, but it made them happy and they quickly wrote some pretty cool tools, like an ECAD system. > > Speaking as an ex-Xerox/Alto guy, there was some flow in the other direction as well. Cedar started with a Tenex CMD JSYS derived command line, like the Alto OS before it, but having received the True Word from V7 at Stanford, I wrote a Cedar shell with standard I/O and redirection and shell scripts. I don’t think I did pipes. It did become the standard command line interface. Then Warren Teitelman added DWIM to it which was highly entertaining at times. -L -------------- next part -------------- An HTML attachment was scrubbed... URL: From clemc at ccc.com Wed Jan 11 05:21:46 2017 From: clemc at ccc.com (Clem cole) Date: Tue, 10 Jan 2017 11:21:46 -0800 Subject: [TUHS] the guy who brought up SVr4 on Sun machines In-Reply-To: <20170110184203.GS8099@mcvoy.com> References: <79091EE2-D7F8-4BE2-9422-47C365780367@berwynlodge.com> <587509e1.gGhkbfCz1YmUYkqT%schily@schily.net> <20170110182853.GR8099@mcvoy.com> <20170110184203.GS8099@mcvoy.com> Message-ID: <1C99BA5E-9D2B-472B-AABB-EEB47242E969@ccc.com> Correct- that was the path as I know it. i.e. ITS gave Unix more and job control. Which is my point. When some one starts pontificating about how SYS/BSD/Linux this that or the other thing - often the idea came elsewhere. Wide distribution and use was supplied by the XXX Channel but there were many many fathers and mothers Sent from my PDP-7 Running UNIX V0 expect things to be almost but not quite. 
> On Jan 10, 2017, at 10:42 AM, Larry McVoy wrote: > >> On Tue, Jan 10, 2017 at 10:33:59AM -0800, Warner Losh wrote: >>> On Tue, Jan 10, 2017 at 10:28 AM, Larry McVoy wrote: >>>> On Tue, Jan 10, 2017 at 09:47:28AM -0800, Warner Losh wrote: >>>>> On Tue, Jan 10, 2017 at 8:20 AM, Joerg Schilling wrote: >>>>> Berny Goodheart wrote: >>>>> >>>>>> Here’s the breakdown of SVR4 kernel lineage as I recall it. I am pretty sure this is correct. But I am sure many of you will put me right if I am wrong ;) >>>>>> >>>>>> From BSD: >>>>>> TCP/IP <=== NO, Svr4 uses a STREAMS based TCP/IP stack >>>> >>>> svr4's stack is derived from BSD with a STREAMS packaging. These files >>>> were listed as "in AT&T's code w/o BSD headers" in the countersuit for >>>> the infamous AT&T lawsuit. >>> >>> Yeah, I think Convergent did the STREAMS packaging, then Lachman bought >>> the stack, I ported it twice (ETA & SCO), then I believe it was Bill >>> Coleman (not positive on the name, it was the VP of networking) at Sun >>> that bought rights to the stack from Lachman under pretty unfavorable >>> terms, then Sun got unhappy with the terms (and the performance), >>> contracted with Mentat to do a new stack and I think that stack is what >>> remains in Solaris. >> >> I did some work on the Lachman stack for sysvr4 machines at Wollongong >> in 89 or so as well... It was very BSDish code that had been involved >> in a horrific traffic accident and rebuilt in a STREAMS framework. I'm >> not at all surprised that it didn't scale, because at the time it >> barely worked... > > Yup, been there, lived that. Until Mentat came along it was the only game > in town. I don't normally tell people I'm the guy that gave SCO networking > because it "barely worked" as you say. > > I did get SCO to ship sw (STREAMS watch) that was sort of like a top for > STREAMS - it was useful to run this while beating on the stack and then > go tune the internal limits for better performance.
I can't imagine > anyone wants this any more, or if it even runs, but it's my copyright > and I stuck a copy in http://mcvoy.com/lm/sw.shar > From clemc at ccc.com Wed Jan 11 05:41:39 2017 From: clemc at ccc.com (Clem cole) Date: Tue, 10 Jan 2017 11:41:39 -0800 Subject: [TUHS] the guy who brought up SVr4 on Sun machines In-Reply-To: <1C99BA5E-9D2B-472B-AABB-EEB47242E969@ccc.com> References: <79091EE2-D7F8-4BE2-9422-47C365780367@berwynlodge.com> <587509e1.gGhkbfCz1YmUYkqT%schily@schily.net> <20170110182853.GR8099@mcvoy.com> <20170110184203.GS8099@mcvoy.com> <1C99BA5E-9D2B-472B-AABB-EEB47242E969@ccc.com> Message-ID: Looks like Apple's mail screwed up the reply. That was supposed to be directed at Chet's comment about noting Jim Kulp's great work implementing ITS style job control that Joy took into BSD. Sent from my PDP-7 Running UNIX V0 expect things to be almost but not quite. > On Jan 10, 2017, at 11:21 AM, Clem cole wrote: > > Correct- that was the path as I know it. i.e. ITS gave Unix more and job control. > > > Which is my point. When some one starts pontificating about how SYS/BSD/Linux this that or the other thing - often the idea came elsewhere. Wide distribution and use was supplied by the XXX Channel but there were many many fathers and mothers > > Sent from my PDP-7 Running UNIX V0 expect things to be almost but not quite. > >>> On Jan 10, 2017, at 10:42 AM, Larry McVoy wrote: >>> >>>> On Tue, Jan 10, 2017 at 10:33:59AM -0800, Warner Losh wrote: >>>>> On Tue, Jan 10, 2017 at 10:28 AM, Larry McVoy wrote: >>>>>> On Tue, Jan 10, 2017 at 09:47:28AM -0800, Warner Losh wrote: >>>>>> On Tue, Jan 10, 2017 at 8:20 AM, Joerg Schilling wrote: >>>>>> Berny Goodheart wrote: >>>>>> >>>>>>> Here’s the breakdown of SVR4 kernel lineage as I recall it. I am pretty sure this is correct.
But I am sure many of you will put me right if I am wrong ;) >>>>>>> >>>>>>> From BSD: >>>>>>> TCP/IP <=== NO, Svr4 uses a STREAMS based TCP/IP stack >>>>> >>>>> svr4's stack is derived from BSD with a STREAMS packaging. These files >>>>> were listed as "in AT&T's code w/o BSD headers" in the countersuit for >>>>> the infamous AT&T lawsuit. >>>> >>>> Yeah, I think Convergent did the STREAMS packaging, then Lachman bought >>>> the stack, I ported it twice (ETA & SCO), then I believe it was Bill >>>> Coleman (not positive on the name, it was the VP of networking) at Sun >>>> that bought rights to the stack from Lachman under pretty unfavorable >>>> terms, then Sun got unhappy with the terms (and the performance), >>>> contracted with Mentat to do a new stack and I think that stack is what >>>> remains in Solaris. >>> >>> I did some work on the Lachman stack for sysvr4 machines at Wollongong >>> in 89 or so as well... It was very BSDish code that had been involved >>> in a horrific traffic accident and rebuilt in a STREAMS framework. I'm >>> not at all surprised that it didn't scale, because at the time it >>> barely worked... >> >> Yup, been there, lived that. Until Mentat came along it was the only game >> in town. I don't normally tell people I'm the guy that gave SCO networking >> because it "barely worked" as you say. >> >> I did get SCO to ship sw (STREAMS watch) that was sort of like a top for >> STREAMS - it was useful to run this while beating on the stack and then >> go tune the internal limits for better performance. 
I can't imagine >> anyone wants this any more, or if it even runs, but it's my copyright >> and I stuck a copy in http://mcvoy.com/lm/sw.shar >> From doug at cs.dartmouth.edu Wed Jan 11 06:33:57 2017 From: doug at cs.dartmouth.edu (Doug McIlroy) Date: Tue, 10 Jan 2017 15:33:57 -0500 Subject: [TUHS] NFS aka the guy who brought up SVr4 on Sun machines Message-ID: <201701102033.v0AKXvrc018898@coolidge.cs.Dartmouth.EDU> Reputed origins of SVR4: > From SunOS: > ... > NFS And, sadly, NFS is still with us, having somehow upstaged Peter Weinberger's RFS (R for remote) that appeared at the same time. NFS allows one to add computers to a file system, but not to combine the file systems of multiple computers, as RFS did by mapping uids: NFS:RFS::LAN:WAN. Doug From doug at cs.dartmouth.edu Wed Jan 11 06:36:05 2017 From: doug at cs.dartmouth.edu (Doug McIlroy) Date: Tue, 10 Jan 2017 15:36:05 -0500 Subject: [TUHS] /proc aka the guy who brought up SVr4 on Sun machines Message-ID: <201701102036.v0AKa58R018944@coolidge.cs.Dartmouth.EDU> > /proc came from research Indeed it did. > /proc was done by Roger at AT&T (maybe USL). I recall him telling me that > he was not the original author though and that it came from PWB. Roger Faulkner's /proc article, recently cited in tuhs, begins with acknowledgment to Tom Killian, who originated /proc in research. (That was Tom's spectacular debut when he switched from high-energy physics, at Argonne IIRC, to CS at Bell Labs.) doug From lm at mcvoy.com Wed Jan 11 06:41:19 2017 From: lm at mcvoy.com (Larry McVoy) Date: Tue, 10 Jan 2017 12:41:19 -0800 Subject: [TUHS] /proc aka the guy who brought up SVr4 on Sun machines In-Reply-To: <201701102036.v0AKa58R018944@coolidge.cs.Dartmouth.EDU> References: <201701102036.v0AKa58R018944@coolidge.cs.Dartmouth.EDU> Message-ID: <20170110204119.GF24126@mcvoy.com> On Tue, Jan 10, 2017 at 03:36:05PM -0500, Doug McIlroy wrote: > > /proc came from research > > Indeed it did. 
> > > /proc was done by Roger at AT&T (maybe USL). I recall him telling me that > > he was not the original author though and that it came from PWB. > > > Roger Faulkner's /proc article, recently cited in tuhs, begins with > acknowledgment to Tom Killian, who originated /proc in research. > (That was Tom's spectacular debut when he switched from high-energy > physics, at Argonne IIRC, to CS at Bell Labs.) Didn't Ron Grimes have something to do with it as well? -- --- Larry McVoy lm at mcvoy.com http://www.mcvoy.com/lm From lars at nocrew.org Wed Jan 11 06:52:04 2017 From: lars at nocrew.org (Lars Brinkhoff) Date: Tue, 10 Jan 2017 21:52:04 +0100 Subject: [TUHS] BSD job control, aka the guy, etc Message-ID: <861swaab9n.fsf@molnjunk.nocrew.org> I asked: > I wonder where the inspiration for the Unix job control came from? In > particular, I can't help but notice that Control-Z does something very > similar in the PDP-10 Incompatible Timesharing System. Jim Kulp answered: > The ITS capabilities were certainly part of the inspiration. It was a > combination of frustrations and gaps in UNIX with some of those > features found in ITS that resulted in the final package of features. From jnc at mercury.lcs.mit.edu Wed Jan 11 07:26:08 2017 From: jnc at mercury.lcs.mit.edu (Noel Chiappa) Date: Tue, 10 Jan 2017 16:26:08 -0500 (EST) Subject: [TUHS] the guy who brought up SVr4 on Sun machines Message-ID: <20170110212608.3166518C093@mercury.lcs.mit.edu> > From: Chet Ramey > /proc was done by Roger at AT&T (maybe USL). I recall him telling me > that he was not the original author though and that it came from PWB. > The original implementation was done by Tom Killian for 8th Edition. I wonder if >pdd (which dates to somewhere in the mid-60's, I'm too lazy to look the exact date up) was in any way any inspiration for /proc? 
Noel From schily at schily.net Wed Jan 11 07:40:03 2017 From: schily at schily.net (Joerg Schilling) Date: Tue, 10 Jan 2017 22:40:03 +0100 Subject: [TUHS] NFS aka the guy who brought up SVr4 on Sun machines In-Reply-To: <201701102033.v0AKXvrc018898@coolidge.cs.Dartmouth.EDU> References: <201701102033.v0AKXvrc018898@coolidge.cs.Dartmouth.EDU> Message-ID: <587554b3.6O+E9BGOgaxwufwc%schily@schily.net> Doug McIlroy wrote: > And, sadly, NFS is still with us, having somehow upstaged Peter > Weinberger's RFS (R for remote) that appeared at the same time. > NFS allows one to add computers to a file system, but not to > combine the file systems of multiple computers, as RFS did > by mapping uids: NFS:RFS::LAN:WAN. This changed long ago; NFSv4 no longer sends uids but user names and supports mappings. NFS won because it was not built on top of UNIX semantics and thus could be ported to other platforms. The nice idea in RFS was that it supported remote devices, but the ioctl handling was a problem in AT&T UNIX before SVr4, which added a flag to tell whether the data source was in kernel or userland. I am not sure whether RFS had a concept like XDR for ioctls. The funny thing: RFS was supported in SunOS4, but not in SunOS-5.
Jörg -- EMail:joerg at schily.net (home) Jörg Schilling D-13353 Berlin joerg.schilling at fokus.fraunhofer.de (work) Blog: http://schily.blogspot.com/ URL: http://cdrecord.org/private/ http://sourceforge.net/projects/schilytools/files/ From lm at mcvoy.com Wed Jan 11 07:43:55 2017 From: lm at mcvoy.com (Larry McVoy) Date: Tue, 10 Jan 2017 13:43:55 -0800 Subject: [TUHS] NFS aka the guy who brought up SVr4 on Sun machines In-Reply-To: <587554b3.6O+E9BGOgaxwufwc%schily@schily.net> References: <201701102033.v0AKXvrc018898@coolidge.cs.Dartmouth.EDU> <587554b3.6O+E9BGOgaxwufwc%schily@schily.net> Message-ID: <20170110214355.GM24126@mcvoy.com> On Tue, Jan 10, 2017 at 10:40:03PM +0100, Joerg Schilling wrote: > The nice idea in RFS was that it supported remote devices, but the ioctl > handling was a problem in AT&T UNIX before SVr4, which added a flag to tell > whether the data source was in kernel or userland. I am not sure whether RFS > had a concept like XDR for ioctls. I believe it did not. > The funny thing: RFS was supported in SunOS4, but not in SunOS-5. And Howard Chartok was ecstatic over that decision (he was my office mate and did the port into SunOS 4.x. Not one of his favorite projects.) From steve at quintile.net Wed Jan 11 10:56:52 2017 From: steve at quintile.net (Steve Simon) Date: Wed, 11 Jan 2017 00:56:52 +0000 Subject: [TUHS] NFS aka the guy who brought up SVr4 on Sun machines In-Reply-To: <20170110214355.GM24126@mcvoy.com> References: <201701102033.v0AKXvrc018898@coolidge.cs.Dartmouth.EDU> <587554b3.6O+E9BGOgaxwufwc%schily@schily.net> <20170110214355.GM24126@mcvoy.com> Message-ID: Beware of confusion. There is the 8th and 9th edition remote file protocol (I have papers somewhere I think), by Weinberger. This evolved into 9p, Plan9's file protocol. There is also RFS, I think a USG package for SYSVr3. The paper I have about this is by Author L Sabsevitz, though I don't know if he was the author of the code, or just the paper.
They are rather different beasts with similar names. -Steve > On 10 Jan 2017, at 21:43, Larry McVoy wrote: > > On Tue, Jan 10, 2017 at 10:40:03PM +0100, Joerg Schilling wrote: >> The nice idea in RFS was that it supported remote devices, but the ioctl >> handling was a problem in AT&T UNIX before SVr4, which added a flag to tell >> whether the data source was in kernel or userland. I am not sure whether RFS >> had a concept like XDR for ioctls. > > I believe it did not. > >> The funny thing: RFS was supported in SunOS4, but not in SunOS-5. > > And Howard Chartok was ecstatic over that decision (he was my office > mate and did the port into SunOS 4.x. Not one of his favorite projects.) From rmswierczek at gmail.com Wed Jan 11 12:33:03 2017 From: rmswierczek at gmail.com (Robert Swierczek) Date: Tue, 10 Jan 2017 21:33:03 -0500 Subject: [TUHS] Questions for TUHS great minds Message-ID: Not so long ago I joked about putting a Cray-1 in a watch. Now that we are essentially living in the future, what audacious (but realistic) architectures can we imagine under our desks in 25 years? Perhaps a mesh of ten million of today's highest-end CPU/GPU pairs bathing in a vast sea of non-volatile memory? What new abstractions are needed in the OS to handle that? Obviously many of the current command line tools would need rethinking (ps -ef for instance.) Or does the idea of a single OS disintegrate into a fractal cloud of zero-cost VM's? What would a meta-OS need to manage that? Would we still recognize it as a Unix? -------------- next part -------------- An HTML attachment was scrubbed... URL: From steve at quintile.net Wed Jan 11 12:16:26 2017 From: steve at quintile.net (Steve Simon) Date: Wed, 11 Jan 2017 02:16:26 +0000 Subject: [TUHS] Rje / sna networking Message-ID: <05DECAED-065A-4520-805A-D86CF1596A01@quintile.net> Casual interest, Anyone ever used RJE from SYS-III - IBM mainframe remote job entry System?
I started on Edition 7 on an interdata so I am (pretty much) too young for that era, unless I am fooling myself. -Steve From mah at mhorton.net Wed Jan 11 13:17:20 2017 From: mah at mhorton.net (Mary Ann Horton) Date: Tue, 10 Jan 2017 19:17:20 -0800 Subject: [TUHS] NFS aka the guy who brought up SVr4 on Sun machines In-Reply-To: References: <201701102033.v0AKXvrc018898@coolidge.cs.Dartmouth.EDU> <587554b3.6O+E9BGOgaxwufwc%schily@schily.net> <20170110214355.GM24126@mcvoy.com> Message-ID: <9f7dc170-ba5b-c503-aed8-55855e494a8f@mhorton.net> As I recall, RFS was implemented over virtual circuits, whereas NFS was over datagrams (UDP). RFS was well suited to Datakit, which only did virtual circuits, and they often were used together inside Bell Labs. One of the reasons NFS won is that IP won over Datakit. On 01/10/2017 04:56 PM, Steve Simon wrote: > There is also RFS, I think a USG package for SYSVr3. The paper I have > About this is by Author L Sabsevitz, though I don’t know if he was the author > of the code, or just the paper. > > From mah at mhorton.net Wed Jan 11 13:19:44 2017 From: mah at mhorton.net (Mary Ann Horton) Date: Tue, 10 Jan 2017 19:19:44 -0800 Subject: [TUHS] BSD job control, aka the guy, etc In-Reply-To: <861swaab9n.fsf@molnjunk.nocrew.org> References: <861swaab9n.fsf@molnjunk.nocrew.org> Message-ID: <7dc555cf-b018-1599-d685-7011590c9c10@mhorton.net> Yes, ITS and Tenex/TOPS were the inspiration for UNIX ^Z. When I was at Berkeley, of course we didn't have windowing environments, we had dumb terminals, and it was nice to be able to interrupt a long-running job for a shell command. I recall asking for it, and I think Bill put it into csh. On 01/10/2017 12:52 PM, Lars Brinkhoff wrote: > I asked: >> I wonder where the inspiration for the Unix job control came from? In >> particular, I can't help but notice that Control-Z does something very >> similar in the PDP-10 Incompatible Timesharing System. 
> Jim Kulp answered: >> The ITS capabilities were certainly part of the inspiration. It was a >> combination of frustrations and gaps in UNIX with some of those >> features found in ITS that resulted in the final package of features. From lm at mcvoy.com Wed Jan 11 13:32:35 2017 From: lm at mcvoy.com (Larry McVoy) Date: Tue, 10 Jan 2017 19:32:35 -0800 Subject: [TUHS] NFS aka the guy who brought up SVr4 on Sun machines In-Reply-To: <9f7dc170-ba5b-c503-aed8-55855e494a8f@mhorton.net> References: <201701102033.v0AKXvrc018898@coolidge.cs.Dartmouth.EDU> <587554b3.6O+E9BGOgaxwufwc%schily@schily.net> <20170110214355.GM24126@mcvoy.com> <9f7dc170-ba5b-c503-aed8-55855e494a8f@mhorton.net> Message-ID: <20170111033235.GC3887@mcvoy.com> As a Sun guy it was obvious that NFS should win. Sun ran all of engineering on NFS, it actually worked and worked well. When I left Sun I found out that nobody did NFS anywhere near as well as Sun did it. At SGI NFS was a joke, we had senior engineers who said "don't trust that, use rcp" (looking at you Dave Olsen). That was weird to me, at Sun NFS just worked, as in you never thought about it not working. Everywhere else it was so-so. On Tue, Jan 10, 2017 at 07:17:20PM -0800, Mary Ann Horton wrote: > As I recall, RFS was implemented over virtual circuits, whereas NFS was over > datagrams (UDP). RFS was well suited to Datakit, which only did virtual > circuits, and they often were used together inside Bell Labs. One of the > reasons NFS won is that IP won over Datakit. > > > On 01/10/2017 04:56 PM, Steve Simon wrote: > >There is also RFS, I think a USG package for SYSVr3. The paper I have > >About this is by Author L Sabsevitz, though I don't know if he was the author > >of the code, or just the paper.
> > > > -- --- Larry McVoy lm at mcvoy.com http://www.mcvoy.com/lm From arnold at skeeve.com Wed Jan 11 13:40:00 2017 From: arnold at skeeve.com (arnold at skeeve.com) Date: Tue, 10 Jan 2017 20:40:00 -0700 Subject: [TUHS] NFS aka the guy who brought up SVr4 on Sun machines In-Reply-To: <9f7dc170-ba5b-c503-aed8-55855e494a8f@mhorton.net> References: <201701102033.v0AKXvrc018898@coolidge.cs.Dartmouth.EDU> <587554b3.6O+E9BGOgaxwufwc%schily@schily.net> <20170110214355.GM24126@mcvoy.com> <9f7dc170-ba5b-c503-aed8-55855e494a8f@mhorton.net> Message-ID: <201701110340.v0B3e0Wt007097@freefriends.org> Mary Ann Horton wrote: > As I recall, RFS was implemented over virtual circuits, whereas NFS was > over datagrams (UDP). RFS was well suited to Datakit, which only did > virtual circuits, and they often were used together inside Bell Labs. > One of the reasons NFS won is that IP won over Datakit. I think another reason is that AT&T got a lot more, er, "difficult" about its licensing come SVR3, which introduced RFS. Many of the major UNIX vendors (IBM, DEC, HP) didn't bother to license it. As most of them already had NFS, it wasn't worth the trouble. SunOS 4.0 had RFS. I think early versions of 4.1 did, but I'm pretty sure that by 4.1.3 SunOS had removed it. Arnold From brantleycoile at me.com Wed Jan 11 16:00:36 2017 From: brantleycoile at me.com (Brantley Coile) Date: Wed, 11 Jan 2017 01:00:36 -0500 Subject: [TUHS] Unix stories, Stephen Bourne and IF-FI in C code In-Reply-To: <934ec8dea21ce728b7c4e70a6ee2deb86af39d27@webmail.yaccman.com> References: <934ec8dea21ce728b7c4e70a6ee2deb86af39d27@webmail.yaccman.com> Message-ID: <2FF70709-B205-4A26-BFD2-04EA0E18E662@me.com> I asked Ken and Steve about this yesterday. Ken remembers the request to rename od(1) but not who asked. Steve remembers vaguely asking but suspects he just used do - od and found out the hard way he needed to change it to done. 
Neither remembers the episode very well so it must not have been a big deal to them at the time. Brantley > On Jan 8, 2017, at 10:31 PM, Steve Johnson wrote: > > I wasn't directly involved in this, but I do remember Dennis telling me essentially the same story. I don't recall him mentioning Ken's name, just that "we couldn't use od because that was already taken". > > Steve B and I had adjacent offices, so I overheard a lot of the discussions about the Bourne shell. The quoting mechanisms, in particular, got a lot of attention, I think to good end. There was a lot more thought there than is evident from the surface... > > Steve (not Bourne) > > ----- Original Message ----- > From: > "Norman Wilson" > > To: > > Cc: > > Sent: > Sun, 08 Jan 2017 21:30:03 -0500 > Subject: > Re: [TUHS] Unix stories, Stephen Bourne and IF-FI in C code > > > Doug McIlroy: > > There was some pushback which resulted in the strange compromise > of if-fi, case-esac, do-done. Alas, the details have slipped from > memory. Help, scj? > > ==== > > do-od would have required renaming the long-tenured od(1). > > I remember a tale--possibly chat in the UNIX Room at one point in > the latter 1980s--that Steve tried and tried and tried to convince > Ken to rename od, in the name of symmetry and elegance. Ken simply > said no, as many times as it took. I don't remember who I heard this > from; anyone still in touch with Ken who can ask him? > > Norman Wilson > Toronto ON -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From ron at ronnatalie.com Wed Jan 11 20:29:38 2017 From: ron at ronnatalie.com (Ron Natalie) Date: Wed, 11 Jan 2017 05:29:38 -0500 Subject: [TUHS] Rje / sna networking In-Reply-To: <05DECAED-065A-4520-805A-D86CF1596A01@quintile.net> References: <05DECAED-065A-4520-805A-D86CF1596A01@quintile.net> Message-ID: <016901d26bf5$9d03fed0$d70bfc70$@ronnatalie.com> Not RJE->IBM, but one of my first "outside the University" UNIX jobs was setting up a PWB system to allow us to use SCCS and the like on software we were developing for a target system of RSX-11M as well as a few 8080 based communications processors. In addition to SCCS, we also used nroff for documentation and I wrote (after not being able to coax a copy out of Dennis Mumaugh at the NSA) a version of the -mm macro package that did classification marking and a version of the print spooler that looked for classification marks in the source code and marked the output appropriately.. -----Original Message----- From: TUHS [mailto:tuhs-bounces at minnie.tuhs.org] On Behalf Of Steve Simon Sent: Tuesday, January 10, 2017 9:16 PM To: tuhs at tuhs.org Subject: [TUHS] Rje / sna networking Casual interest, Anyone ever used RJE from SYS-III - IBM mainframe remote job entry System? I started on Edition 7 on an interdata so I am (pretty much) too young for that era, unless I am fooling myself. -Steve From lars at nocrew.org Wed Jan 11 22:37:34 2017 From: lars at nocrew.org (Lars Brinkhoff) Date: Wed, 11 Jan 2017 13:37:34 +0100 Subject: [TUHS] BSD job control, aka the guy, etc In-Reply-To: <7dc555cf-b018-1599-d685-7011590c9c10@mhorton.net> (Mary Ann Horton's message of "Tue, 10 Jan 2017 19:19:44 -0800") References: <861swaab9n.fsf@molnjunk.nocrew.org> <7dc555cf-b018-1599-d685-7011590c9c10@mhorton.net> Message-ID: <868tqh93ht.fsf@molnjunk.nocrew.org> Mary Ann Horton wrote: > Jim Kulp wrote: >> Lars Brinkhoff wrote: >>> I wonder where the inspiration for the Unix job control came from? 
In >>> particular, I can't help but notice that Control-Z does something very >>> similar in the PDP-10 Incompatible Timesharing System. >> >> The ITS capabilities were certainly part of the inspiration. It was a >> combination of frustrations and gaps in UNIX with some of those >> features found in ITS that resulted in the final package of features. > > Yes, ITS and Tenex/TOPS were the inspiration for UNIX ^Z. Not so much TOPS-20, according to Jim Kulp. From doug at cs.dartmouth.edu Wed Jan 11 23:41:46 2017 From: doug at cs.dartmouth.edu (Doug McIlroy) Date: Wed, 11 Jan 2017 08:41:46 -0500 Subject: [TUHS] /proc aka the guy who brought up SVr4 on Sun machines In-Reply-To: <20170110204119.GF24126@mcvoy.com> References: <201701102036.v0AKa58R018944@coolidge.cs.Dartmouth.EDU> <20170110204119.GF24126@mcvoy.com> Message-ID: <201701111341.v0BDfkwD028523@coolidge.cs.Dartmouth.EDU> > Didn't Ron Grimes have something to do with it as well? Ron Gomes. Yes, he was coauthor with Roger. Killian's original implementation was solo. Doug ---------------------------------------------------------- From rudi.j.blom at gmail.com Thu Jan 12 00:58:27 2017 From: rudi.j.blom at gmail.com (Rudi Blom) Date: Wed, 11 Jan 2017 21:58:27 +0700 Subject: [TUHS] Rje / sna networking Message-ID: >Date: Wed, 11 Jan 2017 02:16:26 +0000 >From: Steve Simon >To: tuhs at tuhs.org >Subject: [TUHS] Rje / sna networking >Message-ID: <05DECAED-065A-4520-805A-D86CF1596A01 at quintile.net> >Content-Type: text/plain; charset=us-ascii >Casual interest, >Anyone ever used RJE from SYS-III - IBM mainframe remote job entry >System? I started on Edition 7 on an interdata so I am (pretty much) too young >for that era, unless I am fooling myself. >-Steve In the 90s DEC in Europe had a number of products on top of SCO UNIX 3.2V4.2 called DECadvantage (from the German part of former Philips Information Systems). Included were an SNA environment with 3270 display/print, 3770/RJE, and APPC.
I've used RJE for downloading daily reports in one of the banks here in Thailand. Long time ago though. Still have various sample scripts I put together that time. From ron at ronnatalie.com Thu Jan 12 02:25:48 2017 From: ron at ronnatalie.com (Ron Natalie) Date: Wed, 11 Jan 2017 11:25:48 -0500 Subject: [TUHS] Questions for TUHS great minds In-Reply-To: References: Message-ID: <018801d26c27$5ea34b50$1be9e1f0$@ronnatalie.com> I give up on prediction. I never thought UNIX would have lasted this long. Personal computing bounces back and forth between local devices and remote services every decade or so. I will offer a few interesting observations. Back when I was still at BRL (somewhere around 1983 probably), one of my coworkers walked in and said that in a few years they'd be able to give me a computer as powerful as a VAX, and it will sit on my desk, and I’ll have exclusive use of it and be happy. I pointed out that this was unlikely (the being happy part), as my expectations would increase as time goes on. Ron’s Rule Of Computing: “I need a computer ‘this’ big” (which is accompanied by holding my arms out about the width of a VAX 780 CPU cabinet. Hanging up on the wall in one of the machine rooms I administered was a sign comparing the computer that had sat there (the ENIAC) to a then HP 65 calculator. The HP 65 would have been an incredible tool to the ENIAC guys, but now It seems way dated. A lot of computer discussion mentioned what would happen if a hacker had access to a CRAY computer. Could he perhaps brute force the crypt in the UNIX password file? Oddly, when BRL got their first Cray (an X/MP preempted from Apple’s delivery slot), I was given pretty much as much time as I wanted to try to vectorize the crypt routine. It wasn’t particularly easy, and we had other parallel processors that were easier to program that were doing a better job at the hack for less money. I actually signed for the BRL Cray 2 but it didn’t get installed until after I left. 
Ron’s Rule of Software Deployment: “Stop making cutovers on major holidays.” The dang government kept doing things like the long leader conversion on the Arpanet and the TCP/IP changeover on Jan 1. It drove me nuts that our entire group ended up working over the holidays to bring the new systems up. At one point when they were rejiggering the USENET groups, the proposal was to do it on Labor Day weekend. I pointed out (this was the Atlanta UUG) that it violated the above rule AND was particularly bad because many of the USENET system admins were going to be back in Atlanta for the World Science Fiction Convention that weekend. And finally, Ron’s Rule of Electrical Engineering: “If two things can be plugged into each other, some fool will do so. You better make it work, or at least benignly fail when it happens.” Somewhere I have a cord that one of my employees made me which has a 110V plug on one side and an RJ-11 on the other (fortunately it is non functional). -------------- next part -------------- An HTML attachment was scrubbed... URL: From dugo at xs4all.nl Thu Jan 12 02:50:06 2017 From: dugo at xs4all.nl (Jacob Goense) Date: Wed, 11 Jan 2017 17:50:06 +0100 Subject: [TUHS] Questions for TUHS great minds In-Reply-To: <018801d26c27$5ea34b50$1be9e1f0$@ronnatalie.com> References: <018801d26c27$5ea34b50$1be9e1f0$@ronnatalie.com> Message-ID: <608e82b3a875c499f5e63e776886337a@xs4all.nl> On 2017-01-11 17:25, Ron Natalie wrote: > Somewhere I have an etherkiller (unfortunately it is non functional). FTFY From corey at lod.com Thu Jan 12 03:01:12 2017 From: corey at lod.com (Corey Lindsly) Date: Wed, 11 Jan 2017 09:01:12 -0800 (PST) Subject: [TUHS] Questions for TUHS great minds In-Reply-To: <608e82b3a875c499f5e63e776886337a@xs4all.nl> Message-ID: <20170111170112.6D81840FC@lod.com> > > On 2017-01-11 17:25, Ron Natalie wrote: > > Somewhere I have an etherkiller (unfortunately it is non functional). > > FTFY > I don't think so. RJ-11? 
More like a telephone killer, or home firestarter. --corey From rochkind at basepath.com Thu Jan 12 03:54:14 2017 From: rochkind at basepath.com (Marc Rochkind) Date: Wed, 11 Jan 2017 10:54:14 -0700 Subject: [TUHS] Questions for TUHS great minds In-Reply-To: <20170111170112.6D81840FC@lod.com> References: <608e82b3a875c499f5e63e776886337a@xs4all.nl> <20170111170112.6D81840FC@lod.com> Message-ID: A couple of answers: "Or does the idea of a single OS disintegrate into a fractal cloud of zero-cost VM's?" I would say zero-cost computing. Whether that occurs at VMs or even on what we would recognize as a computer seems too limiting. I think all that matters is that the program be elaborated. (There, I used a term from the Algol 68 report!) "Would we still recognize it as a Unix?" Not sure what "it" refers to, but I'm sure that any and all things UNIX-like would be programs that could be run. I imagine that clever marketeers will design a box that can appear to run programs (they may or may not actually run on anything contained in the box) and then call it a "computer", for those who still care. It could have flashing lights, even. And, as it is nearly empty, it could range in size from a watch (or smaller) to a big desktop box. In today's terminology, what I see is that programs will run in the cloud. Programs I think are of eternal importance. How they are executed will become irrelevant. Somewhere in that cloud are actual computers, of course. How they work I'm sure will change drastically, as it has fairly often, from the beginning. --Marc On Wed, Jan 11, 2017 at 10:01 AM, Corey Lindsly wrote: > > > > On 2017-01-11 17:25, Ron Natalie wrote: > > > Somewhere I have an etherkiller (unfortunately it is non functional). > > > > FTFY > > > > I don't think so. RJ-11? More like a telephone killer, or home > firestarter. > > --corey > > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From jnc at mercury.lcs.mit.edu Thu Jan 12 04:07:21 2017 From: jnc at mercury.lcs.mit.edu (Noel Chiappa) Date: Wed, 11 Jan 2017 13:07:21 -0500 (EST) Subject: [TUHS] the guy who brought up SVr4 on Sun machines Message-ID: <20170111180721.0FBB018C08B@mercury.lcs.mit.edu> > I wonder if >pdd ... was in any way any inspiration for /proc? That may have been a bit too cryptic. "pdd" ('process directory directory') was a top-level directory in the Multics filesystem which contained a directory for each process active in the system; each directory contained data (in segments - roughly, 'files', but Multics didn't have files because it was a single-level store system) associated with the process, such as its kernel- and user-mode (effectively - technically, ring-0 and ring-4) stacks, etc. So if a process was sitting in a system call, you could go into the right directory in >pdd and look at its kernel stack and see the sequence of procedure calls (with arguments) that had led it to the point where it blocked. Etc, etc. Noel From scj at yaccman.com Thu Jan 12 04:34:29 2017 From: scj at yaccman.com (Steve Johnson) Date: Wed, 11 Jan 2017 10:34:29 -0800 Subject: [TUHS] Questions for TUHS great minds In-Reply-To: Message-ID: <99f1301695eb38762765b91bff57b0486bc71af6@webmail.yaccman.com> >>  Or does the idea of a single OS disintegrate into a fractal cloud of zero-cost VM's?  What would a meta-OS need to manage that?  Would we still recognize it as a Unix? This may be off topic, but the aim of studying history is to avoid the mistakes of the past.  And part of that is being honest about where we are now... IMHO, hardware has left software in the dust.  I figured out that if cars had evolved since 1970 at the same rate as computer memory, we could now buy 1,000 Tesla Model S's for a penny, and each would have a top speed of 60,000 MPH.  This is roughly a factor of a trillion in less than 50 years. 
For quite a while, hardware (Moore's law) was giving software cover--the hardware speed and capacity was exceeding the rate of bloat in software.  For the last decade, that's stopped happening (the hardware speed, not the software bloat!)   What hardware is now giving us is many many more of the same thing rather than the same thing faster.  To fully exploit the hardware, the old model of telling a processor "do this, then do this, then do this" simply doesn't scale.  Things like multicore look to me like a desperate attempt to hold onto an outmoded model of computing in the face of a radically different hardware landscape. The company I'm working for now, Wave Computing, is building a chip with 16,000 8-bit processors on a chip.  These processors know how to snuggle up together and do 16-, 32-, and 64-bit arithmetic.  The chip is intended to be part of systems with as many as a quarter million processors, with machine learning being one possible target.  There are no global signals on the chip (e.g., no central clock).  (The hardware people aren't perfect--it's not yet sunk in that making a billion transistors operate fast while remaining in synch is ultimately an impossible goal as the line sizes get smaller). Chips like ours are not intended to be general purpose -- they act more like FPGA's. They allow tremendous resources to be focused on a single problem at a time.  And the focus to change quickly as desired.  They aren't good for everything.  But I do think they represent the current state of hardware pretty well, and the trends are strongly towards even more of the same.  The closest analogy of programming for the chip is microcode -- it's as if you have a programmable machine with hundreds or thousands of "instructions" that are far more powerful than traditional instructions, able to operate on structured data and do many arithmetic operations.  And you can make new instructions at will.  
The programming challenge is to wire these instructions together to get the desired effect. This may not be the only path to the future, and it may fail to survive or be crowded out by other paths.  But the path we have been on for the last 50 years is a dead end, and the quicker we wise up and grab the future, the better... Steve PS: another way to visualize the hardware progress:  If we punched out a petabyte of data onto punched cards, the card deck height would be 6 times the distance to the moon!  Imagine the rubber band.... -------------- next part -------------- An HTML attachment was scrubbed... URL: From charles.unix.pro at gmail.com Thu Jan 12 05:37:18 2017 From: charles.unix.pro at gmail.com (Charles Anthony) Date: Wed, 11 Jan 2017 11:37:18 -0800 Subject: [TUHS] the guy who brought up SVr4 on Sun machines In-Reply-To: <20170111180721.0FBB018C08B@mercury.lcs.mit.edu> References: <20170111180721.0FBB018C08B@mercury.lcs.mit.edu> Message-ID: On Wed, Jan 11, 2017 at 10:07 AM, Noel Chiappa wrote: > > I wonder if >pdd ... was in any way any inspiration for /proc? > > That may have been a bit too cryptic. "pdd" ('process directory directory') > was a top-level directory in the Multics filesystem which contained a > directory for each process active in the system; each directory contained > data > (in segments - roughly, 'files', but Multics didn't have files because it > was > a single-level store system) associated with the process, such as its > kernel- > and user-mode (effectively - technically, ring-0 and ring-4) stacks, etc. > > So if a process was sitting in a system call, you could go into the right > directory in >pdd and look at its kernel stack and see the sequence of > procedure calls (with arguments) that had led it to the point where it > blocked. Etc, etc. > > 'pdd' also contained temporary segments, ala mktemp: r 11:33 0.092 1 cwd [pd] r 11:33 0.086 3 ls Segments = 21, Lengths = 0. 
0 !BBBKLDJkqPKWqL.area.linker 0 stack_1 r w 0 archive_temp_.archive rew 0 !BBBKLDJkqGGBMh.temp.0346 rew 0 !BBBKLDJkqGFKDc.temp.0345 rew 0 !BBBKLDJkqGDWMn.temp.0344 rew 0 !BBBKLDJkqGCfXX.temp.0343 rew 0 !BBBKLDJkqGBpDB.temp.0342 rew 0 !BBBKLDJkqGBCwg.temp.0341 rew 0 !BBBKLDJkqFzFDz.temp.0340 rew 0 !BBBKLDJkqFxMcW.temp.0337 rew 0 !BBBKLDJkpmxKqH.temp.0332 rew 0 !BBBKLDJkpmwMfz.temp.0331 r w 0 process_search_segment_.4 rew 0 !BBBKLDJknDXFNp.temp.0304 rew 0 !BBBKLDJknCfjfK.area.linker rew 0 stack_4 re 0 pit 0 pds 0 kst 0 dseg r 11:33 0.214 0 -- Charles -------------- next part -------------- An HTML attachment was scrubbed... URL: From steffen at sdaoden.eu Thu Jan 12 05:46:50 2017 From: steffen at sdaoden.eu (Steffen Nurpmeso) Date: Wed, 11 Jan 2017 20:46:50 +0100 Subject: [TUHS] Questions for TUHS great minds In-Reply-To: <99f1301695eb38762765b91bff57b0486bc71af6@webmail.yaccman.com> References: <99f1301695eb38762765b91bff57b0486bc71af6@webmail.yaccman.com> Message-ID: <20170111194650.pIbRmaw3g%steffen@sdaoden.eu> Hello. "Steve Johnson" wrote: |>>  Or does the idea of a single OS disintegrate into a fractal cloud \ |>>of zero-cost VM's?  What would a meta-OS need to manage that?  Would \ |>>we still |recognize it as a Unix? | |This may be off topic, but the aim of studying history is to avoid \ |the mistakes of the past.  And part of that is being honest about where \ |we are now... | |IMHO, hardware has left software in the dust.  I figured out that if \ |cars had evolved since 1970 at the same rate as computer memory, we \ |could now buy 1, |000 Tesla Model S's for a penny, and each would have a top speed of \ |60,000 MPH.  This is roughly a factor of a trillion in less than 50 years. I am even more off-topic, and of course this was only an example. But this reference sounds so positive, yet this really is no forward technology that you quote, touring along several hundred kilogram of batteries that is. 
Already at the end of the eighties i think (the usual ")everybody(") knew that fuel cells are the future. It is true that i have said in a local auditorium in 1993 that i wished with 18 everybody would get an underfloor with fuel cells in the sandwich that it is, and four wheel hub motors, and a minimalistic structure that one may replace at will. It was already possible back then (but for superior tightness of the tank), just like, for example, selective cylinder deactivation, diesel soot filter, diesel NOx reduction cat ("urea injection"). (Trying to clean Diesel in the uncertain conditions that multi-million engines at different heights and climate are in is megalomaniacal, in my opinion. And that already back then.) Unfortunately fuel cell development has never been politically pushed as much as desirable, and was mostly up to universities until at least about 2006, and in Germany, to the best of my knowledge. It may not be popular in the U.S. at the moment, but it is Toyota again, with the Mirai, who spends money due to responsibility. That is at least what i think. (And again it is the question whether a doubtful technology is spread millions and millions of times all over the place, or whether only some refineries have to be improved.) --steffen From dugo at xs4all.nl Thu Jan 12 06:32:45 2017 From: dugo at xs4all.nl (Jacob Goense) Date: Wed, 11 Jan 2017 15:32:45 -0500 Subject: [TUHS] Questions for TUHS great minds In-Reply-To: <20170111170112.6D81840FC@lod.com> References: <20170111170112.6D81840FC@lod.com> Message-ID: On 2017-01-11 12:01, corey at lod.com wrote: > I don't think so. RJ-11? More like a telephone killer, or home > firestarter. A fix for this obviously broken etherkiller would be UI. Doesn't a telephone killer need a Honda portable generator on the other end of that RJ-11 ;) ObOT: Considering it is now in the realm of possibilities to simulate a ~ 1986 internet under a desk and stuff a ~ 1994 geocities in a netbook..
I imagine running close to a billion x86 emulators under the desk while running a copy of pinterest and facebook on it. I'll probably need a larger pid_t and rewrite some shell scripts in Rust or what you have by then. From crossd at gmail.com Thu Jan 12 06:56:06 2017 From: crossd at gmail.com (Dan Cross) Date: Wed, 11 Jan 2017 15:56:06 -0500 Subject: [TUHS] the guy who brought up SVr4 on Sun machines In-Reply-To: <587509e1.gGhkbfCz1YmUYkqT%schily@schily.net> References: <79091EE2-D7F8-4BE2-9422-47C365780367@berwynlodge.com> <587509e1.gGhkbfCz1YmUYkqT%schily@schily.net> Message-ID: On Tue, Jan 10, 2017 at 11:20 AM, Joerg Schilling wrote: > Berny Goodheart wrote: > [snip] > > VFSSW <=== NO, this is from SunOS-4 > Surely Berny meant the file system switch here, which could have come from early system V, but originated in research Unix (8th edition?). Note that this list is very similar to that in the early part of his book on System V internals. - Dan C. -------------- next part -------------- An HTML attachment was scrubbed... URL: From crossd at gmail.com Thu Jan 12 07:03:23 2017 From: crossd at gmail.com (Dan Cross) Date: Wed, 11 Jan 2017 16:03:23 -0500 Subject: [TUHS] the guy who brought up SVr4 on Sun machines In-Reply-To: <1C99BA5E-9D2B-472B-AABB-EEB47242E969@ccc.com> References: <79091EE2-D7F8-4BE2-9422-47C365780367@berwynlodge.com> <587509e1.gGhkbfCz1YmUYkqT%schily@schily.net> <20170110182853.GR8099@mcvoy.com> <20170110184203.GS8099@mcvoy.com> <1C99BA5E-9D2B-472B-AABB-EEB47242E969@ccc.com> Message-ID: On Tue, Jan 10, 2017 at 2:21 PM, Clem cole wrote: > Correct- that was the path as I know it. i.e. ITS gave Unix more and job > control. > > > Which is my point. When some one starts pontificating about how > SYS/BSD/Linux this that or the other thing - often the idea came > elsewhere. 
Wide distribution and use was supplied by the XXX Channel but > there were many many fathers and mothers To that end, someone once suggested to me that the inspiration for the swapping implementation in early Unix came from MTS, but I found that dubious; the dates just don't line up. - Dan C. -------------- next part -------------- An HTML attachment was scrubbed... URL: From schily at schily.net Thu Jan 12 08:57:51 2017 From: schily at schily.net (Joerg Schilling) Date: Wed, 11 Jan 2017 23:57:51 +0100 Subject: [TUHS] the guy who brought up SVr4 on Sun machines In-Reply-To: References: <79091EE2-D7F8-4BE2-9422-47C365780367@berwynlodge.com> <587509e1.gGhkbfCz1YmUYkqT%schily@schily.net> Message-ID: <5876b86f.TZmh44N+Iwm79UKO%schily@schily.net> Dan Cross wrote: > On Tue, Jan 10, 2017 at 11:20 AM, Joerg Schilling wrote: > > > Berny Goodheart wrote: > > [snip] > > > VFSSW <=== NO, this is from SunOS-4 > > > > Surely Berny meant the file system switch here, which could have come from > early system V, but originated in research Unix (8th edition?). Note that > this list is very similar to that in the early part of his book on System V > internals. It is rather a part of the VFS interface that has first been completed with SunOS-3.0 in late 1985. There are small changes introduced into the VFS switch for SVr4 to permit mounting special filesystems without the need of root permissions. They cause the major differences between SunOS-4 and Svr4. VFS uses two interface parts:

- the VFS switch with the vfs interface functions to mount a filesystem and to stat it
- the VFS vnode interface with the vnode interface functions to hold the interfaces like open(), mmap(), ...
Jörg -- EMail:joerg at schily.net (home) Jörg Schilling D-13353 Berlin joerg.schilling at fokus.fraunhofer.de (work) Blog: http://schily.blogspot.com/ URL: http://cdrecord.org/private/ http://sourceforge.net/projects/schilytools/files/ From lm at mcvoy.com Thu Jan 12 09:06:03 2017 From: lm at mcvoy.com (Larry McVoy) Date: Wed, 11 Jan 2017 15:06:03 -0800 Subject: [TUHS] the guy who brought up SVr4 on Sun machines In-Reply-To: <5876b86f.TZmh44N+Iwm79UKO%schily@schily.net> References: <79091EE2-D7F8-4BE2-9422-47C365780367@berwynlodge.com> <587509e1.gGhkbfCz1YmUYkqT%schily@schily.net> <5876b86f.TZmh44N+Iwm79UKO%schily@schily.net> Message-ID: <20170111230603.GE5891@mcvoy.com> On Wed, Jan 11, 2017 at 11:57:51PM +0100, Joerg Schilling wrote: > Dan Cross wrote: > > > On Tue, Jan 10, 2017 at 11:20 AM, Joerg Schilling wrote: > > > > > Berny Goodheart wrote: > > > [snip] > > > > VFSSW <=== NO, this is from SunOS-4 > > > > > > > Surely Berny meant the file system switch here, which could have come from > > early system V, but originated in research Unix (8th edition?). Note that > > this list is very similar to that in the early part of his book on System V > > internals. > > It is rather a part of the VFS interface that has first been completed with > SunOS-3.0 in late 1985. I think you are once again confused. System Vr3 had something called the file system switch which is what Berny is talking about. SunOS had virtual file system layer (VFS) and that would be one of things ported to SVr4. The history on wikipedia matches my memory and contradicts much of what you've been claiming. For what it is worth, I didn't write a word of that wikipedia page so others remember much as I do. 
https://en.wikipedia.org/wiki/UNIX_System_V From schily at schily.net Thu Jan 12 09:52:22 2017 From: schily at schily.net (Joerg Schilling) Date: Thu, 12 Jan 2017 00:52:22 +0100 Subject: [TUHS] the guy who brought up SVr4 on Sun machines In-Reply-To: <20170111230603.GE5891@mcvoy.com> References: <79091EE2-D7F8-4BE2-9422-47C365780367@berwynlodge.com> <587509e1.gGhkbfCz1YmUYkqT%schily@schily.net> <5876b86f.TZmh44N+Iwm79UKO%schily@schily.net> <20170111230603.GE5891@mcvoy.com> Message-ID: <5876c536.+BQujxabzDX0djG8%schily@schily.net> Larry McVoy wrote: > > It is rather a part of the VFS interface that has first been completed with > > SunOS-3.0 in late 1985. > > I think you are once again confused. System Vr3 had something called the > file system switch which is what Berny is talking about. SunOS had > virtual file system layer (VFS) and that would be one of things ported > to SVr4. But that SVr3 beast is not in Svr4. Jörg -- EMail:joerg at schily.net (home) Jörg Schilling D-13353 Berlin joerg.schilling at fokus.fraunhofer.de (work) Blog: http://schily.blogspot.com/ URL: http://cdrecord.org/private/ http://sourceforge.net/projects/schilytools/files/ From lm at mcvoy.com Thu Jan 12 09:57:19 2017 From: lm at mcvoy.com (Larry McVoy) Date: Wed, 11 Jan 2017 15:57:19 -0800 Subject: [TUHS] the guy who brought up SVr4 on Sun machines In-Reply-To: <5876c536.+BQujxabzDX0djG8%schily@schily.net> References: <79091EE2-D7F8-4BE2-9422-47C365780367@berwynlodge.com> <587509e1.gGhkbfCz1YmUYkqT%schily@schily.net> <5876b86f.TZmh44N+Iwm79UKO%schily@schily.net> <20170111230603.GE5891@mcvoy.com> <5876c536.+BQujxabzDX0djG8%schily@schily.net> Message-ID: <20170111235719.GG5891@mcvoy.com> On Thu, Jan 12, 2017 at 12:52:22AM +0100, Joerg Schilling wrote: > Larry McVoy wrote: > > > > It is rather a part of the VFS interface that has first been completed with > > > SunOS-3.0 in late 1985. > > > > I think you are once again confused. 
System Vr3 had something called the > > file system switch which is what Berny is talking about. SunOS had > > virtual file system layer (VFS) and that would be one of things ported > > to SVr4. > > But that SVr3 beast is not in Svr4. Correct, it got dumped in favor of the VFS layer from SunOS, which I said already. My point is that you were telling Berny that the FSS wasn't from SV but it was. You should go read that wikipedia page, so far as I can tell it's pretty accurate and has a better version of the history than you do. I suspect you've fallen victim to various people claiming stuff that wasn't really accurate. Like Bill Joy being responsible for SVr4. If by "responsible" you mean he worked on the code, I'm sure that's not true. If by "responsible" you mean he talked to AT&T about it, sure, that's pretty likely. You made it sound like the former which just isn't plausible. For lots of reasons. From schily at schily.net Thu Jan 12 10:07:17 2017 From: schily at schily.net (Joerg Schilling) Date: Thu, 12 Jan 2017 01:07:17 +0100 Subject: [TUHS] the guy who brought up SVr4 on Sun machines In-Reply-To: <20170111235719.GG5891@mcvoy.com> References: <79091EE2-D7F8-4BE2-9422-47C365780367@berwynlodge.com> <587509e1.gGhkbfCz1YmUYkqT%schily@schily.net> <5876b86f.TZmh44N+Iwm79UKO%schily@schily.net> <20170111230603.GE5891@mcvoy.com> <5876c536.+BQujxabzDX0djG8%schily@schily.net> <20170111235719.GG5891@mcvoy.com> Message-ID: <5876c8b5.fCJls03bP6vdCbOj%schily@schily.net> Larry McVoy wrote: > > But that SVr3 beast is not in Svr4. > > Correct, it got dumped in favor of the VFS layer from SunOS, which I > said already. > > My point is that you were telling Berny that the FSS wasn't from > SV but it was. Please refresh your memory and have a look at the sources... The first of the two SunOS tables I am referring to is called "vfssw". I thought this was what he referred to. I thought he knew that what Svr3 used to have did not make it into Svr4.
And BTW: Wikipedia is of low quality and unusable when something may be related to religion. Check e.g. how long it took to name Konrad Zuse as the inventor of the first usable computer. When I look at Wikipedia, I check whether the references are trustworthy before I believe the claims. Jörg -- EMail:joerg at schily.net (home) Jörg Schilling D-13353 Berlin joerg.schilling at fokus.fraunhofer.de (work) Blog: http://schily.blogspot.com/ URL: http://cdrecord.org/private/ http://sourceforge.net/projects/schilytools/files/ From lm at mcvoy.com Thu Jan 12 11:58:23 2017 From: lm at mcvoy.com (Larry McVoy) Date: Wed, 11 Jan 2017 17:58:23 -0800 Subject: [TUHS] the guy who brought up SVr4 on Sun machines In-Reply-To: <5876c8b5.fCJls03bP6vdCbOj%schily@schily.net> References: <79091EE2-D7F8-4BE2-9422-47C365780367@berwynlodge.com> <587509e1.gGhkbfCz1YmUYkqT%schily@schily.net> <5876b86f.TZmh44N+Iwm79UKO%schily@schily.net> <20170111230603.GE5891@mcvoy.com> <5876c536.+BQujxabzDX0djG8%schily@schily.net> <20170111235719.GG5891@mcvoy.com> <5876c8b5.fCJls03bP6vdCbOj%schily@schily.net> Message-ID: <20170112015823.GG12163@mcvoy.com> On Thu, Jan 12, 2017 at 01:07:17AM +0100, Joerg Schilling wrote: > Larry McVoy wrote: > > > > But that SVr3 beast is not in Svr4. > > > > Correct, it got dumped in favor of the VFS layer from SunOS, which I > > said already. > > > > My point is that you were telling Berny that the FSS wasn't from > > SV but it was. > > Please refresh your memory and have a look at the sources... So far you haven't mentioned a source base where I have not worked. And you keep dropping names, which is fun, but I know all the names you have dropped personally. John Pope came to my house after stopping by at Matt Jacob's house. Matt was my neighbor in Noe Valley and he was the guy who checked in my work on UFS (I was too chicken to do it, I had been to some SunOS 4.x staff meetings where Gingell ripped people a new asshole for doing something wrong. 
Matt checked in my UFS work with the comment "this is the only thing we have that makes performance better, it's going in".) I don't know where you are getting your information from but the time period you are talking about was when I was a pretty good engineer, I lived in the Unix source code, mcvoy.com is internally known as slovax, slovax was the 11/750 that had the 4.2 BSD source on it, I spent a boatload of time reading that code. Kernel and userland. Same for System III, same for the PWB release, same for System V, I love this stuff. I'm a total nerd for this stuff, it's why I'm on this list, the people I look up to are here. I really wonder where you got your information from because it's a little bit wrong. --lm From clemc at ccc.com Thu Jan 12 13:54:44 2017 From: clemc at ccc.com (Clem Cole) Date: Wed, 11 Jan 2017 19:54:44 -0800 Subject: [TUHS] History of select(2) In-Reply-To: References: <20170109023502.GA8507@minnie.tuhs.org> Message-ID: Paul -- this is great stuff and fills in some pieces of my memories. FYI: CMU had the Rand Ports code including empty() in some implementations of kernels in use in the mid 1970s. The later UNIX cu(1) program did not yet exist, and a couple of people hacked together a program called "connect" which was used to connect between serial ports for downloading PPT images to microprocessors such as a KIM-1 and communicating with it. I say a "couple of people," because I do not remember who wrote the original version. It could have been a number of us. I know I hacked on it a good bit, as did others over the course of time (and we did not use source control in those days). The sources were there for all to work with and we all added stuff as we thought of things that would help us out.
Anyway, besides microprocessor support, we also used connect(1) to "front-end" the serial lines from the PDP-10's in CS and were doing some rudimentary networking stuff over parallel DR-11C's, in the pre-TCP/IP world [a discussion a couple of us have had separately - more in a minute]. But the point is that the dates line up with my being there and what we were doing at the time. One version of connect(1) used the empty() system call and Rand ports to do its thing. I also remember reading the connect(1) code as an early education in network technology and concepts. We would later take hunks of it into a TCP stack [3-4 years later Phil Karn would write the infamous KA9Q TCP stack and a few years later, Stan Smith and I would write the VMS TCP etc. in BLISS, but modeled on some C network stuff from CMU... -- amazing circle]. Anyway, around the same time (either Sept '76 or '77 would be my guess) the late Ted Kowalski came to CMU for his OYOC time (I've forgotten which it was, to be honest) and brought a pre-UNIX/TS system with him - TS begat V7. Ted brought that system up on the 11/34 in EE and I remember connect(1) was the most important program that immediately broke. I also remember a large argument between Ted and one of the other hackers (I've forgotten whom), Ted saying we did not need it, that it was wasteful, etc., and not in the official editions. I remember he re-implemented the connect(1) program one night with multiple processes, and EE systems were from then on based on Ted's system [although, I would later get to know Chesson and Greg would give me the mpx() code a year or two later for some networking stuff I would work on, but that is a different story]. The point is that while I have no memory of capac(), I can confirm that I definitely programmed with the empty() system call and Rand ports on a v6 based kernel in the mid-1970s and that it was definitely at places besides Rand themselves.
Another thing I want to thank you for is confirming something I have been saying for a few years that some people have had a hard time believing. The specifications for what would become IP and TCP were kicking around the ARPAnet in the late 1970s. We definitely had them at CMU and that's where I first was introduced to them, long before the planned cut over in the early 1980s. I probably was not aware of the global politics involved outside of the ARPA community because I certainly thought at the time IP was where we were headed; it was what we were thinking about and considering how to implement. Anyway - thanks again for a great piece of work hunting up some good stuff. Clem On Mon, Jan 9, 2017 at 2:36 AM, Paul Ruizendaal wrote: > On 9 Jan 2017, at 3:35 , Warren Toomey wrote: > > > Also, I came across this history of select(2) a while back: > > > > https://idea.popcount.org/2016-11-01-a-brief-history-of-select2/ > > > > Cheers, Warren > > That is an interesting blog post, but I think it is a bit short on the > history of things before 4.2BSD. Below my current understanding of what > came before select(). > > In March 1975 the first networked Unix was created at the University of > Illinois, initially based on 5th edition, but soon ported to 6th edition. > It is described in RFC681 and a paper by Greg Chesson. Note that UoI was > the very first Unix licensee. Its primary authors were Steve Holmgren, > Steve Bunch and Gary Grossman. Greg Chesson was also involved. Grossman had > already done two earlier Arpanet implementations (the ANTS and ANTS II > systems) on bare metal and had a deep understanding of what a good > implementation needed. > > Their implementation was compact (about a thousand lines added to the > kernel, and another thousand in the connection daemon) and - in my opinion > at least - conceptually well integrated into the existing file API. It > became the leading Unix Arpanet implementation with wide use from 1975 to > 1981.
Two things stand out: (i) no accept(); and (ii) no select(). The > original authors are still with us, with the exception of Greg, and I asked > for their input as well. > > (i) no accept() > > Listening sockets worked a bit different from today. If one opened a > listening socket it would not return a descriptor but block instead; when a > connection was made it would return with the listening socket now bound to > the new connection. Server applications would open a listening socket and > do a double fork for the client connection (i.e. getting process 1 as its > parent); the main process would loop around and open a new listening socket > (this can all be verified in surviving application sources). According to > Steve Holmgren this was not perceived as a big problem at the time. Network > speeds were still so low that the brief gap in listening did not matter > much, and the double fork was just a few lines of code. > > This changed when the CSRG team moved from a long-haul, Arpanet, 56Kb/s > context to a local, Ethernet, 3Mb/s context and Sam Leffler came up with > the concept of accept(). In 4.1a BSD and 2.9BSD the queue of pending > connections was fixed (possibly 1, I have to check). In 4.1c BSD listen() > was introduced; before then whether a socket was active or listening was a > flag to opening the socket. The second parameter to listen() specified the > maximum number of pending connections [as an aside, note that I'm using > 'socket' in the BSD sense; the term socket changed meaning several times > between 1973 and 1983]. > > (ii) no select() > > This was the real pain (Holmgren reconfirmed that). This is what Dennis > must have referred to in his retrospect paper. Various solutions were > thought of, but in Network Unix the model remained using separate processes > for simultaneous reading and writing. Progress in this area came from two > other places involved in Unix and Arpanet: Rand and BBN. 
> > In 1977 Rand was taking on this problem (see > http://www.dtic.mil/dtic/tr/fulltext/u2/a044200.pdf and > http://www.dtic.mil/dtic/tr/fulltext/u2/a044201.pdf). They considered a > solution with a new system call 'empty()' that would tell if there was any > data available on a file descriptor, a crude form of non-blocking I/O if > you like. As this would consume precious CPU cycles it proved inadequate. > Instead they came up with "ports". A port was a (possibly named) pipe with > multiple writers and a single reader, and it was created with a 'port()' > system call. The reader would see each write preceded by a header block > identifying the reader. The implementation (see second PDF) was simple, > apparently only taking 200 words of kernel code. Rand ports are a > simplistic version of the 'mpx' facility done by Greg Chesson at Bell Labs > (in 1978?). I am not sure whether this was independent invention or that > Greg was aware of Rand ports. Unfortunately we cannot ask him anymore. > > Later in 1977, over at BBN, Jack Haverty was doing an experimental TCP/IP > stack for Unix (this was TCP 2.5, not TCP 4). He had a working stack > written in PDP11 assembler for a different OS and was making this run on > Unix. He was using Rand ports to connect clients to the network stack, but > still lacked the required primitives to make this work properly. So he came > up with the await() system call, a direct precursor to select(). It is > documented in BBN report 3911 (http://bit.ly/2iU1TNK), including man > pages. With the awtenb() and awtdis() one would manage the monitored > descriptors (like the bit vectors going into a select), and await() would > then wait for an event or time out. > > Related to this was the capac() system call, to get the 'capacity' of a > descriptor. This returns the amount of data that can safely be written to > or read from a descriptor. I suppose it is an improved version of empty(). 
> There is no equivalent of capac() in the later BSD sockets, perhaps because > non-blocking I/O in the current sense was about to arrive. With port(), > await() and capac() it becomes possible to write single-threaded network > programs. > > An example may be found here, the first TCP/IP (version 4) stack in C for > Unix, from early 1979: http://digital2.library.ucla.edu/viewItem.do?ark=21198/zz002gvzqg (scroll down past IMP stuff). It's > documented in IEN98 (https://www.rfc-editor.org/ien/ien98.txt). I'm > currently retyping this source so that it can be better studied. > > The await() call is not in the TCP/IP code done for 4.1 BSD by BBN. I'm > puzzled by this as it is evidently useful and Jack Haverty and Rob Gurwitz > worked in the same corridor at BBN at the time. In 4.1a the select() call > appears and it seems to be an improved version of await(), with the need > for awtenb() and awtdis() replaced by the use of bit vectors. I am not sure > if Bill Joy was aware of await() or whether it was independent invention. > Here we can ask, but I have no contact details. > > Hope the above is of interest. I'm still learning new things about these > topics every day, so please advise if my above understanding is wrong. > > > As a side note, I am still looking for: > > - surviving copies of UoI "Network Unix" (I'm currently no further than > papers and bits of source that lingered in other code bases) > > - surviving copies of the 4.1a BSD distribution tape (Kirk McKusick's tape > was damaged) > > - surviving source of the kernel code of port(), await() and capac(); > (could possibly be recreated from documentation) > > Any and all help very much appreciated. > > Paul > > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From schily at schily.net Thu Jan 12 20:39:28 2017 From: schily at schily.net (Joerg Schilling) Date: Thu, 12 Jan 2017 11:39:28 +0100 Subject: [TUHS] the guy who brought up SVr4 on Sun machines In-Reply-To: <20170112015823.GG12163@mcvoy.com> References: <79091EE2-D7F8-4BE2-9422-47C365780367@berwynlodge.com> <587509e1.gGhkbfCz1YmUYkqT%schily@schily.net> <5876b86f.TZmh44N+Iwm79UKO%schily@schily.net> <20170111230603.GE5891@mcvoy.com> <5876c536.+BQujxabzDX0djG8%schily@schily.net> <20170111235719.GG5891@mcvoy.com> <5876c8b5.fCJls03bP6vdCbOj%schily@schily.net> <20170112015823.GG12163@mcvoy.com> Message-ID: <58775ce0.RnIyGLaXimTKid2V%schily@schily.net> Larry McVoy wrote: > I really wonder where you got your information from because it's a > little bit wrong. Do you really believe I invented all these people? You already confirmed that they exist. How do you believe I could get the names without being in contact with them? I got my information from working on Solaris and in this special case, you seem to forget that you definitely made a mistake. Maybe you would now like to refresh your memory....

From Solaris sys/vfs.h:

/*
 * Filesystem type switch table.
 */
typedef struct vfssw {
	char		*vsw_name;	/* type name -- max len _ST_FSTYPSZ */
	int		(*vsw_init) (int, char *);
					/* init routine (for non-loadable fs only) */
	int		vsw_flag;	/* flags */
	mntopts_t	vsw_optproto;	/* mount options table prototype */
	uint_t		vsw_count;	/* count of references */
	kmutex_t	vsw_lock;	/* lock to protect vsw_count */
	vfsops_t	vsw_vfsops;	/* filesystem operations vector */
} vfssw_t;

However, I have to admit that I also made a mistake, as struct vfssw just contains the structure vfsops_t and is not itself the structure I had in mind.
But it is based on struct vfssw from SunOS-4 sys/vfs.h which looks this way:

/*
 * Filesystem type switch table
 */
struct vfssw {
	char		*vsw_name;	/* type name string */
	struct vfsops	*vsw_ops;	/* filesystem operations vector */
};

It seems that "porting" it to Svr4 caused a dot to be appended to the comment ;-) BTW: in 1990 with SunOS-4.0, I could already modload my "wofs" as I added a few nulled entries to the struct vfssw in os/vfs_conf.c, pretended that wofs was a device driver and let it then install itself into one of the empty entries in struct vfssw. Jörg -- EMail:joerg at schily.net (home) Jörg Schilling D-13353 Berlin joerg.schilling at fokus.fraunhofer.de (work) Blog: http://schily.blogspot.com/ URL: http://cdrecord.org/private/ http://sourceforge.net/projects/schilytools/files/ From wkt at tuhs.org Thu Jan 12 21:39:41 2017 From: wkt at tuhs.org (Warren Toomey) Date: Thu, 12 Jan 2017 21:39:41 +1000 Subject: [TUHS] VFS: confusion (and an end) Message-ID: <20170112113941.GA23718@minnie.tuhs.org> Ok, the story so far.

Berny wrote: Here's the breakdown of SVR4 kernel lineage as I recall it. ... From SunOS: vnodes VFS

Dan Cross wrote: > VFSSW <=== NO, this is from SunOS-4 Surely Berny meant the file system switch here, which could have come from early system V, but originated in research Unix (8th edition?).

Joerg Schilling wrote: It is rather a part of the VFS interface that has first been completed with SunOS-3.0 in late 1985.

And this is where the confusion starts. Does "It" refer to FSS or VFS? I've just looked through some sources.
The file system switch was in SysVR3: uts/3b2/sys/mount.h:

/* Flag bits passed to the mount system call */
#define	MS_RDONLY	0x1	/* read only bit */
#define	MS_FSS		0x2	/* FSS (4-argument) mount */

VFS was in SunOS 4.1.4: sys/sys/vfs.h:

struct vfssw {
	char		*vsw_name;	/* type name string */
	struct vfsops	*vsw_ops;	/* filesystem operations vector */
};

And VFS is in SysVR4: uts/i386/sys/vfs.h:

typedef struct vfssw {
	char		*vsw_name;	/* type name string */
	int		(*vsw_init)();	/* init routine */
	struct vfsops	*vsw_vfsops;	/* filesystem operations vector */
	long		vsw_flag;	/* flags */
} vfssw_t;

Interestingly, the "filesystem operations vector" comment also occurs in FreeBSD 5.3, NetBSD-5.0.2 and OpenBSD-4.6. Look for vector here:

http://minnie.tuhs.org/cgi-bin/utree.pl?file=FreeBSD-5.3/sys/sys/mount.h
http://minnie.tuhs.org/cgi-bin/utree.pl?file=NetBSD-5.0.2/sys/compat/sys/mount.h
http://minnie.tuhs.org/cgi-bin/utree.pl?file=OpenBSD-4.6/sys/sys/mount.h

Larry wrote: System Vr3 had something called the file system switch which is what Berny is talking about. SunOS had virtual file system layer (VFS) and that would be one of things ported to SVr4. which is consistent with everybody else. So now that we have consistency, let's move on. Cheers, Warren From wkt at tuhs.org Thu Jan 12 21:53:13 2017 From: wkt at tuhs.org (Warren Toomey) Date: Thu, 12 Jan 2017 21:53:13 +1000 Subject: [TUHS] VFS: confusion (and an end) In-Reply-To: <20170112113941.GA23718@minnie.tuhs.org> References: <20170112113941.GA23718@minnie.tuhs.org> Message-ID: <20170112115313.GA23485@minnie.tuhs.org> On Thu, Jan 12, 2017 at 09:39:41PM +1000, Warren Toomey wrote:
> VFS was in SunOS 4.1.4:
> sys/sys/vfs.h:
> struct vfssw {
>	char		*vsw_name;	/* type name string */
>	struct vfsops	*vsw_ops;	/* filesystem operations vector */
> };

Oh, and 4.3BSD from Uni of Wisconsin also has VFS: http://minnie.tuhs.org/cgi-bin/utree.pl?file=4.3BSD-UWisc/src/sys/h/vfs.h (dated 1987).
And SunOS 2 has got VFS: include/sys/vfs.h

/*	@(#)vfs.h 1.1 84/12/20 SMI	*/

/*
 * Structure per mounted file system.
 * Each mounted file system has an array of
 * operations and an instance record.
 * The file systems are put on a singly linked list.
 */
struct vfs {
	struct vfs	*vfs_next;		/* next vfs in vfs list */
	struct vfsops	*vfs_op;		/* operations on vfs */
	struct vnode	*vfs_vnodecovered;	/* vnode we mounted on */
	int		vfs_flag;		/* flags */
	int		vfs_bsize;		/* native block size */
	caddr_t		vfs_data;		/* private data */
};

That's as far back as I can go. Cheers, Warren From pnr at planet.nl Fri Jan 13 19:13:47 2017 From: pnr at planet.nl (Paul Ruizendaal) Date: Fri, 13 Jan 2017 10:13:47 +0100 Subject: [TUHS] History of select(2) In-Reply-To: References: <20170109023502.GA8507@minnie.tuhs.org> Message-ID: <7C71C8C7-AE29-4B64-894A-4913585B8763@planet.nl> On 12 Jan 2017, at 4:54 , Clem Cole wrote: > Paul -- this is great stuff and fills in some pieces of my memories. Thanks! > The point is that while I have no memory of capac(), I can confirm that I definitely programmed with the empty() system call and Rand ports on a v6 based kernel in the mid-1970s and that it was definitely at places besides Rand themselves. Thank you for confirming that. If anybody knows of surviving source for these extensions I'd love to hear about it. Although the description in the implementation report is clear enough to recreate it (it would seem to be one file similar to pipe.c and a pseudo device driver similar in size to mem.c), original code is better. It is also possible that the code in pipe.c was modified to drive both pipes and ports -- there would have been a lot of similarity between the two, and kernel space was at a premium.
My understanding is that all RFC's and IEN's were available to all legit users of the Arpanet. By 1979 there were 90 nodes (IMP's) and about 200 hosts connected. I don't get the impression that stuff was always easy to find, with Postel making a few posts about putting together "protocol information binders". Apparently nobody had the idea to put all RFC's in a directory and give FTP access to it. I am not sure how available this stuff was outside the Arpanet community. I think I should put a question out about this, over on the internet history mailing list. As an aside: IMHO, conceptually the difference between NCP and TCP wasn't all that big. In my current understanding the big difference was that NCP assumes in-order, reliable delivery of packets (as was the case between IMP's) and that TCP allows for unreliable links. Otherwise, the connection build-up and tear-down and the flow control were similar. See for instance RFC54 and RFC55 from 1970. My point is: yes, these concepts were kicking around for over a decade in academia before BSD. Paul From berny at berwynlodge.com Fri Jan 13 20:36:11 2017 From: berny at berwynlodge.com (Berny Goodheart) Date: Fri, 13 Jan 2017 10:36:11 +0000 Subject: [TUHS] VFS: confusion (and an end) Message-ID: Thanks, Warren, for saving me from having to sort out the confusion. I am sorry I started it in the first place. On Tue, Jan 10, 2017 at 11:20 AM, Joerg Schilling > wrote: > …. Note that > this list is very similar to that in the early part of his book on System V > internals. Having just removed the dust from my old copy of TMGE, it is interesting that the list I wrote here is very similar to what I wrote back in 1993. Just goes to show, Alzheimer’s hasn’t got me yet ;) -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From ron at ronnatalie.com Fri Jan 13 22:31:58 2017 From: ron at ronnatalie.com (Ron Natalie) Date: Fri, 13 Jan 2017 07:31:58 -0500 Subject: [TUHS] History of select(2) In-Reply-To: <7C71C8C7-AE29-4B64-894A-4913585B8763@planet.nl> References: <20170109023502.GA8507@minnie.tuhs.org> <7C71C8C7-AE29-4B64-894A-4913585B8763@planet.nl> Message-ID: <034901d26d99$08b2dfd0$1a189f70$@ronnatalie.com> I have in a box somewhere a printed copy of the TCP/IP protocol book printed by SRI. It's about 3" thick and has the essential protocol RFCs needed at the time the Arpanet went TCP/IP. Another slightly smaller one had the mail RFCs in it. Almost anybody who was working with these things had copies on their desk. Understand that these networks were slow compared to modern standards. FTPing down that much data wasn't something you did casually. My mind is hazy about where the RFCs were archived before the cut over, but afterward they most certainly were in the FTP directory at the SRI-NIC host. This was also where you downloaded the host table (though a later RFC made a non-FTP transfer available as well). An amusing historical tidbit on the host tables. The table had entries that looked like this: HOST : 10.0.0.73 : SRI-NIC,NIC : FOONLY-F3 : TENEX : NCP/TELNET,NCP/FTP, TCP/TELNET, TCP/FTP : Where you had the IP address, the names for the host, the hardware type, the OS, and a list of supported protocols. Now the BSD network code had its own format for /etc/hosts. A small yacc program was used to convert the above format to /etc/hosts. A routine, then just called rhost, would then look hosts up in /etc/hosts. Now I thought this was silly. So the BRL machines just used hosts.txt directly: I modified rhost to read it without the intermediary /etc/hosts. To accomplish this I went to the appropriate RFC for the file format and implemented my own little parser. Now one day, we got our first network-attached laser printer.
We named it BRL-ZAP and for a CPU type I just put down 68000. This was a legitimate thing to put in the host table. Well, my parser read it fine, as did the TENEX and other parsers, but there was an error in the yacc grammar on the BSD machines. Every UNIX system on the net (other than mine) gagged trying to parse the BRL-ZAP entry. When people complained I pointed out that using yacc was pretty silly. That, gee, there might be some other important files on UNIX that use a bunch of fields separated by colons first and then commas (/etc/passwd), and nobody had felt it necessary to write a yacc grammar to interpret those. There was a strong feeling that I did this on purpose (the fact that I called the machine ZAP didn't help, but ZAPPING was what we called printing on the laser printers, as opposed to the mechanical printers, back then). From jnc at mercury.lcs.mit.edu Fri Jan 13 23:19:44 2017 From: jnc at mercury.lcs.mit.edu (Noel Chiappa) Date: Fri, 13 Jan 2017 08:19:44 -0500 (EST) Subject: [TUHS] Early Internet work (Was: History of select(2)) Message-ID: <20170113131944.6EC7918C085@mercury.lcs.mit.edu> > From: Paul Ruizendaal >> On 12 Jan 2017, at 4:54 , Clem Cole wrote: >> The specifications for what would become IP and TCP were kicking around >> ... in the late 1970s. The whole works actually started considerably earlier than that; the roots go back to 1972, with the formation of the International Packet Network Working Group - although that group went defunct before TCP/IP itself was developed under DARPA's lead. I don't recall the early history in detail - there's a long draft article by Ronda Hauben which goes into it at length, and there's also "INWG and the Conception of the Internet: An Eyewitness Account" by Alexander McKenzie which covers it too. By 1977 the DARPA-led effort had produced several working prototype implementations, and TCP/IP (originally there was only TCP, without a separate data packet carriage layer) was up to version 3.
> My understanding is that all RFC's and IEN's were available to all legit > users of the Arpanet. Yes and no. The earliest distribution mechanism (for the initial NCP/ARPANet work) was hardcopy (you can't distribute things over the 'net before you have it working :-), and in fact until a recent effort to put them all online, not all RFC's were available in machine-readable form. (I think some IEN's still aren't.) So for many of them, if you wanted a copy, you had to have someone at ISI make a photocopy (although I think they stocked them early on) and physically mail it to you! > Apparently nobody had the idea to put all RFC's in a directory and give > FTP access to it. I honestly don't recall when that happened; it does seem obvious in retrospect! Most of us were creating documents in online text systems, and it would have been trivial to make them available in machine-readable form. Old habits die hard, I guess... :-) > I think I should put a question out about this, over on the internet > history mailing list. Yes, good idea. > As an aside: IMHO, conceptually the difference between NCP and TCP > wasn't all that big. Depends. Yes, the service provided to the _clients_ was very similar (which can be seen in how similar the NCP and TCP versions of things like TELNET, FTP, etc were), but internally, they are very different. > In my current understanding the big difference was that NCP assumes > in-order, reliable delivery of packets ... and that TCP allows for > unreliable links. Yes, that's pretty accurate (but it does mean that there are _a lot_ of differences internally - re-transmissions, etc). One other important difference is that there's no flow control in the underlying network (something that took years to understand and deal with properly). > yes, these concepts were kicking around for over a decade in academia > before BSD.
TCP/IP was the product of a large, well-organized, DARPA-funded and -led effort which involved industry, academic and government players (the first two, for the most part, DARPA-funded). So I wouldn't really call it an 'academic' project. Noel From beebe at math.utah.edu Sat Jan 14 02:50:12 2017 From: beebe at math.utah.edu (Nelson H. F. Beebe) Date: Fri, 13 Jan 2017 09:50:12 -0700 Subject: [TUHS] History of select(2) In-Reply-To: <7C71C8C7-AE29-4B64-894A-4913585B8763@planet.nl> Message-ID: Paul Ruizendaal writes today at Fri, 13 Jan 2017 10:13:47 +0100: >> By 1979 there were 90 nodes (IMP's) and about 200 hosts connected. I just checked my archives of our TOPS-20 PDP-10 filesystem (retired 31-Oct-1990); the oldest hosts.txt file has a filesystem timestamp of 30-Apr-1986, with 3210 lines, of which 3124 are hostnames (98 of them at the University of Utah). The comment header says ; DoD Internet Host Table ; 28-Apr-86 ; Version number 534 If someone / some site wishes to collect hosts.txt generations for a historical record, let me know offlist, and I'll supply a pointer to a copy of my file. Sadly, it looks like we just replaced PS:HOSTS.TXT with a new generation from time to time, purging previous generations (a legacy of the days when a washing-machine-sized 200MB disk drive and cabinet cost US$15,000). ------------------------------------------------------------------------------- - Nelson H. F.
Beebe Tel: +1 801 581 5254 - - University of Utah FAX: +1 801 581 4148 - - Department of Mathematics, 110 LCB Internet e-mail: beebe at math.utah.edu - - 155 S 1400 E RM 233 beebe at acm.org beebe at computer.org - - Salt Lake City, UT 84112-0090, USA URL: http://www.math.utah.edu/~beebe/ - ------------------------------------------------------------------------------- From downing.nick at gmail.com Sat Jan 14 03:57:38 2017 From: downing.nick at gmail.com (Nick Downing) Date: Sat, 14 Jan 2017 04:57:38 +1100 Subject: [TUHS] 2.11BSD cross compiling update Message-ID: So I got a fair bit further advanced on my 2.11BSD cross compiler project, at the moment it can make a respectable unix tree (/bin /usr etc) with a unix kernel and most software in it. I unpacked the resulting tarball and chrooted into it on a running SIMH system and it worked OK, was a bit rough around the edges (missing a few necessary files in /usr/share and that kind of thing) but did not crash. I haven't tested the kernel recently but last time I tested it, it booted, and the checksys output is OK. I then ended up doing a fair bit of re-engineering, how this came about was that I had to port the timezone compiler (zic) to run on the Linux cross compilation host, since the goal is eventually to build a SIMH-bootable disk (filesystem) with everything on it. This was a bit involved, it crashed initially and it turned out it was doing localtime() on really small and large values to try to figure out the range of years the system could handle. On the Linux system this returns NULL for unreasonable time_t values which POSIX allows it to do. Hence the crash. It wasn't too obvious how to port this code. (But whatever I did, it had to produce the exact same timezone files as a native build). So what I ended up doing was to port a tiny subset of 2.11BSD libc to Linux, including its types. 
I copied the ctime.c module and prefixed everything with "cross_" so there was "cross_time_t" and so forth, and "#include " became "#include ", in turn this depends on "#include " and so on. That way, the original logic worked unchanged. I decided to also redo the cross compilation tools (as, cc, ld, nm, ranlib and so on) using the same approach, since it was conceptually elegant. This involved making e.g. "cross_off_t" and "struct cross_exec" available by "#include ", and obviously the scheme extends to whatever libc functions we want to use. In particular we use floating point, and I plan to make a "cross_atof()" for the C compiler's PDP-11-formatted floating-point constant handling, etc. (This side of things, like the cross tools, was working, but was not terribly elegant before). So then I got to thinking, actually this is an incredibly powerful approach. Instead of just going at it piecemeal, would it not be easier just to port the entire thing across? To give an example what I mean, the linker contains code like this: if (nund==0) printf("Undefined:\n"); nund++; printf("%.*s\n", NNAMESIZE, sp->n_name); It is printing n_name from a fixed-size char array, so to save the cost of doing a strncpy they have used that "%.*s" syntax which tells printf not to go past the end of the char array. But this isn't supported in Linux. I keep noticing little problems like this (actually I switched off "-Wformat" which was possibly a bad idea). So with my latest plan this will actually run the 2.11BSD printf() instead of the Linux printf(), and the 2.11BSD stdio (fixing various other breakage that occured because BUFSIZ isn't 512 on the Linux system), and so on. What I will do is, provide a low level syscalls module like cross_open(), cross_read(), cross_write() and so on, which just redirect the request into the normal Linux system calls, while adjusting for the fact that size_t is 16 bits and so forth. This will be really great. 
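(For readers unfamiliar with the "%.*s" idiom mentioned above: the precision is taken from an int argument, so printf-family functions read at most that many bytes from a fixed-size, possibly unterminated name field, with no strncpy into a scratch buffer. A minimal sketch; the NNAMESIZE value of 8 is an assumption about the old a.out symbol-name width.)

```c
#include <assert.h>
#include <stdio.h>
#include <string.h>

#define NNAMESIZE 8   /* assumed width of an a.out symbol-name field */

/* Format a fixed-width symbol name that may lack a NUL terminator.
 * The "%.*s" precision argument bounds the read at NNAMESIZE bytes,
 * while a shorter NUL-padded name still prints short. */
static void format_name(char *out, size_t outsz, const char name[NNAMESIZE])
{
    snprintf(out, outsz, "%.*s", NNAMESIZE, name);
}
```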
In case it sounds like this is over-engineering, well bear in mind that one knotty problem I hadn't yet tackled is the standalone utilities, for instance the 2.11BSD tape distribution contains a standalone "restor" program which is essentially a subset of the kernel, including its device drivers, packaged with the normal "restor" utility into one executable that can be loaded off the tape. It was quite important to me that I get this ported over to Linux, so that I can produce filesystems, etc, at the Linux level, all ready to boot when I attach them to SIMH. But it was going to be hugely challenging, since compiling any program that includes more than the most basic kernel headers would have caused loads of conflicts with Linux's kernel headers and system calls. So the new approach completely fixes all that. I did some experiments the last few days with a program that I created called "xify". What it does is to read a C file, and to every identifier it finds, including macro names, C-level identifiers, include file names, etc, it prepends the sequence "x_". The logic is a bit convoluted since it has to leave keywords alone and it has to translate types so that "unsigned int" becomes "x_unsigned_int" which I can define with a typedef, and so on. Ancient C constructs like "register i;" were rather problematic, but I have got a satisfactory prototype system now. I also decided to focus on 4.3BSD rather than 2.11BSD, since by this stage I know the internals and the build system extremely intimately, and I'm aware of quite a lot of inconsistencies which will be a lot of work to tidy up, basically things that had been hurriedly ported from 4.3BSD while trying not to change the corresponding 2.8~2.10BSD code too much. Also in the build system there are quite a few different ways of implementing "make depend" for example, and this annoys me, I did have some ambitious projects to tidy it all up but it's too difficult. 
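(A toy illustration of the "xify" idea described above: copy C source, prefixing every identifier that is not a keyword with "x_". The real tool also rewrites include names and multi-word types like "unsigned int"; this sketch handles only simple identifiers and ignores string literals and comments, so it is a simplification, not the actual xify code.)

```c
#include <assert.h>
#include <ctype.h>
#include <string.h>

/* Abbreviated keyword list; the real pass would need all of them. */
static const char *keywords[] = {
    "if", "else", "while", "for", "return", "int", "char",
    "struct", "register", "static", "unsigned", "long", "short", 0
};

static int is_keyword(const char *s)
{
    int i;
    for (i = 0; keywords[i]; i++)
        if (strcmp(s, keywords[i]) == 0)
            return 1;
    return 0;
}

/* Rewrite one line of C, prefixing non-keyword identifiers with "x_". */
static void xify_line(const char *in, char *out)
{
    char word[64];
    int w;

    while (*in) {
        if (isalpha((unsigned char)*in) || *in == '_') {
            w = 0;
            while ((isalnum((unsigned char)*in) || *in == '_') && w < 63)
                word[w++] = *in++;
            word[w] = '\0';
            if (!is_keyword(word)) {
                strcpy(out, "x_");
                out += 2;
            }
            strcpy(out, word);
            out += w;
        } else {
            *out++ = *in++;     /* punctuation, digits, blanks pass through */
        }
    }
    *out = '\0';
}
```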
So a fresh start is good, and I am satisfied with the 2.11BSD project up to this moment. So what will happen next is basically once I have "-lx_c" (the "cross" version of the 4.3BSD C library) including the "xified" versions of the kernel headers, then I will try to get the 4.3BSD kernel running on top of Linux, it will be a bit like User-Mode Linux. It will use simulated network devices like libpcap, or basically just whatever SIMH uses, since I can easily drop in the relevant SIMH code and then connect it up using the 4.3BSD kernel's devtab. The standalone utilities like "restor" should then "just work". The cross toolchain should also "just work" apart from the floating point issue, since it was previously targeting the VAX which is little-endian, and the wordsize issues and the library issues are taken care of by "xifying". Very nice. The "xifying" stuff is in a new repository 43bsd.git at my bitbucket (user nick_d2). cheers, Nick From steve at quintile.net Sat Jan 14 05:02:25 2017 From: steve at quintile.net (Steve Simon) Date: Fri, 13 Jan 2017 19:02:25 +0000 Subject: [TUHS] 2.11BSD cross compiling update In-Reply-To: References: Message-ID: working with aged compilers is different. the biggest problem i had was the shared namespace for all structure members. i had great success with hash8 many years ago. i am fairly sure that this still available in comp.sources.unix or comp.sources.misc. -Steve > On 13 Jan 2017, at 17:57, Nick Downing wrote: > > So I got a fair bit further advanced on my 2.11BSD cross compiler > project, at the moment it can make a respectable unix tree (/bin /usr > etc) with a unix kernel and most software in it. I unpacked the > resulting tarball and chrooted into it on a running SIMH system and it > worked OK, was a bit rough around the edges (missing a few necessary > files in /usr/share and that kind of thing) but did not crash. I > haven't tested the kernel recently but last time I tested it, it > booted, and the checksys output is OK. 
> > I then ended up doing a fair bit of re-engineering, how this came > about was that I had to port the timezone compiler (zic) to run on the > Linux cross compilation host, since the goal is eventually to build a > SIMH-bootable disk (filesystem) with everything on it. This was a bit > involved, it crashed initially and it turned out it was doing > localtime() on really small and large values to try to figure out the > range of years the system could handle. On the Linux system this > returns NULL for unreasonable time_t values which POSIX allows it to > do. Hence the crash. It wasn't too obvious how to port this code. (But > whatever I did, it had to produce the exact same timezone files as a > native build). > > So what I ended up doing was to port a tiny subset of 2.11BSD libc to > Linux, including its types. I copied the ctime.c module and prefixed > everything with "cross_" so there was "cross_time_t" and so forth, and > "#include " became "#include ", in turn this > depends on "#include " and so on. That way, the > original logic worked unchanged. > > I decided to also redo the cross compilation tools (as, cc, ld, nm, > ranlib and so on) using the same approach, since it was conceptually > elegant. This involved making e.g. "cross_off_t" and "struct > cross_exec" available by "#include ", and obviously the > scheme extends to whatever libc functions we want to use. In > particular we use floating point, and I plan to make a "cross_atof()" > for the C compiler's PDP-11-formatted floating-point constant > handling, etc. (This side of things, like the cross tools, was > working, but was not terribly elegant before). > > So then I got to thinking, actually this is an incredibly powerful > approach. Instead of just going at it piecemeal, would it not be > easier just to port the entire thing across? 
To give an example what I > mean, the linker contains code like this: > if (nund==0) > printf("Undefined:\n"); > nund++; > printf("%.*s\n", NNAMESIZE, sp->n_name); > It is printing n_name from a fixed-size char array, so to save the > cost of doing a strncpy they have used that "%.*s" syntax which tells > printf not to go past the end of the char array. But this isn't > supported in Linux. I keep noticing little problems like this > (actually I switched off "-Wformat" which was possibly a bad idea). So > with my latest plan this will actually run the 2.11BSD printf() > instead of the Linux printf(), and the 2.11BSD stdio (fixing various > other breakage that occured because BUFSIZ isn't 512 on the Linux > system), and so on. What I will do is, provide a low level syscalls > module like cross_open(), cross_read(), cross_write() and so on, which > just redirect the request into the normal Linux system calls, while > adjusting for the fact that size_t is 16 bits and so forth. This will > be really great. > > In case it sounds like this is over-engineering, well bear in mind > that one knotty problem I hadn't yet tackled is the standalone > utilities, for instance the 2.11BSD tape distribution contains a > standalone "restor" program which is essentially a subset of the > kernel, including its device drivers, packaged with the normal > "restor" utility into one executable that can be loaded off the tape. > It was quite important to me that I get this ported over to Linux, so > that I can produce filesystems, etc, at the Linux level, all ready to > boot when I attach them to SIMH. But it was going to be hugely > challenging, since compiling any program that includes more than the > most basic kernel headers would have caused loads of conflicts with > Linux's kernel headers and system calls. So the new approach > completely fixes all that. > > I did some experiments the last few days with a program that I created > called "xify". 
What it does is to read a C file, and to every > identifier it finds, including macro names, C-level identifiers, > include file names, etc, it prepends the sequence "x_". The logic is a > bit convoluted since it has to leave keywords alone and it has to > translate types so that "unsigned int" becomes "x_unsigned_int" which > I can define with a typedef, and so on. Ancient C constructs like > "register i;" were rather problematic, but I have got a satisfactory > prototype system now. > > I also decided to focus on 4.3BSD rather than 2.11BSD, since by this > stage I know the internals and the build system extremely intimately, > and I'm aware of quite a lot of inconsistencies which will be a lot of > work to tidy up, basically things that had been hurriedly ported from > 4.3BSD while trying not to change the corresponding 2.8~2.10BSD code > too much. Also in the build system there are quite a few different > ways of implementing "make depend" for example, and this annoys me, I > did have some ambitious projects to tidy it all up but it's too > difficult. So a fresh start is good, and I am satisfied with the > 2.11BSD project up to this moment. > > So what will happen next is basically once I have "-lx_c" (the "cross" > version of the 4.3BSD C library) including the "xified" versions of > the kernel headers, then I will try to get the 4.3BSD kernel running > on top of Linux, it will be a bit like User-Mode Linux. It will use > simulated network devices like libpcap, or basically just whatever > SIMH uses, since I can easily drop in the relevant SIMH code and then > connect it up using the 4.3BSD kernel's devtab. The standalone > utilities like "restor" should then "just work". The cross toolchain > should also "just work" apart from the floating point issue, since it > was previously targeting the VAX which is little-endian, and the > wordsize issues and the library issues are taken care of by "xifying". > Very nice. 
> > The "xifying" stuff is in a new repository 43bsd.git at my bitbucket > (user nick_d2). > > cheers, Nick From random832 at fastmail.com Sat Jan 14 05:53:28 2017 From: random832 at fastmail.com (Random832) Date: Fri, 13 Jan 2017 14:53:28 -0500 Subject: [TUHS] 2.11BSD cross compiling update In-Reply-To: References: Message-ID: <1484337208.3105634.847053496.3432DDEC@webmail.messagingengine.com> On Fri, Jan 13, 2017, at 12:57, Nick Downing wrote: > I then ended up doing a fair bit of re-engineering, how this came > about was that I had to port the timezone compiler (zic) to run on the > Linux cross compilation host, since the goal is eventually to build a > SIMH-bootable disk (filesystem) with everything on it. This was a bit > involved, it crashed initially and it turned out it was doing > localtime() on really small and large values to try to figure out the > range of years the system could handle. On the Linux system this > returns NULL for unreasonable time_t values which POSIX allows it to > do. Hence the crash. It wasn't too obvious how to port this code. (But > whatever I did, it had to produce the exact same timezone files as a > native build). You know that the timezone file format that it uses is still in use today, right? There's extra data at the end in modern ones for 64-bit data, but the format itself is cross-platform, with defined field widths and big-endian byte order. What do you get when you compare the native built timezone files with one from your linux host's own zic? It *should* only differ by the version number in the header [first five bytes "TZif2" vs "TZif"] and the 64-bit section, if you're giving it the same input files. And I bet you could take the current version of the code from IANA and, if it matters to you, remove the parts that output the 64-bit data. If nothing else, looking at the modern code and the version in 2.11BSD side-by-side will let you backport bug fixes. 
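(To make that comparison concrete: because the format has a fixed-width, big-endian header, checking a zone file by hand is easy. A minimal sketch; the offsets follow the published tzfile layout, and the helper names are illustrative.)

```c
#include <assert.h>
#include <string.h>

/* Check the fixed 44-byte tzfile header.  Returns the version
 * character ('\0' for the original "TZif" format, '2' or '3' for the
 * extended ones) or -1 if the magic is wrong. */
static int tzif_version(const unsigned char *hdr, size_t len)
{
    if (len < 44 || memcmp(hdr, "TZif", 4) != 0)
        return -1;
    return hdr[4];
}

/* Six big-endian 32-bit count fields start at offset 20, in order:
 * isutcnt, isstdcnt, leapcnt, timecnt, typecnt, charcnt. */
static unsigned long tzif_count(const unsigned char *hdr, int idx)
{
    const unsigned char *p = hdr + 20 + 4 * idx;
    return ((unsigned long)p[0] << 24) | ((unsigned long)p[1] << 16)
         | ((unsigned long)p[2] << 8)  |  (unsigned long)p[3];
}
```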
(Note: Technically, the version present in most Linux systems is a fork maintained with glibc rather than the main version of the code from IANA) From downing.nick at gmail.com Sat Jan 14 14:41:18 2017 From: downing.nick at gmail.com (Nick Downing) Date: Sat, 14 Jan 2017 15:41:18 +1100 Subject: [TUHS] 2.11BSD cross compiling update In-Reply-To: References: <1484337208.3105634.847053496.3432DDEC@webmail.messagingengine.com> Message-ID: I see, no I had not realized that code is still in use, I would have thought it had been replaced by a whole lot of POSIX bloat. Admittedly the 2.11BSD ctime/asctime/localtime/timezone stuff is simplistic and doesn't address complicated cases but it's good enough. However I have to resist the temptation to improve or update stuff in 2.11BSD, I went down that path many times (with the Makefiles project for instance) and because everything is interdependent you always introduce more problems and get deeper and deeper enmeshed. In order to stay in control I only fix essentials and apply a rule of minimal change, period. This applies until I have a baseline that builds exactly the same binary system image as the native build. Then I might proactively improve parts of the system but I will not do it reactively if you follow. As I see it the zic behaviour is not a bug since time_t is 32 bits on 2.11BSD and has no unreasonable values, and localtime() is BSD not POSIX compliant and is not allowed to return NULL. cheers, Nick On 14/01/2017 6:53 AM, "Random832" wrote: On Fri, Jan 13, 2017, at 12:57, Nick Downing wrote: > I then ended up doing a fair bit of re-engineering, how this came > about was that I had to port the timezone compiler (zic) to run on the > Linux cross compilation host, since the goal is eventually to build a > SIMH-bootable disk (filesystem) with everything on it. 
This was a bit > involved, it crashed initially and it turned out it was doing > localtime() on really small and large values to try to figure out the > range of years the system could handle. On the Linux system this > returns NULL for unreasonable time_t values which POSIX allows it to > do. Hence the crash. It wasn't too obvious how to port this code. (But > whatever I did, it had to produce the exact same timezone files as a > native build). You know that the timezone file format that it uses is still in use today, right? There's extra data at the end in modern ones for 64-bit data, but the format itself is cross-platform, with defined field widths and big-endian byte order. What do you get when you compare the native built timezone files with one from your linux host's own zic? It *should* only differ by the version number in the header [first five bytes "TZif2" vs "TZif"] and the 64-bit section, if you're giving it the same input files. And I bet you could take the current version of the code from IANA and, if it matters to you, remove the parts that output the 64-bit data. If nothing else, looking at the modern code and the version in 2.11BSD side-by-side will let you backport bug fixes. (Note: Technically, the version present in most Linux systems is a fork maintained with glibc rather than the main version of the code from IANA) -------------- next part -------------- An HTML attachment was scrubbed... URL: From random832 at fastmail.com Sat Jan 14 15:40:32 2017 From: random832 at fastmail.com (Random832) Date: Sat, 14 Jan 2017 00:40:32 -0500 Subject: [TUHS] 2.11BSD cross compiling update In-Reply-To: References: <1484337208.3105634.847053496.3432DDEC@webmail.messagingengine.com> Message-ID: <1484372432.4116122.847401256.51690057@webmail.messagingengine.com> On Fri, Jan 13, 2017, at 23:41, Nick Downing wrote: > I see, no I had not realized that code is still in use, I would have > thought it had been replaced by a whole lot of POSIX bloat. 
POSIX doesn't even have the timezone files - it 'allows' for implementation-defined timezones, but POSIX itself basically only defines the System V TZ variable with a few extra bits to specify a single set of daylight saving rules, e.g. "EST5EDT,M3.2.0,M11.1.0". Admittedly > the > 2.11BSD ctime/asctime/localtime/timezone stuff is simplistic and doesn't > address complicated cases but it's good enough. What's in 2.11BSD (and 4.3BSD) is essentially the 1987 mod.sources version of Arthur David Olson's timezone code, compare e.g. https://github.com/eggert/tz/blob/c07b3825e1ae6e9d077a1d97088b853a79237a01/localtime.c to http://minnie.tuhs.org/cgi-bin/utree.pl?file=2.11BSD/src/lib/libc/gen/ctime.c - it's basically the same except for a few ifdefs and the presence of asctime in the same file. The code has obviously evolved a lot since then, but the binary zone file format is the same (except for some backwards-compatible additions). The code in 2.9BSD and 4.1BSD is much more simplistic, hardcoding US daylight saving rules rather than looking up the applicable offset for a timestamp from a table. What's interesting is that 4.2BSD's is arguably "smarter" in some ways than either, calculating daylight savings based on rules at runtime whereas today that is the province of zic. This used the timezone structure of gettimeofday. > However I have to resist the temptation to improve or update stuff in > 2.11BSD, I went down that path many times (with the Makefiles project for > instance) and because everything is interdependent you always introduce > more problems and get deeper and deeper enmeshed. In order to stay in > control I only fix essentials and apply a rule of minimal change, period. > This applies until I have a baseline that builds exactly the same binary > system image as the native build. Then I might proactively improve parts > of > the system but I will not do it reactively if you follow. 
I guess I was considering my suggestion to be a "zic cross-compiler" - which runs on the host system and is therefore not part of 2.11BSD itself. From crossd at gmail.com Sat Jan 14 17:17:23 2017 From: crossd at gmail.com (Dan Cross) Date: Sat, 14 Jan 2017 02:17:23 -0500 Subject: [TUHS] 2.11BSD cross compiling update In-Reply-To: References: Message-ID: On Fri, Jan 13, 2017 at 12:57 PM, Nick Downing wrote: > [snip] > > So what I ended up doing was to port a tiny subset of 2.11BSD libc to > Linux, including its types. I copied the ctime.c module and prefixed > everything with "cross_" so there was "cross_time_t" and so forth, and > "#include " became "#include ", in turn this > depends on "#include " and so on. That way, the > original logic worked unchanged. > > I decided to also redo the cross compilation tools (as, cc, ld, nm, > ranlib and so on) using the same approach, since it was conceptually > elegant. This involved making e.g. "cross_off_t" and "struct > cross_exec" available by "#include ", and obviously the > scheme extends to whatever libc functions we want to use. In > particular we use floating point, and I plan to make a "cross_atof()" > for the C compiler's PDP-11-formatted floating-point constant > handling, etc. (This side of things, like the cross tools, was > working, but was not terribly elegant before). > > So then I got to thinking, actually this is an incredibly powerful > approach. [snip] > That sounds incredibly tedious. Can you specify a compiler flag to disable searching the host /usr/include? Then you could set your own include path and not have conflicts with headers from the host system. With, say, GCC one could use `-ffreestanding` or `-nostdinc` and the like. - Dan C. -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From downing.nick at gmail.com Sat Jan 14 18:05:28 2017 From: downing.nick at gmail.com (Nick Downing) Date: Sat, 14 Jan 2017 19:05:28 +1100 Subject: [TUHS] 2.11BSD cross compiling update In-Reply-To: References: Message-ID: Yes, you are right, it is quite tedious. And yes, your idea is a good one, and it echoes more or less the direction my thoughts have been going in recent days. Originally I thought I'd only use the "cross_" prefix where there were clear system-to-system differences (like in the a.out stuff and the timezone stuff), then later I thought I'd use the "x_" prefix everywhere, but I have realized as you say, this is going to get annoying very quickly. It would be much better to be able to work on the non-prefixed files, as it would feel much more natural. So, in hindsight, given my decision that it's turned out to be too much work and for too little benefit to port the cross toolchain to glibc (as opposed to porting to gcc which is a given), it might have been better to tackle it how you say. But on the other hand my "x_" approach does have certain good points, which I will explain. Let's consider the "stat" system call which some of the cross tools use. Under my scheme I'll provide a prefixed version of "stat". 
Checking "x_stat.h" in the 43bsd.git repo that I mentioned in my previous post, I see this: struct x_stat { x_dev_t x_st_dev; x_ino_t x_st_ino; x_unsigned_short x_st_mode; x_short x_st_nlink; x_uid_t x_st_uid; x_gid_t x_st_gid; x_dev_t x_st_rdev; x_off_t x_st_size; x_time_t x_st_atime; x_int x_st_spare1; x_time_t x_st_mtime; x_int x_st_spare2; x_time_t x_st_ctime; x_int x_st_spare3; x_long x_st_blksize; x_long x_st_blocks; x_long x_st_spare4[2]; } There are also some defines like "x_S_IFMT" which I will ignore for brevity, but anyway the prefixed "stat" call will look something like: #include #include x_int x_stat(char *x_pathname, struct x_stat *x_statbuf) { struct stat statbuf; if (stat(x_pathname, &statbuf) == -1) { x_errno = (x_int)errno; return -1; } x_statbuf->x_st_dev = (x_dev_t)statbuf.st_dev; x_statbuf->x_st_ino = (x_ino_t)statbuf.st_ino; ... fill in all other fields ... return 0; } Obviously, this gets a bit tedious too, and it also ignores issues like converting the errno or what if ino_t is wider than x_ino_t... but I think it will work sufficiently well for the cross toolchain. With your suggestion I have no good way of creating a translation stub like the above, because both "struct stat" will have the same name... there are also lots more issues, like for instance some modules would be compiled with "-nostdinc" and the like, plus other modules being the translation stubs, would not... then all would be linked together, the translation stubs would pull in the regular C library and there would be conflicts everywhere. I think the "x_" prefix is the way to go, but as you say it's a bit tedious, so I'll probably create something like "x_cc" which automatically "xifies" the given C sources to /tmp and then runs "gcc". For the standalone utilities like "restor" it is less of an issue since they are going to be compiled into a kernel subset and everything will use the same "struct x_stat" so no translation will be necessary. 
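(One way to handle the "what if ino_t is wider than x_ino_t" hazard mentioned above is an explicit checked narrowing instead of a silent cast. A sketch only: the type widths and the helper name are mine, not part of the 43bsd.git code.)

```c
#include <assert.h>
#include <errno.h>
#include <limits.h>

typedef unsigned short x_ino_t;   /* assumed 16-bit PDP-11 inode number */

/* Narrow a host inode number into the 16-bit target type, failing
 * loudly instead of truncating.  Hypothetical translation-stub helper. */
static int narrow_ino(unsigned long host_ino, x_ino_t *out)
{
    if (host_ino > USHRT_MAX) {
        errno = ERANGE;
        return -1;
    }
    *out = (x_ino_t)host_ino;
    return 0;
}
```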
In the latter case, I'll provide emulated disk and tape drivers which have an "x_" prefixed entry point to be called from the kernel, but then revert to non-prefixed code which uses the native Linux system calls like open(), read() and so on, and which I will lift out of SIMH. So although the emulation backend is much simpler in this case (it only has a few entry points, it doesn't need to provide system-call-like services), it will still be much easier to write if I have access to the "xification". Another issue is the C types, since most code is written to use "short", "int" and "long", a nice feature of the "xifier" is that it changes this to "x_int" and therefore lets me change the width. It's not a perfect emulation since pointers are still a different size and the automatic promotions will be wrong (and varargs functions need some massaging because of this). But it still substantially cuts down the porting work that I have to do. I manually fixed up all this stuff in the C compiler before, it was a big job and I still can't say definitely it's robust. cheers, Nick So, On Sat, Jan 14, 2017 at 6:17 PM, Dan Cross wrote: > On Fri, Jan 13, 2017 at 12:57 PM, Nick Downing > wrote: >> >> [snip] >> >> So what I ended up doing was to port a tiny subset of 2.11BSD libc to >> Linux, including its types. I copied the ctime.c module and prefixed >> everything with "cross_" so there was "cross_time_t" and so forth, and >> "#include " became "#include ", in turn this >> depends on "#include " and so on. That way, the >> original logic worked unchanged. >> >> I decided to also redo the cross compilation tools (as, cc, ld, nm, >> ranlib and so on) using the same approach, since it was conceptually >> elegant. This involved making e.g. "cross_off_t" and "struct >> cross_exec" available by "#include ", and obviously the >> scheme extends to whatever libc functions we want to use. 
In >> particular we use floating point, and I plan to make a "cross_atof()" >> for the C compiler's PDP-11-formatted floating-point constant >> handling, etc. (This side of things, like the cross tools, was >> working, but was not terribly elegant before). >> >> So then I got to thinking, actually this is an incredibly powerful >> approach. [snip] > > > That sounds incredibly tedious. Can you specify a compiler flag to disable > searching the host /usr/include? Then you could set your own include path > and not have conflicts with headers from the host system. With, say, GCC one > could use `-ffreestanding` or `-nostdinc` and the like. > > - Dan C. > From schily at schily.net Sat Jan 14 22:57:28 2017 From: schily at schily.net (Joerg Schilling) Date: Sat, 14 Jan 2017 13:57:28 +0100 Subject: [TUHS] 2.11BSD cross compiling update In-Reply-To: <1484372432.4116122.847401256.51690057@webmail.messagingengine.com> References: <1484337208.3105634.847053496.3432DDEC@webmail.messagingengine.com> <1484372432.4116122.847401256.51690057@webmail.messagingengine.com> Message-ID: <587a2038.oqkelWjKwQ+vhDzO%schily@schily.net> Random832 wrote: > On Fri, Jan 13, 2017, at 23:41, Nick Downing wrote: > > I see, no I had not realized that code is still in use, I would have > > thought it had been replaced by a whole lot of POSIX bloat. > > POSIX doesn't even have the timezone files - it 'allows' for > implementation-defined timezones, but POSIX itself basically only > defines the System V TZ variable with a few extra bits to specify a > single set of daylight saving rules, e.g. "EST5EDT,M3.2.0,M11.1.0". We recently discussed whether we should add a pointer to the Olson code, but we did not find a website that is expected to exist in 10+ years. There is, however, no known current implementation that uses different code. So it would make sense to mention that code in the POSIX standard.
Jörg -- EMail:joerg at schily.net (home) Jörg Schilling D-13353 Berlin joerg.schilling at fokus.fraunhofer.de (work) Blog: http://schily.blogspot.com/ URL: http://cdrecord.org/private/ http://sourceforge.net/projects/schilytools/files/ From bqt at update.uu.se Sun Jan 15 08:02:02 2017 From: bqt at update.uu.se (Johnny Billquist) Date: Sat, 14 Jan 2017 23:02:02 +0100 Subject: [TUHS] History of select(2) In-Reply-To: References: Message-ID: <6a167060-6f76-7719-3ba7-0c2f8a3716b9@update.uu.se> On 2017-01-13 18:57, Paul Ruizendaal wrote: > > On 12 Jan 2017, at 4:54 , Clem Cole wrote: > >> The point is that while I have no memory of capac(), I can confirm that I definitely programmed with the empty() system call and Rand ports on a v6 based kernel in the mid-1970s and that it was definitely at places besides Rand themselves. > Thank you for confirming that. If anybody knows of surviving source for these extensions I'd love to hear about it. Although the description in the implementation report is clear enough to recreate it (it would seem to be one file similar to pipe.c and a pseudo device driver similar in size to mem.c), original code is better. It is also possible that the code in pipe.c was modified to drive both pipes and ports -- there would have been a lot of similarity between the two, and kernel space was at a premium. > >> [...] confirming something I have been saying for a few years and some people have had a hard time believing. The specifications for what would become IP and TCP were kicking around the ARPAnet in the late 1970s. > My understanding is that all RFC's and IEN's were available to all legit users of the Arpanet. By 1979 there were 90 nodes (IMP's) and about 200 hosts connected. I don't get the impression that stuff was always easy to find, with Postel making a few posts about putting together "protocol information binders". Apparently nobody had the idea to put all RFC's in a directory and give FTP access to it. They were, and still are.
And I suspect Clem is thinking of me, as I constantly question his memory on this subject. The problem is that all the RFCs are available, and they are later than this. The ARPAnet existed in 1979, but it was not using TCP/IP. If you look at the early drafts of TCP/IP, from around 1980-1981, you will also see that there are significant differences compared to the TCP/IP we know today. There was no ICMP, for example. Error handling and passing around looked different. IMPs did not talk IP, just for the record. RFC760 defines IPv4, and is dated January 1980. It refers to some previous documents that describe IP, but they are not RFCs. Also, if you look at RFC760, you will see that errors were supposed to be handled through options in the packet header, and that IP addresses, while 32 bits, were just split into 8 bits for network number, and 24 bits for host. There was obviously still some work needed before we got to what people think of as IPv4 today. Anyone implementing RFC760 would probably not interoperate at all with an IPv4 implementation that exists today. > I am not sure how available this stuff was outside the Arpanet community. I think I should put a question out about this, over on the internet history mailing list. > > As an aside: IMHO, conceptually the difference between NCP and TCP wasn't all that big. In my current understanding the big difference was that NCP assumes in-order, reliable delivery of packets (as was the case between IMP's) and that TCP allows for unreliable links. Otherwise, the connection build-up and tear-down and the flow control were similar. See for instance RFC54 and RFC55 from 1970. My point is: yes, these concepts were kicking around for over a decade in academia before BSD. Not sure if BSD is a good reference point. Much stuff was not actually done on Unix systems at all, if you start reading machine lists in the early RFCs. Unix had this UUCP thingy, that they liked.
;-) BSD and networking research came more to the fore, doing all the refinements over the years. Anyway, yes, for sure TCP did not come out of the void. It was based on earlier work. But there are some significant differences between TCP/IP and NCP, which is why you had the big switch day. Johnny -- Johnny Billquist || "I'm on a bus || on a psychedelic trip email: bqt at softjar.se || Reading murder books pdp is alive! || tryin' to stay hip" - B. Idol From bqt at softjar.se Sun Jan 15 08:09:17 2017 From: bqt at softjar.se (Johnny Billquist) Date: Sat, 14 Jan 2017 23:09:17 +0100 Subject: [TUHS] History of select(2) In-Reply-To: <6a167060-6f76-7719-3ba7-0c2f8a3716b9@update.uu.se> References: <6a167060-6f76-7719-3ba7-0c2f8a3716b9@update.uu.se> Message-ID: On 2017-01-14 23:02, Johnny Billquist wrote: > On 2017-01-13 18:57, Paul Ruizendaal wrote: >> >> On 12 Jan 2017, at 4:54 , Clem Cole wrote: >> >>> The point is that while I have no memory of capac(), I can >>> confirm that I definitely programmed with the empty() system call and >>> Rand ports on a v6 based kernel in the mid-1970s and that it was >>> definitely at places besides Rand themselves. >> Thank you for confirming that. If anybody knows of surviving source >> for these extensions I'd love to hear about it. Although the >> description in the implementation report is clear enough to recreate >> it (it would seem to be one file similar to pipe.c and a pseudo device >> driver similar in size to mem.c), original code is better. It is also >> possible that the code in pipe.c was modified to drive both pipes and >> ports -- there would have been a lot of similarity between the two, >> and kernel space was at a premium. >> >>> [...] confirming something I have been saying for a few years and some >>> people have had a hard time believing. The specifications for what >>> would become IP and TCP were kicking around the ARPAnet in the late >>> 1970s.
>> My understanding is that all RFC's and IEN's were available to all >> legit users of the Arpanet. By 1979 there were 90 nodes (IMP's) and >> about 200 hosts connected. I don't get the impression that stuff was >> always easy to find, with Postel making a few posts about putting >> together "protocol information binders". Apparently nobody had the >> idea to put all RFC's in a directory and give FTP access to it. > > They were, and still are. And I suspect Clem is thinking of me, as I > constantly question his memory on this subject. > > The problem is that all the RFCs are available, and they are later than > this. The ARPAnet existed in 1979, but it was not using TCP/IP. If you > look at the early drafts of TCP/IP, from around 1980-1981, you will also > see that there are significant differences compared to the TCP/IP we > know today. There was no ICMP, for example. Error handling and passing > around looked different. > IMPs did not talk IP, just for the record. > > RFC760 defines IPv4, and is dated January 1980. It refers to some > previous documents that describe IP, but they are not RFCs. Also, if you > look at RFC760, you will see that errors were supposed to be handled > through options in the packet header, and that IP addresses, while 32 > bits, were just split into 8 bits for network number, and 24 bits for > host. There was obviously still some work needed before we got to what > people think of as IPv4 today. Anyone implementing RFC760 would probably > not interoperate at all with an IPv4 implementation that exists today. I should have also said that RFC791 is where IPv4 pretty much becomes what we can recognize, and what will probably work against other machines today. And RFC791 is dated September 1981. So I have this problem with people who say that they implemented TCP/IP in 1978 for some reason. :-) Especially if they say they followed some RFC, and it was working well in heterogeneous networks.
I don't want to claim that people didn't do the networking, but I don't think it's correct to say that it was TCP/IP, as we know it today. It was either some other protocol (like NCP) or some other version of IP, which was not even published as an RFC. Or else the RFCs in the IETF archives have been falsified with regard to their dates. Johnny -- Johnny Billquist || "I'm on a bus || on a psychedelic trip email: bqt at softjar.se || Reading murder books pdp is alive! || tryin' to stay hip" - B. Idol From jnc at mercury.lcs.mit.edu Sun Jan 15 09:43:56 2017 From: jnc at mercury.lcs.mit.edu (Noel Chiappa) Date: Sat, 14 Jan 2017 18:43:56 -0500 (EST) Subject: [TUHS] History of select(2) Message-ID: <20170114234356.28A8F18C079@mercury.lcs.mit.edu> > From: Johnny Billquist > And RFC791 is dated September 1981. Yes, but it had pretty much only editorial changes from RFC-760, dated January 1980 (almost two years before), and from a number of IEN's dated even earlier than that (which I'm too lazy to paw through). > So I have this problem with people who say that they implemented TCP/IP > in 1978 for some reason. If you look at IEN-44, June 1978 (issued shortly after the fateful June 15-16 meeting, where the awful 32-bit address decision was taken), you will see that the packet format as of that date was pretty much what we have today (the format of addresses kept changing for many years, but I'll put that aside for now). > Especially if they say ... it was working well in heterogeneous > networks. TCP/IP didn't work "well" for a long time after 1981 - until we got the congestion control stuff worked out in the late 80's. And IIRC the routing/addressing stuff took even longer. > I don't think it's correct to say that it was TCP/IP, as we know it > today. Why not? A box implementing the June '78 spec would probably talk to a current one (as long as suitable addresses were used on each end).
> It was either some other protocol (like NCP) or some other version of > IP, which was not even published as an RFC. Nope. And don't put too much weight on the RFC part - TCP/IP stuff didn't start getting published as RFC's until it was _done_ (i.e. ready for the ARPANet to convert from NCP to TCP/IP - which happened January 1, 1983). All work prior to TCP/IP being declared 'done' is documented in IEN's (very similar documents to RFC's, distributed by the exact same person - Jon Postel). Noel From bqt at softjar.se Sun Jan 15 11:22:01 2017 From: bqt at softjar.se (Johnny Billquist) Date: Sun, 15 Jan 2017 02:22:01 +0100 Subject: [TUHS] History of select(2) In-Reply-To: <20170114234356.28A8F18C079@mercury.lcs.mit.edu> References: <20170114234356.28A8F18C079@mercury.lcs.mit.edu> Message-ID: <222274fa-8231-1ed1-1806-fc4e1f51a81c@softjar.se> On 2017-01-15 00:43, Noel Chiappa wrote: > > From: Johnny Billquist > > > And RFC791 is dated September 1981. > > Yes, but it had pretty much only editorial changes from RFC-760, dated January > 1980 (almost two years before), and from a number of IEN's dated even earlier > than that (which I'm too lazy to paw through). Like I pointed out, RFC760 lacks ICMP. Error messaging was supposed to be carried in IP options. That is more than an editorial change. > > So I have this problem with people who say that they implemented TCP/IP > > in 1978 for some reason. > > If you look at IEN-44, June 1978 (issued shortly after the fateful June 15-16 > meeting, where the awful 32-bit address decision was taken), you will see that > the packet format as of that date was pretty much what we have today (the > format of addresses kept changing for many years, but I'll put that aside for > now). Packet format, yes. Semantics and operations differed. ICMP didn't even exist. > > Especially if they say ... it was working well in heterogeneous > > networks.
> > TCP/IP didn't work "well" for a long time after 1981 - until we got the > congestion control stuff worked out in the late 80's. And IIRC the routing/ > addressing stuff took even longer. Depending on how you defined "well", people might still argue about that. :-) > > I don't think it's correct to say that it was TCP/IP, as we know it > > today. > > Why not? A box implementing the June '78 spec would probably talk to a current > one (as long as suitable addresses were used on each end). I would seriously question that, if nothing else, based on just my comments about ICMP above. Anyway, let's dig a bit more then, shall we... RFC 762 is "Assigned numbers". There you will find the values for the IP version field in the IP header. Version 4 was defined in August 1979, with a reference to: "Postel, J. "DOD Standard Internet Protocol," IEN-128, USC/Information Sciences Institute, January 1980." Which also makes one question how anyone would have known about IPv4 in 1978. Also, first definition of TCP shows up in RFC 761, which is also IEN-129. And that is the DoD standard. Also dated January 1980. Another rather interesting document is RFC 801, which lists the current status of TCP/IP implementations in appendix D. This document is from November 1981, and there it is clearly stated which implementations existed at that point (that the authors know about). And it's for the most part very much "work in progress". And this is at the end of 1981... So yes, I still have problems with claims that they had it all running in 1978. :-) > > It was either some other protocol (like NCP) or some other version of > > IP, which was not even published as an RFC. > > Nope. And don't put too much weight on the RFC part - TCP/IP stuff didn't > start getting published as RFC's until it was _done_ (i.e. ready for the > ARPANet to convert from NCP to TCP/IP - which happened January 1, 1983). I don't really agree with your view on the RFCs here.
:-) > All work prior to TCP/IP being declared 'done' is documented in IEN's (very > similar documents to RFC's, distributed by the exact same person - Jon > Postel). Yes, except of course, those documents aren't really that much earlier in some cases, like the ones I pointed out above, which I think are most relevant here... Johnny -- Johnny Billquist || "I'm on a bus || on a psychedelic trip email: bqt at softjar.se || Reading murder books pdp is alive! || tryin' to stay hip" - B. Idol From jnc at mercury.lcs.mit.edu Sun Jan 15 12:30:31 2017 From: jnc at mercury.lcs.mit.edu (Noel Chiappa) Date: Sat, 14 Jan 2017 21:30:31 -0500 (EST) Subject: [TUHS] Early Internet work (Was: History of select(2)) Message-ID: <20170115023031.B302418C079@mercury.lcs.mit.edu> > From: Johnny Billquist > Like I pointed out, RFC760 lacks ICMP. So? TCP will work without ICMP. > Which also makes one question how anyone would have known about IPv4 in > 1978. Well, I can assure you that _I_ knew about it in 1978!
(I'll never forget that weekend - we were in at ISI on Saturday, when it was normally closed, and IIRC we couldn't figure out how to turn the hallway lights on, so people were going from office to office in the gloom...) Noel From pnr at planet.nl Mon Jan 16 10:13:00 2017 From: pnr at planet.nl (Paul Ruizendaal) Date: Mon, 16 Jan 2017 01:13:00 +0100 Subject: [TUHS] Early Internet work (Was: History of select(2)) In-Reply-To: <20170114164102.GA31665@yeono.kjorling.se> References: <20170109023502.GA8507@minnie.tuhs.org> <7C71C8C7-AE29-4B64-894A-4913585B8763@planet.nl> <20170114164102.GA31665@yeono.kjorling.se> Message-ID: On 14 Jan 2017, at 17:41 , Michael Kjörling wrote: > On 13 Jan 2017 10:13 +0100, from pnr at planet.nl (Paul Ruizendaal): >> over on the internet history mailing list. > > Interesting. Care to give me a pointer toward it? That mailing list is here: http://mailman.postel.org/mailman/listinfo/internet-history On 14 Jan 2017, at 23:02 , Johnny Billquist wrote: > IMPs did not talk IP, just for the record. Yes, this is true of course. The software of the IMP was resurrected from printouts some time ago: http://walden-family.com/impcode/ > The problem is that all the RFCs are available, and they are later than this. The ARPAnet existed in 1979, but it was not using TCP/IP. If you look at the early drafts of TCP/IP, from around 1980-1981, you will also see that there are significant differences compared to the TCP/IP we know today. There was no ICMP, for example. Error handling and passing around looked different. Once again: yes. When exactly was the TCP/IP specification completed? That is an issue where reasonable people can hold different opinions. What software first implemented this on Unix? Here too reasonable people can hold different opinions. Below my take on this, based on my current understanding (and I keep repeating that I'm learning new things about this stuff almost every day and please advise if I'm missing things). 
Development of TCP/IP The specification that became TCP/IP apparently finds its roots in 1974, and it is gradually developed over the next years with several trial implementations. By March 1977 we get to TCP2 and more trials. Next it would seem that there was a flurry of activity from January to August 1978, resulting in specifications for TCP4 (IEN54 and IEN55). Then, up to March 1979, more implementations follow, as documented in IEN98. With those implementations tested, also for interoperability, more changes to the protocol and implementations follow and I guess by April 1981 (RFC777) we reach a point where things are specified to a level where implementations would interoperate with today's implementations. This is not where it stops, 'modern' congestion control only goes back to the late 80's (see for instance Craig Partridge http://www.netarch2009.net/slides/Netarch09_Craig_Partridge.pdf, it is an interesting read). Early Unix code bases (1) The Mathis/Haverty stack In 1977 Jim Mathis at SRI writes a "TCP2.5" stack in PDP11 assembler, running on the MOS operating system (for the LSI11). In 1978 Haverty is assigned to take this stack and make it run on V6 Unix. He builds a kernel with ports, await and capac to achieve this. It is a mixed success (see http://mailman.postel.org/pipermail/internet-history/2016-October/004073.html), but he maintains it through 1978 and 1979 as a skunkworks project and the code eventually supports TCP4 (as defined in IEN54/55). The source has survived in Jack Haverty's basement as a printout, but it is not online. As far as I know, this is the first TCP/IP on Unix (a tie with the Wingfield implementation, and only if one accepts IEN54/55 as 'TCP/IP'). (2) The Grossman (DTI) stack IEN98 mentions a TCP3 stack done for Network Unix (by then called ENFE/INFE) in 1978 by DTI / Gary Grossman. I don't currently have information about that implementation. As at March 1979 it did not appear to support TCP4.
(3) The Wingfield/Cain stack In 1978 BBN / Michael Wingfield was commissioned by DCEC / Ed Cain to write a TCP4 stack in C for Unix V6. As it stood in March 1979 this code supported IEN54/55 with the AUTODIN II security extensions that harked back to 1975. It is a partial implementation: it does not support IP fragmentation and it has a simplistic approach to incoming out-of-order packets (drop everything until the right one arrives). However, it worked and kept being maintained by Ed Cain, who by October 1981 had added support for ICMP and GGP (https://www.rfc-editor.org/rfc/museum/tcp-ip-digest/tcp-ip-digest.v1n1.1). He is still supporting it as late as 1985 (https://www.rfc-editor.org/rfc/museum/tcp-ip-implementations.txt.1) As far as I know, only the March 1979 code survives. I'm currently retyping it, about halfway through. I'm not sure what compiler it was written for: it uses longs, but apparently this is still somewhat broken (with comments in the source amounting to 'this WTF necessary to work around compiler bugs'); at the same time it also uses old-style assignments ('=+' instead of '+='). Could this be "typesetter C"? The code feels like it might be based on earlier work, for instance the BBN BCPL implementation for TENEX a few years earlier, but that is pure speculation. It could also be that Wingfield was new to C and bringing habits from other languages with him. Once all done, I'll ask Michael about it. I'm on thin ice here, but my current guess would be that the 5,000 line code base would need some 500 lines of new code to make it interoperable with today's implementations. From the above I would support the moniker "first TCP/IP in C on Unix" as claimed by UCLA, either for the March 1979 version if one takes the view that '90% modern' is good enough, or for the October 1981 version if support for RFC777 is the benchmark. In the latter view it beats the Gurwitz stack by about a month or two.
However, it is not a full implementation of the specifications. (4) The Gurwitz stack Last in the list of candidates is the Rob Gurwitz stack for BSD4.1 (see IEN168), started in January 1981. It is a full implementation of the protocols that looks like it was done from scratch (as confirmed by Gurwitz and Haverty) and consolidates the earlier learnings. In my opinion, it is the first implementation where the source code has a distinct 'early unix smell' (please excuse the phrase), arguably more so than the later Joy code. The first iterations of the code don't support (the as yet non-existent) RFC777. By November 1981 there is a beta distribution tape that does, and this code looks to interoperate with modern TCP/IP implementations as-is. If the benchmark is a full implementation interoperating with today's code, the first TCP/IP on Unix would I think be the Gurwitz stack. Possibly it is the first TCP/IP in that definition on any OS. Note that this is also where TCP/IP and Network Unix join back up [but this view might change as I learn more about the Grossman / DTI version]: the Gurwitz code uses an API closely based on that of UoI's Network Unix and the provided user land programs (Telnet, FTP, MTP, etc.) appear to be ports from that code base (or perhaps from a BBN development of the UoI work). === In any case, I think it is fair to say that TCP/IP as we know it today did not drop from the sky in 1981. There was a vast amount of prior work done in the second half of the 70's on a variety of hardware and operating systems, experience that all fed into the well known RFC's. One thing that I'm unclear about is why all this Arpanet work was not filtering more into the versions of Unix done at Bell Labs. The folks involved were certainly aware of each other and the work that was going on.
With universities the cost of 'always on' long distance lines may have been too great, but within Bell Labs that would have been less of an issue and there is a clear link with the core business of the Bell System. Would anybody have some background on that? Paul From jnc at mercury.lcs.mit.edu Mon Jan 16 11:01:01 2017 From: jnc at mercury.lcs.mit.edu (Noel Chiappa) Date: Sun, 15 Jan 2017 20:01:01 -0500 (EST) Subject: [TUHS] Early Internet work (Was: History of select(2)) Message-ID: <20170116010101.8DF6F18C083@mercury.lcs.mit.edu> > From: Paul Ruizendaal > I guess by April 1981 (RFC777) we reach a point where things are > specified to a level where implementations would interoperate with > today's implementations. Yes and no. Earlier boxes would interoperate, _if addresses on each end were chosen properly_. Modern handling of addresses on hosts (for the 'is this destination on my physical network' step of the packet-sending algorithm) did not come in until RFC-1122 (October 1989); prior to that, lots of host code probably tried to figure out if the destination was class A, B or C, etc, etc. Also, until RFC-826 (ARP, November 1982) pretty much all the network interfaces (and thus the code to turn the 'destination IP address' into an attached physical network address, for the first hop) were things like ARPANet that no longer exist, so you couldn't _actually_ fire up one of them unless you do something like the 'ARPANet emulation' that the various PDP-10 simulators use to allow old OS's running on them to talk to the current Internet. > only if one accepts IEN54/55 as 'TCP/IP' What are they, if not TCP/IP? Not the modern variant, of course, but then again, nothing before the early 90's is truly 'modern TCP/IP'. > IEN98 mentions a TCP3 stack done for Network Unix ... in 1978 by DTI / > Gary Grossman. I read this, BITD, but don't recall much about it. I was not impressed by the coding style.
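The 'class A, B or C' test mentioned above is simple to state: before RFC 1122 and CIDR, a host derived the network part from the top bits of the 32-bit address itself, per RFC 791, rather than from a configured mask. A sketch (the function name is mine, not from any of the implementations discussed):

```c
/* Pre-CIDR classful network derivation, per RFC 791:
 *   0xxx... = class A, 8-bit network / 24-bit host
 *   10xx... = class B, 16-bit network / 16-bit host
 *   110x... = class C, 24-bit network / 8-bit host   */
#include <stdint.h>

/* Return the network number implied by a classful IPv4 address
 * (addr in host byte order). */
uint32_t classful_network(uint32_t addr)
{
    if ((addr & 0x80000000u) == 0)            /* class A */
        return addr & 0xFF000000u;
    if ((addr & 0xC0000000u) == 0x80000000u)  /* class B */
        return addr & 0xFFFF0000u;
    if ((addr & 0xE0000000u) == 0xC0000000u)  /* class C */
        return addr & 0xFFFFFF00u;
    return addr;                              /* class D/E: no host part */
}
```

Note that this is already one step beyond the RFC 760 scheme discussed earlier in the thread, where the split was a fixed 8 bits of network and 24 bits of host for every address.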
> at the same time it also uses old-style assignments ('=+' instead of > '+='). Could this be "typesetter C"? I don't know. IIRC, that compiler supported both styles. It had to have been a later compiler than the one that came with V6, which didn't support longs. But I don't recall any bug with long support in the typesetter C compiler we had at MIT. > From the above I would support the moniker "first TCP/IP in C on Unix" No. That clearly belongs to the DTI one. (The differences between V3 and V4, while significant, aren't enough to make the DTI not 'TCP/IP in C for Unix'.) If you want to say 'first v4 TCP/IP in C for Unix', maybe; I'd have to look for dates on the one done at MIT for V6, that may be earlier, but I don't think so. (Check the minutes in the IEN's, that's probably the best source of data on progress of the various implementations.) > One thing that I'm unclear about is why all this Arpanet work was not > filtering more into the versions of Unix done at Bell Labs. Here's my _guess_ - ask someone from Bell for a sure answer. You're using 20/20 hindsight. At that point in time, it was not at all obvious that TCP/IP was going to take over the world. There were a couple of alternatives for moving data around that Bell used - Datakit, and UUCP - and they worked pretty well, and there was no reason to pick up on this TCP/IP thing. I suspect that it wasn't until LAN's became popular that TCP/IP looked like a good thing to have - it fits very well with the capabilities most LANs had (in terms of the service provided to things attached to them). Datakit was its own thing, and for UUCP you'd have to provide a reliable stream, and TCP/IP 'just did that'.
Noel From wkt at tuhs.org Mon Jan 16 11:44:44 2017 From: wkt at tuhs.org (Warren Toomey) Date: Mon, 16 Jan 2017 11:44:44 +1000 Subject: [TUHS] Article on 'not meant to understand this' Message-ID: <20170116014444.GA32261@minnie.tuhs.org> http://thenewstack.io/not-expected-understand-explainer/ in case you haven't seen it yet. Cheers, Warren From jnc at mercury.lcs.mit.edu Mon Jan 16 11:47:49 2017 From: jnc at mercury.lcs.mit.edu (Noel Chiappa) Date: Sun, 15 Jan 2017 20:47:49 -0500 (EST) Subject: [TUHS] Early Internet work (Was: History of select(2)) Message-ID: <20170116014749.0570418C082@mercury.lcs.mit.edu> > (Check the minutes in the IEN's, that's probably the best source of data > on progress of the various implementations.) Another place to look is Internet Monthly Reports and TCP-IP Digests (oh, I see you've seen those, I see a reference to one). I have this distinct memory of Dave Clark mentioning the Liza Martin TCP/IP for Unix in one of the meeting reports published as IENs, but a quick look didn't find it. Noel From lm at mcvoy.com Mon Jan 16 13:15:10 2017 From: lm at mcvoy.com (Larry McVoy) Date: Sun, 15 Jan 2017 19:15:10 -0800 Subject: [TUHS] Article on 'not meant to understand this' In-Reply-To: <20170116014444.GA32261@minnie.tuhs.org> References: <20170116014444.GA32261@minnie.tuhs.org> Message-ID: <20170116031510.GB6647@mcvoy.com> Yeah, saw it. I'm of the opinion that you aren't really truly an OS person unless you've written a context switcher. I wrote one for a user level threading package I did for Udi Manber as a grad student. I did most of the work in C and then dropped to assembler for the trampoline. It's really not that complicated, I think people make it out to be a bigger deal than it is. You're saving state (registers), switching stacks, and changing the return address so you return in the new process. Well, not that complicated on a simple machine like a VAX or a 68K or a PDP11.
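The three ingredients Larry lists (save the registers, switch stacks, change where you return) can be sketched portably with POSIX ucontext, which hides the assembler trampoline behind swapcontext(). This is my own illustration, not the code from his threading package:

```c
/* A minimal user-level context switch: save register state, switch to
 * another stack, and resume at a saved return point. */
#include <string.h>
#include <ucontext.h>

static ucontext_t main_ctx, co_ctx;
static char co_stack[64 * 1024];   /* the "other" stack we switch onto */
static char trace[32];             /* records the order of execution  */

static void coroutine(void)
{
    strcat(trace, "B");               /* runs on co_stack */
    swapcontext(&co_ctx, &main_ctx);  /* save our state, resume main  */
    strcat(trace, "D");
    /* falling off the end resumes uc_link, i.e. main_ctx */
}

const char *run_demo(void)
{
    getcontext(&co_ctx);
    co_ctx.uc_stack.ss_sp = co_stack;
    co_ctx.uc_stack.ss_size = sizeof co_stack;
    co_ctx.uc_link = &main_ctx;       /* where to go when coroutine returns */
    makecontext(&co_ctx, coroutine, 0);

    strcat(trace, "A");
    swapcontext(&main_ctx, &co_ctx);  /* first switch: main -> coroutine   */
    strcat(trace, "C");
    swapcontext(&main_ctx, &co_ctx);  /* second switch: resume coroutine   */
    strcat(trace, "E");
    return trace;
}
```

Each swapcontext() stores the current registers and stack pointer in its first argument and loads them from the second, which is exactly the job of the hand-written trampoline on a PDP-11, VAX or 68K.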
I sort of stopped playing in assembler when super scalar out of order stuff came around and I couldn't get the mental picture of what was where. On Mon, Jan 16, 2017 at 11:44:44AM +1000, Warren Toomey wrote: > http://thenewstack.io/not-expected-understand-explainer/ > > in case you haven't seen it yet. > > Cheers, Warren -- --- Larry McVoy lm at mcvoy.com http://www.mcvoy.com/lm From pnr at planet.nl Mon Jan 16 20:06:06 2017 From: pnr at planet.nl (Paul Ruizendaal) Date: Mon, 16 Jan 2017 11:06:06 +0100 Subject: [TUHS] Early Internet work (Was: History of select(2)) In-Reply-To: <20170116014749.0570418C082@mercury.lcs.mit.edu> References: <20170116014749.0570418C082@mercury.lcs.mit.edu> Message-ID: <10FD8B56-1972-4E78-BF3D-6D7BDC4BF25D@planet.nl> It may be mentioned in this report: http://web.mit.edu/Saltzer/www/publications/rfc/csr-rfc-228.pdf If so, it would seem to be late 1981, early 1982. Would you know if any of its source code survived? Paul On 16 Jan 2017, at 2:47 , Noel Chiappa wrote: > >> (Check the minutes in the IEN's, that's probably the best source of data >> on progress of the various implementations.) > > Another place to look is Internet Monthly Reports and TCP-IP Digests (oh, I > see you've seen those, I see a reference to one). > > I have this distinct memory of Dave Clark mentioning the Liza Martin TCP/IP > for Unix in one of the meeting reports published as IENs, but a quick look > didn't find it. > > Noel From brantleycoile at me.com Mon Jan 16 20:11:02 2017 From: brantleycoile at me.com (Brantley Coile) Date: Mon, 16 Jan 2017 05:11:02 -0500 Subject: [TUHS] Article on 'not meant to understand this' In-Reply-To: <20170116031510.GB6647@mcvoy.com> References: <20170116014444.GA32261@minnie.tuhs.org> <20170116031510.GB6647@mcvoy.com> Message-ID: <3556CAD6-0DFE-4F6A-B897-0C4D59ACAF2E@me.com> I agree that one lacks true understanding of operating systems until one codes a process switch. My first was in 1979 on a home brew 6800 (not 68k).
It was made easier by the fact that the 6800 saved all 64 bits of registers on each interrupt. All that was necessary was to wire a timer interrupt and change the value of SP in the handler. Brantley Coile Sent from my iPad > On Jan 15, 2017, at 10:15 PM, Larry McVoy wrote: > > Yeah, saw it. I'm of the opinion that you aren't really truly an OS > person unless you've written a context switcher. I wrote one for a > user level threading package I did for Udi Manber as a grad student. > I did most of the work in C and then dropped to assembler for the > trampoline. > > It's really not that complicated, I think people make it out to be > a bigger deal than it is. You're saving state (registers), switching > stacks, and changing the return address so you return in the new > process. > > Well, not that complicated on a simple machine like a VAX or a 68K > or a PDP11. I sort of stopped playing in assembler when super scalar > out of order stuff came around and I couldn't get the mental picture > of what was where. > >> On Mon, Jan 16, 2017 at 11:44:44AM +1000, Warren Toomey wrote: >> http://thenewstack.io/not-expected-understand-explainer/ >> >> in case you haven't seen it yet. >> >> Cheers, Warren > > -- > --- > Larry McVoy lm at mcvoy.com http://www.mcvoy.com/lm From bqt at update.uu.se Mon Jan 16 20:21:41 2017 From: bqt at update.uu.se (Johnny Billquist) Date: Mon, 16 Jan 2017 11:21:41 +0100 Subject: [TUHS] Early Internet work (Was: History of select(2)) In-Reply-To: References: Message-ID: <7d72b495-6d6a-661a-9625-674bd79c672d@update.uu.se> On 2017-01-16 03:00, jnc at mercury.lcs.mit.edu (Noel Chiappa) wrote: > > From: Johnny Billquist > > > Like I pointed out, RFC760 lacks ICMP. > > So? TCP will work without ICMP. True. However, IP and UDP will have issues. > > Which also makes one question how anyone would have known about IPv4 in > > 1978. > > Well, I can assure you that _I_ knew about it in 1978!
(The decision on the v4 > packet formats was taken in the 5th floor conference room at 545 Tech Sq, > about 10 doors down from my office!) > > But everyone working on TCP/IP heard about Version 4 shortly after the June, > 1978 meeting. Over a year before any documents said anything about it. This is where I have problems. :-) > > Also, first definition of TCP shows up in RFC 761 > > If you're speaking of TCPv4 (one needs to be precise - there were also of > course TCP's 1, 2, 2.5 and 3, going back to 1974), please see IEN-44. (Ignore > IEN's -40 and -41; those were proposals for v4 that got left by the wayside.) That is a very good point. I've been talking v4 all the time (both for IP and TCP). Like I said, I'm sure people were doing networking protocols and stuff earlier, but it wasn't the TCP/IP we know and talk about today, and you just reaffirmed this. And yes, the TCP/IP we know today did not come out of a blue sky. Of course it is based on earlier work. (Just so you don't have to go on about that again.) > > So yes, I still have problems with claims that they had it all running > > in 1978. > > I never said we had it "all" running in 1978 - and I explicitly referenced > areas (congestion, addressing/routing) we were still working on over 10 years > later. > > But there were working implementations (as in, they could exchange data with > other implementations) of TCP/IPv4 by January 1979 - see IEN 77. But not TCP4 then. And thus, not interoperable with an implementation today, and interoperable in general being a rather floating and moving target, as you had several incompatible TCP versions, using different protocol numbers, and several incompatible IP versions. > (I'll never forget that weekend - we were in at ISI on Saturday, when it was > normally closed, and IIRC we couldn't figure out how to turn the hallway > lights on, so people were going from office to office in the gloom...) Fun times, I bet.
Johnny -- Johnny Billquist || "I'm on a bus || on a psychedelic trip email: bqt at softjar.se || Reading murder books pdp is alive! || tryin' to stay hip" - B. Idol From pnr at planet.nl Mon Jan 16 20:31:32 2017 From: pnr at planet.nl (Paul Ruizendaal) Date: Mon, 16 Jan 2017 11:31:32 +0100 Subject: [TUHS] Early Internet work (Was: History of select(2)) In-Reply-To: <20170116010101.8DF6F18C083@mercury.lcs.mit.edu> References: <20170116010101.8DF6F18C083@mercury.lcs.mit.edu> Message-ID: On 16 Jan 2017, at 2:01 , Noel Chiappa wrote: > >> From: Paul Ruizendaal > >> I guess by April 1981 (RFC777) we reach a point where things are >> specified to a level where implementations would interoperate with >> today's implementations. > > Yes and no. Earlier boxes would interoperate, _if addresses on each end were > chosen properly_. Modern handling of addresses on hosts (for the 'is this > destination on my physical network' step of the packet-sending algorithm) did > not come in until RFC-1122 (October 1989); prior to that, lots of host code > probably tried to figure out if the destination was class A, B or C, etc, etc. This is true of the Gurwitz implementation. The Wingfield implementation still uses the older form, where the first 8 bits signify the network and the remaining 24 bits the host address on that network. In terms of routing my view would be to keep it simple: traffic is either local or destined for the single interface / gateway (see below). Interop hence is just looking at TCP. > > Also, until RFC-826 (ARP, November 1982) pretty much all the network > interfaces (and thus the code to turn the 'destination IP address' into an > attached physical network address, for the first hop) were things like ARPANet > that no longer exist, so you couldn't _actually_ fire up one of them unless you > do something like the 'ARPANet emulation' that the various PDP-10 simulators > use to allow old OS's running on them to talk to the current Internet.
Yes: all these old implementations have an IMP interface driver at their lowest level. What I'm doing for testing is replacing that by a SLIP driver so that I can hook up to today's network and see if it works. Paul From dot at dotat.at Mon Jan 16 21:19:00 2017 From: dot at dotat.at (Tony Finch) Date: Mon, 16 Jan 2017 11:19:00 +0000 Subject: [TUHS] Article on 'not meant to understand this' In-Reply-To: <20170116014444.GA32261@minnie.tuhs.org> References: <20170116014444.GA32261@minnie.tuhs.org> Message-ID: Warren Toomey wrote: > http://thenewstack.io/not-expected-understand-explainer/ Rob Pike observed on Twitter: https://twitter.com/rob_pike/status/820777895689732096 > The article misses an important fact: A few years later we understood it > well and could do it much simpler. https://twitter.com/rob_pike/status/820777988924981254 > We are always learning, and that comment was as much a note to the > author as to the reader. Now, stack switching is almost trivial. https://twitter.com/rob_pike/status/820778110253613056 > A similar thing happened a generation earlier figuring out subroutine > (function) calls. Whole books were written. Now it's an instruction. Tony. -- f.anthony.n.finch http://dotat.at/ - I xn--zr8h punycode South Utsire, Forties, Cromarty, Forth, Tyne, Dogger: Variable 4, becoming southerly or southwesterly 4 or 5, occasionally 6 in South Utsire and Forties. Slight or moderate, occasionally rough in north Forties. Occasional rain or drizzle. Good, occasionally poor. 
From aap at papnet.eu Mon Jan 16 21:26:16 2017 From: aap at papnet.eu (Angelo Papenhoff) Date: Mon, 16 Jan 2017 12:26:16 +0100 Subject: [TUHS] Article on 'not meant to understand this' In-Reply-To: References: <20170116014444.GA32261@minnie.tuhs.org> Message-ID: <20170116112616.GA40162@indra.papnet.eu> On 16/01/17, Tony Finch wrote: > Warren Toomey wrote: > > > http://thenewstack.io/not-expected-understand-explainer/ > > Rob Pike observed on Twitter: > > https://twitter.com/rob_pike/status/820777895689732096 > > > The article misses an important fact: A few years later we understood it > > well and could do it much simpler. > > https://twitter.com/rob_pike/status/820777988924981254 > > > We are always learning, and that comment was as much a note to the > > author as to the reader. Now, stack switching is almost trivial. > > https://twitter.com/rob_pike/status/820778110253613056 > > > A similar thing happened a generation earlier figuring out subroutine > > (function) calls. Whole books were written. Now it's an instruction. The author also actually missed the part that we're not supposed to understand. I'll just paste what I posted on hackernews: I don't think it's the context switching in general that we're not expected to understand, since it's pretty much the same in v7 but the comment is gone. Dmr actually explained the problem on his website (https://www.bell-labs.com/usr/dmr/www/odd.html). savu is used to save the current call stack, retu is used to switch to a saved call stack. The problem is that the function which did the savu was not necessarily the same as the function that does the retu, so after retu the function could have the call stack of a different function. As dmr explained, this worked with the PDP-11 compiler but not with the interdata compiler. In V7 the stack switching was moved into separate functions, save and resume. 
save returned 0 and resume returned 1 so that an if statement could be used to check if the return from save was actually that of resume after the stack switch (the same trick as that of fork). This way the code that was to be executed after a stack switch was in the same function and stack frame as the one that did the save (as opposed to swtch). Note that Lions doesn't explain this either, he assumed that the difficulty was with u_rsav and u_ssav, but those are still in v7 with the comment gone (he probably wasn't that wrong though, it really is confusing, but it's just not what the comment refers to) aap From dot at dotat.at Mon Jan 16 22:07:06 2017 From: dot at dotat.at (Tony Finch) Date: Mon, 16 Jan 2017 12:07:06 +0000 Subject: [TUHS] Early Internet work (Was: History of select(2)) In-Reply-To: <20170116010101.8DF6F18C083@mercury.lcs.mit.edu> References: <20170116010101.8DF6F18C083@mercury.lcs.mit.edu> Message-ID: Noel Chiappa wrote: > > Modern handling of addresses on hosts (for the 'is this destination on > my physical network' step of the packet-sending algorithm) did not come > in until RFC-1122 (October 1989); prior to that, lots of host code > probably tried to figure out if the destination was class A, B or C, > etc, etc. AIUI there were two major revisions to the IPv4 addressing architecture: subnetting (RFC 917, October 1984 ... RFC 950, August 1985), and classless routing (RFC 1519, September 1993) which was originally called supernetting (RFC 1338, June 1992). RFC 1122 consolidated all the implementation requirements in one place, and said RFC 950 subnetting was mandatory. Tony. -- f.anthony.n.finch http://dotat.at/ - I xn--zr8h punycode Bailey, Fair Isle, Faeroes: Southwest 5 or 6, occasionally 7 later in Bailey, becoming variable 4 at times. Moderate or rough in Fair Isle, otherwise rough or very rough. Rain or drizzle, fog patches. Moderate or good, occasionally very poor.
From jnc at mercury.lcs.mit.edu Tue Jan 17 00:42:19 2017 From: jnc at mercury.lcs.mit.edu (Noel Chiappa) Date: Mon, 16 Jan 2017 09:42:19 -0500 (EST) Subject: [TUHS] Early Internet work (Was: History of select(2)) Message-ID: <20170116144219.B075B18C085@mercury.lcs.mit.edu> > From: Johnny Billquist >> everyone working on TCP/IP heard about Version 4 shortly after the >> June, 1978 meeting. > Over a year before any documents said anything about it. Incorrect. It's documented in IEN-44, June 1978 (written shortly after the meeting, in the same month). > I'm sure people were doing networking protocols and stuff earlier, but > it wasn't the TCP/IP we know and talk about today People were working on Unix in 1977, but it's not the same Unix we know and talk about today. Does that mean it's not Unix they were working on? >> there were working implementations (as in, they could exchange data with >> other implementations) of TCP/IPv4 by January 1979 - see IEN 77. ^^ > But not TCP4 then. I just specified that it was v4 (see above). > thus, not interoperable with an implementation today No, with properly-chosen addresses (because of the changes in address handling), they probably would be. Noel From jnc at mercury.lcs.mit.edu Tue Jan 17 01:17:45 2017 From: jnc at mercury.lcs.mit.edu (Noel Chiappa) Date: Mon, 16 Jan 2017 10:17:45 -0500 (EST) Subject: [TUHS] Early Internet work (Was: History of select(2)) Message-ID: <20170116151745.1B12018C085@mercury.lcs.mit.edu> > From: Tony Finch This is getting a bit far afield from Unix, so my apologies to the list for that. But to avoid dumping it in the Internet-History list abruptly, let me answer here _briefly_ (believe it or not, the below _is_ brief). > AIUI there were two major revisions to the IPv4 addressing architecture: Not quite (see below). First, one needs to understand that there are two different timelines for changes to addressing: in the hosts, and in the routers (called 'gateways' originally). 
To start with, they were tied together, but as of RFC-1122, they were formally separated: hosts no longer fully understood the syntax/semantics of addresses, just (mostly) treated them as opaque 32-bit quantities. > subnetting (RFC 917, October 1984 ... RFC 950, August 1985), and > classless routing (RFC 1519, September 1993) Originally, network numbers were 8 bits, and the 'rest' (local) part was 24. Mapping from IP addresses to physical network addresses was done with direct mapping - ARP did not exist - the actual local address (e.g. IMP/Port) was contained in the 'rest' field - each network had a document which specified the mapping. (Which is part of the interoperability issue with old implementations.) At some point early on, it was realized that 8 bits of network number were not enough, and the awful A/B/C kludge was added (it was dropped on the community, not discussed before-hand). Subnetting was indeed the next change. Then the host/router split happened. Classless routing (which effectively extended addresses, for path-computation purposes, to 32+N bits - since you couldn't look at a 32-bit IP address and immediately tell which was the 'network' part any more, you _had_ to have the mask as well, to tell you how many bits of any given address were the network number) was more of a process than a single change - the inter-AS routing (BGP) had to change, but so did IGP's (OSPF, IS-IS), etc, etc. > originally called supernetting (RFC 1338, June 1992). There was this effort called ROAD which produced RFC-1338 and 1519, and IIRC there was an intermediate, involving blocks of network numbers (1338), and that slowly evolved into arbitrary blocks (1519). One should also note that the term "super-netting" comes from a proposal by Carl-Hubert ("Roki") Rokitansky which did not, alas, make it to RFC. (His purpose was different, but it used the same mechanism.) Alas, the authors of 1338/1519 failed to properly acknowledge his earlier work.
Noel From random832 at fastmail.com Tue Jan 17 01:35:19 2017 From: random832 at fastmail.com (Random832) Date: Mon, 16 Jan 2017 10:35:19 -0500 Subject: [TUHS] Article on 'not meant to understand this' In-Reply-To: <20170116112616.GA40162@indra.papnet.eu> References: <20170116014444.GA32261@minnie.tuhs.org> <20170116112616.GA40162@indra.papnet.eu> Message-ID: <1484580919.560182.849267968.097C1979@webmail.messagingengine.com> On Mon, Jan 16, 2017, at 06:26, Angelo Papenhoff wrote: > I don't think it's the context switching in general that we're not > expected to understand, since it's pretty much the same in v7 but the > comment is gone. Dmr actually explained the problem on his website > (https://www.bell-labs.com/usr/dmr/www/odd.html). savu is used to save > the current call stack, retu is used to switch to a saved call stack. > The problem is that the function which did the savu was not necessarily > the same as the function that does the retu, so after retu the function > could have the call stack of a different function. As dmr explained, > this worked with the PDP-11 compiler but not with the interdata > compiler. In V7 the stack switching was moved into separate functions, > save and resume. save returned 0 and resume returned 1 so that an if > statement could be used to check if the return from save was actually > that of resume after the stack switch (the same trick as that of fork). > This way the code that was to be executed after a stack switch was in > the same function and stack frame as the one that did the save (as > opposed to swtch). My impression was that the 'magic' part was that it relied on both processes' stacks containing registers saved by the C function prologue routine [csv, the kernel version can be found in conf/m*.s], and that the return statement in swtch [which calls cret] restores those registers.
Ritchie alludes to this with "This worked on the PDP-11 because its compiler always used the same context-save mechanism; with the Interdata compiler, the procedure return code differed depending on which registers were saved. ", and, well, there's a reason the FreeBSD cpu_switch function the article mentions is written in assembly rather than C. > Note that Lions doesn't explain this either, he assumed that the > difficulty was with u_rsav and u_ssav, but those are still in v7 > with the comment gone (he probably wasn't that wrong though, it really > is confusing, but it's just not what the comment refers to) > > aap From doug at cs.dartmouth.edu Tue Jan 17 02:00:00 2017 From: doug at cs.dartmouth.edu (Doug McIlroy) Date: Mon, 16 Jan 2017 11:00:00 -0500 Subject: [TUHS] TUHS Digest, Vol 14, Issue 63 In-Reply-To: References: Message-ID: <201701161600.v0GG00XA080461@tahoe.cs.Dartmouth.EDU> > One thing that I'm unclear about is why all this Arpanet work was not filtering more into the versions of Unix done at Bell Labs. The short answer is that Bell Labs was not on Arpanet. In the early 80s the interim CSNET gave us a dial-up window into Arpanet, which primarily served as a conduit for email. When real internet connection became possible, network code from Berkeley was folded into the research kernel. (I am tempted to say "engulfed the research kernel", for this was a huge addition.) The highest levels of AT&T were happy to carry digital data, but did not see digital as significant business. Even though digital T1 was the backbone of long-distance transmission, it was IBM, not AT&T, that offered direct digital interfaces to T1 in the 60s. When Arpanet came along MCI was far more eager to carry its data than AT&T was. It was all very well for Sandy Fraser to build experimental data networks in the lab, but this was seen as a niche market. AT&T devoted more effort to specialized applications like hotel PBXs than to digital communication per se.
Doug From rochkind at basepath.com Tue Jan 17 02:22:09 2017 From: rochkind at basepath.com (Marc Rochkind) Date: Mon, 16 Jan 2017 09:22:09 -0700 Subject: [TUHS] TUHS Digest, Vol 14, Issue 63 In-Reply-To: <201701161600.v0GG00XA080461@tahoe.cs.Dartmouth.EDU> References: <201701161600.v0GG00XA080461@tahoe.cs.Dartmouth.EDU> Message-ID: Thanks for this, Doug. When I started at Bell Labs, in the Summer of 1970, my organization was involved in what I think was called the Digital Data System. I recall that it was synchronous, meaning, I think, that there were clocks that timed everything on the network. Where does that fit into your story? --Marc On Mon, Jan 16, 2017 at 9:00 AM, Doug McIlroy wrote: > > One thing that I'm unclear about is why all this Arpanet work was not > filtering more into the versions of Unix done at Bell Labs. > > The short answer is that Bell Lbs was not on Arpanet. In the early > 80s the interim CSNET gave us a dial-up window into Arpanet, which > primarily served as a conduit for email. When real internet connection > became possible, network code from Berkeley was folded into the > research kernel. (I am tempted to say "engulfed the research kernel", > for this was a huge addition.) > > The highest levels of AT&T were happy to carry digital data, but > did not see digital as significant business. Even though digital T1 > was the backbone of long-distance transmission, it was IBM, not > AT&T, that offered direct digital interfaces to T1 in the 60s. > > When Arpanet came along MCI was far more eager to carry its data > than AT&T was. It was all very well for Sandy Fraser to build > experimental data networks in the lab, but this was seen as a > niche market. AT&T devoted more effort to specialized applications > like hotel PBXs than to digital communication per se. > > Doug > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From lm at mcvoy.com Tue Jan 17 02:44:21 2017 From: lm at mcvoy.com (Larry McVoy) Date: Mon, 16 Jan 2017 08:44:21 -0800 Subject: [TUHS] TUHS Digest, Vol 14, Issue 63 In-Reply-To: <201701161600.v0GG00XA080461@tahoe.cs.Dartmouth.EDU> References: <201701161600.v0GG00XA080461@tahoe.cs.Dartmouth.EDU> Message-ID: <20170116164421.GJ6647@mcvoy.com> On Mon, Jan 16, 2017 at 11:00:00AM -0500, Doug McIlroy wrote: > The highest levels of AT&T were happy to carry digital data, but > did not see digital as significant business. Even though digital T1 > was the backbone of long-distance transmission, it was IBM, not > AT&T, that offered direct digital interfaces to T1 in the 60s. AT&T seemed pretty clueless about networking. I gave a short talk at Hot Interconnects in the heyday of ATM. Paul Borrill got me a speaking spot, I wasn't well known person but inside of Sun I had been railing against ATM and pushing for 100Mbit ethernet and Paul decided to see what the rest of the world thought. The gist of my talk was that ATM was a joke. I had an ATM card (on loan from Sun Networking), I think it was 155 Mbit card. I also had an ethernet card that I had bought at Fry's on my way to the talk. The ATM card cost $4000. The ethernet card cost $49 IIRC. The point I was making was that ATM was doomed. This was at the time in history when every company was making long bets on ATM, they all thought it was the future; well, all meaning the execs had been convinced. I held up the two cards, disclosed the cost, and said "this ATM card is always going to be expensive but the ethernet card is gonna be $10 in a year or two. Why? Volume. Every computer has ethernet, it's gonna do nothing but get cheaper. And you're gonna see ethernet over fiber, long haul, you're going to see 100 Mbit, gigabit ethernet, and it's going to be cheap. ATM is going nowhere." There was a shocked silence. Weirdest talk ever, the room just went silent for what seemed forever. 
Then someone, I'm sure it was an engineer who had been forced to work on ATM, started clapping. Just one guy. And then the whole room joined in. I took the silence as "yeah, but my boss says I have to" and the clapping as "we agree". At the time AT&T was the biggest pusher of ATM. Telephone switches were big and expensive and it was clear, to me at least, that AT&T looked at all those cheap ethernet switches and said "yeah, let's get the industry working on phone switching and we'll get cheap switches too". Nice idea, didn't work out. From rochkind at basepath.com Tue Jan 17 02:52:27 2017 From: rochkind at basepath.com (Marc Rochkind) Date: Mon, 16 Jan 2017 09:52:27 -0700 Subject: [TUHS] TUHS Digest, Vol 14, Issue 63 In-Reply-To: <20170116164421.GJ6647@mcvoy.com> References: <201701161600.v0GG00XA080461@tahoe.cs.Dartmouth.EDU> <20170116164421.GJ6647@mcvoy.com> Message-ID: If you think AT&T looked askance at cheap networking, you can imagine what they thought of cheap telephones. When I interviewed in early 1970 at Columbus, I recall one of the engineers joking that you'd have to buy one of those "imitation" phones at a discount store, as if that vision was enough to kill off the idea. On Mon, Jan 16, 2017 at 9:44 AM, Larry McVoy wrote: > On Mon, Jan 16, 2017 at 11:00:00AM -0500, Doug McIlroy wrote: > > The highest levels of AT&T were happy to carry digital data, but > > did not see digital as significant business. Even though digital T1 > > was the backbone of long-distance transmission, it was IBM, not > > AT&T, that offered direct digital interfaces to T1 in the 60s. > > AT&T seemed pretty clueless about networking. I gave a short talk at Hot > Interconnects in the heyday of ATM. Paul Borrill got me a speaking spot, > I wasn't well known person but inside of Sun I had been railing against > ATM and pushing for 100Mbit ethernet and Paul decided to see what the > rest of the world thought. > > The gist of my talk was that ATM was a joke. 
I had an ATM card (on loan > from Sun Networking), I think it was 155 Mbit card. I also had an > ethernet card that I had bought at Fry's on my way to the talk. > The ATM card cost $4000. The ethernet card cost $49 IIRC. > > The point I was making was that ATM was doomed. This was at the time in > history when every company was making long bets on ATM, they all thought > it was the future; well, all meaning the execs had been convinced. > > I held up the two cards, disclosed the cost, and said "this ATM card is > always going to be expensive but the ethernet card is gonna be $10 in > a year or two. Why? Volume. Every computer has ethernet, it's gonna > do nothing but get cheaper. And you're gonna see ethernet over fiber, > long haul, you're going to see 100 Mbit, gigabit ethernet, and it's > going to be cheap. ATM is going nowhere." > > There was a shocked silence. Weirdest talk ever, the room just went > silent for what seemed forever. Then someone, I'm sure it was an engineer > who had been forced to work on ATM, started clapping. Just one guy. > And then the whole room joined in. > > I took the silence as "yeah, but my boss says I have to" and the clapping > as "we agree". > > At the time AT&T was the biggest pusher of ATM. Telephone switches were > big and expensive and it was clear, to me at least, that AT&T looked at > all those cheap ethernet switches and said "yeah, let's get the industry > working on phone switching and we'll get cheap switches too". Nice idea, > didn't work out. > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From jnc at mercury.lcs.mit.edu Tue Jan 17 03:16:27 2017 From: jnc at mercury.lcs.mit.edu (Noel Chiappa) Date: Mon, 16 Jan 2017 12:16:27 -0500 (EST) Subject: [TUHS] Article on 'not meant to understand this' Message-ID: <20170116171627.45F3318C085@mercury.lcs.mit.edu> > From: Angelo Papenhoff > The problem is that the function which did the savu was not necessarily > the same as the function that does the retu, so after retu the function > could have the call stack of a different function. As dmr explained, > this worked with the PDP-11 compiler but not with the interdata > compiler. To put it slightly differently, in PDP-11 C all stack frames look identical, but this is not true of other machines/compilers. So if routine A called savu(), and routine B called aretu(), when the call to aretu() returned, procedure B is still running, but on procedure A's stack frame. So on machines where A's stack frame looks different from B's, hilarity ensues. (Note that aretu() was significantly different from retu() - the latter switched to a different process/stack, whereas aretu() did a 'non-local goto' [technically, switched to a different stack frame on the current stack] in the current process.) > Note that Lions doesn't explain this either, he assumed that the > difficulty was with with u_rsav and u_ssav .. (he probably wasn't that > wrong though, it really is confusing, but it's just not what the comment > refers to) Right. There are actually _three_ sets of saved stack info: int u_rsav[2]; /* save r5,r6 when exchanging stacks */ int u_qsav[2]; /* label variable for quits and interrupts */ int u_ssav[2]; /* label variable for swapping */ and it was the interaction among the three of them that I found very hard to understand - hence my (incorrect) memory that the 'you are not' comment actually referred to that, not the savu/aretu stuff! Calls to retu(), the primitive to switch stacks/processes, _always_ use rsav. The others are for 'non-local gotos' inside a process. 
Think of qsav as a poor man's exception handler for process software interrupts. When a process is sleeping on some event and it is interrupted, rather than the sleep() call returning, it wakes up returning from the procedure that did the savu(qsav). (That last is because sleep() - which is the procedure that's running when the call to aretu(qsav) returns - does a return immediately after restoring the stack to the frame saved in qsav.) And I've forgotten exactly how ssav worked - IIRC it was something to do with how when a process is swapped out, since that can happen in a number of ways/places, the stack can contain calls to various things like expand(), etc; when it's swapped back in, the simplest thing to do is to just throw that all away and have it go back to where it was just before it was decided to swap it out. Noel From rochkind at basepath.com Tue Jan 17 03:49:06 2017 From: rochkind at basepath.com (Marc Rochkind) Date: Mon, 16 Jan 2017 10:49:06 -0700 Subject: [TUHS] Article on 'not meant to understand this' In-Reply-To: <3556CAD6-0DFE-4F6A-B897-0C4D59ACAF2E@me.com> References: <20170116014444.GA32261@minnie.tuhs.org> <20170116031510.GB6647@mcvoy.com> <3556CAD6-0DFE-4F6A-B897-0C4D59ACAF2E@me.com> Message-ID: "... one lacks true understanding of operating systems until ..." With this as the standard, I have a false understanding of operating systems. So, I am ready for the post-truth society we are entering. Are you? ;-) On Mon, Jan 16, 2017 at 3:11 AM, Brantley Coile wrote: > I agree that one lacks true understanding of operating systems until one > codes a process switch. My first was in 1979 on a home brew 6800 (not > 68k). It was made easier by the fact that the 6800 saved all 64 bits of > registers on each interrupt. All that was necessary was to wire a timer > interrupt and change the value of SP in the handler. > > Brantley Coile > > > Sent from my iPad > > > On Jan 15, 2017, at 10:15 PM, Larry McVoy wrote: > > > > Yeah, saw it.
I'm of the opinion that you aren't really truly an OS > > person unless you've written a context switcher. I wrote one for a > > user level threading package I did for Udi Manber as a grad student. > > I did most of the work in C and then dropped to assembler for the > > trampoline. > > > > It's really not that complicated, I think people make it out to be > > a bigger deal than it is. You're saving state (registers), switching > > stacks, and changing the return address so you return in the new > > process. > > > > Well, not that complicated on a simple machine line a VAX or a 68K > > or a PDP11. I sort of stopped playing in assembler when super scalar > > out of order stuff came around and I couldn't get the mental picture > > of what was where. > > > >> On Mon, Jan 16, 2017 at 11:44:44AM +1000, Warren Toomey wrote: > >> http://thenewstack.io/not-expected-understand-explainer/ > >> > >> in case you haven't seen it yet. > >> > >> Cheers, Warren > > > > -- > > --- > > Larry McVoy lm at mcvoy.com > http://www.mcvoy.com/lm > -------------- next part -------------- An HTML attachment was scrubbed... URL: From ajv-ewherachem at vsta.org Tue Jan 17 04:11:07 2017 From: ajv-ewherachem at vsta.org (Andy Valencia) Date: Mon, 16 Jan 2017 18:11:07 +0000 (UTC) Subject: [TUHS] 2.11 pcc with ANSI? Message-ID: <20170116181107.B08D7400D8@vsta.org> I'm having a lot of fun with a virtual 11/94 and 2.11. What a lot of excellent engineering! It seems like an obvious project would be to adapt a newer pcc with ANSI C support of some sort. Has this already been done? I'll take a look if not. Thanks, Andy Valencia p.s. The "less" in /usr/local doesn't seem to handle stty based TTY geometry. I re-ported "less2" from comp.sources.unix and added this. Somebody ping me if the mildly edited sources are of interest. From scj at yaccman.com Tue Jan 17 04:52:04 2017 From: scj at yaccman.com (Steve Johnson) Date: Mon, 16 Jan 2017 10:52:04 -0800 Subject: [TUHS] 2.11 pcc with ANSI? 
In-Reply-To: <20170116181107.B08D7400D8@vsta.org> Message-ID: <57d7c379b914dd1b23590d7e114bf9a970662e93@webmail.yaccman.com> It's been done. Anders Magnusson has done quite a bit to port pcc and add both ANSI and selected GCC features. He's at ragge at ludd.ltu.se There is a newsgroup too: pcc-list at ludd.ltu.se As I recall, at one point he did a head-to-head with gcc. pcc produced 3% worse code and was 100x faster, I think, but that may just be my faulty memory... Steve ----- Original Message ----- From: "Andy Valencia" It seems like an obvious project would be to adapt a newer pcc with ANSI C support of some sort. Has this already been done? I'll take a look if not. Thanks, Andy Valencia -------------- next part -------------- An HTML attachment was scrubbed... URL: From scj at yaccman.com Tue Jan 17 05:10:37 2017 From: scj at yaccman.com (Steve Johnson) Date: Mon, 16 Jan 2017 11:10:37 -0800 Subject: [TUHS] Article on 'not meant to understand this' In-Reply-To: <3556CAD6-0DFE-4F6A-B897-0C4D59ACAF2E@me.com> Message-ID: I was well aware of the comment in V6, but had no idea what it referred to. When Dennis and I were porting what became V7 to the Interdata 8/32, we spent about 10 frustrating days dealing with savu and retu. Dennis did his most productive work between 10pm and 4am, while I kept more normal hours. We would pore over the crash dumps (in hex, then a new thing for us--PDP-11 was all octal, all the time). I'd tinker with the compiler, he'd tinker with the code and we would get it to limp, flap its wings, and then crash. The problem was that the Interdata had many more registers than the PDP-11, so the compiler only saved the register variables across a call, where the PDP-11 saved all the registers. This was just fine inside a process, but between processes it was deadly. After we had tried everything we could think of, Dennis concluded that the fundamental architecture was broken.
In a couple of days, he came up with the scheme that ended up in V7. It was only several years later when I saw a T-shirt with savu and retu on it along with the famous comment that I realized what it had referred to, and enjoyed the irony that we hadn't understood it either... Steve ----- Original Message ----- From: "Brantley Coile" To:"Larry McVoy" Cc: Sent:Mon, 16 Jan 2017 05:11:02 -0500 Subject:Re: [TUHS] Article on 'not meant to understand this' -------------- next part -------------- An HTML attachment was scrubbed... URL: From scj at yaccman.com Tue Jan 17 05:17:09 2017 From: scj at yaccman.com (Steve Johnson) Date: Mon, 16 Jan 2017 11:17:09 -0800 Subject: [TUHS] TUHS Digest, Vol 14, Issue 63 In-Reply-To: <20170116164421.GJ6647@mcvoy.com> Message-ID: This comment reminded me of an internal talk I attended at Bell Labs.  It had the single most powerful slide I've ever seen in a talk.  It was a talk about internal networking, and the slide looked like your standard network diagram -- lots of circles with lots of lines connecting them.  The computation centers were networked.  UUCP was on there, and datakit. But dead in the middle of the slide was a circle that had absolutely no connections with anything.  Of course, somebody asked about, and was told "Oh.  That's the networking department..." As I recall, said department ceased to exist about a month later... ----- Original Message ----- From: "Larry McVoy" . . . AT&T seemed pretty clueless about networking. . . . -------------- next part -------------- An HTML attachment was scrubbed... URL: From lm at mcvoy.com Tue Jan 17 05:21:12 2017 From: lm at mcvoy.com (Larry McVoy) Date: Mon, 16 Jan 2017 11:21:12 -0800 Subject: [TUHS] TUHS Digest, Vol 14, Issue 63 In-Reply-To: References: <20170116164421.GJ6647@mcvoy.com> Message-ID: <20170116192112.GR6647@mcvoy.com> It is pretty stunning that the company that had the largest network in the world (the phone system of course) didn't get packet switching at all. 
I dunno how Bell Labs was allowed to do all that great work with management that clueless, that's a minor (major?) miracle right there. On Mon, Jan 16, 2017 at 11:17:09AM -0800, Steve Johnson wrote: > This comment reminded me of an internal talk I attended at Bell > Labs. It had the single most powerful slide I've ever seen in a > talk. It was a talk about internal networking, and the slide looked > like your standard network diagram -- lots of circles with lots of > lines connecting them. The computation centers were networked. > UUCP was on there, and datakit. > > But dead in the middle of the slide was a circle that had absolutely > no connections with anything. Of course, somebody asked about, and > was told "Oh. That's the networking department..." > > As I recall, said department ceased to exist about a month later... > > ----- Original Message ----- > From: "Larry McVoy" > > . . . > AT&T seemed pretty clueless about networking. > . . . > -- --- Larry McVoy lm at mcvoy.com http://www.mcvoy.com/lm From jnc at mercury.lcs.mit.edu Tue Jan 17 05:46:27 2017 From: jnc at mercury.lcs.mit.edu (Noel Chiappa) Date: Mon, 16 Jan 2017 14:46:27 -0500 (EST) Subject: [TUHS] TUHS Digest, Vol 14, Issue 63 Message-ID: <20170116194627.0FF8B18C085@mercury.lcs.mit.edu> > From: Larry McVoy > It is pretty stunning that the company that had the largest network in > the world (the phone system of course) didn't get packet switching at > all. Actually, it's quite logical - and in fact, the lack of 'getting it' about packets follows directly from the former (their large existing circuit switch network). This dates back to Baran (see his oral history: https://conservancy.umn.edu/handle/11299/107101 pg. 19 and on), but it was still detectable almost two decades later.
For a variety of all-too-human reasons (of the flavour of 'we're the networking experts, what do you know'; 'we know all about circuit networks, this packet stuff is too different'; 'we don't want to obsolete our giant investment', etc, etc), along with genuine concerns about some real issues of packet switching (e.g. the congestion stuff, and how well the system handled load and overload), packet switching just was a bridge too far from what they already had. Think IBM and timesharing versus batch and mainframe versus small computers. Noel From ken at google.com Tue Jan 17 05:57:08 2017 From: ken at google.com (Ken Thompson) Date: Mon, 16 Jan 2017 11:57:08 -0800 Subject: [TUHS] TUHS Digest, Vol 14, Issue 63 In-Reply-To: <20170116192112.GR6647@mcvoy.com> References: <20170116164421.GJ6647@mcvoy.com> <20170116192112.GR6647@mcvoy.com> Message-ID: note: this is my partisan recollection. a network proposal would arise from the previous BIG network failure. it would have a name like super-colossal-inter-galactic-hyperbolic-better-than-last-time network. it would have merits that spoke to the failure of the last attempt. then some marketeers (all sun-tanned ex-IBM executives, i do remember one in particular -- roger moody) would try to get bell to warp the engineering so that they had an advantage from the inside on content. after all, money was to be made on services, not transportation. this would make the engineering teeter and after more and more "requirements" it would eventually fall. and then we start over with "stupendous" added to the new project name. thus, it is my opinion that it was totally impossible to make a network with engineers directed by marketeers. there are several things that should also be understood. the bell vs ibm rivalry; the new competition on phones; the desire to be in services and not equipment.
On Mon, Jan 16, 2017 at 11:21 AM, Larry McVoy wrote: > It is pretty stunning that the company that had the largest network > in the world (the phone system of course) didn't get packet switching > at all. I dunno how Bell Labs was allowed to do all that great work > with management that clueless, that's a minor (major?) miracle right > there. > > On Mon, Jan 16, 2017 at 11:17:09AM -0800, Steve Johnson wrote: >> This comment reminded me of an internal talk I attended at Bell >> Labs. It had the single most powerful slide I've ever seen in a >> talk. It was a talk about internal networking, and the slide looked >> like your standard network diagram -- lots of circles with lots of >> lines connecting them. The computation centers were networked. >> UUCP was on there, and datakit. >> >> But dead in the middle of the slide was a circle that had absolutely >> no connections with anything. Of course, somebody asked about, and >> was told "Oh. That's the networking department..." >> >> As I recall, said department ceased to exist about a month later... >> >> ----- Original Message ----- >> From: "Larry McVoy" >> >> . . . >> AT&T seemed pretty clueless about networking. >> . . . >> > > -- > --- > Larry McVoy lm at mcvoy.com http://www.mcvoy.com/lm From wkt at tuhs.org Tue Jan 17 06:15:43 2017 From: wkt at tuhs.org (Warren Toomey) Date: Tue, 17 Jan 2017 06:15:43 +1000 Subject: [TUHS] PDP-11, Unix, octal? In-Reply-To: References: Message-ID: <20170116201543.GA16532@minnie.tuhs.org> On Mon, Jan 16, 2017 at 11:10:37AM -0800, Steve Johnson wrote: > We would pore over the crash dumps (in hex, then a > new thing for us -- PDP-11 was all octal, all the time). Something I've been meaning to ask for a while: why Unix and octal on the PDP-11? Because of the DEC documentation? I understand why other DEC architectures (e.g. PDP-7) were octal: 18b is a multiple of 3. But PDP-11 is 16b, multiple of 4.
After all, Unix had its own assembler, so was there a need/reason to use octal? Cheers, Warren From wkt at tuhs.org Tue Jan 17 06:22:46 2017 From: wkt at tuhs.org (Warren Toomey) Date: Tue, 17 Jan 2017 06:22:46 +1000 Subject: [TUHS] 2.11 pcc with ANSI? In-Reply-To: <57d7c379b914dd1b23590d7e114bf9a970662e93@webmail.yaccman.com> References: <20170116181107.B08D7400D8@vsta.org> <57d7c379b914dd1b23590d7e114bf9a970662e93@webmail.yaccman.com> Message-ID: <20170116202246.GB16532@minnie.tuhs.org> On Mon, Jan 16, 2017 at 10:52:04AM -0800, Steve Johnson wrote: > It's been done. Anders Magnusson has done quite a bit to port pcc and > add both ANSI and selected GCC features. A quick google search reveals: http://pcc.ludd.ltu.se/ and https://www.openbsd.org/papers/magnusson_pcc.pdf which is a set of slides on the work done. I've been itching to rewrite PDP-7 Unix in a higher level language. I've already designed the language and written a compiler for the PDP-7 (see https://github.com/DoctorWkt/h-compiler), but the generated code sucks. Maybe I should try to retarget PCC, it would be slightly ironic :-) Cheers, Warren From fair-tuhs at netbsd.org Tue Jan 17 06:31:19 2017 From: fair-tuhs at netbsd.org (Erik E. Fair) Date: Mon, 16 Jan 2017 12:31:19 -0800 Subject: [TUHS] PDP-11, Unix, octal? In-Reply-To: <20170116201543.GA16532@minnie.tuhs.org> References: Message-ID: <6735.1484598679@cesium.clock.org> When I learned DG NOVA assembler in the mid-1970s, octal was it - it was everywhere. I didn't see hexadecimal notation until 8-bit microcomputers started using it in the late 1970s and early 1980s. Just a change in culture, I'd suppose. Hex fits neatly into a byte, and we don't seem to see computers with word sizes that aren't a multiple of 8 any more. Erik From jnc at mercury.lcs.mit.edu Tue Jan 17 06:45:08 2017 From: jnc at mercury.lcs.mit.edu (Noel Chiappa) Date: Mon, 16 Jan 2017 15:45:08 -0500 (EST) Subject: [TUHS] PDP-11, Unix, octal? 
Message-ID: <20170116204508.CFD1918C085@mercury.lcs.mit.edu> > From: Warren Toomey > Something I've been meaning to ask for a while: why Unix and octal on > the PDP-11? Because of the DEC documentation? Yeah, DEC did it all in octal. > I understand why other DEC architectures (e.g. PDP-7) were octal: 18b > is a multiple of 3. But PDP-11 is 16b, multiple of 4. Look at PDP-11 machine code. Two-op instructions look like this (bit-wise): oooossssssdddddd where 'ssssss' and 'dddddd' (source and destination) have the same format: mmmrrr where 'mmm' is the mode (things like R, @Rn, etc) and 'rrr' is the register number. All on octal boundaries. So if you see '010011' in a dump (or when looking at memory through the front console switches :-), you know immediately that means: MOV R0, @R1 Much harder in hex... :-) Noel From lars at nocrew.org Tue Jan 17 06:25:16 2017 From: lars at nocrew.org (Lars Brinkhoff) Date: Mon, 16 Jan 2017 21:25:16 +0100 Subject: [TUHS] PDP-11, Unix, octal? In-Reply-To: <20170116201543.GA16532@minnie.tuhs.org> (Warren Toomey's message of "Tue, 17 Jan 2017 06:15:43 +1000") References: <20170116201543.GA16532@minnie.tuhs.org> Message-ID: <868tqau503.fsf@molnjunk.nocrew.org> Warren Toomey wrote: > I understand why other DEC architectures (e.g. PDP-7) were octal: 18b > is a multiple of 3. But PDP-11 is 16b, multiple of 4. After all, Unix > had its own assembler, so was there a need/reason to use octal? Octal is a natural fit for the instruction set encoding. From rminnich at gmail.com Tue Jan 17 08:31:16 2017 From: rminnich at gmail.com (ron minnich) Date: Mon, 16 Jan 2017 22:31:16 +0000 Subject: [TUHS] PDP-11, Unix, octal? In-Reply-To: <868tqau503.fsf@molnjunk.nocrew.org> References: <20170116201543.GA16532@minnie.tuhs.org> <868tqau503.fsf@molnjunk.nocrew.org> Message-ID: octal was also a good fit for a lot of the other dec systems of the time, notably the 8 and the 10. 
I actually found octal to be a pain in the neck on the -11: values and addresses were 377, or 177777 and 377777 and ... bleah. I was glad when hex came along. On Mon, Jan 16, 2017 at 1:04 PM Lars Brinkhoff wrote: > Warren Toomey wrote: > > I understand why other DEC architectures (e.g. PDP-7) were octal: 18b > > is a multiple of 3. But PDP-11 is 16b, multiple of 4. After all, Unix > > had its own assembler, so was there a need/reason to use octal? > > Octal is a natural fit for the instruction set encoding. > -------------- next part -------------- An HTML attachment was scrubbed... URL: From tfb at tfeb.org Tue Jan 17 09:41:16 2017 From: tfb at tfeb.org (Tim Bradshaw) Date: Mon, 16 Jan 2017 23:41:16 +0000 Subject: [TUHS] TUHS Digest, Vol 14, Issue 63 In-Reply-To: <20170116164421.GJ6647@mcvoy.com> References: <201701161600.v0GG00XA080461@tahoe.cs.Dartmouth.EDU> <20170116164421.GJ6647@mcvoy.com> Message-ID: Less than ten years ago I wrote a big rant at people where I worked about fibre channel: all our machines had two entirely different networks attached to them: one built on ethernet which was at that point all Gb on new machines and 10Gb on some (I don't think that 10Gb switches were really available yet though) & where you could stuff a machine with interfaces for the cost of a good meal, and where everything just talked to everything else ... and one built on fibre channel which might have been 2Gb, where an interface cost as much as a car, and where interoperability involved weeks pissing around with firmware in the cards, and sometimes just buying new ones. Fibre channel was just laughably worse than ethernet. No one listened, of course, because my political skills are akin to those of a goat, and fibre channel is *storage* which is completely different than networking, somehow. Perhaps people still use fibre channel. 
> On 16 Jan 2017, at 16:44, Larry McVoy wrote: > > I held up the two cards, disclosed the cost, and said "this ATM card is > always going to be expensive but the ethernet card is gonna be $10 in > a year or two. Why? Volume. Every computer has ethernet, it's gonna > do nothing but get cheaper. And you're gonna see ethernet over fiber, > long haul, you're going to see 100 Mbit, gigabit ethernet, and it's > going to be cheap. ATM is going nowhere." From brantleycoile at me.com Tue Jan 17 09:45:52 2017 From: brantleycoile at me.com (Brantley Coile) Date: Mon, 16 Jan 2017 18:45:52 -0500 Subject: [TUHS] TUHS Digest, Vol 14, Issue 63 In-Reply-To: References: <201701161600.v0GG00XA080461@tahoe.cs.Dartmouth.EDU> <20170116164421.GJ6647@mcvoy.com> Message-ID: Beware of SCSI folks who think they can design data network protocols. Brantley coraid.com > On Jan 16, 2017, at 6:41 PM, Tim Bradshaw wrote: > > Less than ten years ago I wrote a big rant at people where I worked about fibre channel: all our machines had two entirely different networks attached to them: one built on ethernet which was at that point all Gb on new machines and 10Gb on some (I don't think that 10Gb switches were really available yet though) & where you could stuff a machine with interfaces for the cost of a good meal, and where everything just talked to everything else ... and one built on fibre channel which might have been 2Gb, where an interface cost as much as a car, and where interoperability involved weeks pissing around with firmware in the cards, and sometimes just buying new ones. Fibre channel was just laughably worse than ethernet. > > No one listened, of course, because my political skills are akin to those of a goat, and fibre channel is *storage* which is completely different than networking, somehow. > > Perhaps people still use fibre channel. 
> >> On 16 Jan 2017, at 16:44, Larry McVoy wrote: >> >> I held up the two cards, disclosed the cost, and said "this ATM card is >> always going to be expensive but the ethernet card is gonna be $10 in >> a year or two. Why? Volume. Every computer has ethernet, it's gonna >> do nothing but get cheaper. And you're gonna see ethernet over fiber, >> long haul, you're going to see 100 Mbit, gigabit ethernet, and it's >> going to be cheap. ATM is going nowhere." > From brad at anduin.eldar.org Tue Jan 17 10:30:29 2017 From: brad at anduin.eldar.org (Brad Spencer) Date: Mon, 16 Jan 2017 19:30:29 -0500 Subject: [TUHS] TUHS Digest, Vol 14, Issue 63 In-Reply-To: <20170116194627.0FF8B18C085@mercury.lcs.mit.edu> (jnc@mercury.lcs.mit.edu) Message-ID: jnc at mercury.lcs.mit.edu (Noel Chiappa) writes: > > From: Larry McVoy > > > It is pretty stunning that the company that had the largest network in > > the world (the phone system of course) didn't get packet switching at > > all. > > Actually, it's quite logical - and in fact, the lack of 'getting it' about > packets follows directly from the former (their large existing circuit switch > network). > > This dates back to Baran (see his oral history: > > https://conservancy.umn.edu/handle/11299/107101 > > pg. 19 and on), but it was still detectable almost two decades later. I was at AT&T much later than most who have commented, in 1992+ and I am pretty sure that a lot of people at that time who had been at AT&T a while STILL did not get packet networks.
I can't fully explain it, but "a bridge too far" does describe it well. Everything had to be a circuit and if it wasn't, well, it was viewed with a great deal of suspicion. I worked with a lot of very smart and talented folks, but this was a real blind spot. > Think IBM and timesharing versus batch and mainframe versus small computers. > > Noel -- Brad Spencer - brad at anduin.eldar.org - KC8VKS http://anduin.eldar.org - & - http://anduin.ipv6.eldar.org [IPv6 only] From scj at yaccman.com Tue Jan 17 11:09:40 2017 From: scj at yaccman.com (Steve Johnson) Date: Mon, 16 Jan 2017 17:09:40 -0800 Subject: [TUHS] PDP-11, Unix, octal? In-Reply-To: <20170116201543.GA16532@minnie.tuhs.org> Message-ID: The mainframes of the 60's and 70's all used 6-bit characters (and often different encodings for the non alphanumeric characters).  So the Unix folks, including me, had experience with octal long before DEC. Steve ----- Original Message ----- From: "Warren Toomey" Something I've been meaning to ask for a while: why Unix and octal on the PDP-11? Because of the DEC documentation? -------------- next part -------------- An HTML attachment was scrubbed... URL: From pechter at gmail.com Tue Jan 17 11:33:52 2017 From: pechter at gmail.com (William Pechter) Date: Mon, 16 Jan 2017 20:33:52 -0500 Subject: [TUHS] PDP-11, Unix, octal? In-Reply-To: References: Message-ID: And DEC used octal from the 1959 PDP1 through the VAX which was announced in October 1977. The PDP11 register front panel was painted to make front panel octal programming easier - breaking out the three bit patterns. Octal was deep in the DEC history and front panel flipping fingers. I'll never forget 014747 (single instruction memory decrement test (move the pc to address pc - 2... Bill Steve Johnson wrote: > The mainframes of the 60's and 70's all used 6-bit characters (and > often different encodings for the non alphanumeric characters). So the > Unix folks, including me, had experience with octal long before DEC.
> > Steve > > > > ----- Original Message ----- > From: > "Warren Toomey" > > > Something I've been meaning to ask for a while: why Unix and octal > on the > PDP-11? Because of the DEC documentation? > > -- Digital had it then. Don't you wish you could buy it now! pechter-at-gmail.com http://xkcd.com/705/ From arnold at skeeve.com Tue Jan 17 13:32:49 2017 From: arnold at skeeve.com (arnold at skeeve.com) Date: Mon, 16 Jan 2017 20:32:49 -0700 Subject: [TUHS] 2.11 pcc with ANSI? In-Reply-To: <20170116202246.GB16532@minnie.tuhs.org> References: <20170116181107.B08D7400D8@vsta.org> <57d7c379b914dd1b23590d7e114bf9a970662e93@webmail.yaccman.com> <20170116202246.GB16532@minnie.tuhs.org> Message-ID: <201701170332.v0H3WnQ3019483@freefriends.org> Warren Toomey wrote: > On Mon, Jan 16, 2017 at 10:52:04AM -0800, Steve Johnson wrote: > > It's been done. Anders Magnusson has done quite a bit to port pcc and > > add both ANSI and selected GCC features. > > A quick google search reveals: > > http://pcc.ludd.ltu.se/ > and https://www.openbsd.org/papers/magnusson_pcc.pdf > which is a set of slides on the work done. > > I've been itching to rewrite PDP-7 Unix in a higher level language. > I've already designed the language and written a compiler for the PDP-7 > (see https://github.com/DoctorWkt/h-compiler), but the generated code sucks. > Maybe I should try to retarget PCC, it would be slightly ironic :-) > > Cheers, Warren That would be neat. For those of us who have moved on from CVS, I have mirror of the code at https://github.com/arnoldrobbins/pcc-revived. I tend to sync it with the CVS about once a week. There hasn't been much activity on it in the past few weeks. It's noticeably faster than GCC but I wouldn't say 100x. 
:-) Arnold From jsteve at superglobalmegacorp.com Tue Jan 17 14:07:43 2017 From: jsteve at superglobalmegacorp.com (Jason Stevens) Date: Tue, 17 Jan 2017 12:07:43 +0800 Subject: [TUHS] TUHS Digest, Vol 14, Issue 63 In-Reply-To: References: <201701161600.v0GG00XA080461@tahoe.cs.Dartmouth.EDU> <20170116164421.GJ6647@mcvoy.com> Message-ID: <4D79D86D-1A54-4E3E-AA5A-26A71AC42B43@superglobalmegacorp.com> I only used FC when everyone was jumping onto the iSCSI bandwagon for 1gb NICs and you could get FC stuff on the cheap. I was using the Compaq MSA arrays with a built in FC switch, and using all like cards on like servers with the then "new" ESX 2.5 and it worked like a champ. I've always been a fan of separate storage networks but in the brave new world of virtual everything it really doesn't matter as more and more moves up the stack. I'm sure we will be on AWS in the next few years then in 10 years there will be the tick tock swing of moving processing into closets and then back to private data centres... On January 17, 2017 7:41:16 AM GMT+08:00, Tim Bradshaw wrote: >Less than ten years ago I wrote a big rant at people where I worked >about fibre channel: all our machines had two entirely different >networks attached to them: one built on ethernet which was at that >point all Gb on new machines and 10Gb on some (I don't think that 10Gb >switches were really available yet though) & where you could stuff a >machine with interfaces for the cost of a good meal, and where >everything just talked to everything else ... and one built on fibre >channel which might have been 2Gb, where an interface cost as much as a >car, and where interoperability involved weeks pissing around with >firmware in the cards, and sometimes just buying new ones. Fibre >channel was just laughably worse than ethernet. > >No one listened, of course, because my political skills are akin to >those of a goat, and fibre channel is *storage* which is completely >different than networking, somehow. 
> >Perhaps people still use fibre channel. > >> On 16 Jan 2017, at 16:44, Larry McVoy wrote: >> >> I held up the two cards, disclosed the cost, and said "this ATM card >is >> always going to be expensive but the ethernet card is gonna be $10 in >> a year or two. Why? Volume. Every computer has ethernet, it's >gonna >> do nothing but get cheaper. And you're gonna see ethernet over >fiber, >> long haul, you're going to see 100 Mbit, gigabit ethernet, and it's >> going to be cheap. ATM is going nowhere." -- Sent from my Android device with K-9 Mail. Please excuse my brevity. -------------- next part -------------- An HTML attachment was scrubbed... URL: From wlc at jctaylor.com Tue Jan 17 15:22:54 2017 From: wlc at jctaylor.com (William Corcoran) Date: Tue, 17 Jan 2017 00:22:54 -0500 Subject: [TUHS] TUHS Digest, Vol 14, Issue 63 In-Reply-To: <4D79D86D-1A54-4E3E-AA5A-26A71AC42B43@superglobalmegacorp.com> References: <201701161600.v0GG00XA080461@tahoe.cs.Dartmouth.EDU> <20170116164421.GJ6647@mcvoy.com> <4D79D86D-1A54-4E3E-AA5A-26A71AC42B43@superglobalmegacorp.com> Message-ID: <0FA4AF13-162F-4E78-9FB6-CDC4B07F97AE@jctaylor.com> However, in the high transaction volume corporate world, the FC card is peanuts and the Ethernet card is peanut shells. A product's security defense is often said to be inversely proportional to its market share. So, we chose FC over Ethernet primarily for this reason (not lack of market share, but for its purported security.) The cost of an FC solution was absurd when compared to Ethernet by any rational and reasonable means. Nevertheless, businesses that relied on these devices for larger volumes of financial transactions were led to believe that the NET cost of FC was far cheaper than Ethernet. I complained to our vendor at the time that Ethernet speeds were eclipsing FC. We were told that the FC fabric was far superior to Ethernet---especially its security. Hogwash. 
I remember racks of 2Gb FC switches, only three years old, completely and totally obsolete. There are guys like me with lollipops on the cheeks----born every minute. On Jan 16, 2017, at 11:42 PM, Jason Stevens > wrote: I only used FC when everyone was jumping onto the iSCSI bandwagon for 1gb NICs and you could get FC stuff on the cheap. I was using the Compaq MSA arrays with a built in FC switch, and using all like cards on like servers with the then "new" ESX 2.5 and it worked like a champ. I've always been a fan of separate storage networks but in the brave new world of virtual everything it really doesn't matter as more and more moves up the stack. I'm sure we will be on AWS in the next few years then in 10 years there will be the tick tock swing of moving processing into closets and then back to private data centres... On January 17, 2017 7:41:16 AM GMT+08:00, Tim Bradshaw > wrote: Less than ten years ago I wrote a big rant at people where I worked about fibre channel: all our machines had two entirely different networks attached to them: one built on ethernet which was at that point all Gb on new machines and 10Gb on some (I don't think that 10Gb switches were really available yet though) & where you could stuff a machine with interfaces for the cost of a good meal, and where everything just talked to everything else ... and one built on fibre channel which might have been 2Gb, where an interface cost as much as a car, and where interoperability involved weeks pissing around with firmware in the cards, and sometimes just buying new ones. Fibre channel was just laughably worse than ethernet. No one listened, of course, because my political skills are akin to those of a goat, and fibre channel is *storage* which is completely different than networking, somehow. Perhaps people still use fibre channel. 
On 16 Jan 2017, at 16:44, Larry McVoy > wrote: I held up the two cards, disclosed the cost, and said "this ATM card is always going to be expensive but the ethernet card is gonna be $10 in a year or two. Why? Volume. Every computer has ethernet, it's gonna do nothing but get cheaper. And you're gonna see ethernet over fiber, long haul, you're going to see 100 Mbit, gigabit ethernet, and it's going to be cheap. ATM is going nowhere." -- Sent from my Android device with K-9 Mail. Please excuse my brevity. -------------- next part -------------- An HTML attachment was scrubbed... URL: From downing.nick at gmail.com Tue Jan 17 15:30:35 2017 From: downing.nick at gmail.com (Nick Downing) Date: Tue, 17 Jan 2017 16:30:35 +1100 Subject: [TUHS] 2.11 pcc with ANSI? In-Reply-To: <201701170332.v0H3WnQ3019483@freefriends.org> References: <20170116181107.B08D7400D8@vsta.org> <57d7c379b914dd1b23590d7e114bf9a970662e93@webmail.yaccman.com> <20170116202246.GB16532@minnie.tuhs.org> <201701170332.v0H3WnQ3019483@freefriends.org> Message-ID: I have a fair bit of experience in this area, I only recently started to look at PCC but I have been in the guts of the Ritchie compiler for quite a while. And, my first thought was exactly the same as yours -- OMG this is archaic! This is unreadable! This is irritating! Modernize! Modernize! So I have put huge efforts into exactly that, every now and again I create a new repo and start the various conversions anew... having learned so much about the structure of the code on the previous attempts and what modernizations are appropriate/workable... however over the years I have come to see that this, the "obvious" approach for people who love the early PDP-11 systems and consider them more than just a toy/curiosity... is quite misguided on a number of levels. Firstly, there is always a lot of talk here about SysV being standardized over BSD and blah bleh... 
I think a lot of people agree that BSD is/was quite spare and elegant as compared with SysV being quite bloated and clunky... result of being designed by committee rather than a few passionate perfectionist engineer types. But what I don't see is the equivalent discussion about why ANSI C was standardized over K&R. Like everyone I took ANSI C more or less for granted and despite having briefly used K&R on a Z80 when I first encountered C, I regarded ANSI C as a more beautiful, more evolved, more usable/reliable language. Recently I have applied a bit more analysis and have completely changed my view, as I will explain. It seems to me the killer feature of ANSI C is prototypes, so that argument conversions can be applied automatically if you call another module passing unusual arguments (an int instead of a long etc) and indeed the automatic conversions like pointer to/from int can be curbed as being too eager (and they might be different sizes). This is quite a good reason to use ANSI C, since arguably if it saves you an hour's debugging time here and there, then it has paid for itself. But, it is still debatable. On the downside with ANSI C is the amount of useless new syntax and keywords it introduces... char *const *restrict p; anyone?? It's hard to think of more examples since these features are almost never used, but I think you will find an ANSI C yacc grammar has twice as many productions as, say, PCC's K&R grammar. The preprocessor gets a lot more bloat that addresses rare corner cases and specialized usages... the added features across the board are basically little more than convenience features and syntactic sugar, yet they come at a huge cost IMO in terms of bloat, by basically changing C compiler writing from something lots of people can creatively hack on, into something that is usually only undertaken by well funded commercial labs in practice.
To see what I mean consider Wirth's way of assessing language or compiler features, he firstly assumes the compiler will be written in itself, then asks whether the overall complexity of the system is improved or worsened by a proposed feature. As I see it, it might save a line of code in 100 programs that use the compiler, but if the cost is 100 extra lines in the compiler the net gain is zero. As a thought experiment consider taking 4.3BSD and then dropping GCC 5.0 into its /usr/src/lib directory, I think you can agree the resulting system would be hopelessly out of balance since there would be like 50,000 lines of code for kernel and utilities plus another 250,000 for the compiler, more if you consider dependencies like the assembler, linker, probably would need glibc too, etc etc. I would say... NOT WORTHWHILE. So in summary I think the self-containedness of Unix is far more valuable than anything ANSI C brings to the table. Plus if you think about it, Unix already had lint, so ANSI C just solved the same problem differently while forcing you to add reams of stupid boilerplate to your code whether you want their solution or not. Much the same thing has happened with const pointers, they have never to my knowledge statically flagged a bug in my code, but they sure have ruined many edit/compile/link cycles by detecting spurious or non-problems. Plus the code is so much more verbose written with const, it just isn't worth it considering the tininess of any gains. Prototypes again... well if you think about it they introduce a clunky new syntax that is only used in special places and is not consistent with the rest of C... So I am happily exploring an alternate universe in which C and Unix were never standardized or the "right" version was standardized... and finding it quite nice. cheers, Nick On 17/01/2017 2:33 PM, wrote: > Warren Toomey wrote: > > > On Mon, Jan 16, 2017 at 10:52:04AM -0800, Steve Johnson wrote: > > > It's been done.
Anders Magnusson has done quite a bit to port pcc > and > > > add both ANSI and selected GCC features. > > > > A quick google search reveals: > > > > http://pcc.ludd.ltu.se/ > > and https://www.openbsd.org/papers/magnusson_pcc.pdf > > which is a set of slides on the work done. > > > > I've been itching to rewrite PDP-7 Unix in a higher level language. > > I've already designed the language and written a compiler for the PDP-7 > > (see https://github.com/DoctorWkt/h-compiler), but the generated code > sucks. > > Maybe I should try to retarget PCC, it would be slightly ironic :-) > > > > Cheers, Warren > > That would be neat. > > For those of us who have moved on from CVS, I have mirror of the > code at https://github.com/arnoldrobbins/pcc-revived. > > I tend to sync it with the CVS about once a week. There hasn't been > much activity on it in the past few weeks. > > It's noticeably faster than GCC but I wouldn't say 100x. :-) > > Arnold > -------------- next part -------------- An HTML attachment was scrubbed... URL: From jsteve at superglobalmegacorp.com Tue Jan 17 21:43:11 2017 From: jsteve at superglobalmegacorp.com (Jason Stevens) Date: Tue, 17 Jan 2017 19:43:11 +0800 Subject: [TUHS] TUHS Digest, Vol 14, Issue 63 In-Reply-To: <20170116164421.GJ6647@mcvoy.com> References: <201701161600.v0GG00XA080461@tahoe.cs.Dartmouth.EDU> <20170116164421.GJ6647@mcvoy.com> Message-ID: Oh wow flashbacks to the joys of using ATM LANE over OC-3 as that 155Mb was "so superior" to 10 Mbit Ethernet and how "straightforward" it was getting pvp's from the phone company and setting up the LECS, BUS, LES, and each LEC. And how all the consultants scoffed at PC's with 100mbit Ethernet as the old 33Mhz bus couldn't push 100mbit in their crazy minds of thinking all buses push data one byte at a time. 
It was great once we started to get Cisco fast Etherchannel on the acquired catalyst switches so we could dump ATM at the core and even better to get those fancy Intel NICs that could also FEC for super high bandwidth servers. It's a shame it took a while to get metroE and ether WAN, but here we are in that awesome future devoid of the disaster of ATM as it couldn't even begin to scale at and beyond OC-128, 10 gig E put an end to all that nonsense. There was a brief window I made some good money setting up ATM networks, but they all went from oc12 to at the end being over t1 bonds for rural areas. Can't say I miss it. On January 17, 2017 12:44:21 AM GMT+08:00, Larry McVoy wrote: >On Mon, Jan 16, 2017 at 11:00:00AM -0500, Doug McIlroy wrote: >> The highest levels of AT&T were happy to carry digital data, but >> did not see digital as significant business. Even though digital T1 >> was the backbone of long-distance transmission, it was IBM, not >> AT&T, that offered direct digital interfaces to T1 in the 60s. > >AT&T seemed pretty clueless about networking. I gave a short talk at >Hot >Interconnects in the heyday of ATM. Paul Borrill got me a speaking >spot, >I wasn't well known person but inside of Sun I had been railing against >ATM and pushing for 100Mbit ethernet and Paul decided to see what the >rest of the world thought. > >The gist of my talk was that ATM was a joke. I had an ATM card (on >loan >from Sun Networking), I think it was 155 Mbit card. I also had an >ethernet card that I had bought at Fry's on my way to the talk. >The ATM card cost $4000. The ethernet card cost $49 IIRC. > >The point I was making was that ATM was doomed. This was at the time >in >history when every company was making long bets on ATM, they all >thought >it was the future; well, all meaning the execs had been convinced. > >I held up the two cards, disclosed the cost, and said "this ATM card is >always going to be expensive but the ethernet card is gonna be $10 in >a year or two. Why?
Volume. Every computer has ethernet, it's gonna >do nothing but get cheaper. And you're gonna see ethernet over fiber, >long haul, you're going to see 100 Mbit, gigabit ethernet, and it's >going to be cheap. ATM is going nowhere." > >There was a shocked silence. Weirdest talk ever, the room just went >silent for what seemed forever. Then someone, I'm sure it was an >engineer >who had been forced to work on ATM, started clapping. Just one guy. >And then the whole room joined in. > >I took the silence as "yeah, but my boss says I have to" and the >clapping >as "we agree". > >At the time AT&T was the biggest pusher of ATM. Telephone switches >were >big and expensive and it was clear, to me at least, that AT&T looked at >all those cheap ethernet switches and said "yeah, let's get the >industry >working on phone switching and we'll get cheap switches too". Nice >idea, >didn't work out. -- Sent from my Android device with K-9 Mail. Please excuse my brevity. -------------- next part -------------- An HTML attachment was scrubbed... URL: From tfb at tfeb.org Tue Jan 17 23:09:13 2017 From: tfb at tfeb.org (Tim Bradshaw) Date: Tue, 17 Jan 2017 13:09:13 +0000 Subject: [TUHS] Questions for TUHS great minds In-Reply-To: <99f1301695eb38762765b91bff57b0486bc71af6@webmail.yaccman.com> References: <99f1301695eb38762765b91bff57b0486bc71af6@webmail.yaccman.com> Message-ID: <512ABFFE-C238-45CA-9C43-CF9A84E4DE49@tfeb.org> On 11 Jan 2017, at 18:34, Steve Johnson wrote: > > IMHO, hardware has left software in the dust. I figured out that if cars had evolved since 1970 at the same rate as computer memory, we could now buy 1,000 Tesla Model S's for a penny, and each would have a top speed of 60,000 MPH. This is roughly a factor of a trillion in less than 50 years. This doesn't mean that the process will continue: eventually you hit physics limits ('engineering' is really a better term, but it has been so degraded by 'software engineering' that I don't like to use it). 
Obviously we've already hit those limits for clock speed (when?) and we might be close to them for single-threaded performance in general: the current big (HPC big) machine where I work has both lower clock speed than the previous one and observed lower single-threaded performance as well, although it's a lot more scalable, at least in theory. The previous one was POWER, and was I think the slightly mad very-high-clock-speed POWER chip, which might turn out to be the high-water-mark of single-threaded performance; the current one is x86. Obviously for a while parallel scaling will mean things continue, but that crashes into other limits as well. I think we've all lived in a wonderful time where it seemed like various exponential processes could continue for ever: they can't. --tim -------------- next part -------------- An HTML attachment was scrubbed... URL: From michael at kjorling.se Tue Jan 17 23:36:32 2017 From: michael at kjorling.se (Michael Kjörling) Date: Tue, 17 Jan 2017 13:36:32 +0000 Subject: [TUHS] Questions for TUHS great minds In-Reply-To: <512ABFFE-C238-45CA-9C43-CF9A84E4DE49@tfeb.org> References: <99f1301695eb38762765b91bff57b0486bc71af6@webmail.yaccman.com> <512ABFFE-C238-45CA-9C43-CF9A84E4DE49@tfeb.org> Message-ID: <20170117133632.GB12237@yeono.kjorling.se> On 17 Jan 2017 13:09 +0000, from tfb at tfeb.org (Tim Bradshaw): > I think we've all lived in a wonderful time where it seemed like > various exponential processes could continue for ever: they can't. I'm personally inclined to agree with Tim here. That's not to say that I don't think some of those processes could be pushed a bit farther, but as much as we would love it in some cases, a function of the form f(x)=Ca^{Dx} (for any values of C, D and a) describing something that can exist in the real world simply cannot continue forever before encountering some real-world limit.
Zoom out what appears to be an exponential curve and more often than not, it turns out that what looked like an exponential curve was really the first portion of a S curve or (even worse in many cases) a portion of a parabola. Either that, or it's something like the Tsiolkovsky rocket equation or the relativistic colinear velocity addition formula, where the exponent is something you try very hard to _avoid_ the effects of for one reason or another. What could conceivably change that picture somewhat is a total paradigm shift in computing, kind of like if large general-purpose quantum computers turn out to be viable after all. But even in that case I'm pretty sure that at some point we would realize that we are on the same kind of S curve or parabola there as well, only having delayed the inevitable or shifted the origin. That's not to say that even steady-state computer power can't provide huge benefits. It absolutely can. Even the computers we have and are able to actually build today are immensely powerful both in terms of computational capability and storage, and they are, to a large degree, scalable with the proper software and algorithms. The kind of computer I have _at home_ today (which wasn't even top of the line when I put it together a few years ago) would have been considered almost unimaginably powerful just a few decades ago. -- Michael Kjörling • https://michael.kjorling.se • michael at kjorling.se “People who think they know everything really annoy those of us who know we don’t.” (Bjarne Stroustrup) From schily at schily.net Wed Jan 18 00:12:17 2017 From: schily at schily.net (Joerg Schilling) Date: Tue, 17 Jan 2017 15:12:17 +0100 Subject: [TUHS] PDP-11, Unix, octal? 
In-Reply-To: <20170116201543.GA16532@minnie.tuhs.org> References: <20170116201543.GA16532@minnie.tuhs.org> Message-ID: <587e2641.3OkLy5V7l13RTp6G%schily@schily.net> Warren Toomey wrote: > On Mon, Jan 16, 2017 at 11:10:37AM -0800, Steve Johnson wrote: > > We would pore over the crash dumps (in hex, then a > > new thing for us -- PDP-11 was all octal, all the time). > > Something I've been meaning to ask for a while: why Unix and octal on the > PDP-11? Because of the DEC documentation? > > I understand why other DEC architectures (e.g. PDP-7) were octal: 18b is > a multiple of 3. But PDP-11 is 16b, multiple of 4. > > After all, Unix had its own assembler, so was there a need/reason to > use octal? Note that the people who did this used the 18-bit machines before. Jörg -- EMail:joerg at schily.net (home) Jörg Schilling D-13353 Berlin joerg.schilling at fokus.fraunhofer.de (work) Blog: http://schily.blogspot.com/ URL: http://cdrecord.org/private/ http://sourceforge.net/projects/schilytools/files/ From schily at schily.net Wed Jan 18 00:21:54 2017 From: schily at schily.net (Joerg Schilling) Date: Tue, 17 Jan 2017 15:21:54 +0100 Subject: [TUHS] TUHS Digest, Vol 14, Issue 63 In-Reply-To: <201701161600.v0GG00XA080461@tahoe.cs.Dartmouth.EDU> References: <201701161600.v0GG00XA080461@tahoe.cs.Dartmouth.EDU> Message-ID: <587e2882.ucdKFsgP38LOZ3N7%schily@schily.net> Doug McIlroy wrote: > The highest levels of AT&T were happy to carry digital data, but > did not see digital as significant business. Even though digital T1 > was the backbone of long-distance transmission, it was IBM, not > AT&T, that offered direct digital interfaces to T1 in the 60s. Was T1 a "digital" line interface, or was this rather a 24x3.1 kHz channel? How was the 64 ???
Jörg -- EMail:joerg at schily.net (home) Jörg Schilling D-13353 Berlin joerg.schilling at fokus.fraunhofer.de (work) Blog: http://schily.blogspot.com/ URL: http://cdrecord.org/private/ http://sourceforge.net/projects/schilytools/files/ From schily at schily.net Wed Jan 18 00:27:26 2017 From: schily at schily.net (Joerg Schilling) Date: Tue, 17 Jan 2017 15:27:26 +0100 Subject: [TUHS] TUHS Digest, Vol 14, Issue 63 In-Reply-To: <20170116164421.GJ6647@mcvoy.com> References: <201701161600.v0GG00XA080461@tahoe.cs.Dartmouth.EDU> <20170116164421.GJ6647@mcvoy.com> Message-ID: <587e29ce.GfQnecHaHo8aHA3S%schily@schily.net> Larry McVoy wrote: > The gist of my talk was that ATM was a joke. I had an ATM card (on loan > from Sun Networking), I think it was 155 Mbit card. I also had an > ethernet card that I had bought at Fry's on my way to the talk. > The ATM card cost $4000. The ethernet card cost $49 IIRC. > > The point I was making was that ATM was doomed. This was at the time in > history when every company was making long bets on ATM, they all thought > it was the future; well, all meaning the execs had been convinced. I cannot speak for the US, but ATM was rather popular in the European telecommunication in the mid 1990s. I was e.g. in a EU research project, where I wrote IP-Multicast enhancements for the FORE ATM driver to support video multicast. From what I know, there may still have been ATM in the core network of German Telekom when they started their IP-TV offer that is based on IP-multicast. So people tried to work around the problems in ATM for a while until ATM was given up. 
Jörg -- EMail:joerg at schily.net (home) Jörg Schilling D-13353 Berlin joerg.schilling at fokus.fraunhofer.de (work) Blog: http://schily.blogspot.com/ URL: http://cdrecord.org/private/ http://sourceforge.net/projects/schilytools/files/ From schily at schily.net Wed Jan 18 00:28:46 2017 From: schily at schily.net (Joerg Schilling) Date: Tue, 17 Jan 2017 15:28:46 +0100 Subject: [TUHS] PDP-11, Unix, octal? In-Reply-To: References: Message-ID: <587e2a1e.bls9D6AZ7WMsQuDc%schily@schily.net> "Steve Johnson" wrote: > The mainframes of the 60's and 70's all used 6-bit characters (and > often different encodings for the non alphanumeric characters).  So > the Unix folks, including me, had experience with octal long before > Dec. But IIRC, this has been done with 10 6-bit chars in a 60-bit word. Did people use octal in this area? Jörg -- EMail:joerg at schily.net (home) Jörg Schilling D-13353 Berlin joerg.schilling at fokus.fraunhofer.de (work) Blog: http://schily.blogspot.com/ URL: http://cdrecord.org/private/ http://sourceforge.net/projects/schilytools/files/ From beebe at math.utah.edu Wed Jan 18 00:27:42 2017 From: beebe at math.utah.edu (Nelson H. F. Beebe) Date: Tue, 17 Jan 2017 07:27:42 -0700 Subject: [TUHS] Questions for TUHS great minds Message-ID: Tim Bradshaw writes on 17 Jan 2017 13:09 +0000 >> I think we've all lived in a wonderful time where it seemed like >> various exponential processes could continue for ever: they can't. For an update on the exponential scaling (Moore's Law et al), see this interesting new paper: Peter J. Denning and Ted G. Lewis Exponential laws of computing growth Comm. ACM 60(1) 54--65 January 2017 https://doi.org/10.1145/2976758 ------------------------------------------------------------------------------- - Nelson H. F.
Beebe Tel: +1 801 581 5254 - - University of Utah FAX: +1 801 581 4148 - - Department of Mathematics, 110 LCB Internet e-mail: beebe at math.utah.edu - - 155 S 1400 E RM 233 beebe at acm.org beebe at computer.org - - Salt Lake City, UT 84112-0090, USA URL: http://www.math.utah.edu/~beebe/ - ------------------------------------------------------------------------------- From beebe at math.utah.edu Wed Jan 18 01:14:51 2017 From: beebe at math.utah.edu (Nelson H. F. Beebe) Date: Tue, 17 Jan 2017 08:14:51 -0700 Subject: [TUHS] PDP-11, Unix, octal? In-Reply-To: <587e2a1e.bls9D6AZ7WMsQuDc%schily@schily.net> Message-ID: Joerg Schilling asks today: >> this has been done with 10 6 bit chars in a 60 bit word. >> Did people use octal in this area? I worked on a CDC 6400 with both NOS and KRONOS operating systems in the 1970s. The 6400/6600/7600 family were definitely in the octal world. Initially, the character set was 6-bit, with one character reserved as an escape to mean that the next 6-bit chunk was to be included, giving a 12-bit representation that added support for lowercase letters (a feature that we could only get on our IBM 360 and Amdahl 470 mainframes with a once-a-night change of the line printer glyph chain). Here is a quote by the lead architect, James E. Thornton, who wrote the 1970 book, ``Design of a Computer: the Control Data 6600'', and the 1980 history paper ``The CDC 6600 Project'' (http://dx.doi.org/10.1109/MAHC.1980.10044): >> The selection of 60-bit word length came after a lengthy >> investigation into the possibility of 64 bits. Without going >> into it in depth, our octal background got the upper hand. That comes from page 347 of the paper. ------------------------------------------------------------------------------- - Nelson H. F. 
Beebe Tel: +1 801 581 5254 - - University of Utah FAX: +1 801 581 4148 - - Department of Mathematics, 110 LCB Internet e-mail: beebe at math.utah.edu - - 155 S 1400 E RM 233 beebe at acm.org beebe at computer.org - - Salt Lake City, UT 84112-0090, USA URL: http://www.math.utah.edu/~beebe/ - ------------------------------------------------------------------------------- From rminnich at gmail.com Wed Jan 18 01:28:03 2017 From: rminnich at gmail.com (ron minnich) Date: Tue, 17 Jan 2017 15:28:03 +0000 Subject: [TUHS] PDP-11, Unix, octal? In-Reply-To: <587e2641.3OkLy5V7l13RTp6G%schily@schily.net> References: <20170116201543.GA16532@minnie.tuhs.org> <587e2641.3OkLy5V7l13RTp6G%schily@schily.net> Message-ID: On Tue, Jan 17, 2017 at 6:12 AM Joerg Schilling wrote: > > > Note that the people wo did this, used the 18 bit machines before. > > > As steve johnson pointed out, just about everything around then was octal. I think IBM brought widespread use of hex with the 360 ca. 1964, but to most other vendors octal was a way of thinking as your interesting quote points out. As for character formats, on e.g. the pdp-10 you had lots of choice, including 6 6-bit chars or 5 7-bit chars with one bit left over as common ... I also recall people used to complain about the inefficiencies inherent in 8-bit character formats ... I for one was still pretty glad to see octal mostly go away. ron -------------- next part -------------- An HTML attachment was scrubbed... URL: From jnc at mercury.lcs.mit.edu Wed Jan 18 01:32:07 2017 From: jnc at mercury.lcs.mit.edu (Noel Chiappa) Date: Tue, 17 Jan 2017 10:32:07 -0500 (EST) Subject: [TUHS] TUHS Digest, Vol 14, Issue 63 Message-ID: <20170117153207.B39A518C094@mercury.lcs.mit.edu> > From: Joerg Schilling > Was T1 a "digital" line interface, or was this rather a 24x3.1 kHz > channel? Google is your friend: https://en.wikipedia.org/wiki/T-carrier https://en.wikipedia.org/wiki/Digital_Signal_1 > How was the 64 ???
Kbit/s interface to the first IMPs implemented? > Wasn't it AT&T that provided the lines for the first IMPs? Yes and no. Some details are given in "The interface message processor for the ARPA computer network" (Heart, Kahn, Ornstein, Crowther and Walden), but not much. More detail of the business arrangement is contained in "A History of the ARPANET: The First Decade" (BBN Report No. 4799). Details of the interface, and the IMP side, are given in the BBN proposal, "Interface Message Processor for the ARPA Computer Network" (BBN Proposal No. IMP P69-IST-5): in each direction there is a digital data line, and a clock line. It's synchronous (i.e. a constant stream of SYN characters is sent across the interface when no 'frame' is being sent). The 50KB modems were, IIRC, provided by the Bell system; the diagram in the paper above seems to indicate that they were not considered part of the IMP system. The modems at MIT were contained in a large rack, the same size as the IMP, which stood next to it. I wasn't able to find anything about anything past the IMP/modem interface. Perhaps some AT+T publications of that period might detail how the modem, etc, worked. Noel From ches at cheswick.com Wed Jan 18 02:53:04 2017 From: ches at cheswick.com (William Cheswick) Date: Tue, 17 Jan 2017 11:53:04 -0500 Subject: [TUHS] PDP-11, Unix, octal? In-Reply-To: References: Message-ID: <82A10288-02BE-4898-A4A1-E863286785D4@cheswick.com> > On 17Jan 2017, at 10:14 AM, Nelson H. F. Beebe wrote: > > the 1970s. The 6400/6600/7600 family were definitely in the octal > world. Initially, the character set was 6-bit, with one character > reserved as an escape to mean that the next 6-bit chunk was to be > included, giving a 12-bit representation that added support for > lowercase letters We called it “half-ASCII”, escaping with codes 74B and 76B. As far as I recall, it only worked on some versions of some of the timesharing systems in some modes. 
We never had a lower case print chain at Lehigh, SO ALL OUR OUTPUT WAS IN UPPER CASE. And don’t get me started on 63- vs 64- character set. The availability of ASCII on other operating systems was a great improvement in my life. And certain neurons still remember crap like 22B is R in display code. ches -------------- next part -------------- An HTML attachment was scrubbed... URL: From scj at yaccman.com Wed Jan 18 05:55:04 2017 From: scj at yaccman.com (Steve Johnson) Date: Tue, 17 Jan 2017 11:55:04 -0800 Subject: [TUHS] Questions for TUHS great minds In-Reply-To: <512ABFFE-C238-45CA-9C43-CF9A84E4DE49@tfeb.org> Message-ID: Ah, the notion of clock speed...   For 75 years we have designed circuits with a central clock.   For the last 10 years, people have gone to great lengths to make a billion transistors on a chip operate in synchrony, using techniques that are getting sillier and sillier and don't provide much benefit.  For example, at lower voltages and thinning wires, chips become dramatically more temperature sensitive, so all kinds of guard bands and additional hardware gook is required to make the chips function correctly.   There are some very interesting technologies that are not clock based, scale well, are low power, and perform well over wide variations of voltage and temperature.  The problem is they would require a completely new set of design tools, and the few players in this area don't want to rock the boat. It's not necessary to go that far, however.  Our chip has no global signals and will probably be faster than 6 GHz. Steve ----- Original Message ----- From: "Tim Bradshaw" This doesn't mean that the process will continue: eventually you hit physics limits ('engineering' is really a better term, but it has been so degraded by 'software engineering' that I don't like to use it).  Obviously we've already hit those limits for clock speed (when?)
and we might be close to them for single-threaded performance in general: the current big (HPC big) machine where I work has both lower clock speed than the previous one and observed lower single-threaded performance as well, although it's a lot more scalable, at least in theory.  The previous one was POWER, and was I think the slightly mad very-high-clock-speed POWER chip, which might turn out to be the high-water-mark of single-threaded performance; the current one is x86. -------------- next part -------------- An HTML attachment was scrubbed... URL: From doug at cs.dartmouth.edu Tue Jan 17 12:23:07 2017 From: doug at cs.dartmouth.edu (Doug McIlroy) Date: Mon, 16 Jan 2017 21:23:07 -0500 Subject: [TUHS] [TUHS} PDP-11, Unix, octal? Message-ID: <201701170223.v0H2N7Q9010667@coolidge.cs.Dartmouth.EDU> > I understand why other DEC architectures (e.g. PDP-7) were octal: 18b is a multiple of 3. But PDP-11 is 16b, multiple of 4. Octal predates the 6-bit byte. Dumps on Whirlwind II, a 16-bit machine, were issued in octal. And to help with arithmetic, the computer lab had an octal Friden (IIRC) desk calculator. One important feature of octal is you don't have to learn new numerals and their addition and multiplication tables, 2.5x the size of decimal tables. Established early, octal was reinforced by a decade of 6-bit bytes. Perhaps the real question is why did IBM break so completely to hex for the 360? (Absent actual knowledge, I'd hazard a guess that it was eased in on the 7030.) Doug
From jnc at mercury.lcs.mit.edu Wed Jan 18 12:33:58 2017 From: jnc at mercury.lcs.mit.edu (Noel Chiappa) Date: Tue, 17 Jan 2017 21:33:58 -0500 (EST) Subject: [TUHS] [TUHS} PDP-11, Unix, octal? Message-ID: <20170118023358.BE5C818C095@mercury.lcs.mit.edu> > From: Doug McIlroy > Perhaps the real question is why did IBM break so completely to hex for > the 360? Probably because the 360 had 8-bit bytes? Unless there's something like the PDP-11 instruction format which makes octal optimal, octal is a pain working with 8-bit bytes; anytime you're looking at the higher bytes in a word, unless you are working through software which will 'interpret' the bytes for you, it's a PITA. The 360 instruction coding doesn't really benefit from octal (well, instructions are in 4 classes, based on the high two bits of the first byte, but past that, hex works better); opcodes are 8 or 16 bits, and register numbers are 4 bits. As to why the 360 had 8-bit bytes, according to "IBM's 360 and Early 370 Systems" (Pugh, Johnson, and Palmer, pp. 148-149), there was a big fight over whether to use 6 or 8, and they finally went with 8 because i) statistics showed that more customer data was numbers, rather than text, and storing decimal numbers in 6-bit bytes was inefficient (BCD does two digits per 8-bit byte), and ii) they were looking forward to handling text with upper- and lower-case. Noel From scj at yaccman.com Wed Jan 18 13:06:28 2017 From: scj at yaccman.com (Steve Johnson) Date: Tue, 17 Jan 2017 19:06:28 -0800 Subject: [TUHS] [TUHS} PDP-11, Unix, octal?
In-Reply-To: <20170118023358.BE5C818C095@mercury.lcs.mit.edu> Message-ID: <50a7fbcbb6af280eb108fff1361c37ee1718bff0@webmail.yaccman.com> When we were considering what machine to port PDP-11 Unix to, there were several 36-bit machines around and some folks were lobbying for them.   Dennis' comment was quite characteristically succinct: "I'll consider it if they throw in a 10-track tape drive...".    Just thinking about Unix (and C!) on a machine where the byte size does not evenly divide the word size is pretty painful... (Historical note: before networking, magnetic tapes were essential for backups and moving large quantities of data.  Data was stored in magnetic dots running across the tape, and typically held a character plus a parity bit.  Thus, there were 7-track drives for 6-bit machines, and 9-track drives for 8-bit machines.  But nothing for 9-bit machines...) ----- Original Message ----- From: "jnc at mercury.lcs.mit.edu (Noel" To: Cc: Sent:Tue, 17 Jan 2017 21:33:58 -0500 (EST) Subject:Re: [TUHS] [TUHS} PDP-11, Unix, octal? > From: Doug McIlroy > Perhaps the real question is why did IBM break so completely to hex for > the 360? Probably because the 360 had 8-bit bytes? Unless there's something like the PDP-11 instruction format which makes octal optimal, octal is a pain working with 8-bit bytes; anytime you're looking at the higher bytes in a word, unless you are working through software which will 'interpret' the bytes for you, it's a PITA. The 360 instruction coding doesn't really benefit from octal (well, instructions are in 4 classes, based on the high two bits of the first byte, but past that, hex works better); opcodes are 8 or 16 bits, and register numbers are 4 bits. As to why the 360 had 8-bit bytes, according to "IBM's 360 and Early 370 Systems" (Pugh, Johnson, and Palmer, pp. 
148-149), there was a big fight over whether to use 6 or 8, and they finally went with 8 because i) statistics showed that more customer data was numbers, rather than text, and storing decimal numbers in 6-bit bytes was inefficient (BCD does two digits per 8-bit byte), and ii) they were looking forward to handling text with upper- and lower-case. Noel -------------- next part -------------- An HTML attachment was scrubbed... URL: From crossd at gmail.com Wed Jan 18 13:36:12 2017 From: crossd at gmail.com (Dan Cross) Date: Tue, 17 Jan 2017 22:36:12 -0500 Subject: [TUHS] [TUHS} PDP-11, Unix, octal? In-Reply-To: <50a7fbcbb6af280eb108fff1361c37ee1718bff0@webmail.yaccman.com> References: <20170118023358.BE5C818C095@mercury.lcs.mit.edu> <50a7fbcbb6af280eb108fff1361c37ee1718bff0@webmail.yaccman.com> Message-ID: A question about 36 bit machines.... In some of the historical accounts I've read, it seems that before the PDP-11 a pitch was made for a PDP-10 to support the then-nascent Unix efforts. This was shot down by labs management and sometime later the PDP-11 arrived and within a decade or so the question of byte width was the creatively settled for general purpose machines. The question then is twofold: why a PDP-10 in the early 70s (instead of, say, a 360 or something) and why later the aversion to word-oriented machines? The PDP-7 was of course word oriented. I imagine answers have to do with cost/performance for the former and with regard to the latter, a) the question was largely settled by the middle of the decade, and b) by then Unix had evolved so that a port was considered rather different than a rewrite. But I'd love to hear from some of the players involved. - Dan C. On Jan 18, 2017 10:06 AM, "Steve Johnson" wrote: > When we were considering what machine to port PDP-11 Unix to, there were > several 36-bit machines around and some folks were lobbying for them. 
> Dennis' comment was quite characteristically succinct: "I'll consider it if > they throw in a 10-track tape drive...". Just thinking about Unix (and > C!) on a machine where the byte size does not evenly divide the word size > is pretty painful... > > (Historical note: before networking, magnetic tapes were essential for > backups and moving large quantities of data. Data was stored in magnetic > dots running across the tape, and typically held a character plus a parity > bit. Thus, there were 7-track drives for 6-bit machines, and 9-track > drives for 8-bit machines. But nothing for 9-bit machines...) > > > > ----- Original Message ----- > From: > "jnc at mercury.lcs.mit.edu (Noel" > > To: > > Cc: > > Sent: > Tue, 17 Jan 2017 21:33:58 -0500 (EST) > Subject: > Re: [TUHS] [TUHS} PDP-11, Unix, octal? > > > > From: Doug McIlroy > > > Perhaps the real question is why did IBM break so completely to hex for > > the 360? > > Probably because the 360 had 8-bit bytes? > > Unless there's something like the PDP-11 instruction format which makes > octal > optimal, octal is a pain working with 8-bit bytes; anytime you're looking > at > the higher bytes in a word, unless you are working through software which > will 'interpret' the bytes for you, it's a PITA. > > The 360 instruction coding doesn't really benefit from octal (well, > instructions are in 4 classes, based on the high two bits of the first > byte, > but past that, hex works better); opcodes are 8 or 16 bits, and register > numbers are 4 bits. > > As to why the 360 had 8-bit bytes, according to "IBM's 360 and Early 370 > Systems" (Pugh, Johnson, and Palmer, pp. 
148-149), there was a big fight > over > whether to use 6 or 8, and they finally went with 8 because i) statistics > showed that more customer data was numbers, rather than text, and storing > decimal numbers in 6-bit bytes was inefficient (BCD does two digits per > 8-bit > byte), and ii) they were looking forward to handling text with upper- and > lower-case. > > Noel > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From lars at nocrew.org Wed Jan 18 16:04:31 2017 From: lars at nocrew.org (Lars Brinkhoff) Date: Wed, 18 Jan 2017 07:04:31 +0100 Subject: [TUHS] [TUHS} PDP-11, Unix, octal? In-Reply-To: <50a7fbcbb6af280eb108fff1361c37ee1718bff0@webmail.yaccman.com> (Steve Johnson's message of "Tue, 17 Jan 2017 19:06:28 -0800") References: <50a7fbcbb6af280eb108fff1361c37ee1718bff0@webmail.yaccman.com> Message-ID: <86lgu8rjio.fsf@molnjunk.nocrew.org> Steve Johnson wrote: > Historical note: before networking, magnetic tapes were essential for > backups and moving large quantities of data. Data was stored in > magnetic dots running across the tape, and typically held a character > plus a parity bit. Thus, there were 7-track drives for 6-bit machines, > and 9-track drives for 8-bit machines. But nothing for 9-bit > machines... The 36-bit PDP-10 initially used 7-track drives, with six frames to a word. During its lifetime sunset, 7-track drives were no longer made, so 9-track drives were used instead. The most common encoding was to store a word in five 8-bit frames, with four bits unused. The PDP-10 did not have a fixed byte size. Were there any 9-bit machines? From aap at papnet.eu Wed Jan 18 16:53:51 2017 From: aap at papnet.eu (Angelo Papenhoff) Date: Wed, 18 Jan 2017 07:53:51 +0100 Subject: [TUHS] [TUHS} PDP-11, Unix, octal? 
In-Reply-To: References: <20170118023358.BE5C818C095@mercury.lcs.mit.edu> <50a7fbcbb6af280eb108fff1361c37ee1718bff0@webmail.yaccman.com> Message-ID: <20170118065351.GA57704@indra.papnet.eu> On 17/01/17, Dan Cross wrote: > A question about 36 bit machines.... > > In some of the historical accounts I've read, it seems that before the > PDP-11 a pitch was made for a PDP-10 to support the then-nascent Unix > efforts. This was shot down by labs management and sometime later the > PDP-11 arrived and within a decade or so the question of byte width was the > creatively settled for general purpose machines. > > The question then is twofold: why a PDP-10 in the early 70s (instead of, > say, a 360 or something) and why later the aversion to word-oriented > machines? The PDP-7 was of course word oriented. > > I imagine answers have to do with cost/performance for the former and with > regard to the latter, a) the question was largely settled by the middle of > the decade, and b) by then Unix had evolved so that a port was considered > rather different than a rewrite. But I'd love to hear from some of the > players involved. Doesn't exactly answer your question, but from the "Oral History of Ken Thompson": Q: As I recall this - once upon a time weren't you trying to get a PDP-10 or something like that for the lab? Ken Thompson: Yes, we were arguing that the Multi[cs] machine should be replaced with a PDP-10. And there was such a huge backlash from Multi[cs] that it was pretty soundly turned down. It was probably a good idea, the -10 is a kind of trashy machine with 36 bits - the future just left it behind. "the -10 is a kind of trashy machine with 36 bit" I'm not sure whether I can still like UNIX now :( I hope this is the bad (?) experience with Multics on the GE-645 speaking. "the future just left it behind" More like DEC didn't want internal competition with the VAX. 
aap
From rminnich at gmail.com Wed Jan 18 17:31:43 2017 From: rminnich at gmail.com (ron minnich) Date: Wed, 18 Jan 2017 07:31:43 +0000 Subject: [TUHS] [TUHS} PDP-11, Unix, octal? In-Reply-To: <20170118065351.GA57704@indra.papnet.eu> References: <20170118023358.BE5C818C095@mercury.lcs.mit.edu> <50a7fbcbb6af280eb108fff1361c37ee1718bff0@webmail.yaccman.com> <20170118065351.GA57704@indra.papnet.eu> Message-ID: On Tue, Jan 17, 2017 at 11:00 PM Angelo Papenhoff wrote: > > "the future just left it behind" > More like DEC didn't want internal competition with the VAX. > > The 10 was introduced in 1968, the vax in 1977. They were not even close to being contemporaries. I don't think your argument is correct. -------------- next part -------------- An HTML attachment was scrubbed... URL:
From aap at papnet.eu Wed Jan 18 18:09:04 2017 From: aap at papnet.eu (Angelo Papenhoff) Date: Wed, 18 Jan 2017 09:09:04 +0100 Subject: [TUHS] [TUHS} PDP-11, Unix, octal? In-Reply-To: References: <20170118023358.BE5C818C095@mercury.lcs.mit.edu> <50a7fbcbb6af280eb108fff1361c37ee1718bff0@webmail.yaccman.com> <20170118065351.GA57704@indra.papnet.eu> Message-ID: <20170118080904.GA58792@indra.papnet.eu> On 18/01/17, ron minnich wrote: > The 10 was introduced in 1968, the vax in 1977. They were not even close to > being contemporaries. I don't think your argument is correct. The 10 was discontinued in 1983 because the VAX was more important to DEC, not because the 10 was getting old. aap
From beebe at math.utah.edu Thu Jan 19 00:28:36 2017 From: beebe at math.utah.edu (Nelson H. F. Beebe) Date: Wed, 18 Jan 2017 07:28:36 -0700 Subject: [TUHS] [TUHS} PDP-11, Unix, octal? Message-ID: On the subject of the PDP-10, I recall seeing people at a DECUS meeting in the early 1980s wearing T-shirts that proclaimed I don't care what they say, 36 bits are here to stay! I also recall a funny advertising video spoof at that meeting that ended with the line At DIGITAL, we're building yesterday's tomorrow, today.
That meeting was about the time of the cancellation of the Jupiter project at DEC that was planned to produce a substantially more powerful follow-on to the KL-10 processor model of the PDP-10 (we had two such at the UofUtah), disappointing most of its PDP-10 customers. Some of the Jupiter technology was transferred to later VAX models, but DEC never produced anything faster than the KL-10 in the 36-bit line. However, with microcomputers entering the market, and early workstations from Apollo, LMI, Sun, and others, the economics of computing changed dramatically, and departmental mainframes ceased to be cost effective. Besides our mainframe DEC-20/60 TOPS-20 system in the College of Science, we also ran Wollongong BSD Unix on a VAX 750, and DEC VMS on VAX 780 and 8600 models. In 1987, we bought our first dozen Sun workstations (and for far less than the cost of a DEC-20/60). After 12 good years of service (and a forklift upgrade from a 20/40 to a 20/60), our KL-10 was retired on 31-Oct-1990, and the VAX 8600 in July 1991. Our productivity increased significantly in the Unix world. I wrote about memories and history and impact of the PDP-10 in two keynote addresses at TUG meetings in articles and slides available at http://www.math.utah.edu/~beebe/talks/2003/tug2003/ http://www.math.utah.edu/~beebe/talks/2005/pt2005/ ------------------------------------------------------------------------------- - Nelson H. F. 
Beebe Tel: +1 801 581 5254 - - University of Utah FAX: +1 801 581 4148 - - Department of Mathematics, 110 LCB Internet e-mail: beebe at math.utah.edu - - 155 S 1400 E RM 233 beebe at acm.org beebe at computer.org - - Salt Lake City, UT 84112-0090, USA URL: http://www.math.utah.edu/~beebe/ - ------------------------------------------------------------------------------- From pnr at planet.nl Thu Jan 19 00:29:26 2017 From: pnr at planet.nl (Paul Ruizendaal) Date: Wed, 18 Jan 2017 15:29:26 +0100 Subject: [TUHS] TUHS Digest, Vol 14, Issue 63 In-Reply-To: <20170117153207.B39A518C094@mercury.lcs.mit.edu> References: <20170117153207.B39A518C094@mercury.lcs.mit.edu> Message-ID: I asked over at the internet history list (http://mailman.postel.org/pipermail/internet-history/2017-January/thread.html) Short of it is that it used Bell 303C modems which operated at 50kb/s operating over an analog "broadband" channel predating the T1. It used the space of 12 voice channels and some fairly fancy modulation techniques. Connection to the trunk exchange was over a leased line. On 17 Jan 2017, at 16:32 , Noel Chiappa wrote: > >> From: Joerg Schilling > >> Was T1 a "digital" line interface, or was this rather a 24x3.1 kHz >> channel? > > Google is your friend: > > https://en.wikipedia.org/wiki/T-carrier > https://en.wikipedia.org/wiki/Digital_Signal_1 > > >> How was the 64 ??? Kbit/s interface to the first IMPs implemented? >> Wasn't it AT&T that provided the lines for the first IMPs? > > Yes and no. Some details are given in "The interface message processor for the > ARPA computer network" (Heart, Kahn, Ornstein, Crowther and Walden), but not > much. More detail of the business arrangement is contained in "A History of > the ARPANET: The First Decade" (BBN Report No. 4799). > > Details of the interface, and the IMP side, are given in the BBN proposal, > "Interface Message Processor for the ARPA Computer Network" (BBN Proposal No. 
> IMP P69-IST-5): in each direction there is a digital data line, and a clock > line. It's synchronous (i.e. a constant stream of SYN characters is sent > across the interface when no 'frame' is being sent). > > The 50KB modems were, IIRC, provided by the Bell system; the diagram in the > paper above seems to indicate that they were not considered part of the IMP > system. The modems at MIT were contained in a large rack, the same size as > the IMP, which stood next to it. > > I wasn't able to find anything about anything past the IMP/modem interface. > Perhaps some AT+T publications of that period might detail how the modem, > etc, worked. > > Noel
From clemc at ccc.com Thu Jan 19 02:47:05 2017 From: clemc at ccc.com (Clem Cole) Date: Wed, 18 Jan 2017 11:47:05 -0500 Subject: [TUHS] [TUHS} PDP-11, Unix, octal? In-Reply-To: <201701170223.v0H2N7Q9010667@coolidge.cs.Dartmouth.EDU> References: <201701170223.v0H2N7Q9010667@coolidge.cs.Dartmouth.EDU> Message-ID: On Mon, Jan 16, 2017 at 9:23 PM, Doug McIlroy wrote: > Octal was reinforced by a decade of 6-bit bytes. > Perhaps the real question is why did IBM break so completely to hex > for the 360? > I may be able to help a little here. A few years ago I used to work with Russ Robelen, who was the Chief Designer of the 360/50. Russ regaled us with a number of the stories from those times, and having met a few of the personalities involved in them I tend to believe Russ's stories, as I have heard others of similar color. The first important thing about the 360 was that it was supposed to be the first ASCII machine from IBM. It's funny how history would prove otherwise, but IBM had invested heavily and originally planned on going ASCII. And the key is that ASCII was originally a 7-bit character set, so being able to store a 7-bit character was an important design idea for the architecture. According to Russ, Amdahl came up with some [IMO hokey] schemes (similar to what CDC would do) that mapped into 6 bits.
I understand that he even proposed a 7-bit byte. He felt that 8 bits was wasteful of the hardware. Russ says that Brooks would toss him out of his office and said something on the order of "don't come back until you have a power of 2" - that he (Brooks) did not know how to program things in multiples of 3, and that things made of 7s were even worse. -------------- next part -------------- An HTML attachment was scrubbed... URL:
From peter at rulingia.com Thu Jan 19 04:47:09 2017 From: peter at rulingia.com (Peter Jeremy) Date: Thu, 19 Jan 2017 05:47:09 +1100 Subject: [TUHS] [TUHS} PDP-11, Unix, octal? In-Reply-To: <86lgu8rjio.fsf@molnjunk.nocrew.org> References: <50a7fbcbb6af280eb108fff1361c37ee1718bff0@webmail.yaccman.com> <86lgu8rjio.fsf@molnjunk.nocrew.org> Message-ID: <20170118184709.GC82883@server.rulingia.com> On 2017-Jan-18 07:04:31 +0100, Lars Brinkhoff wrote: >The PDP-10 did not have a fixed byte size. Were there any 9-bit >machines? The Honeywell 6000 series (aka 66/DPS, a rebadged GE 6xx series) was 36-bit and supported either 6-bit or 9-bit characters. I don't recall how you selected which you were using, but I recall that both Pascal and APL used the 9-bit byte. -- Peter Jeremy -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 949 bytes Desc: not available URL:
From charles.unix.pro at gmail.com Thu Jan 19 04:58:40 2017 From: charles.unix.pro at gmail.com (Charles Anthony) Date: Wed, 18 Jan 2017 10:58:40 -0800 Subject: [TUHS] [TUHS} PDP-11, Unix, octal? In-Reply-To: <20170118184709.GC82883@server.rulingia.com> References: <50a7fbcbb6af280eb108fff1361c37ee1718bff0@webmail.yaccman.com> <86lgu8rjio.fsf@molnjunk.nocrew.org> <20170118184709.GC82883@server.rulingia.com> Message-ID: On Wed, Jan 18, 2017 at 10:47 AM, Peter Jeremy wrote: > On 2017-Jan-18 07:04:31 +0100, Lars Brinkhoff wrote: > >The PDP-10 did not have a fixed byte size. Were there any 9-bit > >machines?
> > The Honeywell 6000 series (aka 66/DPS, a rebadged GE 6xx series) was > 36-bit and supported either 6-bit or 9-bit characters. I don't recall > how you selected which you were using but I recall both Pascal and APL > used the 9-bit byte. > > The [EIS] instruction set supported 4, 6 and 9 bit operands; it was a matter of which instructions you used. For pl1, the instructions generated were driven by the DCLs; for Pascal and APL, [I would guess] that the compiler/interpreter writers defined character size to be 9 bits and generated the 9 bit variants of the instructions. Move 9 6-bit bytes starting at the 3rd byte in the word, convert to 4 bit bytes in some signed manner, and store as 10 4-bit bytes starting at offset 6. MLR ,,400 move with sign captured ADSC6 FLD1,3,9 sending descriptor ADSC4 FLD2,6,10 receiving descriptor -- Charles -------------- next part -------------- An HTML attachment was scrubbed... URL: From scj at yaccman.com Thu Jan 19 07:04:20 2017 From: scj at yaccman.com (Steve Johnson) Date: Wed, 18 Jan 2017 13:04:20 -0800 Subject: [TUHS] [TUHS} PDP-11, Unix, octal? In-Reply-To: Message-ID: <510395f632697f73c8a4d90e562790dfa8c082d5@webmail.yaccman.com> The PDP-10 and the GE/Honeywell were the two machines I recall that elicited Dennis' comment about 10-track tape drives.  When I ported C to the Honeywell machine at the Murray Hill comp center, I used 9-bit bytes as the default, and added a syntax `abcd` to create a constant in the 6-bit character set.  Most of the OS calls used 6-bit characters, although the time-sharing system was moving to 9-bits.  And most of the use of C on the Honeywell was in the time-sharing system. Quite a few years later, I discovered accidentally that the syntax `abcd` was still accepted on the Sun compiler, that had been based on PCC.  It drew some kind of error message like "GCOS characters not supported", presumably because some switch was turned off in the machine-dependent files... 
Steve ----- Original Message ----- From: "Dan Cross" To:"Steve Johnson" Cc:"Noel Chiappa" , "TUHS main list" Sent:Tue, 17 Jan 2017 22:36:12 -0500 Subject:Re: [TUHS] [TUHS} PDP-11, Unix, octal? A question about 36 bit machines.... In some of the historical accounts I've read, it seems that before the PDP-11 a pitch was made for a PDP-10 to support the then-nascent Unix efforts. This was shot down by labs management and sometime later the PDP-11 arrived and within a decade or so the question of byte width was the creatively settled for general purpose machines. The question then is twofold: why a PDP-10 in the early 70s (instead of, say, a 360 or something) and why later the aversion to word-oriented machines? The PDP-7 was of course word oriented. I imagine answers have to do with cost/performance for the former and with regard to the latter, a) the question was largely settled by the middle of the decade, and b) by then Unix had evolved so that a port was considered rather different than a rewrite.  But I'd love to hear from some of the players involved.         - Dan C. On Jan 18, 2017 10:06 AM, "Steve Johnson" wrote: When we were considering what machine to port PDP-11 Unix to, there were several 36-bit machines around and some folks were lobbying for them.   Dennis' comment was quite characteristically succinct: "I'll consider it if they throw in a 10-track tape drive...".    Just thinking about Unix (and C!) on a machine where the byte size does not evenly divide the word size is pretty painful... (Historical note: before networking, magnetic tapes were essential for backups and moving large quantities of data.  Data was stored in magnetic dots running across the tape, and typically held a character plus a parity bit.  Thus, there were 7-track drives for 6-bit machines, and 9-track drives for 8-bit machines.  But nothing for 9-bit machines...) 
----- Original Message ----- From: "jnc at mercury.lcs.mit.edu [2] (Noel" To: Cc: Sent:Tue, 17 Jan 2017 21:33:58 -0500 (EST) Subject:Re: [TUHS] [TUHS} PDP-11, Unix, octal? > From: Doug McIlroy > Perhaps the real question is why did IBM break so completely to hex for > the 360? Probably because the 360 had 8-bit bytes? Unless there's something like the PDP-11 instruction format which makes octal optimal, octal is a pain working with 8-bit bytes; anytime you're looking at the higher bytes in a word, unless you are working through software which will 'interpret' the bytes for you, it's a PITA. The 360 instruction coding doesn't really benefit from octal (well, instructions are in 4 classes, based on the high two bits of the first byte, but past that, hex works better); opcodes are 8 or 16 bits, and register numbers are 4 bits. As to why the 360 had 8-bit bytes, according to "IBM's 360 and Early 370 Systems" (Pugh, Johnson, and Palmer, pp. 148-149), there was a big fight over whether to use 6 or 8, and they finally went with 8 because i) statistics showed that more customer data was numbers, rather than text, and storing decimal numbers in 6-bit bytes was inefficient (BCD does two digits per 8-bit byte), and ii) they were looking forward to handling text with upper- and lower-case. Noel Links: ------ [1] mailto:scj at yaccman.com [2] mailto:jnc at mercury.lcs.mit.edu [3] mailto:tuhs at minnie.tuhs.org [4] mailto:jnc at mercury.lcs.mit.edu -------------- next part -------------- An HTML attachment was scrubbed... URL: From charles.unix.pro at gmail.com Thu Jan 19 07:42:33 2017 From: charles.unix.pro at gmail.com (Charles Anthony) Date: Wed, 18 Jan 2017 13:42:33 -0800 Subject: [TUHS] [TUHS} PDP-11, Unix, octal? 
In-Reply-To: <510395f632697f73c8a4d90e562790dfa8c082d5@webmail.yaccman.com> References: <510395f632697f73c8a4d90e562790dfa8c082d5@webmail.yaccman.com> Message-ID: On Wed, Jan 18, 2017 at 1:04 PM, Steve Johnson wrote: > The PDP-10 and the GE/Honeywell were the two machines I recall that > elicited Dennis' comment about 10-track tape drives. When I ported C to > the Honeywell machine at the Murray Hill comp center, I used 9-bit bytes as > the default, and added a syntax `abcd` to create a constant in the 6-bit > character set. Most of the OS calls used 6-bit characters, although the > time-sharing system was moving to 9-bits. And most of the use of C on the > Honeywell was in the time-sharing system. > > Quite a few years later, I discovered accidentally that the syntax `abcd` > was still accepted on the Sun compiler, that had been based on PCC. It > drew some kind of error message like "GCOS characters not supported", > presumably because some switch was turned off in the machine-dependent > files... > > Steve > > r 13:40 0.072 1 qedx i main () { int i; i = `abcd`; } \f w foo.c q r 13:41 0.169 3 >sl3p>cc>x>cc foo linkage_editor: Entry not found. foo r 13:41 0.276 50 >sl3p>cc>x>cc foo.c "", line 3: gcos BCD constant illegal cc: An error has occurred while Compiling foo.c. r 13:41 3.575 211 -- Charles -------------- next part -------------- An HTML attachment was scrubbed... URL: From cym224 at gmail.com Thu Jan 19 12:52:58 2017 From: cym224 at gmail.com (Nemo) Date: Wed, 18 Jan 2017 21:52:58 -0500 Subject: [TUHS] Was pcc ever ported to the CDC6600? Message-ID: All this talk of targets for UNIX makes me wonder (given the eccentricity of the machine). N. 
From kayparker at mailite.com Thu Jan 19 17:53:38 2017 From: kayparker at mailite.com (=?utf-8?Q?Kay=20Parker=20=09=20?=) Date: Wed, 18 Jan 2017 23:53:38 -0800 Subject: [TUHS] Oracle euthanizes Solaris 12, expunging it from roadmap Message-ID: <1484812418.3800555.852554160.1638329B@webmail.messagingengine.com> guess it is the beginning of the end of Solaris and the Sparc CPU: 'Rumors have been circulating since late last year that Oracle was planning to kill development of the Solaris operating system, with major layoffs coming to the operating system's development team. Others speculated that future versions of the Unix platform Oracle acquired with Sun Microsystems would be designed for the cloud and built for the Intel platform only and that the SPARC processor line would meet its demise. The good news, based on a recently released Oracle roadmap for the SPARC platform, is that both Solaris and SPARC appear to have a future. The bad news is that the next major version of Solaris—Solaris 12— has apparently been canceled, as it has disappeared from the roadmap. Instead, it's been replaced with "Solaris 11.next"—and that version is apparently the only update planned for the operating system through 2021. With its on-premises software and hardware sales in decline, Oracle has been undergoing a major reorganization over the past two years as it attempts to pivot toward the cloud. Those changes led to a major speed bump in the development cycle for Java Enterprise Edition, a slowdown significant enough that it spurred something of a Java community revolt. Oracle later announced a new roadmap for Java EE that recalibrated expectations, focusing on cloud services features for the next version of the software platform. 
' http://arstechnica.com/information-technology/2017/01/oracle-sort-of-confirms-demise-of-solaris-12-effort/ -- Kay Parker kayparker at mailite.com -- http://www.fastmail.com - The way an email service should be From wes.parish at paradise.net.nz Thu Jan 19 18:49:47 2017 From: wes.parish at paradise.net.nz (Wesley Parish) Date: Thu, 19 Jan 2017 21:49:47 +1300 (NZDT) Subject: [TUHS] Oracle euthanizes Solaris 12, expunging it from roadmap In-Reply-To: <1484812418.3800555.852554160.1638329B@webmail.messagingengine.com> References: <1484812418.3800555.852554160.1638329B@webmail.messagingengine.com> Message-ID: <1484815787.58807daba38e0@www.paradise.net.nz> I suppose that set of rumours will lead to people shifting to the FOSS versions of Solaris and SPARC. Wesley Parish Quoting Kay Parker : > guess it is the beginning of the end of Solaris and the Sparc CPU: > 'Rumors have been circulating since late last year that Oracle was > planning to kill development of the Solaris operating system, with > major > layoffs coming to the operating system's development team. Others > speculated that future versions of the Unix platform Oracle acquired > with Sun Microsystems would be designed for the cloud and built for the > Intel platform only and that the SPARC processor line would meet its > demise. The good news, based on a recently released Oracle roadmap for > the SPARC platform, is that both Solaris and SPARC appear to have a > future. > > The bad news is that the next major version of Solaris—Solaris 12— > has > apparently been canceled, as it has disappeared from the roadmap. > Instead, it's been replaced with "Solaris 11.next"—and that version > is > apparently the only update planned for the operating system through > 2021. > > With its on-premises software and hardware sales in decline, Oracle has > been undergoing a major reorganization over the past two years as it > attempts to pivot toward the cloud. 
Those changes led to a major speed > bump in the development cycle for Java Enterprise Edition, a slowdown > significant enough that it spurred something of a Java community > revolt. > Oracle later announced a new roadmap for Java EE that recalibrated > expectations, focusing on cloud services features for the next version > of the software platform. ' > http://arstechnica.com/information-technology/2017/01/oracle-sort-of-confirms-demise-of-solaris-12-effort/ > > -- > Kay Parker > kayparker at mailite.com > > -- > http://www.fastmail.com - The way an email service should be > > "I have supposed that he who buys a Method means to learn it." - Ferdinand Sor, Method for Guitar "A verbal contract isn't worth the paper it's written on." -- Samuel Goldwyn From krewat at kilonet.net Fri Jan 20 00:40:17 2017 From: krewat at kilonet.net (Arthur Krewat) Date: Thu, 19 Jan 2017 09:40:17 -0500 Subject: [TUHS] Oracle euthanizes Solaris 12, expunging it from roadmap In-Reply-To: <1484815787.58807daba38e0@www.paradise.net.nz> References: <1484812418.3800555.852554160.1638329B@webmail.messagingengine.com> <1484815787.58807daba38e0@www.paradise.net.nz> Message-ID: Let's hope they do the right thing and release Solaris into the wild again. ZFS in particular. Personally, I think they are making a huge mistake. What are they going to do, move to Linux? Oh, right... the "cloud" will be Linux. Blech. On 1/19/2017 3:49 AM, Wesley Parish wrote: > I suppose that set of rumours will lead to people shifting to the FOSS versions > of Solaris and SPARC. > > Wesley Parish > > Quoting Kay Parker : > >> guess it is the beginning of the end of Solaris and the Sparc CPU: >> 'Rumors have been circulating since late last year that Oracle was >> planning to kill development of the Solaris operating system, with >> major >> layoffs coming to the operating system's development team. 
Others >> speculated that future versions of the Unix platform Oracle acquired >> with Sun Microsystems would be designed for the cloud and built for the >> Intel platform only and that the SPARC processor line would meet its >> demise. The good news, based on a recently released Oracle roadmap for >> the SPARC platform, is that both Solaris and SPARC appear to have a >> future. >> >> The bad news is that the next major version of Solaris—Solaris 12— >> has >> apparently been canceled, as it has disappeared from the roadmap. >> Instead, it's been replaced with "Solaris 11.next"—and that version >> is >> apparently the only update planned for the operating system through >> 2021. >> >> With its on-premises software and hardware sales in decline, Oracle has >> been undergoing a major reorganization over the past two years as it >> attempts to pivot toward the cloud. Those changes led to a major speed >> bump in the development cycle for Java Enterprise Edition, a slowdown >> significant enough that it spurred something of a Java community >> revolt. >> Oracle later announced a new roadmap for Java EE that recalibrated >> expectations, focusing on cloud services features for the next version >> of the software platform. ' >> > http://arstechnica.com/information-technology/2017/01/oracle-sort-of-confirms-demise-of-solaris-12-effort/ >> -- >> Kay Parker >> kayparker at mailite.com >> >> -- >> http://www.fastmail.com - The way an email service should be >> >> > > > "I have supposed that he who buys a Method means to learn it." - Ferdinand Sor, > Method for Guitar > > "A verbal contract isn't worth the paper it's written on." 
-- Samuel Goldwyn > > From tfb at tfeb.org Fri Jan 20 02:39:45 2017 From: tfb at tfeb.org (Tim Bradshaw) Date: Thu, 19 Jan 2017 16:39:45 +0000 Subject: [TUHS] Oracle euthanizes Solaris 12, expunging it from roadmap In-Reply-To: References: <1484812418.3800555.852554160.1638329B@webmail.messagingengine.com> <1484815787.58807daba38e0@www.paradise.net.nz> Message-ID: Well, they are probably reacting to what their customers want which, in my experience working at a fairly typical customer up to a couple of years ago, is indeed Linux. That's kind of sad, but Linux, much though I'd like to hate it, is unfortunately both a significantly more pleasant experience as a user & administrator, and a lot easier to hire people for. It's 20 years too late for Solaris to have a future. > On 19 Jan 2017, at 14:40, Arthur Krewat wrote: > > Let's hope they do the right thing and release Solaris into the wild again. ZFS in particular. > > Personally, I think they are making a huge mistake. What are they going to do, move to Linux? Oh, right... the "cloud" will be Linux. > > Blech. > >> On 1/19/2017 3:49 AM, Wesley Parish wrote: >> I suppose that set of rumours will lead to people shifting to the FOSS versions >> of Solaris and SPARC. >> >> Wesley Parish >> >> Quoting Kay Parker : >> >>> guess it is the beginning of the end of Solaris and the Sparc CPU: >>> 'Rumors have been circulating since late last year that Oracle was >>> planning to kill development of the Solaris operating system, with >>> major >>> layoffs coming to the operating system's development team. Others >>> speculated that future versions of the Unix platform Oracle acquired >>> with Sun Microsystems would be designed for the cloud and built for the >>> Intel platform only and that the SPARC processor line would meet its >>> demise. The good news, based on a recently released Oracle roadmap for >>> the SPARC platform, is that both Solaris and SPARC appear to have a >>> future. 
>>> >>> The bad news is that the next major version of Solaris—Solaris 12— >>> has >>> apparently been canceled, as it has disappeared from the roadmap. >>> Instead, it's been replaced with "Solaris 11.next"—and that version >>> is >>> apparently the only update planned for the operating system through >>> 2021. >>> >>> With its on-premises software and hardware sales in decline, Oracle has >>> been undergoing a major reorganization over the past two years as it >>> attempts to pivot toward the cloud. Those changes led to a major speed >>> bump in the development cycle for Java Enterprise Edition, a slowdown >>> significant enough that it spurred something of a Java community >>> revolt. >>> Oracle later announced a new roadmap for Java EE that recalibrated >>> expectations, focusing on cloud services features for the next version >>> of the software platform. ' >>> >> http://arstechnica.com/information-technology/2017/01/oracle-sort-of-confirms-demise-of-solaris-12-effort/ >>> -- >>> Kay Parker >>> kayparker at mailite.com >>> >>> -- >>> http://www.fastmail.com - The way an email service should be >>> >>> >> >> >> "I have supposed that he who buys a Method means to learn it." - Ferdinand Sor, >> Method for Guitar >> >> "A verbal contract isn't worth the paper it's written on." -- Samuel Goldwyn >> >> > From krewat at kilonet.net Fri Jan 20 03:02:14 2017 From: krewat at kilonet.net (Arthur Krewat) Date: Thu, 19 Jan 2017 12:02:14 -0500 Subject: [TUHS] Oracle euthanizes Solaris 12, expunging it from roadmap In-Reply-To: References: <1484812418.3800555.852554160.1638329B@webmail.messagingengine.com> <1484815787.58807daba38e0@www.paradise.net.nz> Message-ID: Have you used Solaris 11, especially up to 11.3? I find Linux and it's recently evolving service and network configuration methods to be a bit of a pain. The biggest example of this was the loss of ifconfig from the minimal install of Redhat/Centos recently. 
Manually installing nettools just to get it back, when every other UNIX still has it is exasperating. If you're a Linux-only house, and even then are up-to-date and use only one or two distros of Linux, I get it. But if, like me, you administer multiple distros, versions, and then start moving across AIX, HP/UX and Solaris, it's another glaring example of "we're going it alone" (again). For what it's worth, I've been administering a PeopleSoft environment built entirely on Solaris x86 (half of it virtualized) for quite a few years, and it's been trouble-free. All the other Linux distros I administer have had their own idiosyncrasies and bugs. Not to mention the glaring security holes that have come out in the past 2 years. Conversely, I've also administered some SFHA clusters on Redhat and they've been flawless too. But I digress. This is not supposed to be a "bash Linux" thread :) On 1/19/2017 11:39 AM, Tim Bradshaw wrote: > Well, they are probably reacting to what their customers want which, in my experience working at a fairly typical customer up to a couple of years ago, is indeed Linux. That's kind of sad, but Linux, much though I'd like to hate it, is unfortunately both a significantly more pleasant experience as a user & administrator, and a lot easier to hire people for. It's 20 years too late for Solaris to have a future. > >> On 19 Jan 2017, at 14:40, Arthur Krewat wrote: >> >> Let's hope they do the right thing and release Solaris into the wild again. ZFS in particular. >> >> Personally, I think they are making a huge mistake. What are they going to do, move to Linux? Oh, right... the "cloud" will be Linux. >> >> Blech. >> >>> On 1/19/2017 3:49 AM, Wesley Parish wrote: >>> I suppose that set of rumours will lead to people shifting to the FOSS versions >>> of Solaris and SPARC. 
>>> >>> Wesley Parish >>> >>> Quoting Kay Parker : >>> >>>> guess it is the beginning of the end of Solaris and the Sparc CPU: >>>> 'Rumors have been circulating since late last year that Oracle was >>>> planning to kill development of the Solaris operating system, with >>>> major >>>> layoffs coming to the operating system's development team. Others >>>> speculated that future versions of the Unix platform Oracle acquired >>>> with Sun Microsystems would be designed for the cloud and built for the >>>> Intel platform only and that the SPARC processor line would meet its >>>> demise. The good news, based on a recently released Oracle roadmap for >>>> the SPARC platform, is that both Solaris and SPARC appear to have a >>>> future. >>>> >>>> The bad news is that the next major version of Solaris—Solaris 12— >>>> has >>>> apparently been canceled, as it has disappeared from the roadmap. >>>> Instead, it's been replaced with "Solaris 11.next"—and that version >>>> is >>>> apparently the only update planned for the operating system through >>>> 2021. >>>> >>>> With its on-premises software and hardware sales in decline, Oracle has >>>> been undergoing a major reorganization over the past two years as it >>>> attempts to pivot toward the cloud. Those changes led to a major speed >>>> bump in the development cycle for Java Enterprise Edition, a slowdown >>>> significant enough that it spurred something of a Java community >>>> revolt. >>>> Oracle later announced a new roadmap for Java EE that recalibrated >>>> expectations, focusing on cloud services features for the next version >>>> of the software platform. ' >>>> >>> http://arstechnica.com/information-technology/2017/01/oracle-sort-of-confirms-demise-of-solaris-12-effort/ >>>> -- >>>> Kay Parker >>>> kayparker at mailite.com >>>> >>>> -- >>>> http://www.fastmail.com - The way an email service should be >>>> >>>> >>> >>> "I have supposed that he who buys a Method means to learn it." 
- Ferdinand Sor, >>> Method for Guitar >>> >>> "A verbal contract isn't worth the paper it's written on." -- Samuel Goldwyn >>> >>> > > From scj at yaccman.com Fri Jan 20 03:47:27 2017 From: scj at yaccman.com (Steve Johnson) Date: Thu, 19 Jan 2017 09:47:27 -0800 Subject: [TUHS] Was pcc ever ported to the CDC6600? In-Reply-To: Message-ID: PCC ended up being ported to many dozen different architectures, so it's quite possible, but I don't recall it being done.  It was kind of a dinosaur by the early 70's.  I'm not even sure that it had memory protection, and it certainly didn't have paging.  And the I/O system was strange.  So porting Unix would have been next to impossible. The main thing I remember about the 6600 was that it didn't have parity bits on its memory.  So people used to run the same program three times and if two of the answers agreed, they published... Steve ----- Original Message ----- From: "Nemo" To:"Steve Johnson" Cc:"TUHS main list" Sent:Wed, 18 Jan 2017 21:52:58 -0500 Subject:Was pcc ever ported to the CDC6600? All this talk of targets for UNIX makes me wonder (given the eccentricity of the machine). N. -------------- next part -------------- An HTML attachment was scrubbed... URL: From rminnich at gmail.com Fri Jan 20 03:52:03 2017 From: rminnich at gmail.com (ron minnich) Date: Thu, 19 Jan 2017 17:52:03 +0000 Subject: [TUHS] Was pcc ever ported to the CDC6600? In-Reply-To: References: Message-ID: well, you know, parity is for farmers. For more good fun with SRC's arithmetic, ... > https://people.eecs.berkeley.edu/~wkahan/CS279/CrayUG.pdf > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From charles.unix.pro at gmail.com Fri Jan 20 04:06:50 2017 From: charles.unix.pro at gmail.com (Charles Anthony) Date: Thu, 19 Jan 2017 10:06:50 -0800 Subject: [TUHS] Was pcc ever ported to the CDC6600? 
In-Reply-To: References: Message-ID: On Thu, Jan 19, 2017 at 9:47 AM, Steve Johnson wrote: > PCC ended up being ported to many dozen different architectures, so it's > quite possible, but I don't recall it being done. It was kind of a > dinosaur by the early 70's. I'm not even sure that it had memory > protection, and it certainly didn't have paging. > "base and bound" memory protection; no paging. > And the I/O system was strange. > -- Charles -------------- next part -------------- An HTML attachment was scrubbed... URL: From schily at schily.net Fri Jan 20 04:17:08 2017 From: schily at schily.net (Joerg Schilling) Date: Thu, 19 Jan 2017 19:17:08 +0100 Subject: [TUHS] Oracle euthanizes Solaris 12, expunging it from roadmap In-Reply-To: <1484815787.58807daba38e0@www.paradise.net.nz> References: <1484812418.3800555.852554160.1638329B@webmail.messagingengine.com> <1484815787.58807daba38e0@www.paradise.net.nz> Message-ID: <588102a4.MszEO5kmgZsE4GwF%schily@schily.net> Wesley Parish wrote: > I suppose that set of rumours will lead to people shifting to the FOSS versions > of Solaris and SPARC. I would be happy if we had a commonly accepted OSS line for OpenSolaris development. The current problem is that the players in that game all have commercial interests and removed many parts from the sources that they believe are not needed anymore. Another problem is that the closed source i18n parts of libc in Illumos have been replaced by a FreeBSD implementation that is not POSIX compliant. So for today, there is development but only in order to meet particular interests.
Jörg -- EMail:joerg at schily.net (home) Jörg Schilling D-13353 Berlin joerg.schilling at fokus.fraunhofer.de (work) Blog: http://schily.blogspot.com/ URL: http://cdrecord.org/private/ http://sourceforge.net/projects/schilytools/files/ From downing.nick at gmail.com Fri Jan 20 12:00:00 2017 From: downing.nick at gmail.com (Nick Downing) Date: Fri, 20 Jan 2017 13:00:00 +1100 Subject: [TUHS] Oracle euthanizes Solaris 12, expunging it from roadmap In-Reply-To: References: <1484812418.3800555.852554160.1638329B@webmail.messagingengine.com> <1484815787.58807daba38e0@www.paradise.net.nz> Message-ID: I kind of differ here, I know I'm throwing fat into the fire but.... I had to use Solaris at uni when I started my degree 10 years ago. It was at Melbourne University which DID have a very good computer science programme, you had to learn lots of low-level stuff like C and basic Unix programming, etc. I also used to tutor, and I remember I taught a course about MIPS assembly language using an emulator, which I believe all CS majors had to do. Obviously there was plenty of high-level stuff too (I taught a course on SQL relational algebra and so on), but what I remember is you had to learn Solaris in first year, and you had to basically know your way around the system, I recall there was a course that included a shell programming component too. Well much as I completely agree that graduates should be taught C and Unix, I found Solaris to be a pretty bad platform. I guess the core of my dissatisfaction stems from it being different to Linux, and so I suppose Solaris religious people are going to say "GNU is an ABOMINATION it should NEVER HAVE INTRODUCED LONG COMMAND-LINE OPTIONS and ALL THE TOOLS CONTAIN USELESS FEATURES" and so on... and they are right... but nevertheless I found it much less useable and I felt a lot of the tools had rough edges: besides changing "mkdir --parents" into "mkdir -p", you had to account for different behaviour, which was in most cases more naive.
So, because of this the sysadmins had installed a whole boatload of GNU tools into places like /opt/gcc-(VERSION) or /opt/coreutils-(VERSION), and if you had to use a specific feature of say, "ls" or "find" or "sort" that wasn't in the Solaris tool, you had to exhaustively list the long pathname to the GNU tool in your script. The sysadmins did this kind of piecemeal in the system's scripts like .profile and the result was basically a total mess, I don't know how you would have explained all this to a naive student. Not to mention that the directory structure of the student servers had evolved into a huge mess over a decade or more with lots of stuff installed in weird places that was referred to in the course material that the teachers had developed, i.e. our IP investment relied totally on this strange Solaris installation. I gather that the coursework has been significantly dumbed down now with the introduction of the so-called "Melbourne Model", it was very controversial at the time, but basically means you learn only dumbed down stuff at undergraduate level, and to get a real degree you have to do a Masters at least. I believe the Solaris student servers are still running but I don't think the coursework uses them anymore, it's all Python- and web-based now. When my sister did the course recently, all her assignments were done by logging into a website and running a Python-based IDE via the website which would run her Python scripts and show her the results right there in the browser. Hmm. So I digress but anyway, I would have taken the opportunity to change all the coursework to refer to a Linux student server with all the normal tools installed in the expected places. I always found the Solaris system to be pretty much like stepping back in time. And I do not really understand why corporations would want to run Solaris when Linux is vastly more developed.
I guess maybe they have a big investment in Solaris based tools and networks (perhaps stuff like Sun Grid Engine), but I would think it would be easier to port all this to Linux than to continue bashing one's head against a brick wall in trying to get this outdated and proprietary system to become modern. Much as I love BSD I feel that much the same argument applies to FreeBSD as well, it will simply never be as developed/mature as Linux. cheers, Nick On Fri, Jan 20, 2017 at 1:40 AM, Arthur Krewat wrote: > Let's hope they do the right thing and release Solaris into the wild again. > ZFS in particular. > > Personally, I think they are making a huge mistake. What are they going to > do, move to Linux? Oh, right... the "cloud" will be Linux. > > Blech. > > On 1/19/2017 3:49 AM, Wesley Parish wrote: >> >> I suppose that set of rumours will lead to people shifting to the FOSS >> versions >> of Solaris and SPARC. >> >> Wesley Parish >> >> Quoting Kay Parker : >> >>> guess it is the beginning of the end of Solaris and the Sparc CPU: >>> 'Rumors have been circulating since late last year that Oracle was >>> planning to kill development of the Solaris operating system, with >>> major >>> layoffs coming to the operating system's development team. Others >>> speculated that future versions of the Unix platform Oracle acquired >>> with Sun Microsystems would be designed for the cloud and built for the >>> Intel platform only and that the SPARC processor line would meet its >>> demise. The good news, based on a recently released Oracle roadmap for >>> the SPARC platform, is that both Solaris and SPARC appear to have a >>> future. >>> >>> The bad news is that the next major version of Solaris—Solaris 12— >>> has >>> apparently been canceled, as it has disappeared from the roadmap. >>> Instead, it's been replaced with "Solaris 11.next"—and that version >>> is >>> apparently the only update planned for the operating system through >>> 2021. 
>>> >>> With its on-premises software and hardware sales in decline, Oracle has >>> been undergoing a major reorganization over the past two years as it >>> attempts to pivot toward the cloud. Those changes led to a major speed >>> bump in the development cycle for Java Enterprise Edition, a slowdown >>> significant enough that it spurred something of a Java community >>> revolt. >>> Oracle later announced a new roadmap for Java EE that recalibrated >>> expectations, focusing on cloud services features for the next version >>> of the software platform. ' >>> >> >> http://arstechnica.com/information-technology/2017/01/oracle-sort-of-confirms-demise-of-solaris-12-effort/ >>> >>> -- >>> Kay Parker >>> kayparker at mailite.com >>> >>> -- >>> http://www.fastmail.com - The way an email service should be >>> >>> >> >> >> >> "I have supposed that he who buys a Method means to learn it." - Ferdinand >> Sor, >> Method for Guitar >> >> "A verbal contract isn't worth the paper it's written on." -- Samuel >> Goldwyn >> >> > From wkt at tuhs.org Fri Jan 20 12:59:11 2017 From: wkt at tuhs.org (Warren Toomey) Date: Fri, 20 Jan 2017 12:59:11 +1000 Subject: [TUHS] Working Group for Release of 8th, 9th, 10th Ed Unix? Message-ID: <20170120025911.GB32698@minnie.tuhs.org> Now that we have quite a few ex-Bell Labs staff on the list, and several other luminaries, and with the Unix 50th anniversary not far off, perhaps it is time to form a working group to help lobby to get 8th, 9th and 10th Editions released. I'm after volunteers to help. People who can actually move this forward. Let me know if and how you can help out. 
Thanks, Warren From schily at schily.net Fri Jan 20 21:24:24 2017 From: schily at schily.net (Joerg Schilling) Date: Fri, 20 Jan 2017 12:24:24 +0100 Subject: [TUHS] Oracle euthanizes Solaris 12, expunging it from roadmap In-Reply-To: References: <1484812418.3800555.852554160.1638329B@webmail.messagingengine.com> <1484815787.58807daba38e0@www.paradise.net.nz> Message-ID: <5881f368.ZrpDJbpXm3DIAAjJ%schily@schily.net> Nick Downing wrote: > right... but nevertheless I found it much less useable and I felt a > lot of the tools had rough edges, as well as changing "mkdir > --parents" into "mkdir -p" you had to account for different behaviour > which was in most cases more naive. mkdir introduced the -p option with SunOS-4.0 (Spring 1988). This is long before gmkdir appeared.... you need to correct your standpoint. Jörg -- EMail:joerg at schily.net (home) Jörg Schilling D-13353 Berlin joerg.schilling at fokus.fraunhofer.de (work) Blog: http://schily.blogspot.com/ URL: http://cdrecord.org/private/ http://sourceforge.net/projects/schilytools/files/ From tfb at tfeb.org Fri Jan 20 21:54:48 2017 From: tfb at tfeb.org (Tim Bradshaw) Date: Fri, 20 Jan 2017 11:54:48 +0000 Subject: [TUHS] Oracle euthanizes Solaris 12, expunging it from roadmap In-Reply-To: References: <1484812418.3800555.852554160.1638329B@webmail.messagingengine.com> <1484815787.58807daba38e0@www.paradise.net.nz> Message-ID: On 20 Jan 2017, at 02:00, Nick Downing wrote: > > I always found the > Solaris system to be pretty much like stepping back in time. And I do > not really understand why corporations would want to run Solaris when > Linux is vastly more developed. This is actually exactly why they run it, and also what will lead to its (probable) demise.
Large commercial organisations are entirely made of systems which were written (or, more likely, constructed from a bunch of large third-party bits held together with locally-written glue) a long time ago, which perform some purpose which is assumed to be critical, and which no-one now understands. They are *assumed* to be critical because no-one dares to poke at them to find out if they really are: if perturbing some system might result in your ATMs not working, you don't, ever, perturb it, even if there is a very good chance that it won't. These systems need to be maintained somehow, which means two things at the OS level and below (it means related but other things above the OS level): the hardware and OS has to be supportable, and the OS has to pass various standards, usually related to security. This in turn means that the HW and OS need to be kept reasonably current. But on top of the OS sits a great mass of code which no-one understands and which certainly was not written by people who understood, well, anything really. So there will very definitely be hard-wired assumptions about things like filesystem layout and the exact behaviour of various tools, and equally definitely there will not be any checks that things are behaving well: the people who write this stuff are not people who check exit codes. So, since you need to deploy new versions of the OS, these new versions need to be *very compatible indeed* with old versions. Technically, this isn't incompatible with adding new features, so long as you don't break the old ones. But in practice the risk of doing so is so high that things tend to get pretty frozen (have you tested that the behaviour of your new 'mkdir' is compatible in every case with the old one, including in cases where the old one fails but the new one might not, because some code somewhere will be relying on that). So new features tend to get added off to the side, leaving the old thing alone. 
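[Editor's note: the "have you tested that the behaviour of your new 'mkdir' is compatible" question above is not rhetorical; the kind of check it implies can be sketched in a few lines of shell. The paths /old/bin/mkdir and /new/bin/mkdir are hypothetical placeholders for the two implementations under comparison.]

```shell
# Rough sketch of an exit-status compatibility check between an old and
# a new implementation of mkdir.  /old/bin/mkdir and /new/bin/mkdir are
# hypothetical placeholders, overridable via the OLD and NEW variables.
OLD="${OLD:-/old/bin/mkdir}"
NEW="${NEW:-/new/bin/mkdir}"

mkdir -p old_root new_root      # two scratch trees, one per implementation

check() {   # check <mkdir-args...>: both implementations must agree
    ( cd old_root && "$OLD" "$@" ) 2>/dev/null; old_rc=$?
    ( cd new_root && "$NEW" "$@" ) 2>/dev/null; new_rc=$?
    [ "$old_rc" -eq "$new_rc" ] || echo "MISMATCH on '$*': $old_rc vs $new_rc"
}

check a         # plain creation: intended to succeed in both
check a         # repeat: both should now fail on the existing directory
check -p a      # -p on an existing directory should keep succeeding
check a/b/c     # missing parents: both should fail without -p

rm -rf old_root new_root
```

The failing cases matter as much as the succeeding ones, which is exactly the point being made above: a "new" tool that succeeds where the old one failed can break scripts that depended on the failure.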
And that's why systems like Solaris seem old-fashioned: they're not old-fashioned, they're just extremely compatible. And it's also why they slowly die: their market ends up being people who have huge critical legacy systems which they need to maintain, not people who are building new systems. Indeed even the people with the great legacy chunks of software, when they build new systems, start using the shiny new platforms, because the shiny young people they hire to do this like the new platforms. Of course, no lessons are ever learned, so these shiny new systems are no more robust than the old ones were, meaning that the currently shiny new platforms they are built on will also gradually deteriorate into the slow death of compatibility (or they won't, and the ATMs will indeed stop working: I am not sure which is the worse outcome). --tim From random832 at fastmail.com Fri Jan 20 23:26:40 2017 From: random832 at fastmail.com (Random832) Date: Fri, 20 Jan 2017 08:26:40 -0500 Subject: [TUHS] Oracle euthanizes Solaris 12, expunging it from roadmap In-Reply-To: <5881f368.ZrpDJbpXm3DIAAjJ%schily@schily.net> References: <1484812418.3800555.852554160.1638329B@webmail.messagingengine.com> <1484815787.58807daba38e0@www.paradise.net.nz> <5881f368.ZrpDJbpXm3DIAAjJ%schily@schily.net> Message-ID: <1484918800.4043318.854019536.3CB39B4A@webmail.messagingengine.com> On Fri, Jan 20, 2017, at 06:24, Joerg Schilling wrote: > Nick Downing wrote: > > > right... but nevertheless I found it much less useable and I felt a > > lot of the tools had rough edges, as well as changing "mkdir > > --parents" into "mkdir -p" you had to account for different behaviour > > which was in most cases more naive. > > mkdir introduced the -p option with SunOS-4.0 (Spring 1988). > This is long before gmkdir appeared.... you need to correct your standpoint.
I think his assertion is that he personally had originally learned the command as "mkdir --parents" [which was and is GNU-only], and had to change to spelling it "-p" when going to non-linux systems, along with some unspecified behavior differences. From schily at schily.net Sat Jan 21 00:23:25 2017 From: schily at schily.net (Joerg Schilling) Date: Fri, 20 Jan 2017 15:23:25 +0100 Subject: [TUHS] Oracle euthanizes Solaris 12, expunging it from roadmap In-Reply-To: <1484918800.4043318.854019536.3CB39B4A@webmail.messagingengine.com> References: <1484812418.3800555.852554160.1638329B@webmail.messagingengine.com> <1484815787.58807daba38e0@www.paradise.net.nz> <5881f368.ZrpDJbpXm3DIAAjJ%schily@schily.net> <1484918800.4043318.854019536.3CB39B4A@webmail.messagingengine.com> Message-ID: <58821d5d.xUgQiZSe3DhW87+W%schily@schily.net> Random832 wrote: > I think his assertion is that he personally had originally learned the > command as "mkdir --parents" [which was and is GNU-only], and had to > change to spelling it "-p" when going to non-linux systems, along with > some unspecified behavior differences. This is why I call Linux a system that tries to establish a vendor lock-in. Man pages could have been written in a way that makes it obvious that --parents is non-portable, but they rather encourage people to learn GNU long options.
Jörg -- EMail:joerg at schily.net (home) Jörg Schilling D-13353 Berlin joerg.schilling at fokus.fraunhofer.de (work) Blog: http://schily.blogspot.com/ URL: http://cdrecord.org/private/ http://sourceforge.net/projects/schilytools/files/ From usotsuki at buric.co Sat Jan 21 00:29:59 2017 From: usotsuki at buric.co (Steve Nickolas) Date: Fri, 20 Jan 2017 09:29:59 -0500 (EST) Subject: [TUHS] Oracle euthanizes Solaris 12, expunging it from roadmap In-Reply-To: <58821d5d.xUgQiZSe3DhW87+W%schily@schily.net> References: <1484812418.3800555.852554160.1638329B@webmail.messagingengine.com> <1484815787.58807daba38e0@www.paradise.net.nz> <5881f368.ZrpDJbpXm3DIAAjJ%schily@schily.net> <1484918800.4043318.854019536.3CB39B4A@webmail.messagingengine.com> <58821d5d.xUgQiZSe3DhW87+W%schily@schily.net> Message-ID: On Fri, 20 Jan 2017, Joerg Schilling wrote: > Random832 wrote: > >> I think his assertion is that he personally had originally learned the >> command as "mkdir --parents" [which was and is GNU-only], and had to >> change to spelling it "-p" when going to non-linux systems, along with >> some unspecified behavior differences. > > This is why I call Linux a system that tries to establich a vendor lock in. > > Man pages could have been written in a way that makes it obvious that --parents > is non-portable, but they rather encourage people to learn gnu long options. > > Jörg > > I blame GNU rather than the Linux people. GNU are just as much masters of "embrace, extend, exterminate" as Microsoft. I've thought of trying to build a "GNUless" Linux distribution with a purer Unix feel, but I get hung trying to step myself through the process. -uso. 
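[Editor's note: the portability point in this exchange is easy to demonstrate. POSIX specifies only the short -p option for mkdir, so plain "mkdir -p" works with Solaris /usr/bin/mkdir, the BSDs and GNU coreutils alike, while --parents is a GNU extension that a portable script has to probe for rather than assume. A minimal sketch, with throwaway directory names:]

```shell
# Portable: -p is specified by POSIX, so this works everywhere.
mkdir -p tmp_demo/a/b/c

# Non-portable: --parents is a GNU extension; probe instead of assuming.
if mkdir --parents tmp_demo/x/y 2>/dev/null; then
    long_opts=yes       # GNU mkdir (or something compatible)
else
    long_opts=no        # e.g. Solaris, BSD or busybox mkdir
fi

# The portable spelling has done its job either way.
if [ -d tmp_demo/a/b/c ]; then
    echo "portable -p worked; long options: $long_opts"
fi
rm -rf tmp_demo
```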
From rudi.j.blom at gmail.com Sat Jan 21 00:58:28 2017 From: rudi.j.blom at gmail.com (Rudi Blom) Date: Fri, 20 Jan 2017 21:58:28 +0700 Subject: [TUHS] Oracle euthanizes Solaris 12, expunging it from roadmap Message-ID: I'm a bit puzzled, but then I only ever worked with some version of Ultrix and an AT&T flavour of UNIX in Philips, SCO 3.2V4.2 (OpenServer 3ish), DEC Digital UNIX, Tru64, HP-UX 11.23/11.31 and only ever used "mkdir -p". Some differences in the various versions are easily solved in scripts, as shown below. Not the best of examples, but easy. Getting it to work on a linux flavour wouldn't be too difficult :-)

OS_TYPE=`uname -s`

case "${OS_TYPE}" in
"OSF1")
    PATH=".:/etc:/bin:/sbin:/usr/bin:/usr/sbin:/xyz/shell:/xyz/appl/unix/bin:/xyz/utils:"
    TZ="THA-7"
    ;;
"HP-UX")
    PATH=".:/etc:/bin:/sbin:/usr/bin:/usr/sbin:/usr/contrib/bin:/xyz/field/scripts:/xyz/shell:/xyz/appl/unix/bin:/xyz/utils:"
    TZ="TST-7"
    ;;
*)
    echo "${OS_TYPE} unknown, exit"
    exit 1
    ;;
esac

From krewat at kilonet.net Sat Jan 21 01:20:46 2017 From: krewat at kilonet.net (Arthur Krewat) Date: Fri, 20 Jan 2017 10:20:46 -0500 Subject: [TUHS] Oracle euthanizes Solaris 12, expunging it from roadmap In-Reply-To: References: <1484812418.3800555.852554160.1638329B@webmail.messagingengine.com> <1484815787.58807daba38e0@www.paradise.net.nz> <5881f368.ZrpDJbpXm3DIAAjJ%schily@schily.net> <1484918800.4043318.854019536.3CB39B4A@webmail.messagingengine.com> <58821d5d.xUgQiZSe3DhW87+W%schily@schily.net> Message-ID: I agree. Why would anyone ever write a piece of software that has multiple versions of options, when there's no need for the extra wordy versions? In the mkdir source, there are no conditionals around anything to do with "static struct option const longopts[]" - so there's no environment where they wouldn't exist.
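[Editor's note: getting Rudi's case statement above to work on a Linux flavour really is straightforward, since `uname -s` reports the system name ("OSF1", "HP-UX", "Linux"); one more branch covers it. The PATH entries below are purely illustrative, and this sketch saves and restores the caller's PATH since it is only a demo:]

```shell
save_path="$PATH"       # keep the caller's PATH; this is only a demo

OS_TYPE=`uname -s`      # system name, e.g. OSF1, HP-UX, SunOS -- or Linux

case "${OS_TYPE}" in
"Linux")
    # Illustrative values only: on Linux the usual tools already live in
    # /bin and /usr/bin, and zoneinfo names like "Asia/Bangkok" replace
    # the old "THA-7"-style TZ strings.
    PATH=".:/etc:/bin:/sbin:/usr/bin:/usr/sbin:/xyz/shell:/xyz/utils:"
    TZ="Asia/Bangkok"
    ;;
*)
    echo "${OS_TYPE} not handled in this sketch"
    ;;
esac

PATH="$save_path"       # restore the real PATH
```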
From man page for mkdir on a RedHat 6.8 install:

   -m, --mode=MODE     set file mode (as in chmod), not a=rwx - umask
   -p, --parents       no error if existing, make parent directories as needed
   -v, --verbose       print a message for each created directory
   -Z, --context=CTX   set the SELinux security context of each created
                       directory to CTX

   When COREUTILS_CHILD_DEFAULT_ACLS environment variable is set, -p/--parents
   option respects default umask and ACLs, as it does in Red Hat Enterprise
   Linux 7 by default

       --help          display this help and exit
       --version       output version information and exit

Oh, and an old-man rant: get off my lawn. On 1/20/2017 9:29 AM, Steve Nickolas wrote: > > I blame GNU rather than the Linux people. GNU are just as much > masters of "embrace, extend, exterminate" as Microsoft. > > I've thought of trying to build a "GNUless" Linux distribution with a > purer Unix feel, but I get hung trying to step myself through the > process. > > -uso. -------------- next part -------------- An HTML attachment was scrubbed... URL: From akosela at andykosela.com Sat Jan 21 01:45:35 2017 From: akosela at andykosela.com (Andy Kosela) Date: Fri, 20 Jan 2017 09:45:35 -0600 Subject: [TUHS] Oracle euthanizes Solaris 12, expunging it from roadmap In-Reply-To: References: <1484812418.3800555.852554160.1638329B@webmail.messagingengine.com> <1484815787.58807daba38e0@www.paradise.net.nz> <5881f368.ZrpDJbpXm3DIAAjJ%schily@schily.net> <1484918800.4043318.854019536.3CB39B4A@webmail.messagingengine.com> <58821d5d.xUgQiZSe3DhW87+W%schily@schily.net> Message-ID: On Friday, January 20, 2017, Tim Bradshaw > wrote: > > And it's also why they slowly die: their market ends up being people who > have huge critical legacy systems which they need to maintain, not people > who are building new systems.
Indeed even the people with the great legacy > chunks of software, when they build new systems, start using the shiny new > platforms, because the shiny young people they hire to do this like the new > platforms. > > > I understand that Linux can still be called a new kid on the block, but it is actually not "a new platform" anymore. It has been deployed (along with FreeBSD) in large corporations for around 20 years now. It really became the Standard OS from embedded world to supercomputers. Personally I do not find this to be a bad thing, because with OS standardization comes uniformity, and I would rather have one true Unix standard than hundreds of incompatible ones. I believe that the future of proprietary UNIX is doomed and the only remaining choices for server operating systems will be Linux and Windows in the near future. If you think about it, the future is already here... --Andy -------------- next part -------------- An HTML attachment was scrubbed... URL: From tfb at tfeb.org Sat Jan 21 02:30:14 2017 From: tfb at tfeb.org (tfb at tfeb.org) Date: Fri, 20 Jan 2017 16:30:14 +0000 Subject: [TUHS] Oracle euthanizes Solaris 12, expunging it from roadmap In-Reply-To: References: <1484812418.3800555.852554160.1638329B@webmail.messagingengine.com> <1484815787.58807daba38e0@www.paradise.net.nz> <5881f368.ZrpDJbpXm3DIAAjJ%schily@schily.net> <1484918800.4043318.854019536.3CB39B4A@webmail.messagingengine.com> <58821d5d.xUgQiZSe3DhW87+W%schily@schily.net> Message-ID: <2684AF1F-8B33-4646-BF9C-0FCAD6C70D5A@tfeb.org> On 20 Jan 2017, at 15:45, Andy Kosela wrote: > > I understand that Linux can still be called a new kid on the block, but it is actually not "a new platform" anymore. It has been deployed (along with FreeBSD) in large corporations for around 20 years now. It really became the Standard OS from embedded world to supercomputers. 
The people I'm talking about (who might be characterised as 'COBOL shops') are not early adopters: 20 years is about how long it takes for them to decide something is safe. Yes, of course Linux has been everywhere for a long time, but ten years ago it almost certainly was not involved in running your bank account, while today it almost certainly is. > Personally I do not find this to be a bad thing, because with OS standardization comes uniformity, and I would rather have one true Unix standard than hundreds of incompatible ones. 'Linux' and 'OS standardisation' are funny phrases to see in the same sentence. (Note: I work in an exclusively Linux HPC environment: I am not some anti-Linux holdout, I just have previously worked in the above-mentioned environments and I appreciate their needs and fears). I think this is probably off-topic for TUHS, sorry. -------------- next part -------------- An HTML attachment was scrubbed... URL: From cym224 at gmail.com Sat Jan 21 05:38:29 2017 From: cym224 at gmail.com (Nemo) Date: Fri, 20 Jan 2017 14:38:29 -0500 Subject: [TUHS] Oracle euthanizes Solaris 12, expunging it from roadmap In-Reply-To: <2684AF1F-8B33-4646-BF9C-0FCAD6C70D5A@tfeb.org> References: <1484812418.3800555.852554160.1638329B@webmail.messagingengine.com> <1484815787.58807daba38e0@www.paradise.net.nz> <5881f368.ZrpDJbpXm3DIAAjJ%schily@schily.net> <1484918800.4043318.854019536.3CB39B4A@webmail.messagingengine.com> <58821d5d.xUgQiZSe3DhW87+W%schily@schily.net> <2684AF1F-8B33-4646-BF9C-0FCAD6C70D5A@tfeb.org> Message-ID: On 20 January 2017 at 11:30, wrote (in part): > 'Linux' and 'OS standardisation' are funny phrases to see in the same > sentence. As they say, 'ave a laugh, guv: http://www.iso.org/iso/catalogue_detail.htm?csnumber=43781 N.
From steffen at sdaoden.eu Sat Jan 21 06:30:54 2017 From: steffen at sdaoden.eu (Steffen Nurpmeso) Date: Fri, 20 Jan 2017 21:30:54 +0100 Subject: [TUHS] Oracle euthanizes Solaris 12, expunging it from roadmap In-Reply-To: References: <1484812418.3800555.852554160.1638329B@webmail.messagingengine.com> <1484815787.58807daba38e0@www.paradise.net.nz> <5881f368.ZrpDJbpXm3DIAAjJ%schily@schily.net> <1484918800.4043318.854019536.3CB39B4A@webmail.messagingengine.com> <58821d5d.xUgQiZSe3DhW87+W%schily@schily.net> Message-ID: <20170120203054.xqn40%steffen@sdaoden.eu> Andy Kosela wrote: |On Friday, January 20, 2017, Tim Bradshaw <[1]tfb at tfeb.org[/1]> wrote: \ |And it's also why they slowly die: their market ends up being people \ |who have |huge critical legacy systems which they need to maintain, not people \ |who are building new systems.  Indeed even the people with the great \ |legacy chunks |of software, when they build new systems, start using the shiny new \ |platforms, because the shiny young people they hire to do this like \ |the new |platforms. | | [1] javascript:_e(%7B%7D,'cvml','tfb at tfeb.org'); (Couldn't resist, sorry.) |I understand that Linux can still be called a new kid on the block, \ |but it is actually not "a new platform" anymore.  It has been deployed \ |(along with |FreeBSD) in large corporations for around 20 years now.  It really \ |became the Standard OS from embedded world to supercomputers. | |Personally I do not find this to be a bad thing, because with OS standardiz\ |ation comes uniformity, and I would rather have one true Unix standard |than hundreds of incompatible ones. I am really glad with the POSIX standard, that, if in doubt, not a few members of this list have participated in. And if it is that they have thought or implemented the foundations that led to POSIX. Note that i am really happy with it, but without it nothing but ISO C would be there.
And then i would rather boot a Plan9/9front/(9atom) system and adore so much the manuals that have been written by the really good ones. And if just for the spirit from in between the lines. |I believe that the future of proprietary UNIX is doomed and the only \ |remaining choices for server operating systems will be Linux and Windows \ |in the |near future.  If you think about it, the future is already here... Not to start throwing with something that stinks, but in practice my resource files from FreeBSD 4.7 are still in use for 10.3. I never setup a server as such until January 2016, and i had a FreeBSD one running via inetd on one afternoon. And in general one thing that i for one would never overestimate is the documentation, and even though Linux has so much improved, the /usr/share isn't there, which i think is a real pity for young programmers, who may possibly never access doc.cat-v.org. Maybe they have a good professor. And the release information of FreeBSD and also OpenBSD really is appreciated so much by someone like me, who has neither time nor interest to stay totally up-to-date regarding Linux kernel and GNU userland development! --steffen From kayparker at mailite.com Sat Jan 21 08:30:29 2017 From: kayparker at mailite.com (Kay Parker) Date: Fri, 20 Jan 2017 14:30:29 -0800 Subject: [TUHS] Oracle euthanizes Solaris 12, expunging it from roadmap In-Reply-To: <20170120203054.xqn40%steffen@sdaoden.eu> References: <1484812418.3800555.852554160.1638329B@webmail.messagingengine.com> <1484815787.58807daba38e0@www.paradise.net.nz> <5881f368.ZrpDJbpXm3DIAAjJ%schily@schily.net> <1484918800.4043318.854019536.3CB39B4A@webmail.messagingengine.com> <58821d5d.xUgQiZSe3DhW87+W%schily@schily.net> <20170120203054.xqn40%steffen@sdaoden.eu> Message-ID: <1484951429.4025367.854538648.4561EC2A@webmail.messagingengine.com> I'm running fedora25 beside my Linuxmint 18.1 installation.
fedora25 means fully featured systemd and wayland on the top of the super fast Linux (kernel) 4.9. Linux, that's where the story goes. Solaris etc. are just running behind, in the past. On Fri, Jan 20, 2017, at 12:30 PM, Steffen Nurpmeso wrote: > Andy Kosela wrote: > |On Friday, January 20, 2017, Tim Bradshaw <[1]tfb at tfeb.org[/1]> wrote: > \ > |And it's also why they slowly die: their market ends up being people \ > |who have > |huge critical legacy systems which they need to maintain, not people \ > |who are building new systems.  Indeed even the people with the great \ > |legacy chunks > |of software, when they build new systems, start using the shiny new \ > |platforms, because the shiny young people they hire to do this like \ > |the new > |platforms. > | > | [1] javascript:_e(%7B%7D,'cvml','tfb at tfeb.org'); > > (Couldn't resist, sorry.) > > |I understand that Linux can still be called a new kid on the block, \ > |but it is actually not "a new platform" anymore.  It has been deployed > \ > |(along with > |FreeBSD) in large corporations for around 20 years now.  It really \ > |became the Standard OS from embedded world to supercomputers. > | > |Personally I do not find this to be a bad thing, because with OS > standardiz\ > |ation comes uniformity, and I would rather have one true Unix standard > |than hundreds of incompatible ones. > > I am really glad with the POSIX standard, that, if in doubt, not > a few members of this list have participated in. And if it is that > they have thought or implemented the foundations that led to > POSIX. Note that i am really happy with it, but without it > nothing but ISO C would be there. And then i would rather boot > a Plan9/9front/(9atom) system and adore so much the manuals that > have been written by the really good ones. And if just for the > spirit from in between the lines.
> > |I believe that the future of proprietary UNIX is doomed and the only \ > |remaining choices for server operating systems will be Linux and > Windows \ > |in the > |near future.  If you think about it, the future is already here... > > Not to start throwing with something that stinks, but in practice > my resource files from FreeBSD 4.7 are still in use for 10.3. > I never setup a server as such until January 2016, and i had > a FreeBSD one running via inetd on one afternoon. > And in general one thing that i for one would never overestimate > is the documentation, and even though Linux has so much improved, > the /usr/share isn't there, which is think is a real pity for > young programmers, which may possibly never access doc.cat-v.org. > Maybe they have a good professor. And the release information of > FreeBSD and also OpenBSD really is appreciated so much by someone > like me, who has neither time nor interest to stay totally > up-to-date regarding Linux kernel and GNU userland development! > > --steffen -- Kay Parker kayparker at mailite.com -- http://www.fastmail.com - The way an email service should be From steffen at sdaoden.eu Sat Jan 21 09:50:45 2017 From: steffen at sdaoden.eu (Steffen Nurpmeso) Date: Sat, 21 Jan 2017 00:50:45 +0100 Subject: [TUHS] Oracle euthanizes Solaris 12, expunging it from roadmap In-Reply-To: <1484951429.4025367.854538648.4561EC2A@webmail.messagingengine.com> References: <1484812418.3800555.852554160.1638329B@webmail.messagingengine.com> <1484815787.58807daba38e0@www.paradise.net.nz> <5881f368.ZrpDJbpXm3DIAAjJ%schily@schily.net> <1484918800.4043318.854019536.3CB39B4A@webmail.messagingengine.com> <58821d5d.xUgQiZSe3DhW87+W%schily@schily.net> <20170120203054.xqn40%steffen@sdaoden.eu> <1484951429.4025367.854538648.4561EC2A@webmail.messagingengine.com> Message-ID: <20170120235045.DgR9I%steffen@sdaoden.eu> Kay Parker wrote: |I'm running fedora25 beside my Linuxmint 18.1 installation. 
fedora25 |means fully featured systemd and wayland on the top of the super fast |Linux (kernel) 4.9. |Linux thats were the story goes. Solaris etc. just running behind in the |past. Well i am currently running Alpine on the server (i can't recall the name of the init system they use at the moment, it is what Debian had and Gentoo i think still has; i don't like it, e.g., it cannot perform proper restart if one of the processes fails to stand up again, you have to start them all one by one, then, which is -- let aside how complicated it is to program and maintain such an init system, it is a science! -- mysterious to me given that it gets the dependencies right in normal conditions, and has so much state laying around; and note it will run FreeBSD again at some later time, it was just that i haven't really cared for Linux since i have discovered FreeBSD 4.7, and really felt i need to learn about it again after so and so many years, and then it was 2016, and it was clear what that would mean, and then it was David Bowie, etc. And CRUX-Linux, which is totally underrated, and uses a wonderful unagitated BSD-like approach, i'm looking forward for their new 3.3, in a not too distant future! And VoidLinux, which has a very fine package manager and uses runit, which i think is a really pragmatic, smart, and very small, init system that also is completely underrated. I hope all these systems can survive the very way their developers drive them up the road. I couldn't say, i really love the BSD way, but of course the Linux kernel is _so_ supportive, i really like CRUX. And Void, it is not even graphical no more by default. ArchLinux i have, too, in fact it is my main system since my main machine died. Arch uses systemd. Yes, i don't like systemd. Void is even more surfing the edge than Arch as of today, isn't that amazing? It has a shutdown time of 1 second. 
Linux wales 4.8.13-1-ARCH #1 SMP PREEMPT Fri Dec 9 07:24:34 CET 2016 x86_64 GNU/Linux Linux irish 4.9.5_1 #1 SMP PREEMPT Fri Jan 20 14:48:47 UTC 2017 i686 GNU/Linux --steffen From usotsuki at buric.co Sat Jan 21 10:09:00 2017 From: usotsuki at buric.co (Steve Nickolas) Date: Fri, 20 Jan 2017 19:09:00 -0500 (EST) Subject: [TUHS] Oracle euthanizes Solaris 12, expunging it from roadmap In-Reply-To: <20170120235045.DgR9I%steffen@sdaoden.eu> References: <1484812418.3800555.852554160.1638329B@webmail.messagingengine.com> <1484815787.58807daba38e0@www.paradise.net.nz> <5881f368.ZrpDJbpXm3DIAAjJ%schily@schily.net> <1484918800.4043318.854019536.3CB39B4A@webmail.messagingengine.com> <58821d5d.xUgQiZSe3DhW87+W%schily@schily.net> <20170120203054.xqn40%steffen@sdaoden.eu> <1484951429.4025367.854538648.4561EC2A@webmail.messagingengine.com> <20170120235045.DgR9I%steffen@sdaoden.eu> Message-ID: On Sat, 21 Jan 2017, Steffen Nurpmeso wrote: > Well i am currently running Alpine on the server (i can't recall > the name of the init system they use at the moment, it is what > Debian had and Gentoo i think still has sysvinit? -uso. 
From tfb at tfeb.org Sat Jan 21 10:25:04 2017 From: tfb at tfeb.org (Tim Bradshaw) Date: Sat, 21 Jan 2017 00:25:04 +0000 Subject: [TUHS] Oracle euthanizes Solaris 12, expunging it from roadmap In-Reply-To: References: <1484812418.3800555.852554160.1638329B@webmail.messagingengine.com> <1484815787.58807daba38e0@www.paradise.net.nz> <5881f368.ZrpDJbpXm3DIAAjJ%schily@schily.net> <1484918800.4043318.854019536.3CB39B4A@webmail.messagingengine.com> <58821d5d.xUgQiZSe3DhW87+W%schily@schily.net> <2684AF1F-8B33-4646-BF9C-0FCAD6C70D5A@tfeb.org> Message-ID: On 20 Jan 2017, at 19:38, Nemo wrote: > > As they say, 'ave a laugh, guv: > http://www.iso.org/iso/catalogue_detail.htm?csnumber=43781 > You have not understood the problem I was describing I think: things like POSIX or LSB do not solve it: if they did it would be trivial to port between platforms, and it is not, because code will always rely on behaviour which is outwith the standard, whatever the standard may be. These standards solve the problem of making well-written code portable, but your bank is not held together by well-written code, unfortunately. An interesting approach would be platforms which only supported the standard they purport to conform to (ie there would be no additional functionality at all): such platforms would make porting things more easy, but they would also be mostly indistinguishable from each other and thus eliminate most of the competition between vendors. They would also be impossibly austere of course. From krewat at kilonet.net Sat Jan 21 11:03:08 2017 From: krewat at kilonet.net (Arthur Krewat) Date: Fri, 20 Jan 2017 20:03:08 -0500 Subject: [TUHS] Working Group for Release of 8th, 9th, 10th Ed Unix? In-Reply-To: <20170120025911.GB32698@minnie.tuhs.org> References: <20170120025911.GB32698@minnie.tuhs.org> Message-ID: <6bdaea53-7127-d871-9fdf-f1147447b7dc@kilonet.net> If you need a tester, I'm your guy.
I can't help in the development arena, but I can certainly run it on a variety of machines and try to break it :) On 1/19/2017 9:59 PM, Warren Toomey wrote: > Now that we have quite a few ex-Bell Labs staff on the list, and several > other luminaries, and with the Unix 50th anniversary not far off, perhaps > it is time to form a working group to help lobby to get 8th, 9th and 10th > Editions released. > > I'm after volunteers to help. People who can actually move this forward. > Let me know if and how you can help out. > > Thanks, Warren > From khm at sciops.net Sat Jan 21 11:03:16 2017 From: khm at sciops.net (Kurt H Maier) Date: Fri, 20 Jan 2017 17:03:16 -0800 Subject: [TUHS] Oracle euthanizes Solaris 12, expunging it from roadmap In-Reply-To: References: <5881f368.ZrpDJbpXm3DIAAjJ%schily@schily.net> <1484918800.4043318.854019536.3CB39B4A@webmail.messagingengine.com> <58821d5d.xUgQiZSe3DhW87+W%schily@schily.net> <20170120203054.xqn40%steffen@sdaoden.eu> <1484951429.4025367.854538648.4561EC2A@webmail.messagingengine.com> <20170120235045.DgR9I%steffen@sdaoden.eu> Message-ID: <20170121010316.GC64506@wopr> On Fri, Jan 20, 2017 at 07:09:00PM -0500, Steve Nickolas wrote: > On Sat, 21 Jan 2017, Steffen Nurpmeso wrote: > > > Well i am currently running Alpine on the server (i can't recall > > the name of the init system they use at the moment, it is what > > Debian had and Gentoo i think still has > > sysvinit? > > -uso. OpenRC. 
https://wiki.gentoo.org/wiki/OpenRC khm From rp at servium.ch Sat Jan 21 11:58:47 2017 From: rp at servium.ch (Rico Pajarola) Date: Sat, 21 Jan 2017 02:58:47 +0100 Subject: [TUHS] Oracle euthanizes Solaris 12, expunging it from roadmap In-Reply-To: References: <1484812418.3800555.852554160.1638329B@webmail.messagingengine.com> <1484815787.58807daba38e0@www.paradise.net.nz> <5881f368.ZrpDJbpXm3DIAAjJ%schily@schily.net> <1484918800.4043318.854019536.3CB39B4A@webmail.messagingengine.com> <58821d5d.xUgQiZSe3DhW87+W%schily@schily.net> <2684AF1F-8B33-4646-BF9C-0FCAD6C70D5A@tfeb.org> Message-ID: On Sat, Jan 21, 2017 at 1:25 AM, Tim Bradshaw wrote: > > An interesting approach would be platforms which *only* supported the > standard they purport to conform to (ie there would be no additional > functionality at all): such platforms would make porting things more easy, > but they would also be mostly indistinguishable from each other and thus > eliminate most of the competition between vendors. They would also be > impossibly austere of course. > that's more or less what Solaris is doing, and why the defaults seem archaic to people who've only ever used Linux. You can change the "feel" of the Solaris (by adding/removing/rearranging stuff in $PATH) from SysV (/usr/bin), BSD (/usr/ucb), the "X/Open standard" (/usr/xpg4/bin), to GNU (/usr/gnu/bin). I might have missed some. AIUI /usr/xpg4 mostly exists in order to pass the standards tests ;) I really despised the "messiness" in Linux where the choice was either stable and outdated to the point of being useless (Debian until they got their act together), stable but patched beyond recognition (anything "Enterprise"), or bleeding edge where entire subsystems can get exchanged at any time without warning (anything "Desktop"). I was clinging to real systems like Solaris and FreeBSD, but eventually I gave up and I'm not looking back. 
The ease of getting stuff to work (hardware and software) greatly outweighs the lack of elegance and the occasional breakage due to unexpected changes. And there's another kind of elegance in being able to boot Linux on any random PC and have at least graphics, network, and storage work out of the box (most of the time anyway. Solaris never stood a chance on that front). Software gets installed with a simple "yum install foo" or "apt-get install foo" command. At some point Solaris also lost the performance race and that was pretty much it. I loved Solaris while it was alive and even when it was on life support. Oracle killing Solaris came hardly as a surprise to anyone. The writing has been on the wall for a while, in bold and blinking. I'm only surprised it took so long... -------------- next part -------------- An HTML attachment was scrubbed... URL: From dave at horsfall.org Sat Jan 21 22:38:35 2017 From: dave at horsfall.org (Dave Horsfall) Date: Sat, 21 Jan 2017 23:38:35 +1100 (EST) Subject: [TUHS] Was pcc ever ported to the CDC6600? In-Reply-To: References: Message-ID: On Thu, 19 Jan 2017, Steve Johnson wrote: > PCC ended up being ported to many dozen different architectures, so it's > quite possible, but I don't recall it being done.  It was kind of a > dinosaur by the early 70's.  I'm not even sure that it had memory > protection, and it certainly didn't have paging.  And the I/O system was > strange.  So porting Unix would have been next to impossible. My memory of the Kyber (as we called them; we had a 72) was that it was not character-addressable, but 60-bit word-addressable, thus making string handling somewhat difficult. Don't get me started on its utterly broken architecture... I have thankfully lost my programming manual for the beast. > The main thing I remember about the 6600 was that it didn't have parity > bits on its memory.  So people used to run the same program three times > and if two of the answers agreed, they published... 
Parity only slowed it down, and besides, hardware never failed... My fondest memory of the thing was its command completion; I would start to type "O, TR" and it would fill out "O, TRANSACTION TERMINAL STATUS". Which reminds me that my worst memory was its console keyboard, with "0" on the left... -- Dave Horsfall DTM (VK2KFU) "Those who don't understand security will suffer." From kayparker at mailite.com Sat Jan 21 22:51:42 2017 From: kayparker at mailite.com (=?utf-8?Q?Kay=20Parker=20=09=20?=) Date: Sat, 21 Jan 2017 04:51:42 -0800 Subject: [TUHS] Was pcc ever ported to the CDC6600? In-Reply-To: References: Message-ID: <1485003102.1912634.854925048.79C4A97C@webmail.messagingengine.com> > My memory of the Kyber (as we called them; we had a 72) wasn't it a CDC Cyber? https://en.wikipedia.org/wiki/CDC_Cyber On Sat, Jan 21, 2017, at 04:38 AM, Dave Horsfall wrote: > On Thu, 19 Jan 2017, Steve Johnson wrote: > > > PCC ended up being ported to many dozen different architectures, so it's > > quite possible, but I don't recall it being done.  It was kind of a > > dinosaur by the early 70's.  I'm not even sure that it had memory > > protection, and it certainly didn't have paging.  And the I/O system was > > strange.  So porting Unix would have been next to impossible. > > My memory of the Kyber (as we called them; we had a 72) was that it was > not character-addressable, but 60-bit word-addressable, thus making > string > handling somewhat difficult. Don't get me started on its utterly broken > architecture... I have thankfully lost my programming manual for the > beast. > > > The main thing I remember about the 6600 was that it didn't have parity > > bits on its memory.  So people used to run the same program three times > > and if two of the answers agreed, they published... > > Parity only slowed it down, and besides, hardware never failed... 
> > My fondest memory of the thing was its command completion; I would start > to type "O, TR" and it would fill out "O, TRANSACTION TERMINAL STATUS". > Which reminds me that my worst memory was its console keyboard, with "0" > on the left... > > -- > Dave Horsfall DTM (VK2KFU) "Those who don't understand security will > suffer." -- Kay Parker kayparker at mailite.com -- http://www.fastmail.com - Does exactly what it says on the tin From lrw at acm.org Sun Jan 22 00:42:22 2017 From: lrw at acm.org (Lorne Wilkinson) Date: Sat, 21 Jan 2017 09:42:22 -0500 Subject: [TUHS] Was pcc ever ported to the CDC6600? In-Reply-To: <1485003102.1912634.854925048.79C4A97C@webmail.messagingengine.com> References: <1485003102.1912634.854925048.79C4A97C@webmail.messagingengine.com> Message-ID: Control Data had a lab in Mississauga, Ont. (outside of Toronto) in the 70s and 80s where some of the CDC Cyber 180s were designed and built. A company called HCR Corp in Toronto did a considerable amount of work for CDC re-targeting pcc and UNIX System V for the 180s and ETA10. I was fortunate enough to work for both companies. I didn't work on the pcc port to the Cyber 180, but some of the UNIX port. I thought the Cyber 180 architecture was way ahead of its time. Virtual memory, 64-bit ints, shared libraries. Some of the 180s were also dual-state, NOS/VE 64-bit OS and apps 50% of the time, a CPU microcode switch, to NOS and the 60-bit platform for 50% of the time, to support NOS to NOS/VE migration. Pcc re-targeting was challenging in a number of ways, addresses were 48 bits, with a ring and segment number, which resulted in a NULL pointer actually not being 0. HCR also did work on the ETA10 UNIX port, I didn't participate on that project, but HCR also re-targeted pcc for the Intel iWarp CPU, which I worked on.
HCR had a portable global code optimizer and peephole optimizer for pcc, so much of the work involved splitting pcc, for the global optimizer, to operate between front and back ends, and integrating the peephole optimizer, re-targeting the code generator, and tuning. A lot of very smart people who were great to work with at CDC and HCR, certainly a great way to start my career. Some more background here on the CYBERs http://bitsavers.informatik.uni-stuttgart.de/pdf/cdc/cyber/cyber_180/ I seem to remember reading a number of those manuals 30 years ago or so. And info on the iWarp project: http://www.cs.cmu.edu/~iwarp/ On Sat, Jan 21, 2017 at 7:51 AM, Kay Parker wrote: > > My memory of the Kyber (as we called them; we had a 72) > wasn't it a CDC Cyber? > https://en.wikipedia.org/wiki/CDC_Cyber > > On Sat, Jan 21, 2017, at 04:38 AM, Dave Horsfall wrote: > > On Thu, 19 Jan 2017, Steve Johnson wrote: > > > > > PCC ended up being ported to many dozen different architectures, so > it's > > > quite possible, but I don't recall it being done. It was kind of a > > > dinosaur by the early 70's. I'm not even sure that it had memory > > > protection, and it certainly didn't have paging. And the I/O system > was > > > strange. So porting Unix would have been next to impossible. > > > > My memory of the Kyber (as we called them; we had a 72) was that it was > > not character-addressable, but 60-bit word-addressable, thus making > > string > > handling somewhat difficult. Don't get me started on its utterly broken > > architecture... I have thankfully lost my programming manual for the > > beast. > > > > > The main thing I remember about the 6600 was that it didn't have parity > > > bits on its memory. So people used to run the same program three times > > > and if two of the answers agreed, they published... > > > > Parity only slowed it down, and besides, hardware never failed... 
> > > > My fondest memory of the thing was its command completion; I would start > > to type "O, TR" and it would fill out "O, TRANSACTION TERMINAL STATUS". > > Which reminds me that my worst memory was its console keyboard, with "0" > > on the left... > > > > -- > > Dave Horsfall DTM (VK2KFU) "Those who don't understand security will > > suffer." > > > -- > Kay Parker > kayparker at mailite.com > > -- > http://www.fastmail.com - Does exactly what it says on the tin > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From cym224 at gmail.com Sun Jan 22 01:43:59 2017 From: cym224 at gmail.com (Nemo) Date: Sat, 21 Jan 2017 10:43:59 -0500 Subject: [TUHS] Oracle euthanizes Solaris 12, expunging it from roadmap In-Reply-To: References: <1484812418.3800555.852554160.1638329B@webmail.messagingengine.com> <1484815787.58807daba38e0@www.paradise.net.nz> <5881f368.ZrpDJbpXm3DIAAjJ%schily@schily.net> <1484918800.4043318.854019536.3CB39B4A@webmail.messagingengine.com> <58821d5d.xUgQiZSe3DhW87+W%schily@schily.net> <2684AF1F-8B33-4646-BF9C-0FCAD6C70D5A@tfeb.org> Message-ID: On 20 January 2017 at 19:25, Tim Bradshaw wrote: > On 20 Jan 2017, at 19:38, Nemo wrote: > As they say, 'ave a laugh, guv: > http://www.iso.org/iso/catalogue_detail.htm?csnumber=43781 > You have not understood the problem I was describing I think: I did understand. My response was to your comment about Linux and standards in the same sentence. (This was humour, indicated by the phrase 'ave a laugh.) > [...] such platforms would make porting things more easy, but they would > also be mostly indistinguishable from each other and thus eliminate most of > the competition between vendors. Well, I believe that was the original intent of POSIX as far as the Pentagon was concerned. It (and other countries) wanted source that could be recompiled/run on any POSIX box. N. 
From lm at mcvoy.com Sun Jan 22 01:44:34 2017 From: lm at mcvoy.com (Larry McVoy) Date: Sat, 21 Jan 2017 07:44:34 -0800 Subject: [TUHS] Was pcc ever ported to the CDC6600? In-Reply-To: References: <1485003102.1912634.854925048.79C4A97C@webmail.messagingengine.com> Message-ID: <20170121154434.GK5620@mcvoy.com> On Sat, Jan 21, 2017 at 09:42:22AM -0500, Lorne Wilkinson wrote: > HCR also did work on the ETA10 UNIX port, I didn't participate on that > project Do you know what they did on the ETA10 project? I was part of the Lachman team that did the Unix port, I wasn't there at the very beginning so maybe that's why I never heard of them. From ron at ronnatalie.com Sun Jan 22 02:27:06 2017 From: ron at ronnatalie.com (Ronald Natalie) Date: Sat, 21 Jan 2017 11:27:06 -0500 Subject: [TUHS] Was pcc ever ported to the CDC6600? In-Reply-To: References: Message-ID: <25945AAA-678E-41F1-AE1C-DDB599EB722C@ronnatalie.com> BRL got the last 7600 ever built. After a fiasco one night when they left me to turn on the “network” (in CDC terms that just means the dumb terminals connected to it), they didn’t much let me near it again. When they were planning to decommission it in anticipation of the Cray 2 I ordered (nothing like putting your signature on a $25MM procurement), I snuck in and put a “surplus property tag” on the corner of the 7600. They weren’t amused. Amazingly, CDC makes some of the prettiest computers built. When I worked for one of their subsidiaries I was always mesmerized looking in the glass windows at the main machine room. 
From krewat at kilonet.net Sun Jan 22 03:32:55 2017 From: krewat at kilonet.net (Arthur Krewat) Date: Sat, 21 Jan 2017 12:32:55 -0500 Subject: [TUHS] Oracle euthanizes Solaris 12, expunging it from roadmap In-Reply-To: References: <1484812418.3800555.852554160.1638329B@webmail.messagingengine.com> <1484815787.58807daba38e0@www.paradise.net.nz> <5881f368.ZrpDJbpXm3DIAAjJ%schily@schily.net> <1484918800.4043318.854019536.3CB39B4A@webmail.messagingengine.com> <58821d5d.xUgQiZSe3DhW87+W%schily@schily.net> <2684AF1F-8B33-4646-BF9C-0FCAD6C70D5A@tfeb.org> Message-ID: Take another look at Solaris 11 - the pkg command is basically the same thing. Install PHP, MYSQL, Apache, update the system, do almost anything. I've booted Solaris 11 on a slew of servers and PC's since it came out. From Intel SATA to LSI SAS, Emulex fiber channel cards, Qlogic fiber cards, Intel 10Gbe NICs, etc. It just "works". While you had to pay attention to the HCL back in the Solaris 7/8/9 and early 10 days, and adjust accordingly, Solaris has been pretty decent in the past few years in terms of hardware compatibility. Except for one horrid instance where the Emulex driver would fail if virtualization was turned on - but in 11.3, that seems to have been fixed (or, the hardware changed, I installed it on newer M630 Dell hardware). For the "desktop" however, I just don't use it as a desktop. Windows won that war for me. As for performance, I'll have to look into that - I remember a while back that Oracle's best practices for its database were to turn off NUMA under Linux, but not Solaris. Either way, virtualization (VMware mostly) has made that a moot point in many of the environments I administer. On 1/20/2017 8:58 PM, Rico Pajarola wrote: > And there's another kind of elegance in being able to boot Linux on > any random PC and have at least graphics, network, and storage work > out of the box (most of the time anyway. Solaris never stood a chance > on that front).
Software gets installed with a simple "yum install > foo" or "apt-get install foo" command. At some point Solaris also lost > the performance race and that was pretty much it. From dave at horsfall.org Sun Jan 22 08:55:10 2017 From: dave at horsfall.org (Dave Horsfall) Date: Sun, 22 Jan 2017 09:55:10 +1100 (EST) Subject: [TUHS] Was pcc ever ported to the CDC6600? In-Reply-To: <1485003102.1912634.854925048.79C4A97C@webmail.messagingengine.com> References: <1485003102.1912634.854925048.79C4A97C@webmail.messagingengine.com> Message-ID: On Sat, 21 Jan 2017, Kay Parker wrote: > > My memory of the Kyber (as we called them; we had a 72) > wasn't it a CDC Cyber? As I said, we called it the Kyber (as in Kyber Pass i.e. arse; you have to know Cockney rhyming slang). -- Dave Horsfall DTM (VK2KFU) "Those who don't understand security will suffer." From rp at servium.ch Tue Jan 24 13:51:49 2017 From: rp at servium.ch (Rico Pajarola) Date: Mon, 23 Jan 2017 22:51:49 -0500 Subject: [TUHS] Oracle euthanizes Solaris 12, expunging it from roadmap In-Reply-To: References: <1484812418.3800555.852554160.1638329B@webmail.messagingengine.com> <1484815787.58807daba38e0@www.paradise.net.nz> <5881f368.ZrpDJbpXm3DIAAjJ%schily@schily.net> <1484918800.4043318.854019536.3CB39B4A@webmail.messagingengine.com> <58821d5d.xUgQiZSe3DhW87+W%schily@schily.net> <2684AF1F-8B33-4646-BF9C-0FCAD6C70D5A@tfeb.org> Message-ID: On Sat, Jan 21, 2017 at 12:32 PM, Arthur Krewat wrote: > Take another look at Solaris 11 - the pkg command is basically the same > thing. Install PHP, MYSQL, Apache, update the system, do almost anything. > I've used Solaris 2.5.1, 8, 10, and 11 extensively. The pkg command was cool (and so was blastwave's pkg-get before pkg was incorporated into the base system), but it was always some kind of "poor-mans" yum/apt-get (with occasional manual surgery required to make stuff work). I've booted Solaris 11 on a slew of servers and PC's since it came out. 
> From Intel SATA to LSI SAS, Emulex fiber channel cards, Qlogic fiber cards, > Intel 10Gbe NICs, etc. that wasn't my experience. HW support in Solaris 11 was orders of magnitude better than older versions, especially (and surprisingly) on laptops, but any random network card or SATA/SAS controller (if it wasn't 3com/Realtek/Intel/LSI) had a good chance of not working out of the box, or at all. Solaris 11 had a lot of cool, even "linux-y" things, but it was too little, too late, and Oracle immediately killed whatever velocity they had when they took over. But at this point, we were already actively getting rid of it anyway (10 was the last version we deployed in production before jumping ship). From henry.r.bent at gmail.com Tue Jan 24 14:27:03 2017 From: henry.r.bent at gmail.com (Henry Bent) Date: Mon, 23 Jan 2017 23:27:03 -0500 Subject: [TUHS] Package Management Message-ID: The recent discussion of Solaris made me think - what was the first Unix to have centralized package management as part of the OS? I know that IRIX had it, I think from the beginning (possibly even for the GL2 releases) but I imagine there was probably something before that. -Henry From clemc at ccc.com Wed Jan 25 03:06:00 2017 From: clemc at ccc.com (Clem Cole) Date: Tue, 24 Jan 2017 12:06:00 -0500 Subject: [TUHS] Package Management In-Reply-To: References: Message-ID: Hmmm - I suspect it depends on what you call package & installation management. My guess is that all of the UNIX systems had something that was made by people that were birthed on DEC systems. Certainly, Masscomp's RTU had something very much like VMS's scheme - why because the same person designed/influenced/implemented both of them (Tom Kent). My guess is that SunOS, Apollo/Domain et al were similar - as at least they knew the importance of same.
The problem I have with the question is that the managers we have today are much different than the managers we had then. Even things as simple as BSD's pkg_add is different from RPM much less yum, apt or brew compared to the (shudder) setld (DEC's my least favorite). Clem On Mon, Jan 23, 2017 at 11:27 PM, Henry Bent wrote: > The recent discussion of Solaris made me think - what was the first Unix > to have centralized package management as part of the OS? I know that IRIX > had it, I think from the beginning (possibly even for the GL2 releases) but > I imagine there was probably something before that. > > -Henry > From henry.r.bent at gmail.com Wed Jan 25 03:46:20 2017 From: henry.r.bent at gmail.com (Henry Bent) Date: Tue, 24 Jan 2017 12:46:20 -0500 Subject: [TUHS] Package Management In-Reply-To: References: Message-ID: Perhaps I should have been more specific - I was referring to something akin to Ultrix's setld or IRIX's inst, a user-friendly utility to view/install/upgrade OS components as well as applications. Ultrix setld first appeared in 2.0, which was 1987. As far as I can tell, IRIX inst appeared at about the same time. A quick look through some manuals shows that SunOS 3 (same timeframe) appears to have had a user-friendly initial setup program but it's not clear to me if it could be used after an installation to deinstall/modify/upgrade/etc. I know almost nothing about early HPUX, AIX, Domain/OS, etc. and hopefully some folks who used them might be able to chime in. And yes, setld is pretty bad. I remember it being painfully slow on real hardware, and it's still somewhat slow on emulated hardware. -Henry On 24 January 2017 at 12:06, Clem Cole wrote: > Hmmm - I suspect is depends on what you call package & installation > management. My guess is that all of the UNIX systems had something that > were made from people that were birthed on DEC systems.
Certainly, > Masscomp's RTU had something very much like VMS's scheme - why because the > same person designed/influenced/implemented both of them (Tom Kent). > My guess is that SunOS, Apollo/Domain et al were similar - as at least > they knew the importance of same. > > The problem I have with the question is that the managers we have today > are much different than the managers we had then. Even things as simple > as BSD's pkg_add is different from RPM much less yum, apt or brew compared > to the (shutter) setld (DEC's my least favorite). > > Clem > > On Mon, Jan 23, 2017 at 11:27 PM, Henry Bent > wrote: > >> The recent discussion of Solaris made me think - what was the first Unix >> to have centralized package management as part of the OS? I know that IRIX >> had it, I think from the beginning (possibly even for the GL2 releases) but >> I imagine there was probably something before that. >> >> -Henry >> > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From tfb at tfeb.org Wed Jan 25 04:58:09 2017 From: tfb at tfeb.org (Tim Bradshaw) Date: Tue, 24 Jan 2017 18:58:09 +0000 Subject: [TUHS] Package Management In-Reply-To: References: Message-ID: <26330EBD-AE22-4FD8-8EC9-54505D51F67A@tfeb.org> On 24 Jan 2017, at 17:46, Henry Bent wrote: > > A quick look through some manuals shows that SunOS 3 (same timeframe) appears to have had a user-friendly initial setup program but it's not clear to me if it could be used after an installation to deinstall/modify/upgrade/etc. I don't think it could. I also remember that SunOS 4's installer was somehow much more rudimentary. I remember (for both) that things like patches were just tarballs or some equivalent: there was no registry I think. However my memory is not reliable. 
--tim From pnr at planet.nl Fri Jan 27 08:04:51 2017 From: pnr at planet.nl (Paul Ruizendaal) Date: Thu, 26 Jan 2017 23:04:51 +0100 Subject: [TUHS] Early TCP/IP: 3Com UNet Message-ID: <54216FB6-4D50-46F5-9DCC-03E90BB59B69@planet.nl> Just stumbled over another early TCP/IP for Unix: http://bitsavers.informatik.uni-stuttgart.de/pdf/3Com/3Com_UNET_Nov80.pdf It would seem to be a design similar to that of Holmgren's (NCP-based) Network Unix (basic packet processing in the kernel, connection management in a user space daemon). In time and in concept it would sit in between the Wingfield ('79, all user space) and the Gurwitz ('81, all kernel) implementations. I think it was distributed initially as a mod versus V7 and later as a mod versus 2BSD. Would anybody here know of surviving source of this implementation? Thanks, Paul From clemc at ccc.com Fri Jan 27 09:20:56 2017 From: clemc at ccc.com (Clem Cole) Date: Thu, 26 Jan 2017 18:20:56 -0500 Subject: [TUHS] Early TCP/IP: 3Com UNet In-Reply-To: <54216FB6-4D50-46F5-9DCC-03E90BB59B69@planet.nl> References: <54216FB6-4D50-46F5-9DCC-03E90BB59B69@planet.nl> Message-ID: Indeed, it was their first product; it was primarily written by Bruce Borden (of Rand Ports fame) and Greg Shaw -- and I was the first customer for same @ Tektronix [we debugged it and our own against Stan and my VMS implementation]. Steve Glaser wrote a HyperChannel driver for it, which is a pretty amazing piece of work. BTW: Somewhere, I have the mailing envelope that is dated the "32 of December, 1980" because they had an end of the year clause with their VC's and ran into a problem right before they shipped it to me. I thought that was pretty cool, so I kept it. As for if I have contents of the UNET tape -- i.e. the bits themselves... the answer is maybe. I'm not sure to be honest. The original tape would have been at Tek but I did have some things in my archives from those days, i.e.
*my home directory which in couple of cases has compressed tar or cpio images of interesting things. For instance it was discovered a few years back that I last known copy of UCDS - which Dennis was able to get released as a very late delivery of part of V7 and Warren now has in his archives. The point is, I do have a box of tapes from those days in my basement that I have not tried to read in a few years - so assuming I can read them (which is a huge) if although we did succeed as with UCSD and I have the information you are looking for ... the status/ownership of the bits is 3Com's -- which makes it sticky. It's there copy-written IP. We would need to find someone at 3Com to release it. Borden might be able to help as the original author, but he has not worked for 3Com for eons, plus I have not talked to him a few years, although I may know how to find him. Bob Metcalfe might also be able to help, but other than being a stock holder, I'm not sure what influence he has with 3Com management. Similarly, I have not spoken to Bob is while either, in fact the last time I did he was still a Principal at Polaris and one our Board of Directors at Ammasso -- I think he's now @ UT Austin. Clem On Thu, Jan 26, 2017 at 5:04 PM, Paul Ruizendaal wrote: > > Just stumbled over another early TCP/IP for Unix: > http://bitsavers.informatik.uni-stuttgart.de/pdf/3Com/3Com_UNET_Nov80.pdf > > It would seem to be a design similar to that of Holmgren's (NCP-based) > Network Unix (basic packet processing in the kernel, connection management > in a user space daemon). In time and in concept it would sit in between the > Wingfield ('79, all user space) and the Gurwitz ('81, all kernel) > implementations. > > I think it was distributed initially as a mod versus V7 and later as a mod > versus 2BSD. > > Would anybody here know of surviving source of this implementation? > > Thanks, > > Paul > > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From pnr at planet.nl Fri Jan 27 19:50:03 2017 From: pnr at planet.nl (Paul Ruizendaal) Date: Fri, 27 Jan 2017 10:50:03 +0100 Subject: [TUHS] Early TCP/IP: 3Com UNet In-Reply-To: References: <54216FB6-4D50-46F5-9DCC-03E90BB59B69@planet.nl> Message-ID: <4C9EE6F3-77DA-4ACA-A696-B57652E26A39@planet.nl> That is helpful info. I sure hope the source survived in your basement. Bob Metcalfe is indeed a professor at UT now and Bruce Borden is at gThrive. I will reach out to both and ask for their support in getting the code released. If the company founders support the release of 40-year-old code with no current commercial value, we might have a good case with 3Com's general counsel. I was not aware of a link between Bruce Borden and Rand Ports. The Rand reports about this were written by Sunshine and Zucker, and I had assumed Zucker was the implementor of the code. Paul On 27 Jan 2017, at 0:20 , Clem Cole wrote: > Indeed, It was their first product, it was primarily written by Bruce Borden (of Rand Ports fame) and Greg Shaw -- and I was the first customer for same @ Tektronix [we debugged it and our own against Stan's and my VMS implementation]. Steve Glaser wrote a HyperChannel driver for it, which is a pretty amazing piece of work. BTW: Somewhere, I have the mailing envelope that is dated the "32 of December, 1980" because they had an end of the year clause with their VC's and ran into a problem right before they shipped it to me. I thought that was pretty cool, so I kept it. > > As for whether I have the contents of the UNET tape -- i.e. the bits themselves... the answer is maybe. I'm not sure to be honest. The original tape would have been at Tek but I did have some things in my archives from those days, i.e. my home directory, which in a couple of cases has compressed tar or cpio images of interesting things.
For instance, it was discovered a few years back that I had the last known copy of UCDS - which Dennis was able to get released as a very late delivery of part of V7 and Warren now has in his archives. > > The point is, I do have a box of tapes from those days in my basement that I have not tried to read in a few years - so assuming I can read them (which is a huge 'if') -- although we did succeed with UCDS -- and assuming I have the information you are looking for ... the status/ownership of the bits is 3Com's -- which makes it sticky. It's their copyrighted IP. > > We would need to find someone at 3Com to release it. Borden might be able to help as the original author, but he has not worked for 3Com for eons, plus I have not talked to him in a few years, although I may know how to find him. Bob Metcalfe might also be able to help, but other than being a stock holder, I'm not sure what influence he has with 3Com management. Similarly, I have not spoken to Bob in a while either; in fact the last time I did he was still a Principal at Polaris and on our Board of Directors at Ammasso -- I think he's now @ UT Austin. > > Clem > > > > On Thu, Jan 26, 2017 at 5:04 PM, Paul Ruizendaal wrote: > > Just stumbled over another early TCP/IP for Unix: > http://bitsavers.informatik.uni-stuttgart.de/pdf/3Com/3Com_UNET_Nov80.pdf > > It would seem to be a design similar to that of Holmgren's (NCP-based) Network Unix (basic packet processing in the kernel, connection management in a user space daemon). In time and in concept it would sit in between the Wingfield ('79, all user space) and the Gurwitz ('81, all kernel) implementations. > > I think it was distributed initially as a mod versus V7 and later as a mod versus 2BSD. > > Would anybody here know of surviving source of this implementation?
> > Thanks, > > Paul > > From jnc at mercury.lcs.mit.edu Mon Jan 30 03:41:42 2017 From: jnc at mercury.lcs.mit.edu (Noel Chiappa) Date: Sun, 29 Jan 2017 12:41:42 -0500 (EST) Subject: [TUHS] Early Internet work (Was: History of select(2)) Message-ID: <20170129174142.1062618C0A8@mercury.lcs.mit.edu> > From: Paul Ruizendaal >> I have this distinct memory of Dave Clark mentioning the Liza Martin >> TCP/IP for Unix in one of the meeting reports published as IENs > It may be mentioned in this report: > http://web.mit.edu/Saltzer/www/publications/rfc/csr-rfc-228.pdf Yeah, I had run across that in my search for any remnants of the Martin stuff. > Would you know if any of its source code survived? As I had mentioned, I had found some old dump tapes, and had one of them read; it had some bad spots, but we've just (this morning) succeeded in having a look as to what's there, and I _think_ all of the source is OK (including the kernel code, as well as applications like server Telnet and FTP). No SCCS or anything like that, so it's a bit hit or miss doing history - the file write dates were preserved, but of course a lot of them would have been edited over time to fix bugs, add features, etc. The tape appears to contain a _lot_ of other historic material, and it's going to take a while to sort it all out; it includes a Version 6 with NCP from NOSC/SRI, some Unix from BBN; a BCPL compiler; a 'bind' for .rel format files (produced by MACRO-11 and probably BCPL) written in BCPL; programs to convert from .rel to a.out and back; an early version of Montgomery EMACS; another Unix from 'TMI' (whoever that might be); another UNIX that's somehow associated with TRIX; someone's early kernel overlay stuff; an early 68K C compiler, and also an early 8080 C compiler - just a ton of stuff (that's just a few items that grabbed my eye as I scrolled by). Algol, alas, appears not to be there (we probably didn't add it, because of space reasons).
The copy of LISP on this tape seems to be damaged; I do have 3 other tapes, and between them, I hope we'll be able to retrieve it. Noel From jnc at mercury.lcs.mit.edu Mon Jan 30 04:35:12 2017 From: jnc at mercury.lcs.mit.edu (Noel Chiappa) Date: Sun, 29 Jan 2017 13:35:12 -0500 (EST) Subject: [TUHS] Early Internet work (Was: History of select(2)) Message-ID: <20170129183512.3BDFD18C0A8@mercury.lcs.mit.edu> > some Unix from BBN This one is from 1979; it includes Mike Wingfield's TCP. The 'Trix UNIX' is a port to the 68K, probably started with something V7ish (I see "setjmp.h" in there). Bits of the Montgomery EMACS appear to date from 1981, but the main source files seem to be from 1984. I also have the source to 'vsh' (Visual Shell), whatever that is. Noel From pnr at planet.nl Mon Jan 30 06:28:02 2017 From: pnr at planet.nl (Paul Ruizendaal) Date: Sun, 29 Jan 2017 21:28:02 +0100 Subject: [TUHS] Early Internet work (Was: History of select(2)) In-Reply-To: <20170129174142.1062618C0A8@mercury.lcs.mit.edu> References: <20170129174142.1062618C0A8@mercury.lcs.mit.edu> Message-ID: On 29 Jan 2017, at 18:41 , Noel Chiappa wrote: > >> From: Paul Ruizendaal > >>> I have this distinct memory of Dave Clark mentioning the Liza Martin >>> TCP/IP for Unix in one of the meeting reports published as IENs > >> It may be mentioned in this report: >> http://web.mit.edu/Saltzer/www/publications/rfc/csr-rfc-228.pdf > > Yeah, I had run across that in my search for any remnants of the Martin > stuff. > >> Would you know if any of its source code survived? > > As I had mentioned, I had found some old dump tapes, and had one of them read; > it had some bad spots, but we've just (this morning) succeeded in having a > look as to what's there, and I _think_ all of the source is OK (including the > kernel code, as well as applications like server Telnet and FTP).
No SCCS or > anything like that, so it's a bit hit or miss doing history - the file write > dates were preserved, but of course a lot of them would have been edited over > time to fix bugs, add features, etc. Great! I'd love to take a look at all that. > The tape appears to contains a _lot_ of other historic material, and it's > going to take a while to sort it all out; it includes a Version 6 with NCP > from NOSC/SRI, [...] That is very interesting. It may be related to the V6 with NCP from UoI/DTI. >> [...] some Unix from BBN > > This one is from 1979, it includes Mike Wingfield's TCP. Super! In the last couple of months I had retyped the Wingfield TCP from a printout, some 5,000 lines done and some 700 still to go. Ah well, at least I have now read the source with attention for each and every line. The printout does not have the kernel modifications with it, so it would be great if your archive does include that. Once you're ready for that, I'd love to get a copy of those 3 versions of Unix. Paul From downing.nick at gmail.com Mon Jan 30 11:34:41 2017 From: downing.nick at gmail.com (Nick Downing) Date: Mon, 30 Jan 2017 12:34:41 +1100 Subject: [TUHS] Early Internet work (Was: History of select(2)) In-Reply-To: <20170129174142.1062618C0A8@mercury.lcs.mit.edu> References: <20170129174142.1062618C0A8@mercury.lcs.mit.edu> Message-ID: This is a wonderful find, is it possible for you to read the other tapes also? I would be particularly interested in the early 8080 compiler, I am actively working on something like that at the moment. I have quite extensively reverse engineered the famous Ritchie PDP-11 C compiler to figure out how it works, it is actually pretty straightforward and I may write a document about it someday (the code is a bit horrible due to the many exceptional cases added to improve the output in particular situations, all this has to be ignored in order to get at the underlying algorithm which is elegant). 
Steven Schultz or someone else also seems to have begun a PDP-11-targeted port of the 4.3BSD VAX-targeted PCC backend; I can't see myself completing this, but I was considering trying to adapt the Ritchie pass2 to understand PCC intermediate code instead of Ritchie pass1 intermediate code and using it more-or-less as-is as a PCC backend. There is no requirement that a PCC backend use the PCC instruction table or macro format and in this case it would probably be simpler if it did not. But one or other of these backends has to be ported to Z180 (~= Z80 ~= 8080) and I'd be thrilled to have a starting point. I will also eventually pick up the 68K compiler too although I believe some pretty good PCC based 68K C compilers will be extant due to late versions of BSDs having been developed on 68K (I could be wrong about the BSDs and 68K but I am sure many unices ran on 68010+ and even a few on 68000 using the famous second CPU chip to handle faults). cheers, Nick On 30/01/2017 4:42 AM, "Noel Chiappa" wrote: > > From: Paul Ruizendaal > > >> I have this distinct memory of Dave Clark mentioning the Liza Martin > >> TCP/IP for Unix in one of the meeting reports published as IENs > > > It may be mentioned in this report: > > http://web.mit.edu/Saltzer/www/publications/rfc/csr-rfc-228.pdf > > Yeah, I had run across that in my search for any remnants of the Martin > stuff. > > > Would you know if any of its source code survived? > > As I had mentioned, I had found some old dump tapes, and had one of them > read; > it had some bad spots, but we've just (this morning) succeeded in having a > look as to what's there, and I _think_ all of the source is OK (including > the > kernel code, as well as applications like server Telnet and FTP). No SCCS > or > anything like that, so it's a bit hit or miss doing history - the file > write > dates were preserved, but of course a lot of them would have been edited > over > time to fix bugs, add features, etc.
> > The tape appears to contain a _lot_ of other historic material, and it's > going to take a while to sort it all out; it includes a Version 6 with NCP > from NOSC/SRI, some Unix from BBN; a BCPL compiler; a 'bind' for .rel > format > files (produced by MACRO-11 and probably BCPL) written in BCPL; programs to > convert from .rel to a.out and back; an early version of Montgomery EMACS; > another Unix from 'TMI' (whoever that might be); another UNIX that's > somehow > associated with TRIX; someone's early kernel overlay stuff; an early 68K C > compiler, and also an early 8080 C compiler - just a ton of stuff (that's > just > a few items that grabbed my eye as I scrolled by). > > Algol, alas, appears not to be there (we probably didn't add it, because of > space reasons). The copy of LISP on this tape seems to be damaged; I do > have 3 > other tapes, and between them, I hope we'll be able to retrieve it. > > Noel > -------------- next part -------------- An HTML attachment was scrubbed... URL: From jnc at mercury.lcs.mit.edu Mon Jan 30 11:44:20 2017 From: jnc at mercury.lcs.mit.edu (Noel Chiappa) Date: Sun, 29 Jan 2017 20:44:20 -0500 (EST) Subject: [TUHS] Early Internet work (Was: History of select(2)) Message-ID: <20170130014420.792E918C0BA@mercury.lcs.mit.edu> > From: Nick Downing > This is a wonderful find Yes, I was _very_ happy to find those tapes in my basement; up till that, I was almost sure all those bits were gone forever. Thanks to Chuck Guzis, whose old data recovery service made this possible - he actually read the tape. > is it possible for you to read the other tapes also? Alas, they're all of the same system. So the most we're going to get is the files that are missing on this one due to bad spots on the tape.
Noel From clemc at ccc.com Mon Jan 30 12:19:48 2017 From: clemc at ccc.com (Clem Cole) Date: Sun, 29 Jan 2017 21:19:48 -0500 Subject: [TUHS] Early Internet work (Was: History of select(2)) In-Reply-To: References: <20170129174142.1062618C0A8@mercury.lcs.mit.edu> Message-ID: On Sun, Jan 29, 2017 at 8:34 PM, Nick Downing wrote: > But one or other of these backends has to be ported to Z180 (~= Z80 ~= > 8080) and I'd be thrilled to have a starting point. > In 1978/79, there was a Z80 C compiler from either Teletype or Western Electric, that IIRC was based on the Ritchie compiler. Somebody that worked in the IH or Columbus labs might remember; someone such as Phil Karn might be a good source. That said, if you want a good starting point, the best C compiler for the 8080 was Leor Zolman's "Brain Damaged Software" C -- aka BDS C. Which he has put in the public domain: http://www.bdsoft.com/resources/bdsc.html There is also the classic from Ron Cain, the "Small C Compiler" [ http://www.svipx.com/pcc/PCCminipages/zc9b6ec9e.html] which was updated and expanded by Jim Hendrix and is also in the public domain: http://www.deturbulator.org/jim.asp > > I will also eventually pick up the 68K compiler too although I believe > some pretty good PCC based 68K C compilers will be extant due to late > versions of BSDs having been developed on 68K > Close, but you have some of the history a little bent. I believe that the first C compiler for the 68000 was started at Motorola by the 68K developers themselves (a little known fact -- what would become the 68K was developed on a PDP-11/7- running UNIX). As I understand from Les Crudele (who led the 68K team), the compiler was based on the Ritchie compiler, but I do not think it was ever taken very far. I do not believe a C tool chain was ever made available to their customers/UNIX community. We should ask some of the Moto guys about that.
So, before the chip was released and became the 68K (when there were 10 beta sites and it was an eXperiment -- i.e. an "X-Series" chip), we had a couple of them @ Tek Labs, which were the basis for the Magnolia system that Roger Bates created. I hacked up the Ritchie compiler shortly thereafter (I was not aware of the Motorola work at the time), and created something that worked sans FP. The assembler & linker were written by Paul Blattner. My compiler used a 16 bit int. Sometime after the Tek work, Steve Ward's guys writing Trix hacked together a compiler, assembler and the like. If memory serves me, tjt wrote the assembler and hacked the linker and Jack Test did much of the compiler and again IIRC that was based on PCC. But, they did support FP and an "int" was 32 bits. I believe that this was the "seed" compiler for most of the UNIX workstations. It was scattered across the land as it were. I switched over to it at some point, although I don't remember if I switched it to a 16-bit int. I know we talked about it and I've forgotten how that all played out. > (I could be wrong about the BSDs and 68K but I am sure many unices ran on > 68010+ and even a few on 68000 using the famous second CPU chip to handle > faults). > Right, the Masscomp and Apollo systems were the two most successful to use the Forest Baskett "Fixer/Executor" model. The original Masscomp C Compiler was based on the MIT Compiler, as was Sun's. Apollo (being Pr1mates) used Ratfor & Pascal as their systems languages at the beginning and did not cut C in until later. All three firms would start compiler teams. The Masscomp compiler team was a direct descendant of the VMS compiler team with many of the same players. Those stories are better told over beers. ;-) BTW: for the first 2-3 years, the reason why Masscomp was faster than Sun, when each was using the 10MHz 68000 chips, was that Masscomp had a real optimizing compiler, in the same key as the DEC and DG compilers (albeit written in C from the ground up).
The team was led by Peter Darnell and had Marty Jack and a number of other infamous backend folks. Masscomp was getting 5-25% better performance than the MIT/PCC based one, so eventually Sun & Apollo (which also had a bunch of ex-pat DECies) made similar investments. -------------- next part -------------- An HTML attachment was scrubbed... URL: From lm at mcvoy.com Mon Jan 30 12:30:37 2017 From: lm at mcvoy.com (Larry McVoy) Date: Sun, 29 Jan 2017 18:30:37 -0800 Subject: [TUHS] Early Internet work (Was: History of select(2)) In-Reply-To: References: <20170129174142.1062618C0A8@mercury.lcs.mit.edu> Message-ID: <20170130023037.GQ15819@mcvoy.com> On Sun, Jan 29, 2017 at 09:19:48PM -0500, Clem Cole wrote: > That said, if you want a good starting point, the best C compiler for the > 8080 was Leor Zolman's "Brain Damaged Software" C -- aka BDS C. Which > he has put in the public domain: http://www.bdsoft.com/resources/bdsc.html Oh, how I know that compiler. Or knew it. When I was a student and they had 30-40 people logged into the VAX 11/780, I said "screw this, it's too slow" and I got a $2000 Okidata CP/M Z80 machine (why that one? Because it had an integrated printer and the screen had colors, wasn't B&W). CP/M wasn't that great, I was used to BSD Unix, so I wrote a bunch of clones of the unix commands like ls, cat, more, cp, rm, etc. For those I used assembler because I was trying like crazy to get each one to fit in one sector of the floppy disk; that was faster to load and left more room on the disk for other stuff. Anyhoo, that BDS compiler was awesome. Not as awesome as turbo pascal but it wasn't pascal, if you know what I mean. I did most of my school projects on that machine, wrote my own dial / terminal program (why? Because I could :), all of that in BDS C. Anyone remember his non-standard standard I/O?
From lm at mcvoy.com Mon Jan 30 12:32:56 2017 From: lm at mcvoy.com (Larry McVoy) Date: Sun, 29 Jan 2017 18:32:56 -0800 Subject: [TUHS] Early Internet work (Was: History of select(2)) In-Reply-To: References: <20170129174142.1062618C0A8@mercury.lcs.mit.edu> Message-ID: <20170130023256.GR15819@mcvoy.com> On Sun, Jan 29, 2017 at 09:19:48PM -0500, Clem Cole wrote: > Right, the Masscomp and Apollo systems were the two most successful to > use > the Forest Baskett "Fixer/Executor" model. Huh, I didn't know that. So he's the guy who came up with that design? From clemc at ccc.com Mon Jan 30 12:39:48 2017 From: clemc at ccc.com (Clem Cole) Date: Sun, 29 Jan 2017 21:39:48 -0500 Subject: [TUHS] Early Internet work (Was: History of select(2)) In-Reply-To: <20170130023256.GR15819@mcvoy.com> References: <20170129174142.1062618C0A8@mercury.lcs.mit.edu> <20170130023256.GR15819@mcvoy.com> Message-ID: On Sun, Jan 29, 2017 at 9:32 PM, Larry McVoy wrote: > On Sun, Jan 29, 2017 at 09:19:48PM -0500, Clem Cole wrote: > > Right, the Masscomp and Apollo systems were the two most successful to > use > > the Forest Baskett "Fixer/Executor" model. > > Huh, I didn't know that. So he's the guy who came up with that design? I used to have a copy of the paper. I looked for it a few years ago and could not find it. [ I lost some stuff in the great condo flood of 1990. I fear that was one of the papers that got soaked]. IIRC: He presented it at the Asilomar microprocessor conference in 1980. Robert Chew might know, as he used to run the conference. -------------- next part -------------- An HTML attachment was scrubbed...
URL: From ron at ronnatalie.com Mon Jan 30 12:43:41 2017 From: ron at ronnatalie.com (Ron Natalie) Date: Sun, 29 Jan 2017 21:43:41 -0500 Subject: [TUHS] Early Internet work (Was: History of select(2)) In-Reply-To: <20170130023037.GQ15819@mcvoy.com> References: <20170129174142.1062618C0A8@mercury.lcs.mit.edu> <20170130023037.GQ15819@mcvoy.com> Message-ID: <006f01d27aa2$ab3edcc0$01bc9640$@ronnatalie.com> I remember BDS C. We were early users of that. Amusingly, I became affiliated with a company called BDS (no relation). We were an early player in the internet domains and I owned BDS.COM for a while. From clemc at ccc.com Mon Jan 30 12:43:58 2017 From: clemc at ccc.com (Clem Cole) Date: Sun, 29 Jan 2017 21:43:58 -0500 Subject: [TUHS] Early Internet work (Was: History of select(2)) In-Reply-To: <20170130023037.GQ15819@mcvoy.com> References: <20170129174142.1062618C0A8@mercury.lcs.mit.edu> <20170130023037.GQ15819@mcvoy.com> Message-ID: On Sun, Jan 29, 2017 at 9:30 PM, Larry McVoy wrote: > I was used to BSD Unix, I remember, Leor had it running in his room on floppies from a Z80 box of some flavor at the first Boston USENIX (in Fairmont Copley Place). We took Dennis to see it and he was pretty impressed. His comment was it reminded him of UNIX when they originally wrote it. -------------- next part -------------- An HTML attachment was scrubbed... URL: From jnc at mercury.lcs.mit.edu Mon Jan 30 12:50:03 2017 From: jnc at mercury.lcs.mit.edu (Noel Chiappa) Date: Sun, 29 Jan 2017 21:50:03 -0500 (EST) Subject: [TUHS] Early Internet work (Was: History of select(2)) Message-ID: <20170130025003.1CAEA18C0BA@mercury.lcs.mit.edu> > From: Paul Ruizendaal > Great! I'd love to take a look at all that. OK, it'll all be appearing once we have a chance to get organized (it's all mixed in with personal files). > That is very interesting. It may be related to the V6 with NCP from > UoI/DTI. I think it _is_ the V6 from UoI/DTI. The source has Gary (?)
Grossman's and Steve Holmgren's names on it, and the headers say they date from 1974-75. > The printout does not have the kernel modifications with it, so it would > be great if your archive does include that. The archive does include the complete kernel, but i) the changes aren't listed in any way (I foresee a lot of 'diffs', unless you just take the entire kernel), ii) there's a file called 'history' which contains a long list of general changes/improvements of the kernel not really related to TCP/IP, by a long list of people, dated from the middle of '78 to the middle of '79. So it looks like he started with a considerably modified system. The only client code I see is User Telnet. (The MIT code has User and Server Telnet and FTP, as well as SMTP, but it uses a wholly different TCP interface.) Noel From downing.nick at gmail.com Mon Jan 30 13:33:12 2017 From: downing.nick at gmail.com (Nick Downing) Date: Mon, 30 Jan 2017 14:33:12 +1100 Subject: [TUHS] Early Internet work (Was: History of select(2)) In-Reply-To: <20170130025003.1CAEA18C0BA@mercury.lcs.mit.edu> References: <20170130025003.1CAEA18C0BA@mercury.lcs.mit.edu> Message-ID: Thanks a lot for the valuable info about C compilers, everyone. I was aware of BDS C and also Hitech C and Small C. I briefly picked up cc65 (6502 port of Small C which has an ANSI frontend and a macroassembler and other improvements) and started trying to port it back to Z80, but I didn't get too far. I found quite a few things there that I didn't like, so I put aside the project. I hadn't been aware that BDS C had such a following; I thought of it as a closed source commercial offering of no value to me. I'm happy to see it's now open source!! Note that I actually own copies of Manx Aztec C and IAR C which are both pretty good commercial C compilers for Z80 and Z180 respectively, however being closed source they are of no value to my projects, which I hope to eventually release as open source.
A funny story arises actually in regard to IAR C and its dongle-protection. I was working for an overseas customer who bought the IAR C package for me to use, I think they bought 3 copies, one for my manager, one for my colleague and one for me. I went to visit the customer for some weeks and we did a lot of work together. When I got home I had to attend to various things that couldn't be put off any longer, like family matters, cleaning the house and so on. For a few weeks I strung my manager along saying "yes I have implemented this, yes I have done that, blah blah"... knowing it would be easy to catch up when I had a chance to sit down and work. When that time came, I was embarrassed to discover that I had left my dongle over there at the customer's premises... what was I going to say?? Haha. So, Visual Studio to the rescue, it was relatively easy to trace into the executable and find out where the dongle checks were occurring. I don't think they had even stripped the executable, which made things a bit easier, although the dongle stuff was obviously a 3rd party library since it did have some protection against being traced and so on. Anyway, it took me about half to one day to circumvent this, and I was very happy that I had done so, because the dongle check took quite a while (between .1s and .3s if my memory is reliable) and it was doing this hundreds of times during a full build. I don't know why it was so slow, possibly because it was polling a licence server even though I didn't have a licence server. So my builds were much faster, and of course I didn't have the hassle of dongle protection anymore. But getting back to the point, I'm fairly keen to standardize on a PCC-based compiler. There are a number of reasons for this. (1) Since I have in mind targeting multiple platforms such as PDP-11, VAX, Z180, 8086, 80286, possibly 68K, it seems a natural choice.
(2) I like the fact that lint uses a PCC front-end, if all my compilers use a PCC front-end then all code is interpreted in the SAME dialect. (3) I have a heavy focus in my project on source-to-source transformations, presently these are a bit ad-hoc (using scripts or C programs based on getchar() / putchar() type stuff), and I want to upgrade them to a PCC front-end, a transformation, and a C-target PCC backend. (4) I have the option of upgrading to the ANSI-compliant PCC compiler and the latest backend stuff which has a register allocator, these features are not things which I would want to spend time on, but I feel that people who download my eventual release packages might. (5) The ANSI-compliant PCC compiler has a selection of backends, such as 8086 and 80286 backends I believe, so might save me time. Having said all that, I have been very careful to preserve compatibility with all reasonable dialects of C. To make this work, I do not use any features that are only available in ANSI C (except if there is a traditional C alternative, such as ## versus /**/, which I can check for using a macro like __STDC__), but I make sure all code is valid ANSI C or traditional C, using the __P(()) macro and such like. So there would be no real reason I cannot use BDS C as-is in my project. Note I'm going to standardize on the asxxxx assemblers, and indeed one of my next projects will be to make the PCC VAX-targeted compiler output asxxxx rather than its own assembler code. I can do the same for BDS C. That way, it will generate (via the assembler) the correct *.o and relocation entries, and I can use the 4.3BSD standard ld. For the time being, I ported 4.3BSD "as" as a VAX cross-assembler, just to establish a baseline, but I will be changing to asxxxx when I can. cheers, Nick On Mon, Jan 30, 2017 at 1:50 PM, Noel Chiappa wrote: > > From: Paul Ruizendaal > > > Great! I'd love to take a look at all that. 
> > OK, it'll all be appearing once we have a chance to get organized (it's all > mixed in with personal files). > > > That is very interesting. It may be related to the V6 with NCP from > > UoI/DTI. > > I think it _is_ the V6 from UoI/DTI. The source has Gary (?) Grossman's and > Steve Holmgren's names on it, and the headers say they date from 1974-75. > > > The printout does not have the kernel modifications with it, so it would > > be great if your archive does include that. > > The archive does include the complete kernel, but i) the changes aren't listed > in any way (I foresee a lot of 'diffs', unless you just take the entire > kernel), ii) there's a file called 'history' which contains a long list of > general changes/improvements of the kernel not really related to TCP/IP, by a > long list of people, dated from the middle of '78 to the middle of '79. So it > looks like he started with a considerably modified system. > > The only client code I see is User Telnet. (The MIT code has User and > Server Telnet and FTP, as well as SMTP, but it uses a wholly different > TCP interface.) > > Noel From ron at ronnatalie.com Mon Jan 30 13:38:59 2017 From: ron at ronnatalie.com (Ron Natalie) Date: Sun, 29 Jan 2017 22:38:59 -0500 Subject: [TUHS] Early Internet work (Was: History of select(2)) In-Reply-To: References: <20170130025003.1CAEA18C0BA@mercury.lcs.mit.edu> Message-ID: <008501d27aaa$64acf050$2e06d0f0$@ronnatalie.com> > A funny story arises actually in regard to IAR C and its dongle-protection. I was working for an overseas customer who bought the IAR C package for me to use Sounds even worse than Plauger's Whitesmiths Software "stamp" you were supposed to stick to your machine.
From pnr at planet.nl Mon Jan 30 18:26:05 2017 From: pnr at planet.nl (Paul Ruizendaal) Date: Mon, 30 Jan 2017 09:26:05 +0100 Subject: [TUHS] Early Internet work (Was: History of select(2)) In-Reply-To: <20170130025003.1CAEA18C0BA@mercury.lcs.mit.edu> References: <20170130025003.1CAEA18C0BA@mercury.lcs.mit.edu> Message-ID: On 30 Jan 2017, at 3:50 , Noel Chiappa wrote: > I think it _is_ the V6 from UoI/DTI. The source has Gary (?) Grossman's and > Steve Holmgren's names on it, and the headers say they date from 1974-75. Wow, that's great! That means that you have the initial version. Possibly it is V5 not V6 (according to Holmgren they started out on V5 and ported to V6). All my leads for the 1975 version of this code base came up dry and I feared it lost. I think this code base is an important milestone. For example, I think it may contain the first version of 'mbufs'. It may also contain the first (Unix) instance of a network "work queue", which also seems to have been a common design element of early (Unix) TCP's. I still have one pending lead on the 1978 version of this code base. > The archive does include the complete kernel, but i) the changes aren't listed > in any way (I foresee a lot of 'diffs', unless you just take the entire > kernel), ii) there's a file called 'history' which contains a long list of > general changes/improvements of the kernel not really related to TCP/IP, by a > long list of people, dated from the middle of '78 to the middle of '79. So it > looks like he started with a considerably modified system. Yes, a 'history' file seems to have been common practice at BBN. The kernel would have had many modifications: - the 'ports' extension from Rand - the 'await' extension by Jack Haverty - an 1822-driver - possibly, an Autodin II network driver - possibly, shared memory extensions It might even have some NCP code in it, and if so probably derived from the above. Indeed, lots of diff'ing ahead to figure out what changes belong together.
There seem to have been two versions of the BBN modified kernel. One was done for systems without separate I/D with stuff heavily trimmed (even kernel messages were shortened to a few bytes to save space). The other may have extended the V6 kernel to run in separate I and D spaces and was less anemic. Paul From dave at horsfall.org Mon Jan 30 23:13:48 2017 From: dave at horsfall.org (Dave Horsfall) Date: Tue, 31 Jan 2017 00:13:48 +1100 (EST) Subject: [TUHS] Early Internet work (Was: History of select(2)) In-Reply-To: References: <20170129174142.1062618C0A8@mercury.lcs.mit.edu> Message-ID: On Sun, 29 Jan 2017, Clem Cole wrote: > That said, if you want a good starting point, the best C compiler for the > 8080 was Leor Zolman's "Brain Damaged Software" C -- aka BDS C. > Which he has put in the public domain: > http://www.bdsoft.com/resources/bdsc.html Talk about brain-damaged software... BDS C is irretrievably broken, with all sorts of things missing. I used to use Hi-Tech C for CP/M, but I don't know whether they're still in business. It was a full ANSI C compiler, with function prototypes, initialisation assignments, static variables, etc. As Henry Spencer once said, to be called a C compiler it ought to at least be able to compile C... (And as I've said on many occasions, the true test of a compiler is whether it can compile itself.) -- Dave Horsfall DTM (VK2KFU) "Those who don't understand security will suffer." 
From lm at mcvoy.com Mon Jan 30 23:37:50 2017 From: lm at mcvoy.com (Larry McVoy) Date: Mon, 30 Jan 2017 05:37:50 -0800 Subject: [TUHS] Early Internet work (Was: History of select(2)) In-Reply-To: References: <20170129174142.1062618C0A8@mercury.lcs.mit.edu> Message-ID: <20170130133750.GB32068@mcvoy.com> On Tue, Jan 31, 2017 at 12:13:48AM +1100, Dave Horsfall wrote: > On Sun, 29 Jan 2017, Clem Cole wrote: > > > That said, if you want a good starting point the best C compiler for the > > 8080 was Leor Zolman's "Brain Damaged Software" C -- aka BDS C. > > Which he has put in the public domain: > > http://www.bdsoft.com/resources/bdsc.html > > Talk about brain-damaged software... BDS C is irretrievably broken, with > all sorts of things missing. I dunno, I wrote a lot of code with it. It felt enough like C to me. At the time, I believe it was the fastest CP/M C compiler you could get. That was a big part of why I used it. From jnc at mercury.lcs.mit.edu Tue Jan 31 01:34:26 2017 From: jnc at mercury.lcs.mit.edu (Noel Chiappa) Date: Mon, 30 Jan 2017 10:34:26 -0500 (EST) Subject: [TUHS] Early Internet work (Was: History of select(2)) Message-ID: <20170130153426.D9CFA18C0B4@mercury.lcs.mit.edu> > From: Paul Ruizendaal >> the headers say they date from 1974-75. > Wow, that's great! That means that you have the initial version. The file write dates are May 1979, so that's the latest it can be. There is one folder called 'DTI' which contains an email message from someone at DTI to someone at SRI which is dated "10 Apr 1979" so that seems to indicate that that's indeed when they are from. (The message says that the folder contains the source for DTI's IMP-11A driver, which is different from UIll's, although they both descend from the same original version.) > Possibly it is V5 not V6 Nope, definitely V6 here. > All my leads for the 1975 version of this code base came up dry and I > feared it lost. 
I could have sworn that I'd seen _listings_ of the code in a UIllinois document about NCP Unix that I had found (and downloaded) on the Internet, but I can't find them here now. I did look again and found: "A Network Unix System for the Arpanet", by Karl C. Kelley, Richard Balocca, and Jody Kravitz but it doesn't contain any sources. > it may contain the first version of 'mbufs' It might - the code is conditionalized for "UCBUFMOD" all over the place. > Yes, a 'history' file seems to have been common practice at BBN. The > kernel would have had many modifications: > - the 'ports' extension from Rand Yes. > - the 'await' extension by Jack Haverty Yup. > - an 1822-driver Yes (also by Haverty) - although IMP11-A drivers are all over the place, there are two different ones in the NCP Unix alone. > - possibly, an Autodin II network driver Didn't see one. > - possibly, shared memory extensions Yes, there are two modules in 'ken', map_page.c and set_lcba.c (I was unable to work out what 'LCBA' stood for) which seem to do something with mapping. > It might even have some NCP code in it Yes, there's an 'ncpkernel' directory. > There seem to have been two versions of the BBN modified kernel. One was > done for systems without separate I/D with stuff heavily trimmed Yes, there's a 'SMALL' preprocessor flag which conditionally removes some stuff. > The other may have extended the V6 kernel to run in separate I and D > spaces That capability was present in stock V6. Noel From jnc at mercury.lcs.mit.edu Tue Jan 31 02:15:28 2017 From: jnc at mercury.lcs.mit.edu (Noel Chiappa) Date: Mon, 30 Jan 2017 11:15:28 -0500 (EST) Subject: [TUHS] Early Internet work (Was: History of select(2)) Message-ID: <20170130161528.7A0CE18C0B4@mercury.lcs.mit.edu> > From: Clem Cole > Steve Ward's guys writing Trix hacked together a compiler, assembler and > the like. All of which I have the source for - just looked through it. 
> If memory serves me, tjt wrote the assembler I have the NROFF source for the "A68 Assembler Reference", and it's by James L. Gula and Thomas J. Teixeira. It says that "A68 is an edit of the MICAL assembler also written by Mike [Patrick]." > Jack Test did much of the compiler and again IIRC that was based on PCC. I dunno, I'm not familiar with PCC, so I can't say. It definitely looks very different from the Ritchie C compiler. Noel From clemc at ccc.com Tue Jan 31 02:41:14 2017 From: clemc at ccc.com (Clem Cole) Date: Mon, 30 Jan 2017 11:41:14 -0500 Subject: [TUHS] Early Internet work (Was: History of select(2)) In-Reply-To: <20170130161528.7A0CE18C0B4@mercury.lcs.mit.edu> References: <20170130161528.7A0CE18C0B4@mercury.lcs.mit.edu> Message-ID: On Mon, Jan 30, 2017 at 11:15 AM, Noel Chiappa wrote: > > > If memory serves me, tjt wrote the assembler > > I have the NROFF source for the "A68 Assembler Reference", and it's by > James > L. Gula and Thomas J. Teixeira. Indeed. Thomas J. Teixeira - aka tjt - who would be my office mate at Masscomp and Stellar. We would work together 3 or 4 times. He's a good guy and a good friend. Clem PS Glad to hear you saved Trix. I always thought there were some cool ideas in it. -------------- next part -------------- An HTML attachment was scrubbed... URL: From clemc at ccc.com Tue Jan 31 02:44:12 2017 From: clemc at ccc.com (Clem Cole) Date: Mon, 30 Jan 2017 11:44:12 -0500 Subject: [TUHS] Early Internet work (Was: History of select(2)) In-Reply-To: <20170130161528.7A0CE18C0B4@mercury.lcs.mit.edu> References: <20170130161528.7A0CE18C0B4@mercury.lcs.mit.edu> Message-ID: On Mon, Jan 30, 2017 at 11:15 AM, Noel Chiappa wrote: > I dunno, I'm not familiar with PCC, so I can't say. It definitely looks > very > different from the Ritchie C compiler. > Too many beers ago, I'm not sure I would recognize the differences by inspection any more. But Steve reads the list. 
He might recognize​ -------------- next part -------------- An HTML attachment was scrubbed... URL: From pnr at planet.nl Tue Jan 31 07:20:45 2017 From: pnr at planet.nl (Paul Ruizendaal) Date: Mon, 30 Jan 2017 22:20:45 +0100 Subject: [TUHS] Early Internet work (Was: History of select(2)) In-Reply-To: <20170130153426.D9CFA18C0B4@mercury.lcs.mit.edu> References: <20170130153426.D9CFA18C0B4@mercury.lcs.mit.edu> Message-ID: On 30 Jan 2017, at 16:34 , Noel Chiappa wrote: > >>> the headers say they date from 1974-75. > >> Wow, that's great! That means that you have the initial version. > > The file write dates are May 1979, so that's the latest it can be. There is > one folder called 'DTI' which contains an email message from someone at DTI to > someone at SRI which is dated "10 Apr 1979" so that seems to indicate that > that's indeed when they are from. Based on that extra info I think you have a later version of Network Unix, which is still wonderful and exciting. > I could have sworn that I'd seen _listings_ of the code in a UIllinois > document about NCP Unix that I had found (and downloaded) on the Internet, but > I can't find them here now. I did look again and found: > > "A Network Unix System for the Arpanet", by Karl C. Kelley, Richard Balocca, > and Jody Kravitz > > but it doesn't contain any sources. The initial 1975 implementation was - in the authors' recollection - only some one to two thousand lines of extra kernel code and one thousand for the NCP daemon. That would make for some 50 pages of printout. It is possible. I know the Kelley document well and sections 5 and 6 contain a fairly detailed code walkthrough. Perhaps this is what lingered in your memory. I suspect this (Oct 1978) code walkthrough will match with the code on your tape. From beebe at math.utah.edu Tue Jan 31 10:56:18 2017 From: beebe at math.utah.edu (Nelson H. F. 
Beebe) Date: Mon, 30 Jan 2017 17:56:18 -0700 Subject: [TUHS] PDP-10 in the news today Message-ID: This story appears today in The Register: PDP-10 enthusiasts resurrect ancient MIT operating system Incompatible Timesharing System now compatible with modern machines https://www.theregister.co.uk/2017/01/30/pdp10_enthusiasts_resurrect_ancient_mit_operating_system/ Near the end of the story is a mention of SIMH and of KLH10, both of which emulate the PDP-10. There is also mention of a PDP-11 emulator running inside ITS. ------------------------------------------------------------------------------- - Nelson H. F. Beebe Tel: +1 801 581 5254 - - University of Utah FAX: +1 801 581 4148 - - Department of Mathematics, 110 LCB Internet e-mail: beebe at math.utah.edu - - 155 S 1400 E RM 233 beebe at acm.org beebe at computer.org - - Salt Lake City, UT 84112-0090, USA URL: http://www.math.utah.edu/~beebe/ - ------------------------------------------------------------------------------- From jnc at mercury.lcs.mit.edu Tue Jan 31 12:39:42 2017 From: jnc at mercury.lcs.mit.edu (Noel Chiappa) Date: Mon, 30 Jan 2017 21:39:42 -0500 (EST) Subject: [TUHS] PDP-10 in the news today Message-ID: <20170131023942.49FC118C0C1@mercury.lcs.mit.edu> > There is also mention of a PDP-11 emulator running inside ITS. SYSENG;11SIM > Noel From lars at nocrew.org Tue Jan 31 17:41:14 2017 From: lars at nocrew.org (Lars Brinkhoff) Date: Tue, 31 Jan 2017 08:41:14 +0100 Subject: [TUHS] PDP-10 in the news today In-Reply-To: (Nelson H. F. Beebe's message of "Mon, 30 Jan 2017 17:56:18 -0700") References: Message-ID: <867f5b8yn9.fsf@molnjunk.nocrew.org> Nelson H. F. Beebe wrote: > Near the end of the story is a mention of SIMH and of KLH10, both > of which emulate the PDP-10. There is also mention of a PDP-11 > emulator running inside ITS. Hmm, does the PDP-11 item make this on topic for TUHS? 
:-) There is actually a lot of PDP-11 software in ITS, because 11s were used all over the place as dedicated processors to control hardware devices. There's a cross assembler called PALX, and several debuggers called RUG and CARPET. No Unix though. From peter at rulingia.com Tue Jan 31 18:12:55 2017 From: peter at rulingia.com (Peter Jeremy) Date: Tue, 31 Jan 2017 19:12:55 +1100 Subject: [TUHS] PDP-10 in the news today In-Reply-To: <867f5b8yn9.fsf@molnjunk.nocrew.org> References: <867f5b8yn9.fsf@molnjunk.nocrew.org> Message-ID: <20170131081255.GA30074@server.rulingia.com> On 2017-Jan-31 08:41:14 +0100, Lars Brinkhoff wrote: >Nelson H. F. Beebe wrote: >> Near the end of the story is a mention of SIMH and of KLH10, both >> of which emulate the PDP-10. There is also mention of a PDP-11 >> emulator running inside ITS. > >Hmm, does the PDP-11 item make this on topic for TUHS? :-) I suspect this would be more on-topic in PUPS than TUHS. -- Peter Jeremy -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 949 bytes Desc: not available URL: From jnc at mercury.lcs.mit.edu Tue Jan 31 23:26:36 2017 From: jnc at mercury.lcs.mit.edu (Noel Chiappa) Date: Tue, 31 Jan 2017 08:26:36 -0500 (EST) Subject: [TUHS] PDP-10 in the news today Message-ID: <20170131132636.C285E18C0C3@mercury.lcs.mit.edu> > From: Lars Brinkhoff > several debuggers called RUG and CARPET SYSENG;CARPET > and SYSENG;KLRUG > (and also SYSEN2;URUG >). CARPET runs in the PDP-10, and talks to the 11's via the Rubin 10-11 interface on MIT-AI (which let the PDP-10 see into the PDP-11s' memory); it installed a small toehold in the 11 (e.g. for trap handling). There was also a version (conditionalized in the source) called "Hali" ("Hali is Carpet over a [serial] line") - 'hali' is Turkish for 'carpet' (I wonder how someone knew that). RUG runs in the front-end 11 on the KL (MIT-MC). 
URUG is a really simple version of RUG that runs in a GT40, and uses the GT40 display for output. There's also 11DDT (KLDCP;11DDT >) - not sure why both this and KLRUG exist - unless RUG was for the front-end 11, and 11DDT was for the I/O-11? Noel