1972 Unix V2 "Beta" Resurrected

(tuhs.org)

420 points | by henry_flower | 4 days ago

10 comments

  • m4r1k 3 days ago
    I once saw a talk from Brian Kernighan who made a joke about how in three weeks Ken Thompson wrote a text editor, the B compiler, and the skeleton for managing input/output files, which turned out to be UNIX. The joke was that nowadays we're a bit less efficient :-D
    • ferguess_k3 days ago
      Ken is definitely a top-notch programmer. A top-notch programmer can do a LOT given 3 weeks of focus time. I remember his wife took the kids to England so he was free to do whatever he wanted. And he definitely had a lot of experience before writing what became the first version of UNIX.

      Every programmer who has a project in mind should try this: set aside 3 weeks of focus time in a cabin, away from work and family, gather every book or document you need, and cut off the Internet. Use a dumb phone if you can live with it. See how far you can go. Just make sure it is something you have already put a lot of thought and a bit of code into.

      After thinking more thoroughly about the idea, I believe low-level projects that rely on as few external libraries as possible are the best ones to try it out on. If your project relies on piles of 3rd party libraries, you are stuck the moment you hit an issue, with no Internet to help you figure it out. Ken picked the right project too.

      • foxglacier3 days ago
        > low level projects that rely on as few external libraries

        I think this is key. If you already have the architecture worked out in your head, then it's just smashing away at the keyboard. Once you bring in a 3rd party library, you can spend most of your time fighting with and learning about it.

        • ferguess_k3 days ago
          Exactly. Both projects mentioned in this thread (UNIX, Git) had clear-cut visions of what the authors wanted to achieve from the beginning. Nowadays it is almost impossible to FIND such a project. I'm not saying that you can't write another Git or UNIX, but most likely you won't even bother using it yourself, so what's the point? That's why I think "research projects" don't fit here -- you learn something and then you throw them away.

          What I have in mind are embedded projects -- you are probably going to use it even when you are the only user. So that fixes the motivation issue. You probably have a clear-cut objective, so that ticks the other checkbox. You need to bring a dev board, a bunch of breadboards and electronic components to the cabin, but that doesn't take a lot of space. You need the specifications of the dev board and of the components used in the project, but those are just PDF files anyway. You need some C best practices? There must be a PDF for that. You can do a bit of experimental coding before you leave for the cabin, to make sure the idea is solid and feasible and the toolchain works. The preparations give you a wired-up breadboard and maybe a few hundred lines of C code. That's all you need to complete the project in 3 weeks.

          Game programming, modding and mapping come to mind, too. They are fun, clear-cut and well defined. The thing is you might need the Internet to check documentation or algorithms from time to time, but it is still a lot better to cut off the Internet completely. I think they fit if you are well into them already -- and then you boost them by working 3 weeks in a cabin.

          There must be other lower level projects that fit the bill. I'm NOT even a good, ordinary programmer, so the choices are few.

    • cjs_ac3 days ago
      The interview is here: https://www.youtube.com/watch?v=EY6q5dv_B-o

      One hour long, and Thompson tells a lot of entertaining stories. Kernighan does a good job of just letting Thompson speak.

    • Cthulhu_3 days ago
      A newer joke is that Ken Thompson (along with Rob Pike and Robert Griesemer) designed Go while waiting for C / C++ to compile.
      • kragen2 days ago
        Not C/C++. Specifically C++. C compiles pretty fast. And it's not really a joke, though obviously it wasn't a single build they were waiting for.
        • pjmlp2 days ago
          Back in 2000, our builds (C modules to be consumed by Tcl, Apache and IIS modules, DB drivers) took about one hour per OS for a full platform release.

          OS being Windows NT/2000, AIX, HP-UX, Solaris, Red Hat Linux.

          • kragen2 days ago
            Yeah, that's similar to my experience. C++ projects commonly took a week to build in the 90s.
            • pjmlp1 day ago
              I can assure you that wasn't the case in our desktop software for Windows, and that already feels like trolling.

              Not even Nokia NetAct took half as much, given its complexity of distributed CORBA services written in C++, nor the CERN TDAQ/HLT builds I was responsible for.

              • kragen1 day ago
                You were probably working on more competently designed software than I was, but sure, desktop software for Windows usually didn't take that long.
    • xattt3 days ago
      I’m wondering what the process was for the early UNIX developers to attain this level of productivity.

      Did they treat this as a 9-5 effort, or did they go into a “goblin mode” just to get it done while neglecting other aspects of their lives?

      • ironmanszombie3 days ago
        Back in my early career, the company I worked for needed an inventory system tailored to their unique process flow. Such a system was already in development and was scheduled to launch "soon". A few months went by and I got fed up with the toil. I sat down one weekend and implemented the whole thing in Django. I'm no genius, and I managed to build a solution that my team used for a few years until the company had theirs launched. In a weekend. Amazing what you can do when you want to Get Shit Done!
        • smm11 3 days ago
          I worked at a place in love with their ERP system. Some there had been using it 30+ years, since it ran in DOS.

          My Excel skills completely blow, and I hate Microsoft with a passion, but I created a shared spreadsheet one long Saturday afternoon that had more functionality than our $80K annual ERP system. Showed it to a few more open-minded employees, then moved it to my server, never to be shown again. Just wanted to prove that when I said the ERP system was pointless, I was right.

        • deaddodo3 days ago
          That's fine when it's self-motivated, but it sets a terrible precedent for expectations. Doing things like this can put in management's mind unrealistic expectations for you to always work at that pace. Which can be unhealthy and burnout-inducing.
      • masom3 days ago
        A big one is the lack of peer reviews and processes, including team meetings, that would slow them down. No PM, no UX, just yourself and the keyboard with some goals in mind. No OKRs or tickets to close.

        It's a bit like any early industry, from cars to airplanes to trains. Early models were made by a select few people, and there were several versions between then and today, where GM and Ford have thousands of people involved in designing a single car iteration.

        • jandrese3 days ago
          IMHO the biggest thing is that they were their own customer. There was no requirements gathering, ui/ux consultation, third party bug reporting, just like you said. They were eating their own dogfood and loving it. No overhead meant they could focus entirely on the task at hand.
      • noisy_boy3 days ago
        Genius level mind minus scrum/agile nonsense can help.
        • hylaride3 days ago
          Impossible! How can the product managers maintain control without the bureaucracy? /s
      • Daishiman3 days ago
        A lot of the supposed "features" we have in Unix nowadays are really artifacts of primitive limitations, like dotfiles.

        If you're willing to let everything crash if you stray from the happy path you can be remarkably productive. Likewise if you make your code work on one machine, on a text interface, with no other requirements except to deliver the exact things you need.

      • kragen2 days ago
        We aren't talking about a very large amount of code here. Mainly the process was implementing several similar systems over the previous 10 years. You'd be surprised how much faster it is to write a program the fifth time, now that you know all the stuff you can leave out.
    • pinoy420 3 days ago
      Otoh: I got React to run my tests without any warnings today.
      • 9dev 3 days ago
        If I write a bunch of tests for new code, and all of them pass on the first attempt, I'm immediately suspicious of a far more egregious bug hiding somewhere…
        • throwanem3 days ago
          Where feasible, I like to start a suite with a unit test that validates the unit's intended side effects actually occur, as visible in their mocks being exercised.
          • pinoy420 3 days ago
            I laughed. Thank you for that
            • throwanem2 days ago
              Sure. For Patreon subscribers at the $5/month tier and up, I also have a course on making integration ("e2e", "functional") tests more maintainable by eliminating side effects.
        • michaelcampbell3 days ago
          "never trust a test you've never seen fail." has kept me honest on more than one occasion.
        • noisy_boy3 days ago
          // Todo: remove

          return true;

          • kps3 days ago
            /bin/true used to be an empty file. On my desktop here, it's 35K (not counting shared libraries), which is an absolute increase of 35K and a relative increase of ∞%.
    • mr_toad3 days ago
      I’ve heard that Torvalds built Git in 5 (or 10) days and that Brendan Eich created JavaScript in 10 days.

      Maybe the average programmer is less efficient, but the distribution is probably heavily skewed these days.

      • markus_zhang3 days ago
        I'd argue that ordinary programmers can perform the same *type* of exercises if they:

        - Put away a few weeks and go into Hermit mode;

        - Plan ahead what projects they have in mind, which books/documents to bring with them. Do enough research and a bit of experimental coding beforehand;

        - Reduce distraction to minimum. No Internet. Dumb phone only. Bring a Garmin GPS if needed. No calls from family members;

        I wouldn't be surprised if they could up-level their skills and complete a tough project in three weeks. Surely they won't write a UNIX or Git, but a demanding project is feasible with the research done before going into Hermit mode.

        • richardlblair3 days ago
          I also think people underestimate how much pondering one does before starting a project.
          • markus_zhang3 days ago
            I think so. I don't think Ken had zero thoughts about UNIX and then suddenly came up with a minimal but complete solution in under 3 weeks. Previous experience tells a lot too. Wozniak was able to quickly design some electronics, but he had probably already bagged 10,000 hours (to borrow the popular metaphor) before he joined HP.
            • nyrikki3 days ago
              They had both been working on the Multics project before Bell Labs pulled out of it, and they had already written several languages.

              While some ideas like hierarchical filesystems were new, it was mainly a modernized version of CTSS, according to Dennis Ritchie's paper "The UNIX Time-sharing System: A Retrospective".

              I was playing with this version on simh way too late last night, taking a break from ITS. Being very familiar with v7, 2.11, etc., I can say it is quite clearly very cut down.

              I think being written in assembly, with an assembler they produced by copying DEC's PAL-11R, helped a lot.

              If you look through the v1 here:

              https://www.tuhs.org/Archive/Distributions/Research/Dennis_v...

              It is already very modular, and obviously helped by dmr's MIT work:

              https://people.csail.mit.edu/meyer/meyer-ritchie.pdf

              And yet... after working for years on an ultra-complex OS intended to provide 'utility scale' compute, writing a fairly simple OS for a tiny mini would seem much easier... if not so for us mortals.

              It isn't like they had just come out of a code boot camp... they needed the tacit knowledge and experience to push out 100K+ lines in one year, from two people, over 300 bps terminals, etc.

              • ForOldHack2 days ago
                "EDIT: This was created to collect everything:" Wow. Amazing. Dennis would have been proud. Thank you, and thank everyone for their work. Thanks.
              • markus_zhang3 days ago
                Yeah. They were pretty professional by then :D
      • somat3 days ago
        > I’ve heard that Torvalds build Git in 5 days

        And it shows.

        I am joking of course; git is pretty great. Well, half-joking: what is it about Linux that attracts such terrible interfaces? git vs hg, iptables vs pf. There is a lot of technical excellence present, marred by a substandard interface.

        • wbl3 days ago
          That's why Magit exists
      • BrendanEich 20 hours ago
        My understanding (direct from Graydon, some 18 years ago) was that Linus admired Monotone except for its slowness. Graydon liked making things correct then fast, but was on a break in Köln when Linus (provoked by loss of BitKeeper) emailed on Friday endorsing Monotone's design and asking for speed. Graydon said "let me buy a laptop and I'll start on Monday".

        Monday rolled around and Linus said "too late, I wrote it already" (it => git).

      • wbl3 days ago
        Brendan Eich would say "10 days" whenever one of the big unfixable warts came up.
        • BrendanEich 1 day ago
          The Mocha prototype was better than what I did after the ten days. The biggest wart, == implicitly converting operand types, came after the ten days and was me being an idiot by agreeing with early inside-Netscape adopter requests for slop.

          Ryan Dahl gave a speech decades later saying "don't do them" when you are tempted to add little features that might be "cute": https://youtu.be/M3BM9TB-8yA?t=900.

          Unfixability is a property of the Web and applies to CSS and HTML as well as JS.

    • somat3 days ago
      It is also the case that the first 80% of a project's functionality goes really quickly, especially when you are interested in and highly motivated by the project. That remaining 20%, though, is a long tail; it tends to be a huge slog that kills your motivation.
      • jefurii2 days ago
        The first 80% of a project takes 80% of the time. The last 20% of the project takes the other 80% of the time.
  • digitalsushi4 days ago
    Spock levels of fascinating from me. I want to learn how to compile a pdp11 emulator on my mac.
    • thequux4 days ago
      Compiling an emulator is quite easy: have a look at simh. It's very portable and should just work out of the box.

      Once you've got that working, try installing a 2.11BSD distribution. It's well-documented and came after a lot of the churn in early Unix. After that, I've had great fun playing with RT-11, to the point that I've actually written some small apps on it.
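For the curious, a simh setup for this sort of thing is mostly a small configuration file. A minimal sketch, with an illustrative machine model and disk image name (2.11BSD is picky about hardware, so check the simh and 2.11BSD docs before a real install):

```
; pdp11.ini -- minimal simh configuration for booting a prepared 2.11BSD disk
set cpu 11/73          ; a split-I/D model that 2.11BSD supports
set cpu 4M             ; 4 MB of memory
set rq0 ra81           ; an RA81 disk on the MSCP controller
attach rq0 211bsd.dsk  ; attach a pre-built 2.11BSD disk image
boot rq0               ; boot from it
```

Run it with `pdp11 pdp11.ini`; simh leaves you at its `sim>` prompt if anything goes wrong.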

      • somat4 days ago
        Dave's Garage on YouTube has an episode where he documents the pitfalls of compiling 2BSD for a PDP-11/83: https://www.youtube.com/watch?v=IBFeM-sa2YY Basically it is an art on a memory-constrained system.

        What I found entertaining was that when he explained how to compile the kernel, I went: Oh! That's where OpenBSD gets it from. It is still a very similar process.

      • an-unknown3 days ago
        > After that, I've had great fun playing with RT-11 [...]

        If you want to play around with RT-11 again, I made a small PDP-11/03 emulator + VT240 terminal emulator running in the browser. It's still incomplete, but you can play around with it here: https://lsi-11.unknown-tech.eu/ (source code: https://github.com/unknown-technologies/weblsi-11)

        The PDP-11/03 emulator itself is good enough that it can run the RT-11 installer to create the disk image you see in the browser version. The VT240 emulator is good enough that the standalone Linux version can be used as a terminal emulator for daily work. Once I have time, I plan to write a proper blog post describing how it all works / what the challenges were, and post it as a Show HN eventually.

      • colechristensen4 days ago
        From the link:

        > It's somewhat picky about the environment. So far, aap's PDP-11/20 emulator (https://github.com/aap/pdp11) is the only one capable of booting the kernel. SIMH and Ersatz-11 both hang before reaching the login prompt. This makes installation from the s1/s2 tapes difficult, as aap's emulator does not support the TC11. The intended installation process involves booting from s1 and restoring files from s2.

        • aap_4 days ago
          good luck though. my emulator is not particularly user friendly, as in, it has no user interface. i recommend simh (although perhaps not for this thing in particular).
          • colechristensen4 days ago
            So what mechanism do you have set up to reply 4 minutes after being mentioned? :)
            • aap_4 days ago
              Compulsively checking HN i suppose :D
              • lanstin3 days ago
                Also looking at threads view first before actual news helps with that.
      • icedchai3 days ago
        I've been messing around with RSX-11M myself! I find these early OSes quite fascinating. So far I've set up DECnet with another emulator running VMS, installed a TCP stack, and a bunch of compilers.
    • snovymgodym4 days ago
      https://opensimh.org/

      Works great on Apple Silicon

      • haunter4 days ago
        What’s the difference between an emulator and a simulator in this context?
        • bityard4 days ago
          There is LOADS of gray area, overlap, and room for one's own philosophical interpretation... But typically simulators attempt to reproduce the details of how a particular machine worked for academic or engineering purposes, while emulators are concerned mainly with only getting the desired output. (Everything else being an implementation detail.)

          E.g. since the MAME project considers itself living documentation of arcade hardware, it would be more properly classified as a simulator. While the goal of most other video game emulators is just to play the games.

          • Imustaskforhelp3 days ago
            I don't want to offend you, but this has made me wonder even more what the difference is.

            It just feels that one is an emulator if its philosophy is "it just works", and a simulator if it's "well, sit down kids, I am going to give you proper documentation and how it was built back in my day".

            But I wonder what that means for the programs themselves...

            I wonder if simulator == emulator is more truer than what JavaScript truthy conditions allow.

          • anthk3 days ago
            Not the case at all. Tons of emulators are near 100% accurate.
            • Brian_K_White3 days ago
              Irrelevant to the concept being expressed, and it does not invalidate it.

              The goals merely overlap, which is obvious. Equally obviously, if two goals are similar, then the implementations of some way to attain those goals may equally have some overlap, maybe even a lot of overlap. And yet the goals are different, and it is useful to have words that express aspects of things that aren't apparent from merely the final object.

              A decorative brick and a structural brick may both be the same physical brick, yet if the goals are different then any similarity in the implementation is just a coincidence. It would not be true to say that the definition of a decorative brick includes the materials and manufacturing steps and final physical properties of a structural brick. The definition of a decorative brick is to create a certain appearance, by any means you want, and it just so happens that maybe the simplest way to make a wall that looks like a brick wall is to build an actual brick wall.

              If only they had tried to make it clear that there is overlap and the definitions are grey and fuzzy and open to personal philosophic interpretation and the one thing can often look and smell and taste almost the same as the other thing, if only they had said anything at all about that, it might have headed off such a pointless confusion...

            • bityard3 days ago
              Huh? I didn't mention anything about accuracy. And "accuracy" (an overloaded and ill-defined term on its own) doesn't have anything to do with the differences between simulators and emulators.
            • Imustaskforhelp3 days ago
              Exactly. Makes you wonder: is it all just philosophical?

              Calling the same thing a different name.

        • o11c 4 days ago
          In theory, an emulator is oriented around producing a result (this may mean making acceptable compromises), whereas a simulator is oriented around inspection of state (this usually means being exact).

          In practice the terms are often conflated.

          • codr7 4 days ago
            The difference is about as crystal clear as compiler/interpreter.
            • Imustaskforhelp3 days ago
              A compiler creates a binary in ELF or another format, which can be run given that its shared objects exist.

              An interpreter either writes it to bytecode and then executes the bytecode line by line?

              At least that is what I believe the difference is; care to elaborate? Is there some hidden joke about compiler vs interpreter that I don't know about?

              • dpassens3 days ago
                I assume GP meant that a lot of compilers also interpret and interpreters also compile.

                For compilers, constant folding is a pretty obvious optimization. Instead of compiling constant expressions, like 1+2, to code that evaluates those expressions, the compiler can already evaluate it itself and just produce the final result, in this case 3.

                Then, some language features require compilers to perform some interpretation, either explicitly like C++'s constexpr, or implicitly, like type checking.

                Likewise, interpreters can do some compilation. You already mentioned bytecode. Producing the bytecode is a form of compilation. Incidentally, you can skip the bytecode and interpret a program by, for example, walking its abstract syntax tree.

                Also, compilers don't necessarily create binaries that are immediately runnable. Java's compiler, for example, produces JVM bytecode, which requires a JVM to be run. And TypeScript's compiler outputs JavaScript.
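Both points (constant folding as a small compile step, and AST walking as interpretation) fit in a few lines. A toy Python sketch, with made-up node names:

```python
# A toy of both ideas above: an AST interpreter plus a constant-folding
# pass, i.e. a tiny "compiler" step living inside an "interpreter".
from dataclasses import dataclass

@dataclass
class Num:
    value: int

@dataclass
class Add:
    left: object
    right: object

def fold(node):
    """Constant folding: evaluate constant subexpressions ahead of time."""
    if isinstance(node, Add):
        left, right = fold(node.left), fold(node.right)
        if isinstance(left, Num) and isinstance(right, Num):
            return Num(left.value + right.value)  # 1 + 2 folds to 3
        return Add(left, right)
    return node

def evaluate(node):
    """Interpretation: walk the AST and compute the result."""
    if isinstance(node, Num):
        return node.value
    return evaluate(node.left) + evaluate(node.right)

expr = Add(Num(1), Num(2))
assert fold(expr) == Num(3)   # folded before "runtime"
assert evaluate(expr) == 3    # interpreted directly
```

The `fold` pass does at "compile time" exactly what `evaluate` would do at runtime for constant subtrees, which is one reason the compiler/interpreter line blurs.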

                • Imustaskforhelp3 days ago
                  Then what is the difference? I always thought of Java as closer to Python, in the sense that it's running bytecode. And Python also has bytecode.

                  I don't know what the difference is; I know there can be interpreters of compiled languages, but generally speaking it's hard to find compilers of interpreted languages.

                  E.g. C++ has both compilers and interpreters (cpi), gcc.

                  JS doesn't have compilers IIRC; it can have transpilers. Js2c is a good one, but I am not sure if they are failsafe (70% ready).

                  I also have to thank you; this is a great comment.

                  • o11c 3 days ago
                    Programming languages mostly occupy a 4-dimensional space at runtime. These axes are actually a bit more complicated than just a line:

                    * The first axis is static vs dynamic types. Java is mostly statically-typed (though casting remains common and generics have some awkward spots); Python is entirely dynamically-typed at runtime (external static type-checkers do not affect this).

                    * The second axis is AOT vs JIT. Java has two phases - a trivial AOT bytecode compilation, then an incredibly advanced non-cached runtime native JIT (as opposed to the shitty tracing JIT that dynamically-typed languages have to settle for); Python traditionally has an automatically-cached barely-AOT bytecode compiler but nothing else (it has been making steps toward runtime JIT stuff, but poor decisions elsewhere limit the effectiveness).

                    * The third axis is indirect vs inlined objects. Java and Python both force all objects to be indirect, though they differ in terms of primitives. Java has been trying to add support for value types for decades, but the implementation is badly designed; this is one place where C# is a clear winner. Java can sometimes inline stack-local objects though.

                    * The fourth axis is deterministic memory management vs garbage collection. Java and Python both have GC, though in practice Python is semi-deterministic, and the language has a somewhat easier way to make it more deterministic (`with`, though it is subject to unfixable race conditions)

                    I have collected a bunch more information about language implementation theory: https://gist.github.com/o11c/6b08643335388bbab0228db763f9921...

                  • amszmidt3 days ago
                    The easy definition is that an interpreter takes something and runs/executes it.

                    A compiler takes the same thing, but produces an intermediate form (byte code, machine code, or another language, in which case it is sometimes called a "transpiler") that you can then pass through an interpreter of sorts.

                    There is no difference between Java and the JVM versus Python and the Python Virtual Machine, or even a C compiler targeting x86 and an x86 CPU. One might call one byte code and the other machine code... they do the same thing.

                    • codr7 9 hours ago
                      And most CPUs have multiple layers of compilers/interpreters inside.

                      Any complete, practical implementation of a programming language is going to involve both imo.

                • amszmidt3 days ago
                  While an interpreter can do optimizations, it does not produce "byte code" -- by that time it is a compiler!

                  As for the comparison with the JVM: compare it to a compiler that produces x86 code, which cannot be run without an x86 machine. You need a machine to run something, be it virtual or not.

                • codr7 3 days ago
                  Thank you!
              • somat3 days ago
                I would generalize it to: a compiler produces some sort of artifact that is intended to later be used directly, while for an interpreter the whole mechanism (source to execution) is intended to be used directly.

                The same tool can often be used to do both. Trivial example: a web browser. Save your web page as a PDF? Compiler. Otherwise, interpreter. But what if the code it is executing is not artisanal handcrafted JS but the result of a TypeScript compiler?

              • amszmidt3 days ago
                An interpreter runs the code as it is being read in.

                A compiler processes the code and provides an intermediate result which is then "interpreted" by the machine.

                So to take the "writes it in byte code" -- that is a compiler. "executes the byte code" -- is the interpreter.

                If byte code is "machine code" or not, is really secondary.
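A toy Python version of that split: the first function is the "compiler" (source to byte code), and the loop in the second is the "interpreter", i.e. the machine. The postfix source syntax and the PUSH/ADD instruction names are made up for illustration:

```python
def compile_expr(source):
    """'Compiler': turn postfix source like '1 2 +' into byte code."""
    code = []
    for tok in source.split():
        if tok == '+':
            code.append(('ADD', None))
        else:
            code.append(('PUSH', int(tok)))
    return code

def run(code):
    """'Interpreter': execute the byte code on a little stack machine."""
    stack = []
    for op, arg in code:
        if op == 'PUSH':
            stack.append(arg)
        elif op == 'ADD':
            b, a = stack.pop(), stack.pop()
            stack.append(a + b)
    return stack.pop()

assert run(compile_expr('1 2 +')) == 3
```

Whether the tuples count as "byte code" or "machine code" depends only on what executes them, which is the point being made above.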

                • Imustaskforhelp2 days ago
                  Then aren't all languages theoretically assembly interpreters in the end?
          • ijustlovemath3 days ago
            Adding some anecdata, I feel like emulator is mainly used in the context of gaming, in which case they actually care a great deal about accurate reproduction (see: assembly bugs in N64 emulators that had to be reproduced in order to build TAS). I haven't seen it used much for old architectures; instead I'd call those virtual machines.

            definitely agree on simulator though!

        • amszmidt3 days ago
          I think it is more about design: emulation mimics what something does, while a simulator replicates how it does it.

          It is a tiny distinction, but generally I'd say that a simulator tries to replicate what happens on an electrical level as accurately as one can.

          While an emulator just does things as a black box ... input produces the expected output using whatever.

          You could compare it to that an accurate simulator of a 74181 tries to do it by using AND/OR/NOT/... logic, but an emulator does it using "normal code".

          In HDL you have a similar situation between structural and behavioral design... structural is generally based on much lower-level logic (e.g., AND/NOR/... gates), and behavioral on higher-level logic (addition, subtraction, ...).

          "100%" accuracy can be achieved with both methods.

    • nonrandomstring4 days ago
      Yep, this is a metal-detectorist finding a religious relic moment.
    • boznz4 days ago
      Too easy! Going to build one with NAND gates.
  • dataf3l4 days ago
    I love this!

    first time I see people use 'ed' for work!!!

    I wonder who else has to deal with ed also... Recently I had to connect to an ancient system where vi was not available, so I had to write my own editor. Whoever needs an editor for an ancient system, ping me (it is not too fancy).

    amazing work by the creators of this software and by the researchers, you have my full respect guys. those are the real engineers!

    • wpollock4 days ago
      I remember using an ed-like editor on a Honeywell timeshare system in the 1960s, over a Teletype ASR-33. I don't remember much except you invoked it using "make <filename>" to create a new file. And if you typed "make love" the editor would print "not war" before entering the editor.
      • skissane3 days ago
        The “MAKE LOVE”/“NOT WAR” easter egg was in TECO for DEC PDP-6/10 machines. But DEC TECO was also ported to Multics, so maybe that was the Honeywell machine you used it on.

        But, for a whole bunch of reasons, I’m left with the suspicion you may be misremembering something from the early 1970s as happening in the 1960s. While it isn’t totally impossible you had this experience in 1968 or 1969, a 1970s date would be much more historically probable.

        • flyinghamster3 days ago
          The easter egg carried over to the PDP-11 as well. I remember it being present in RSTS/E 7.0's TECO back in my high school days, and I just fired up SIMH and found it's definitely there.

          On the other hand, I never really tried to do anything with TECO other than run VTEDIT.

        • wpollock3 days ago
          You're probably right. It definitely was teco and likely 1970ish.
    • pjmlp3 days ago
      Not ed, but definitely inspired by it: I am old enough to have done a typewriting school exam in MS-DOS 3.3 edlin.

      And since then I never used it again, nor ed when a couple of years later we had Xenix access, as vi was a much saner alternative.

      • skissane3 days ago
        I also remember using MS-DOS 3.3 EDLIN in anger, on our home computer [0] when I was roughly 8, because it was the only general purpose text editor we had. (We also had Wordstar, which I believe could save files in plain text mode, but I don’t think my dad or I knew that at the time.) I didn’t do much with it but used it to write some simple batch files. My dad had created a directory called C:\BAT and we used it a bit like a menu system, we put batch files in it to start other programs. I don’t remember any PC-compatible machines at my school, it was pretty much all Apple IIs, although the next year moved to a new school which as well as Apple IIs, also had IBM PC JXs (IBM Japan variant of the IBM PCjr which was sold to schools in Australia/New Zealand) and Acorn Archimedes.

        [0] it was an IBM PC clone, an ISA bus 386SX, made by TPG - TPG are now one of Australia’s leading ISPs, but in the late 1980s were a PC clone manufacturer. It had a 40Mb hard disk, two 5.25 inch floppy drives (one 1.2Mb, the other 360Kb), and a vacant slot for a 3.5 inch floppy, we didn’t actually install the floppy in it until later. I still have it, but some of the innards were replaced; I think the motherboard currently in it is a 486 or Pentium.

    • kragen3 days ago
      I used ed in Termux on my cellphone to write http://canonical.org/~kragen/sw/dev3/justhash.c in August. Someone, I forget who, had mentioned they were using ed on their cellphone because the Android onscreen keyboard was pretty terrible for vi, which is true. So I tried it. I decided that, on the cellphone, ed was a little bit worse than vi, but they are bad in different ways. It really is much easier to issue commands to ed than to vi on the keyboard (I'm using HeliBoard) but a few times I got confused about the state of the buffer in a way that I wouldn't with vi. Possibly that problem would improve with practice, but I went back to using vi.
    • relistan3 days ago
      In the mid 90s we had an AT&T 3B2 that only had ed on it. We used it via DEC VT-102 terminals. It (ed) works but it’s not fun by any modern standards. Must’ve been amazing on a screen compared to printout from a teletype though!

      Side note: that ~1 MIP 3B2 could support about 20 simultaneous users…

    • wglb3 days ago
      An early consulting gig was to write a tutorial for ed (on the Coherent system). I often use ed--in fact I used it yesterday. I needed to edit something without clearing the screen.

      Earlier, I wrote an editor for card images stored on disks. Very primitive.

    • kps3 days ago
      In my first computing job I used ed for about six months (we didn't have character-mode I/O yet). I learned to make good use of regular expressions.
    • ajross4 days ago
      Interestingly it's actually a sort of degenerate use of ed. All it does is append one line to an empty buffer and write it to "hello.c". It's literally the equivalent of

          echo 'int main(void) { printf("hello!\n"); }' > hello.c
      
      ...EXCEPT...

      It's not, because the shell redirection operators didn't exist yet at this point in time. Maybe (or maybe not?) it would work to cat to the file from stdin and send a Ctrl-D down the line to close the descriptor. But even that might not have been present yet. Unix didn't really "look like Unix" until v7, which introduced the Bourne shell and most of the shell environment we know today.

    • WhyNotHugo4 days ago
      The keystrokes are pretty much what you'd press in vim to perform the same actions. Except that append mode ends when they finished the line (apparently) rather than having to press Esc.

      The feedback from the editor, however, is… challenging.

      • rchard2scout4 days ago
        In ed, append mode ends by entering a single '.' on an empty line, and then pressing enter. You can see that happening in the article.
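        To make the mechanics concrete, here is a toy model of that input loop in Python (a sketch of the observable behavior only, not anything from the real ed source):

```python
def ed_append(buffer, lines):
    """Model of ed's 'a' command: keep appending input lines to the
    buffer until a line containing only '.' is seen; the '.' line
    itself is not stored."""
    for line in lines:
        if line == ".":
            break
        buffer.append(line)
    return buffer

# The session from the article, reduced to its effect on the buffer:
buf = ed_append([], ['int main(void) { printf("hello!\\n"); }', "."])
# buf now holds the single line that 'w hello.c' would then write out
print(buf)
```

The same terminator convention means a literal "." line can't be entered in append mode without a workaround, which is exactly the problem SMTP later had to solve with dot-stuffing.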
        • ThePowerOfFuet3 days ago
          Now we know where SMTP got it, I guess.
          • kragen3 days ago
            That's possible but unlikely. MTP as defined by Suzanne Sluizer and Jon Postel in RFC 772 in September 01980 https://datatracker.ietf.org/doc/html/rfc772 seems to have been where SMTP got that convention for ending the message:

            > ...and considers all succeeding lines to be the message text. It is terminated by a line containing only a period, upon which a 250 completion reply is returned.

            But in 01980 Unix had only been released outside of Bell Labs for five years and was only starting to support ARPANET connections (using NCP), so I wouldn't expect it to be very influential on ARPANET protocol design yet. I believe both Sluizer and Postel were using TOPS-20; the next year the two of them wrote RFC 786 about an interface used under TOPS-20 at ISI (Postel's institution, not sure if Sluizer was also there) between MTP and NIMAIL.

            For some context, RFC 765, the June 01980 version of FTP, extensively discusses the TOPS-20 file structure, mentions NLS in passing, and mentions no other operating systems in that section at all. In another section, it discusses how different hardware typically handles ASCII:

            > For example, NVT-ASCII has different data storage representations in different systems. PDP-10's generally store NVT-ASCII as five 7-bit ASCII characters, left-justified in a 36-bit word. 360's store NVT-ASCII as 8-bit EBCDIC codes. Multics stores NVT-ASCII as four 9-bit characters in a 36-bit word. It may be desirable to convert characters into the standard NVT-ASCII representation when transmitting text between dissimilar systems.

            Note the complete absence of either of the hardware platforms Unix could run on in this list!

            (Technically Multics is software, not hardware, but it only ever ran on a single hardware platform, which was built for it.)

            RFC 771, Cerf and Postel's "mail transition plan", admits, "In the following, the discussion will be hoplessly [sic] TOPS20[sic]-oriented. We appologize [sic] to users of other systems, but we feel it is better to discuss examples we know than to attempt to be abstract." RFC 773, Cerf's comments on the mail service transition plan, likewise mentions TOPS-20 but not Unix. RFC 775, from December 01980, is about Unix, and in particular, adding hierarchical directory support to FTP:

            > BBN has installed and maintains the software of several DEC PDP-11s running the Unix operating system. Since Unix has a tree-like directory structure, in which directories are as easy to manipulate as ordinary files, we have found it convenient to expand the FTP servers on these machines to include commands which deal with the creation of directories. Since there are other hosts on the ARPA net which have tree-like directories, including Tops-20 and Multics, we have tried to make these commands as general as possible.

            RFC 776 (January 01981) has the email addresses of everyone who was a contact person for an Internet Assigned Number, such as JHaverty@BBN-Unix, Hornig@MIT-Multics, and Mathis@SRI-KL (a KL-10 which I think was running TOPS-20). I think four of the hosts mentioned are Unix machines.

            So, there was certainly contact between the Unix world and the internet world at that point, but the internet world was almost entirely non-Unix, and so tended to follow other cultural conventions. That's why, to this day, commands in SMTP and header lines in HTTP/1.1 are terminated by CRLF and not LF; why FTP and SMTP commands are all four letters long and case-insensitive; and why reply codes are three-digit hierarchical identifiers.

            So I suspect the convention of terminating input with "." on a line of its own got into ed(1) and SMTP from a common ancestor.

            I think Sluizer is still alive. (I suspect I met her around 01993, though I don't remember any details.) Maybe we could ask her.

            • bbanyc3 days ago
              The "." to terminate input was used in FTP mail on ARPANET, defined in RFC 385 which was well before anyone outside Bell had heard of Unix.
              • kragen2 days ago
                Oh wow, really? I didn't look because I assumed mail over FTP was transferred over a separate data connection, just like other files. Thank you!

                And yes, in August 01972 probably nobody at MIT had ever used ed(1) at Bell Labs. Not impossible, but unlikely; in June, Ritchie had written, "[T]he number of UNIX installations has grown to 10, with more expected." But nothing about it had been published outside Bell Labs.

                The rationale is interesting:

                > The 'MLFL' command for network mail, though a useful and essential addition to the FTP command repertoire, does not allow TIP users to send mail conveniently without using third hosts. It would be more convenient for TIP users to send mail over the TELNET connection instead of the data connection as provided by the 'MLFL' command.

                So that's why they added the MAIL command to FTP, later moved to MTP and then in SMTP split into MAIL, RCPT, and DATA, which still retains the terminating "CRLF.CRLF".
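                For the curious, the transparency rules that grew out of that convention can be sketched in a few lines of Python (an illustration of RFC 5321's dot-stuffing and "CRLF.CRLF" terminator; the function name is my own, not from any particular implementation):

```python
def smtp_encode_data(lines):
    """Apply SMTP transparency: prefix an extra '.' to any message line
    that starts with '.', then terminate the message with a line
    containing only '.', using CRLF line endings throughout."""
    out = []
    for line in lines:
        out.append("." + line if line.startswith(".") else line)
    out.append(".")  # the terminating lone-dot line
    return "\r\n".join(out) + "\r\n"

wire = smtp_encode_data(["Hello,", ".hidden dot line", "Bye"])
# wire ends with "\r\n.\r\n"; the dotted line went out as "..hidden dot line"
```

The receiver strips one leading "." from any received line that starts with one, which is how a message body can contain a bare "." even though "." terminates input, something ed never bothered to solve.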

                https://gunkies.org/wiki/Terminal_Interface_Processor explains:

                > A Terminal Interface Processor (TIP, for short) was a customized IMP variant added to the ARPANET not too long after it was initially deployed. In addition to all the usual IMP functionality (including connection of host computers to the ARPANET), they also provided groups of serial lines to which could be attached terminals, which allowed users at the terminals access to the hosts attached to the ARPANET.

                > They were built on Honeywell 316 minicomputers, a later and un-ruggedized variant of the Honeywell 516 minicomputers used in the original IMPs. They used the TELNET protocol, running on top of NCP.

            • EfficientDude1 day ago
              Why the leading zero in the year dates but no leading zero for the RFC numbers, word length, or bits?

              EDIT: I checked your web site and you DON'T use leading zeros in front of dates there. Just trying something out? Enlighten me please...

    • lmm4 days ago
      I had to use ed to configure X on my alpha/vms machine back when I had it, there was something wrong with the terminfo setup so visual editors didn't work, only line-based programs.
    • jamesfinlayson3 days ago
      Never had to use ed but I remember working with someone a fair bit older than me that remembered using ed.
    • dboreham4 days ago
      Hmm. I still use ed now and then. It's an alias to vim I assume these days.
      • kragen2 days ago
        No, vim doesn't implement ed, just ex.
    • S04dKHzrKT3 days ago
      Real Programmers use ed. https://xkcd.com/378/
  • starspangled4 days ago
    I love browsing the tuhs mailing list from time to time. Awesome to see names like Ken Thompson and Rob Pike, and a bunch of others with perhaps less recognizable names but who were involved in the early UNIX and computing scene.
  • typeofhuman4 days ago
    Software archeology
    • api3 days ago
      One of the many things I dislike about the SaaS era is that this will never happen. Nobody in 2075 will boot up an old version of Notion or Figma for research or nostalgia.

      Like the culture produced and consumed on social media and many other manifestations of Internet culture it is perfectly ephemeral and disposable. No history, no future.

      SaaS is not just closed but often effectively tied to a literal single installation. It could be archived and booted up elsewhere but this would be a much larger undertaking, especially years later without the original team, than booting 1972 Unix on a modern PC in an emulator. That had manuals and was designed to be installed and run in more than one deployment. SaaS is a plate of slop that can only be deployed by its authors, not necessarily by design but because there are no evolutionary pressures pushing it to be anything else. It's also often tangled up with other SaaS that it uses internally. You'd have to archive and restore the entire state of the cloud, as if it's one global computer running proprietary software being edited in place.

      • pjmlp3 days ago
        And since many applications are basically plugging SaaS with each other via APIs and webhooks, not even those.

        We're living the SOA dreams, but it comes at a hefty price.

      • kragen2 days ago
        A lot of software in 01972 was also effectively tied to a literal single installation. Most of the software people ran under Unix at the time was only present on one of the ten Unix installations and has consequently been lost. The shrink-wrapped mass-distribution software epoch was still ten years in the future.
        • api1 day ago
          That's why software archaeology from that era is hard, but it's still a lot easier than today's cloud software, which is a lot more complex, much more of a moving target, and more interdependent with other services, all of which would have to be either emulated or restored.

          My other point was that we've gone back to the mainframe era. The PC revolution has mostly been abandoned.

          • kragen1 day ago
            Everybody I see on the bus has a personal computer in their hand, and a lot of them also have an additional personal computer in each ear. USB-C chargers typically each contain a personal computer to decide what voltage to output. All this doesn't necessarily result in enhanced user autonomy and agency, though; I wrote this essay about the disturbing trend in the late 90s: https://www.gnu.org/philosophy/kragen-software.html
      • EfficientDude1 day ago
        Nobody will care what Notion or Figma are in the future. I don't know what they are, nor do I care to find out. SaaS simply doesn't exist for many people; it's not a software model that all people are comfortable with.
    • joquarky3 days ago
      It's not as glamorous as it sounds.
  • JeffTickle3 days ago
    Can anyone provide a reference on what those file permissions mean? I can make a guess but when I searched around, could not find anything about unix v2 permissions. ls output looks so familiar, except for the sdrwrw!
    • b0in3 days ago
      Someone in the mailing list thread linked the man pages that they were able to extract out

      https://gitlab.com/segaloco/v1man/-/blob/master/man1/stat.1?...

      for sdrwrw:

      - column 1 is s or l meaning small or large

      - column 2 is d, x, u, -; meaning directory, executable, setuid, or nothing.

      - the rest are read-write bits for owner and non-owner.
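      Read that way, a mode string decodes mechanically. Here's a hypothetical sketch in Python (field meanings taken from the linked stat(1) page; the function and return labels are my own):

```python
def decode_v2_mode(s):
    """Decode a six-character V1/V2-era mode string like 'sdrwrw':
    a size flag, a type flag, then read/write bits for the owner
    and for everyone else."""
    size = {"s": "small", "l": "large"}[s[0]]
    kind = {"d": "directory", "x": "executable", "u": "setuid", "-": "plain"}[s[1]]
    owner = s[2:4].replace("-", "")  # e.g. 'rw', 'r', or ''
    other = s[4:6].replace("-", "")
    return size, kind, owner, other

print(decode_v2_mode("sdrwrw"))
# ('small', 'directory', 'rw', 'rw')
```

So the sdrwrw in the article's ls output is just a small directory, read-write for owner and non-owner alike.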

      • Postosuchus3 days ago
        Pretty interesting. I guess it was way later, when they came up with the SUID semantics and appropriated the first character for symlinks (l) or setuid binaries (s)...
  • WhyNotHugo4 days ago
    1328 bytes for a hello world? BLOAT!
    • runlevel13 days ago
      That reminded me of the compiler that used to include a large poem in every binary, just for shits and giggles. You've heard of a magic number; it had a magic sonnet.

      I thought it was early versions of the Rust compiler, but I can't seem to find any references to it. Maybe it was Go?

      EDIT: Found it: 'rust-lang/rust#13871: "hello world" contains Lovecraft quotes' https://github.com/rust-lang/rust/issues/13871

    • ptspts4 days ago
      My https://github.com/pts/minilibc686 can do printf-hello-world on i386 in less than 1 KiB. write-hello-world is less than 170 bytes.
    • ramon1564 days ago
      Time to rice my unix!
      • yjftsjthsd-h3 days ago
        Hm. I wonder how hard it would be to write a neofetch (...er, "oldfetch"?) for v1 Unix. Maybe hardcode some of it? Should work.
  • 4 days ago
    undefined
  • doublerabbit4 days ago
    Cool. Can we enter that time portal and live in that alternate reality?
    • IgorPartola4 days ago
      When gasoline was leaded, cigarette smoke was normal everywhere, and asbestos was used for everything you can think of? It is a fascinating decade but also quality of life likely has skyrocketed since.
      • queuebert3 days ago
        Depends on what you value. Purchasing power of wages has declined, for example. That's probably not better.

        I suspect the sentiment is more that it would be nice to live in a simpler time, with fewer options, because it would reduce anxiety we all feel about not being able to "keep up" with everything that is going on. Or maybe I'm just projecting.

      • smeeger3 days ago
        it is fascinating to consider that this might not be true even though it seems true
        • msla3 days ago
          No? Thinking the world has gotten worse is classic old person chuntering from time immemorial.
          • smeeger3 days ago
            thinking the world can only get better is another thing too
      • oguz-ismail3 days ago
        > quality of life likely has skyrocketed since

        it hasn't

        • Cthulhu_3 days ago
          Is that an objective truth or rose-tinted nostalgia speaking? (I wouldn't know, I wasn't alive then.)
        • msla3 days ago
          I survived cancer because of modern medical advances.

          I'll take the world with Rituxan and CAR T-cell therapy, thank you.

        • azinman23 days ago
          Depends on the specifics of your life.

          As a gay man, I’m much happier in 2025.

    • yjftsjthsd-h3 days ago
      I mean... Sure? Go buy an actual VT* unit ( maybe https://www.ebay.com/itm/176698465415?_skw=vt+terminal&itmme... ?), get the necessary adaptors to plug into a computer, and run simh on it running your choice of *nix. I recommend https://jstn.tumblr.com/post/8692501831 as a reference. Once you have it working, shove the host machine behind a desk or otherwise out of sight, and you can live like it's 1980.
      • an-unknown3 days ago
        The only problem with real VTs is you have to be careful not to get one where the CRT has severe burn-in, like in the ebay listing. Sure, some VTs (like the VT240 or VT525) are a separate main box + CRT, but then you're missing the "VT aesthetics". The VT525 is probably the easiest one to get which also uses (old) standard interfaces like VGA for the monitor and PS/2 for the keyboard, so you don't need an original keyboard / CRT. At least for me, severe burn-in, insane prices, and general decay of some of the devices offered on ebay are the reason why I don't have a real VT (yet).

        The alternative is to use a decent VT emulator attached to roughly any monitor. By "decent" I certainly don't mean projects like cool-retro-term, but rather something like this, which I started to develop some time ago and which I'm using as my main terminal emulator now: https://github.com/unknown-technologies/vt240

        • cbm-vic-203 days ago
          There is firmware available online for some terminals; you could potentially get a lot more accuracy in emulating the actual firmware, but I'm sure a lot of that code gets into the guts of timing CRT cycles and other "real-world" difficulties. I'm not suggesting this would be easy to build out, just pointing out that it's available. While I haven't searched for the VT240 firmware, the firmware for the 8031AH CPU inside the VT420 (and a few other DEC terminals) is available on bitsavers. The VT240 has a T-11 processor, which is actually a PDP-11-on-a-chip.
          • an-unknown3 days ago
            Actually I have the VT240 firmware ROM dumps, that's where I got the original font from. The problem is, at least the VT240 is a rather sophisticated thing, with a T-11 CPU, some additional MCU, and a graphics accelerator chip. There is an extensive service manual available, with schematics and everything, but properly emulating the whole firmware + all relevant peripherals is non-trivial and a significant amount of work. The result is then a rather slow virtual terminal.

            There is a basic and totally incomplete version of a VT240 in MAME though, which is good enough to test certain behavior, but it completely lacks the graphics part, so you can't use it to check graphics behavior like DRCS and so on.

            EDIT: I also know for sure that there is a firmware emulation of the VT102 available somewhere.

        • kragen2 days ago
          You can also just use the terminal despite the burn-in.
      • MobiusHorizons3 days ago
        Ha, I just bought a VT420 a couple of weeks ago. I just finished a hacked together converter for USB keyboards working well enough (in the last hour actually). Next job is to connect it up as a login terminal for my freebsd machine.
        • icedchai3 days ago
          I love those old terminals! I remember using them during late nights in college...
  • unit1494 days ago
    Recovering RF tapes, even a simple text file demonstrates buffer space that is not being used by the dos, or .iso file. Even in a 2.11 BSD distro, a default tiling and window manager has to be installed on the native OS. So yes, going with KDE or the X11 wm.