107 points | by todsacerdoti 19 hours ago
Another (mid-to-late) data point may be the original NeXT brochure [1], which claims the NeXT to be "a mainframe on two chips". It offers a definition along the lines of a throughput-oriented architecture with peripheral channels (in analogy to the NeXT's two DSP chips) and, in doing so, also marries the concepts of physical size and architecture (there's also some romanticism around uncompromised designs and "ruthless efficiency" involved):
> The NeXT Computer acknowledges that throughput is absolutely key to performance. For that reason, we chose not to use the architecture of any existing desktop computer. The desired performance could be found only in a computer of a different class: the mainframe.
> Having long shed any self-consciousness over such mundane matters as size and expense, mainframes easily dwarf desktop computers in the measure of throughput.
> This is accomplished by a different kind of architecture. Rather than require the attention of the main processor for every task, the mainframe has a legion of separate Input/Output processors, each with a direct channel to memory. It's a scheme that works with ruthless efficiency.
[1] https://lastin.dti.supsi.ch/VET/sys/NeXT/N1000/NeXTcube_Broc...
> This shows that by 1962, "main frame" had semantically shifted to a new word, "mainframe."
> IBM started using "mainframe" as a marketing term in the mid-1980s.
I must conclude it takes the competition 10 years to catch up to IBM, and IBM about 20 years to realize they have competition. Setting a countdown timer for IBM to launch an LLM in 2040.
Thanks for researching and writing this up. It's a brilliant read!
The guy who told me this was the Australian engineer sent over to help build the machine and bring it back for UQ. He parked on the quiet side of the Maynard factory, not realising why the other drivers avoided it. Then his car got caught in a snowdrift.
A prior engineer told me about the UUO wire-wrap feature on the instruction-set backplane: you were allowed to write your own higher-level ALU "macros" in the instruction space by wiring patches on this backplane. The DEC-10 had a five-element complex instruction model. Goodness knows what people did in there, but it had a BCD arithmetic model for the six-bit data (36-bit word, so six bytes of six bits in BCD mode).
A guy from La Trobe Uni told me that on their Burroughs, you edited the kernel inside a permanently resident, Emacs-like editor which recompiled on exit and threw you back in on a bad compile. So it was "safe to run" once it decided your edits were legal.
We tore down our IBM 3030 before junking it, to free the room for a secondhand Cray-1. We kept so many of the water-cooled chip pads (6-inch-square aluminium-bonded grids of chips, about 64 chips per pad, mounted against the water-cooler pad) that the recycler reduced his bid price because of all the gold we hoarded back.
The Cray needed two regenerator units to convert Australian 220 V to 110 V for some things, and to 400 Hz for other bits (this high-voltage AC frequency was a trick they used for power distribution across the main CPU backplane), and we blew one up spectacularly by closing a breaker badly. I've never seen a field engineer leap back so fast. It turned out that reusing the IBM raised floor for a Cray didn't save us money: we'd assumed the floor bed for liquid-cooled computers was the same; not so - Cray used a different bend radius for the Fluorinert. The Fluorinert recycling tank was clear plastic; we named the Cray "yabby" and hung a plastic lobster in it. This tank literally had a float valve like a toilet cistern.
When the Cray was scrapped, one engineer kept the round tower's "love seat" module as a wardrobe for a while. The only CPU cabinet I've ever seen that came from the factory with custom cushions.
In his report, John von Neumann had used the terms "central arithmetical part (CA)" and "central control part (CC)". He had not used any term for the combination of these two parts.
The first reference to "CPU" that I could find is the IBM 704 manual of operation from 1954, which says: "The central processing unit accomplishes all arithmetic and control functions." That is, it clearly defines the CPU as the combination of the two parts described by von Neumann.
In the IBM 704, the CPU was contained in a single cabinet, while many earlier computers had used multiple cabinets just for what is now called the CPU. In the IBM 704, not only were the peripherals in separate cabinets, but so was the (magnetic-core) main memory. So the CPU cabinet contained nothing else.
The term "processor" appeared later at some IBM competitors, who used terms like "central processor" or "data processor" instead of IBM's "central processing unit".
Burroughs might have used "processor" for the first time, in 1957, but I have not seen the original document. Besides Burroughs, "processor" was preferred by Honeywell and Univac.
The first use of "multiprocessing" and "multiprocessor" that I have seen was in 1961, e.g. in this definition by Burroughs: "Multiprocessing is defined here as the sharing of a common memory and all peripheral equipment by two or more processor units."
While "multi-tasking" was coined only in September 1966 (after IBM's PL/I had, in December 1964, chosen the name "task" for what others called a "process"), the same concept had previously been named "multiprogramming", a term already in use in 1959 to describe IBM Stretch. ("Multitasking" was an improved term, because you can have multiple tasks executing the same program, while "multiprogramming" incorrectly suggested that the existence of multiple programs is necessary.)
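The distinction is easy to show with a modern analogy (nothing to do with the 1960s systems themselves): several tasks all executing the same program. A minimal Python sketch, using threads as stand-ins for tasks:

```python
import threading

results = []

def worker(task_id):
    # Three tasks (threads) all execute this one program:
    # "multitasking" describes the situation accurately, while
    # "multiprogramming" wrongly implies one program per task.
    results.append(task_id)

threads = [threading.Thread(target=worker, args=(i,)) for i in range(3)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(sorted(results))  # [0, 1, 2]
```

One program, three tasks: exactly the case "multiprogramming" failed to name.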
She had one CPU she worked on where you could change its instruction set by moving some patch cords.
One obvious solution to this problem was to buy an Ethernet network device for the mainframe (which used Token Ring), but that was yet another very expensive IBM product. With that device, we could have simply compressed and decompressed the files on any standard PC before transferring them to/from the mainframe.
Another obvious solution was to write a basic compression and decompression tool in C. However, C wasn't available, and buying it would have been expensive as well!
So we developed the compression utility twice (for performance comparisons), in COBOL and in REXX. These turned out to be two amusing projects, as we had to handle bits in COBOL, a language never intended for that purpose.
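The comment doesn't say which compression scheme they implemented. As a purely hypothetical stand-in, here is a minimal run-length encoder in Python; an algorithm at roughly this level is the kind of thing one could also express, far more laboriously, in COBOL or REXX:

```python
def rle_encode(data: bytes) -> bytes:
    """Minimal run-length encoding: (count, byte) pairs, runs capped at 255."""
    out = bytearray()
    i = 0
    while i < len(data):
        run = 1
        while i + run < len(data) and data[i + run] == data[i] and run < 255:
            run += 1
        out += bytes([run, data[i]])
        i += run
    return bytes(out)

def rle_decode(data: bytes) -> bytes:
    """Inverse of rle_encode: expand each (count, byte) pair."""
    out = bytearray()
    for count, value in zip(data[::2], data[1::2]):
        out += bytes([value]) * count
    return bytes(out)

sample = b"AAAABBBCCD"
assert rle_decode(rle_encode(sample)) == sample
```

RLE pays off on the space-padded fixed-width records typical of mainframe datasets, which is one reason even a naive scheme could shrink those transfers.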
Circa 2002 I’m a Unix admin at a government agency. Unix is a nascent platform previously only used for terminal services. Mostly AIX and HPUX, with some Digital stuff as well. I created a ruckus when I installed OpenSSH on a server (Telnet was standard). The IBM CE/spy ratted me out to the division director, who summoned me for an ass chewing.
He turned out to be a good guy and listened to and ultimately agreed with my concerns. (He was surprised, as mainframe Telnet has encryption) Except one. “Son, we don’t use freeware around here. We’ll buy an SSH solution for your team. Sit tight.”
I figured they’d buy the SSH Communications software. Turned out we got IBMSSH, for the low price of $950/cpu for a shared source license.
I go about getting the bits and installing the software… and the CLI is very familiar. I grab the source tarball, and it turns out this product I'd never heard of was developed by substituting the word "Open" with "IBM". To the point that the man page had a sentence that read "IBM a connection".
On the subject of expensive mainframe software, I once got to do the spit take of "you are paying how much for a lousy FTP client? Per month!" I think it was around $500 per month.
Man open source software really has us spoiled.
$200 back in 1980 is about $800 today. Amazing to think anyone would spend that much for a fairly simple tool.
I would have thought in the (IBM) mainframe world, PL/I (or PL/S) would have been the obvious choice.
Telephone systems used main distribution frames, but these are unrelated. First, they look nothing like a computer mainframe, being large racks of cable connections. Second, there isn't a path from the telephony systems to the computer mainframes; it would be plausible if, for instance, the first mainframes had been developed at Bell Labs.
As for Colossus, it was built on 90-inch racks, called the J rack, K rack, S rack, C rack, and so forth; see https://www.codesandciphers.org.uk/lorenz/colossus.htm
So it's entirely possible that somebody from the telephone industry decided to borrow a term of art from it for computing.
19-inch racks seem to come from railroad interlocking and may have been introduced by Union Switch and Signal, founded by Westinghouse in 1881 and still around as a unit of Ansaldo.
(the mainframe song, uncertain of its background)