V7/x86 is a port of the Seventh Edition of the UNIX operating system to the x86 (i386)-based PC. UNIX V7 was the last general distribution (around 1979) to come from the Research group at Bell Labs, the original home of UNIX. The port was done mostly around 1999, when “Ancient UNIX” source code licenses first became available, and was revised for release, with some enhancements, during 2006-7.
The distribution includes the full UNIX Version 7 operating system, with source code, pre-built binaries, man pages, and original Version 7 documentation. Also included are a custom UNIX-style x86 assembler, an ACK-based C compiler, and several key early UCB software components such as the C shell, the editors ex and vi, and the pager more.
I’m inclined to try and run this virtually, to see just how bastardised and messy UNIX has become in our current UNIX derivatives.
“True UNIX?”
And I am not even sure that definition is correct, given Research UNIX. But mostly it feels like some kind of weird “no true Scotsman” definition.
jockm,
+1
I think that Linus did intend for linux to be a real unix clone, although obviously he didn’t work alone and the GNU tools he used are largely responsible for the unix environment on linux. As far as the kernel goes, I don’t think it’s as relevant to the end product although system calls are heavily influenced by the POSIX standards (insofar as Linus was able to derive them without paying for them).
Sure, but I was also referring to the fact that versions 8-10 were released to at least some universities. On top of that, it denies the authenticity of the original BSD team, which created a lot of what we think of as Unix. The v7/x86 project being pointed to as supposedly orthodox contains ex and vi, which were both created at Berkeley.
But I also don’t understand why System III and System V aren’t real. Is it because they were commercial, because they weren’t produced by Bell Labs, etc? The author doesn’t say, but somehow implies that only Bell Labs — which was a research facility — could produce “real” unix. Did Unix “sell out” by going commercial and becoming popular?
“True Unix” is perhaps just an editorializing clickbait title? Given that there are a bunch of articles about BSD on this same front page.
Maybe the last of the original Unix? Things like BSD took hold right after this distribution was released, and moved/morphed the system into a more complete OS which was outside of the scope of the initial research project at Bell Labs.
I have no idea if the author meant that though.
I mean the only difference between v7 and System III (aside from features) is that one was developed by one division of AT&T and the other by a different one, and that commercial licensing didn’t cost $20K a seat (v7’s commercial license cost). And while v8-v10 Unix weren’t widely distributed outside of Bell Labs, at least some universities had them; most preferred BSD IIRC because it was what Berkeley used and contained their enhancements. For example, BSD had virtual memory before Bell Labs did.
It looks like Thom is responsible for the “True Unix” comment, and it would be nice if he explained his logic on the orthodoxy.
I have used and developed for v7, System III, System V, BSD, and even Multics in my time. I don’t know anyone who considered one more True than the others… except some BSD guys trying to get a rise out of newbies from time to time.
“Maybe the last of the original Unix?”
That’s how I’d interpret it, which I think is reasonable enough. Essentially UNIX as a tangible product rather than a certification. As in The UNIX Operating System vs. a UNIX operating system, the former being the original product and the latter being SysV and BSD.
The thing to notice is how small the core kernel C code is, maybe a couple hundred KB. This could make the subject matter for an elective one-semester (or even shorter) course in CS at the upper undergraduate level, assuming the students have already taken a first course in operating systems. Obviously it’s missing lots of stuff we expect from a modern OS, but covering process, memory and file management as implemented in the original Unix system would be a valuable experience for some.
Honestly, “old” projects are often not the best example of how to do things, mainly because they tend to be very byzantine, super-optimized pieces of code, with too many compromises, system-specific hacks, and side effects, to the point that the initial concept/idea gets very compromised by being so implementation-specific.
I’m sure there are hacks, but I’m guessing there are far fewer per KLOC than is usually the case, because this had under 10 years’ gestation in Bell Labs by the people who won the Turing Award for it, and they had far fewer pressures to release stuff for competitive reasons, to land a big customer account, etc. Lions published a commentary on Unix v6 along with the source code, which is still in print although a bit pricey (Jeff sells it for $40). Maurice Bach wrote a textbook on AT&T Unix System V R2, which Torvalds acknowledges was a good resource for what he was working on. Incidentally, both of those might make good textbooks for this imaginary undergraduate seminar.
astro
I don’t think the issue is the quality of the people who wrote the kernel, but rather the context of the time and place when this product was developed. It’s not just the kernel that is ancient; the entire toolchain is very old as well.
I used to teach at uni, and let me tell you: for teaching you want stuff that is as current and KISS as possible. You need projects with an active ecosystem and support system. The last thing you want is for one of your students to trigger some old bug.
You also need projects that are relatively current with some of the more fundamental advancements that have happened in the past 40 years.
However, this is a great tool for more advanced students (maybe at grad school level) who are interested in historical artifacts or digital archeology.
I personally wish Unix had never left the lab where it was created… 🙂
javiercero1,
Modern OS projects can face those problems too. Today’s operating systems tend to be very bloated and also have lots of system-specific hacks. It may be getting worse: originally hardware vendors attempted to be compatible with one another’s popular devices, such that an operating system with only a couple of drivers could realistically cover large swaths of hardware (VGA compatible, NE2K compatible, Sound Blaster compatible, etc). Nowadays operating systems require a lot more code & drivers to support hardware. And although we do create abstractions in the OS, some of these abstractions kind of missed the mark (the termio subsystem under Linux is awfully complex & ugly) and over time new technology (i.e. offload engines) can come in and break your clean abstractions.
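To make the termio gripe concrete, here’s a rough sketch of what just switching a terminal to raw mode looks like through the modern termios interface (make_raw and the exact flag set here are my own choices, purely for illustration, not any canonical recipe); on V7 you basically set the RAW bit in sg_flags and were done.

#include <termios.h>

/* Illustrative only: roughly what "raw mode" takes under modern termios.
 * Every flag here is one more knob the abstraction has accumulated. */
int make_raw(int fd)
{
    struct termios t;

    if (tcgetattr(fd, &t) == -1)
        return -1;

    /* Input: no break-to-signal, no CR->NL mapping, no parity check,
     * no 8th-bit stripping, no XON/XOFF flow control. */
    t.c_iflag &= ~(BRKINT | ICRNL | INPCK | ISTRIP | IXON);
    /* Output: no post-processing. */
    t.c_oflag &= ~OPOST;
    /* Control: 8-bit characters. */
    t.c_cflag |= CS8;
    /* Local: no echo, no canonical mode, no extended input, no signals. */
    t.c_lflag &= ~(ECHO | ICANON | IEXTEN | ISIG);
    /* read() returns as soon as a single byte arrives. */
    t.c_cc[VMIN] = 1;
    t.c_cc[VTIME] = 0;

    return tcsetattr(fd, TCSAFLUSH, &t);
}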
To boil it down into a single semester as in astro’s comment, you’ll have to either gloss over tons of bloat, or pick an OS project that doesn’t have so much bloat in the first place. It’s hard to say which is better… On the one hand I’d feel a greater sense of accomplishment by learning everything there is to know about a straightforward kernel. But on the other hand, in the real world most developers will encounter situations where they are responsible for making relatively small changes to a huge project they have not fully reviewed. As such, maybe it makes more sense to get accustomed to huge bloated code bases in preparation for the real world.
What you pointed out makes perfect sense, which is why there have been lots of academic kernels released as teaching tools: they isolate a lot of the “academic” concepts and shield students from the “reality” of a commercial product.
The last thing you want is to have your students losing their minds dealing with ancillary stuff not important to the concepts being discussed. Especially with stuff like bloat, it’s hard for a neophyte to sift through what’s important and what’s not when they don’t have much of a background in the subject.
https://en.wikipedia.org/wiki/Xv6
Xv6 is probably a better candidate for this kind of thing. It’s a simple, but modern, implementation of V6.
Oh, it’s already been done, basically. I had no idea.
Andy Tanenbaum: “When AT&T released Version 7, it began to realize that UNIX was a valuable commercial product, so it issued Version 7 with a license that prohibited the source code from being studied in courses, in order to avoid endangering its status as a trade secret. Many universities complied by simply dropping the study of UNIX, and teaching only theory.” [Operating Systems (1987), p. 13]
And thus MINIX was born.
“to see just how bastardised and messy UNIX has become in our current UNIX derivatives”
Care to elaborate on which UNIX derivative is bastardised and messy?
So who exactly is the target audience for this?
OS researchers, OS nerds, and the curious. A lot changed between v7 and SVR[2-4], all of which made Unix the modern operating system it was.
Around the SVR4/SVR5 era the Single Unix Specification came along, which meant that any OS complying with the SUS could be branded UNIX even if it didn’t share the same codebase. Thus a Linux distribution can be certified Unix (and a couple have been) despite having a completely separate codebase.
And of course BSD started as a fork of v7 Unix and expanded the implementation (for example TCP/IP), and it eventually became its own distinct codebase after they replaced all the original Bell Labs code.