UNIX was a terrific workhorse for its time, but eventually the old nag needs to be put out to pasture. David Chisnall argues that it’s time to retire UNIX in favor of modern systems with a lot more horsepower. “UNIX has a lot of strengths, but like any other design it’s starting to show its age. Some of the points listed in this article apply less to some UNIX-like systems, some apply more.”
– Author lists all strong points of unix systems
– Author proves that he either knows too little about each point or has misunderstood it
– Author lists possible alternatives, based on his misconceptions.
’nuff said.
yawn…
“- Author proves that he either knows too little about each point or has misunderstood it ”
I would like to add one more statement which has been shamelessly stolen:
” Those who do not understand Unix are condemned to reinvent it, poorly.”
May I propose a modern version:
“Those who do not understand Unix are condemned to invent something vastly better.”
Come on, Unix is not the shoulders of giants.
Good points all.
I think it would be interesting to list the 10 things I hate about Windows:
1) Microsoft is too f–king stupid to create an installer capable of loading additional drivers from anything but an obsolete floppy disk.
2) Microsoft is too f–king stupid to create a 64-bit OS that recognizes SATA drives.
3) Microsoft is too f–king stupid to create an OS that can work correctly when other OSs exist on the same machine (e.g. locking up at the login screen when using XOSL bootloader – we got this far, but damn, we just can’t figure out how to display the f–king login prompt – Assholes!)
4) Microsoft is too f–king stupid to include drivers for even the most common of commodity hardware with their 64-bit OS (e.g. Audigy drivers).
5) Microsoft has created one of the biggest and most pointless wastes of time since D&D in their Windows operating system.
6) Malware
7) Spyware
8) An embedded media player that supports pop-up ads
9) Lack of security
10) The inclusion of userland tools in the kernel.
There. There’s my list of 10 things I hate about Windows, Microsoft, Bill Gates, and all his relatives.
Can you tell I’ve been on a Windows box today? Good!
Most of your points are either not MS’ responsibility, not specific to Windows, or incorrect.
1) Windows setup currently can load drivers from other media besides floppies (one method is via unattended installation). Vista’s installer makes this directly accessible via the standard graphical setup.
2) The OS has no problems recognizing SATA drives depending on how the motherboard exposes them. Usually if the support is provided via a chipset other than the main controller, third-party storage drivers are required. For example, my nforce has 8 SATA ports — 4 native, 4 via an on-board Silicon Image controller. Only the Silicon Image ports require third-party drivers.
3) Try a different loader.
4) In-box drivers have to meet WHQL requirements and it’s up to the OEM to submit their drivers during the beta period for inclusion in the final product.
5) Purely subjective.
6 & 7) Other OSes have the same issues.
8) You can disable it.
9) Needs better defaults, but when used correctly it’s no less secure than anything else.
10) There are no userland tools in the kernel.
re 1: and which home user does unattended installs? and Vista is where?
re 3: that doesn't disprove the fact.
re 1: and which home user does unattended installs? and Vista is where?
His assertion wasn’t about a certain class of users, it was that the Windows installer didn’t include the functionality at all. His assertion was false.
1) What home user does unattended installs ya douche.
2) It most certainly doesn't recognize them unless you have a 3rd-party driver. How lame is that? Install any BSD or Linux system and guess what, your SATA drives are recognized automatically.
3) That doesn’t negate what I said.
4) Don’t have those problems with Linux, BSD, hmmm, come to think of it, with ANY other OS. Still doesn’t negate what I said.
5) No. I think it is an undeniable fact. I think you're a Windows shill though.
6 & 7) Name some.
8) Why the hell should I have to? It shouldn’t be there in the first place.
9) Bzzzzt! Wrong.
10) Well, you'd know better than I, Mr. Gates.
We’re talking about operating system design… You know what that is? Go play Quake 4.
Did I insult your favorite OS? I’m sorry. I’d console you, but I’m not there. Why don’t you walk upstairs and cry on your mom’s lap for a while. I’m sure you’ll feel better.
LILO and GRUB work for me.
They work wonderfully for me too. Unfortunately, my client required XOSL. If it weren’t for them, I wouldn’t be using Windows at all, filthy malware infested lump of poop that it is.
Well, VMS does not run well on x86, so it’s a bit stupid to retire the only working system…
Maybe RiscOS? Except that has no virtual memory, multithreading, or multi-user support.
My guess would be one of the BeOSen
Of course Unix is old, but man, the article pointed out things that are far from being the worst problems of Unix.
If you want to know the real problems behind Unix, and why Unix sucks (and why Windows and/or Mac OS X suck too, because they share some of those Unix defects), you might want to read about Plan 9, which was the OS that the Unix guys wrote to fix Unix:
http://herpolhode.com/rob/ugly.pdf
http://plan9.bell-labs.com/sys/doc/9.html
If you want to know the real problems behind Unix, and why Unix sucks (and why Windows and/or Mac OS X suck too, because they share some of those Unix defects), you might want to read about Plan 9, which was the OS that the Unix guys wrote to fix Unix
Can I get a synopsis of what they fixed from Unix? Also, if Plan9 is so good, why aren’t more people using it?
I’m not doubting you, just honestly asking.
Can I get a synopsis of what they fixed from Unix? Also, if Plan9 is so good, why aren’t more people using it?
I’m not doubting you, just honestly asking.
The reason more people are not using Plan 9 is the same reason people use Windows instead of OS X, or OS X instead of BeOS, etc.: lack of momentum means less software and fewer programmers, lock-in to old systems, and the (perceived) cost of transition. That, and the performance isn't proven or on par with the highly tweaked BSDs/Linux.
Nice introduction:
http://en.wikipedia.org/wiki/Plan_9_%28operating_system%29
Also, if Plan9 is so good, why aren’t more people using it?
If Windows 3.11 was so bad, why did it rule the world?
Answer: the best technology doesn't always win; the best sales department does.
the question was mostly rhetorical but you have a very good point there…
another thing is that it came preinstalled on the computer when bought. and at the time, alternatives were not easy to get, unlike now when you can just download them off the net
that .pdf was interesting. thanks.
And Jimmy Page had bands after Led Zeppelin. Who cares?
What are those alternative OSes the author is so keen about?
A couple of the points he mentions are addressed by Plan 9, and to a lesser extent by Mac OS X (Unix derivatives, btw; who was going to be retired, you said?) but… any other OS? Anyone? Or are we just talking about wishlists for the new uber-OS that will solve all of our problems and dominate the world?
Plan 9 source is also out there; maybe the move is just too radical for some people to stomach, so it's the old case of sticking with the devil one knows rather than taking a risk on the devil that no one knows.
On paper Plan 9 looks like a great OS, but in production, can it stand up on its own merits in practical, real-world terms?
What all is ported to plan9 though…. I’m not about to give up a nice DE so I can have media streams or meta-data filesystems…
Is anyone still working on Plan 9 in a major way? Last time I saw it, X11 was ported and you could run something that looked vaguely like twm and a couple of apps written to show off a Motif-looking toolkit…
You’ve got the requirements right…now go write a better OS
It was refreshing to read a complaint about X11 that actually had some value!
If this is such a huge problem, I think it would be possible to transform X11 (X12?) in a compatible way. This NeWS system he mentions sounds a little like Cairo. Okay, clearly I haven't got enough knowledge of X11 internals to have an opinion, but nonetheless, would this be feasible?
I believe that complaint was long known. But I think now that the modular version of X11 is almost finished (see x.org) things can improve much faster and this issue will also be dealt with. (can’t wait until X11R7 is available in the FreeBSD ports)
In my opinion, to put more intelligence in the display you have to dump X, which isn't going to happen anytime soon.
The evolution of the toolkits even does the opposite: putting more intelligence on the remote station than on the local server. Fonts are now computed by the X client (the remote one) instead of the server.
I think it's weird: the latency won't ever be good if you have round trips all around instead of doing it 'NeWS style'.
That said, each attempt to replace X with something different has died (Berlin/Fresco, Y Window, etc.), so by default X will stay around for a long time.
There are many more problems with Unix…
My thoughts exactly. A link!
http://web.mit.edu/~simsong/www/ugh.pdf
This book is no longer relevant. UNIX (and by that I really mean Linux and BSD) has changed too much in the last 11 years for these rants to be valid. The authors, as accomplished as they may have been as programmers, knew nothing of the world of computing as it is now. Spyware was nonexistent in 1994.
For me, the file system, and indeed the variety of *NIXes out there, are a huge benefit, not a hindrance. Keep in mind, this book was published at the dawn of Microsoft's evil plan to take over the computing world. There were still several DOSes out there (Digital, Compaq, IBM, and Microsoft), plus replacement command-line interpreters. Linux, KDE, and Gnome didn't exist, and by anyone's calculations they have revitalized an OS that the book's authors complain should be axed. The amount of effort that has gone into refining *NIX and X11, plus the now widely recognized superior file system of *NIX, leads me to conclude that the book is nothing but a rant, and that *NIX is here to stay, warts and all.
And in the end, any operating system that is immune from malware should NEVER be retired.
no os is immune. that is, unless every app that connects to the outside world is running inside a sandbox that doesn't allow any other app to be started inside it.
and even then you don't know if a potential flaw in the sandbox may allow the app to launch something outside of the sandbox.
the only diff between *nix and windows is that while in windows you're by default admin/root, in *nix you're by default a limited user. still, malware may well run quite nicely inside your user area. only that it will be simpler to clean out: create a new user, move your data over, purge the old user.
and never forget, right now windows is the biggest target. what happens when your fav *nix becomes the biggest target? it's not without reason that computer security borrows terms from biology. in both areas a monoculture can be bad for survivability. one fast-spreading, deadly virus and presto, extinction…
the only way to really be sure your computer doesn't pick anything up in the near future is either to go with something like skyos or syllable, or write your own os
Ok, let's assume, for a moment, that Linux or BSD become popular enough to spawn hackers writing malware for them. At least the stupidity of the average user in downloading and installing malware wouldn't bring down the entire system. Besides, as long as there's a root password, the last time I checked it would be impossible to install the vast majority of software on *NIX. The one exception I can think of is OpenOffice.org, which will install in the user's home folder. That still leaves /bin and /sbin protected from attack. I'll take that scenario any day over the ridiculously poorly executed file system of DOS/Windows/OS2.
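To make that concrete (a minimal sketch; the installer name here is made up, and the exact error text varies by system):
$ touch /bin/evil                              # a normal user poking at a system dir
touch: cannot touch '/bin/evil': Permission denied
$ sh ./installer.sh --prefix="$HOME/apps"      # a per-user install into $HOME still works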
but in the end it's the user's data that's important. although one could in theory set up one account to run any net-related apps from and another to run everything else from, then use sudo or similar to launch these via shortcuts so that one never notices the difference until one needs to save a file from the net.
thing is that true malware doesn't need to be installed. it only needs to be downloaded and executed, at worst extracted.
the one real strength i can see about *nix systems in this regard is that unless you specifically say so, no system will deal with a file as if it's an executable file. therefore you can't trick a user into firing up a foo.jpg.sh file the way you can fool a windows user into double-clicking a foo.jpg.bat file.
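for example (foo.jpg.sh is just a stand-in here; a plain browser or wget download doesn't get the execute bit):
$ ./foo.jpg.sh
bash: ./foo.jpg.sh: Permission denied          # not executable until you say so
$ chmod +x foo.jpg.sh && ./foo.jpg.sh          # it only runs after this explicit step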
but malware that's installed via an exploit vector by suppressing download dialogs and similar can still happen. only that currently you can't expect every *nix user to use the same browser. some will use firefox, some will use konqueror, some will use opera, and some may even use something as obscure as w3m or links on the console.
again, it's the diverse digital biosphere that may save *nix from the pitfalls of windows. funny then that this is often the exact thing people complain about on *nix: too much choice. distros come preinstalled with 3-4 browsers, 5-6 desktops/wms and so on…
1. i must say i prefer the everything-is-a-file thinking that powers unix. why? because when you know this, then everything inside the computer is exposed inside the file system. this is unlike, say, windows where you have to use the gui for everything, even when the gui drivers fail. (a couple of examples at the end of this comment.)
2. a very silly example i must say. sure it makes a point but it's a very limited one.
3. sorry, but the zsh behaviour is an exception, not the norm. don't expect it to work every time. still, the number of commands out there that don't respect --help or -h is very small in my experience…
4. XN anyone? was x11 ever designed to be used outside of a lan environment? oh, and with the increase in bandwidth on both wired and wireless external connections, where really is the problem?
5. sorry, but what's being attempted there is something the pipes were never designed to do. therefore it naturally fails. i don't see how this is a flaw of the pipe system; it's more like trying to force a square peg into a round hole and then claiming it's the hole that's at fault, not the user.
6. – 10. this is out of user space and into program space, therefore i'll stay away from those.
in the end it comes down to evolution rather than revolution. take what works and then replace what does not. this is what's happening on both the bsds and linux, and is a great strength in my view.
still, i wonder: if plan9 is so great, why has it not become the baseline for every os out there? it seems to show up only in a very small group of dedicated users. and how old is it? isn't it older than both the gnu hurd project and most bsds and linux? what is going on?
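a quick taste of what i mean by exposed in the file system (linux paths, assuming procfs is mounted in the usual place):
$ cat /proc/cpuinfo                        # hardware info, read like any other text file
$ cat /proc/loadavg                        # live kernel state as a file
$ echo 1 > /proc/sys/net/ipv4/ip_forward   # a kernel tunable, written like a file (as root)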
4. … was x11 ever designed to be used outside of a lan environment? oh, and with the increase in bandwidth on both wired and wireless external connections, where really is the problem?…
You got it a bit wrong. I agree that the technical resources we have access to are constantly expanding, but even though I may have 1G of RAM, a dual-core 3GHz CPU and a 10M link to the internet, that does NOT mean that every little bastard of a program should feel free to use up most of my RAM, CPU time or network link just because the author didn't want to optimize its work process, didn't know how to, or just plainly thought "why should I give a rat's *** when everybody has 1G of RAM anyway". Not to mention the people who do not have access to such resources. It just doesn't cut it.
ok, so it was thoughtless of me, but i have a bad habit of replying like with like at times (makes me one nice target for flamers and trolls).
thing is though that it's being worked on, and the problem wasn't really an issue over the lan networks of old. it's only really if you try to stream X over, say, a modem connection that you run into problems these days.
still, i would like to find a link to the XN system. from what i understand it takes care of most of the problems with X over low-bandwidth connections while still being compatible.
oh, and let's not forget VNC
Who needs 1GB of RAM for X? I've successfully run it on as little as 16MB and still run applications on top… I suspect you need to learn how to interpret X's reported memory consumption.
1. i must say i prefer the everything-is-a-file thinking that powers unix. why? because when you know this, then everything inside the computer is exposed inside the file system. this is unlike, say, windows where you have to use the gui for everything, even when the gui drivers fail.
Plan 9 does it better though.
4. XN anyone? was x11 ever designed to be used outside of a lan environment? oh, and with the increase in bandwidth on both wired and wireless external connections, where really is the problem?
Saying an increase in bandwidth means it's not a problem is like saying KDE/Gnome/Windows' bloat is not a problem because machines are getting faster. The UNIX world has been suffering under X for too long already.
5. sorry, but what's being attempted there is something the pipes were never designed to do. therefore it naturally fails. i don't see how this is a flaw of the pipe system; it's more like trying to force a square peg into a round hole and then claiming it's the hole that's at fault, not the user.
ERGO, the pipe system is outdated.
“Plan 9 does it better though.”
maybe so, but i wonder what will come of FUSE for linux. i have a feeling that someone will write some plugins that can do much of the stuff i have read plan 9 does.
"Saying an increase in bandwidth means it's not a problem is like saying KDE/Gnome/Windows' bloat is not a problem because machines are getting faster. The UNIX world has been suffering under X for too long already."
kde/gnome bloat isn't a problem as i can always go use blackbox or maybe fvwm. still, it was in part aimed as a joke. throwing more bandwidth at it does not help, but i can't at the moment find a good link to the NX system. still, i think a next-gen X protocol may well help. then write a setup that understands both types of protocols. ie, let more of the window stuff happen on the user end. let's evolve the X idea rather than starting over from scratch. most of the X stuff works. it's only really the network part that's a problem. hmm, i wonder what will come out of x.org in the future…
"ERGO, the pipe system is outdated."
no, it's working just fine if you're using it for what it was intended to do. it was never intended to do real-time video and audio decoding. what he is trying to do is force the internal workings of, say, windows media player into the file system layer and then claim the file system layer is broken when it blows up. sorry, but maybe i should try to force a txt document into wmp and claim it's broken when it refuses to read the file?
no, it's working just fine if you're using it for what it was intended to do. it was never intended to do real-time video and audio decoding. what he is trying to do is force the internal workings of, say, windows media player into the file system layer and then claim the file system layer is broken when it blows up. sorry, but maybe i should try to force a txt document into wmp and claim it's broken when it refuses to read the file?
Bingo.
Many of the complaints about streams seem a bit odd to me. There is nothing to prevent you from writing a program that takes input from multiple stream sources and delivers output to multiple output streams. The existence of a simplified stdin + stdout + stderr model does not prohibit the creation of other models when needed, and some examples are quite old: LaTeX produces formatted output, errors and index information.
Likewise, ls was designed to deliver useful output to a text-based terminal. If you want structured information about files, there are multiple other ways to query the file system that can deliver a structured list, that can be processed however you wish.
Just because an operating system makes POSIX interfaces available, does not mean that you are limited to just POSIX interfaces.
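For instance, the shell will happily wire up extra descriptors beyond 0/1/2. A small sketch (stat -c is the GNU spelling; BSD stat spells it differently):
exec 3>sizes.tsv                                       # open an extra output stream on fd 3
for f in *; do
    printf '%s\t%s\n' "$(stat -c %s "$f")" "$f" >&3    # structured, tab-separated data goes to fd 3
    echo "looked at $f"                                # human-readable chatter stays on stdout
done
exec 3>&-                                              # close the extra stream
sort -n sizes.tsv                                      # consume the structured stream separately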
KDE/Gnome aren’t a problem. I can run them well on my 184MB Celeron 700MHz… I’m not kidding, I’m punching this into konqueror on it as we speak. Why, just last night I was running yast (now there’s an actual bloated piece of software) to install something while browsing a couple of websites…
Sure, the machine picks up a little bit in windowmaker, but not very much. You can’t have everything.
And X11 is the only reason there are alternatives to KDE/Gnome. Had KDE and Gnome had to develop their own graphics solutions, you wouldn't see very many alternative window managers.
I often wonder if people complaining about X11 actually use it… I'm just not seeing the slowness.
4. XN anyone?
Do you mean NX? If so, the site you’re looking for is http://www.nomachine.com/.
Damn, forgot the second link, for the OS version:
http://freenx.berlios.de/
ah, no wonder i could not find it. thanks
I was thinking of a compatible way to evolve files into objects, by learning a few things from the HTTP design: the stuff about representational state transfer and content negotiation.
As long as we are operating locally we don't need caches and/or bandwidth considerations, so instead of state transfers we talk about representational interface negotiation.
The basic idea is that a program accessing a file negotiates with the operating system for a suitable interface to it. The most basic interface (and the default) would be the standard POSIX interface. But it would be possible to get an interface more suited to the data manipulations one has in mind.
If Linux is a UNIX-like OS, and Unix is old and needs to be retired according to the article, well, that alone should predict Linux's outcome.
You’re a troll, and I shouldn’t really reply to your very crappy FUD post, but I will. Firstly, Linux is inherently more secure than Microsoft Windows. It also scales much better than Windows. It also supports far more platforms than Windows. So, if Linux should be retired, then what does that say for Windows? BTW, what operating system do you use? Windows? You’re definitely in big trouble then 😉
Dave
PS I wasted a mod point on modding down your FUD despite my better common sense…
I think he has some strong arguments. And I agree with him.
this guy knows absolutely nothing about Unix except a few key phrases (what do you call a sound bite in print?). This is far below the standard of the Unix Haters Handbook. Both air legitimate issues (and Unix, like all OSes, has many areas that could have been done better), but Chisnall constantly displays such astonishing ignorance that he can't be taken seriously.
I won’t bother to go through and laugh at all 10 arguments (why should I put in more effort than he did?) but consider:
– one big weakness he finds is Unix's reliance on text streams, largely (as best I could figure out) because they don't support Unicode. His proposed solution: switch to binary data! Aside from text being binary data, I consider this one of the major strengths of Unix. If you want "binary data", go play on Windows, with their atrocious mess of strung-together and interlocking COM interfaces. I'd rather have text, in Chinese.
– Of course, we have the usual objections to C as being “out of date,” and as we all know, C and Unix are one and the same thing. Chisnall really doesn’t understand this subject. There are great alternatives available for application programming if you prefer them, but just about every current OS is written in C (for good reason) with no viable competitor in sight. Until that changes (and all OS’s are rewritten), OS API’s will be exposed in C and a lot of people will choose to use C/C++ to program them. I’m sorry if that causes him pain!
The one thing I’ll give him is that he doesn’t come out and say we all need to switch to Windows. (And since I don’t consider Windows a “modern system” in any good sense, I don’t consider it implied.)
Just as you know nothing about C and C++, otherwise you would not have grouped them together as in "C/C++". Again, you know nothing about common object models either. You are obviously a fanboi who is afraid of change and unable to be self-critical about your previous OS.
Just as you know nothing about C and C++, otherwise you would not have grouped them together as in "C/C++"
What? C and C++ are essentially different dialects of the same language. C++ inherits features from C, Algol, and Simula 67, but it was written as a superset of C. In fact, I argue that if Scheme and Dylan can be considered dialects of LISP, then C++, Java, and C# can be considered dialects of the C family (or even the Algol family, which is much (!) larger. I will note here that Java and C# are syntactically like C, but not much like C under the hood. C++, on the other hand, although less like C than in the early days, is still very much a C language.). I think the grandparent poster may have understood more than you believe.
What about LISP machines? What about House (http://www.cse.ogi.edu/~hallgren/House/)? What about SqueakNOS (http://minnow.cc.gatech.edu/squeak/1762)? These are operating systems (in the general sense of the term) that are built upon languages that are definitely not in the C family. Maybe the point of the grandparent in grouping C and C++ together was to point out that creating an OS in C++ really isn't much different from creating an OS in C, as far as implementation language goes.
one big weakness he finds is Unix’s reliance on text streams, largely (as best I could figure out) because they don’t support unicode. His proposed solution: switch to binary data! Aside from text being binary data
Of course it is, but ls, sort and other dumb UNIX tools don't understand them unless they are ASCII.
If you want “binary data” go play on Windows, with their atrocious mess of strung together and interlocking COM interfaces. I’d rather have text, in Chinese.
Yes, go and play with MSH and its superior object-oriented semantics. You probably have no clue how much more powerful making ps list System.Diagnostics.Process objects instead of some crappy text is.
Of course, we have the usual objections to C as being “out of date,” and as we all know, C and Unix are one and the same thing. Chisnall really doesn’t understand this subject. There are great alternatives available for application programming if you prefer them, but just about every current OS is written in C (for good reason) with no viable competitor in sight.
Almost all public Windows user-mode libraries are written in C++ and based on COM; even some parts of the kernel are written in C++ (USER/GDI inside win32k.sys). On Windows and Mac OS you can use C++ (at least a large part of it) to write drivers; on Linux and other UNIXen… you can't. Why? Because Linus has a strong objection to it: he thinks C++ is "bloated". Ugh.
Until that changes (and all OS’s are rewritten), OS API’s will be exposed in C and a lot of people will choose to use C/C++ to program them. I’m sorry if that causes him pain!
And they will make crappy programs with tons of strcpy()'s, buffer/heap/integer overflows, format string bugs… why not use C# or C++/CLI or any other managed language and forget these classes of bugs which the UNIX heritage cursed onto us?
Yes, go and play with MSH and its superior object-oriented semantics. You probably have no clue how much more powerful making ps list System.Diagnostics.Process objects instead of some crappy text is.
An object-oriented system like MSH could work very nicely. What’s great about UNIX shell commands and their reliance on text, however, is that everything understands text. APIs may shift and different tools may be designed to work with different object sets, but text is text, and if it’s not quite in the correct format, there are formatting tools to massage it quite easily.
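For example, something like this reshapes ps output with nothing but the usual text tools (the column positions assume a typical ps aux layout):
$ ps aux | awk 'NR > 1 { print $2, $11 }' | sort -k2 | column -t   # PID and command, sorted by name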
And they will make crappy programs with tons of strcpy()'s, buffer/heap/integer overflows, format string bugs… why not use C# or C++/CLI or any other managed language and forget these classes of bugs which the UNIX heritage cursed onto us?
Managed languages are slower to run and use up more resources. I think there are a lot of people with older systems who are thankful their OSes are not running on a VM.
C is a pain in the ass, but I’m not sure object-orientation is the ultimate answer. It’s painful, sometimes, to see some programming tasks crammed into an OO model. A good, low-level, “portable assembler” is required for an OS because it has to run on bare metal, with hardware access, and because at the very bottom of the abstraction stack it has to be the most efficient. C is not ideal for this, but it’s as good as I’ve seen.
> text is text, and if it’s not quite in the correct format, there are formatting tools to massage it quite easily.
Well, if you use only ASCII text for English, sure, but I bet that a huge percentage of shell scripts break when the input is in Chinese UTF-32.
With UTF-8 it probably works better, but before saying "text is text", read the Unicode specification; you'll get a headache!
Text is not as simple as you're suggesting.
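A tiny demonstration of how plain "text" shifts under your feet with the locale (this assumes the en_US.UTF-8 locale is installed; collation rules drive sort order, globbing ranges and so on):
$ printf 'a\nB\nb\nA\n' | LC_ALL=C sort            # byte order: A B a b
$ printf 'a\nB\nb\nA\n' | LC_ALL=en_US.UTF-8 sort  # locale collation, typically: a A b B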
how right you are. Even PHP's multibyte extension is heavily broken in parts. As long as you work with plain ASCII everything is fine. Once you leave that box behind you, you are out in the wild…
C is a pain in the ass, but I’m not sure object-orientation is the ultimate answer.
I’m sure object-orientation is not the ultimate answer.
It’s painful, sometimes, to see some programming tasks crammed into an OO model.
I agree 100%. Isn't it nice to have a language that allows such abstractions when it is appropriate, but doesn't force those abstractions when it is not? Isn't C kind of like that in some ways? No, C doesn't force you to use object-oriented abstractions. It does force you to use a certain set of abstractions though. Now, don't misunderstand me, it is certainly possible to use other models in C, but the language doesn't make it easy, and some models are almost impossible in C. I believe C actually inhibits our understanding of what is conceptually possible in an OS.
A good, low-level, “portable assembler” is required for an OS because it has to run on bare metal, with hardware access, and because at the very bottom of the abstraction stack it has to be the most efficient.
Really? It is? So, a higher level language cannot be made to “run on bare metal”? Why not? I can’t think of any reason why you have to use a “portable assembler” to write an OS.
In fact some people are working on just the thing:
http://tunes.org/
http://unununium.org/
http://common-lisp.net/project/movitz/ (History if full of Lisp Machines)
http://www.cse.ogi.edu/~hallgren/House/
I can hear it now: "But these languages don't perform as well as C." Well, of course, that is true; however, some of these languages come awfully close to C in performance. Really, is it worth giving up safety and protection for a few extra microseconds of performance? For scientific applications it may be, but not for general computing. Think about it. The buffer overflow is a very well documented and understood problem, and has been for a number of years now. Yet the buffer overflow is still one of the biggest security problems of modern computing. C, as a programming language, forces you to think about insignificant issues that you shouldn't have to think about.
“Inefficient software isn’t gross. What’s gross is a language that makes programmers do needless work. Wasting programmer time is the true inefficiency, not wasting machine time. This will become ever more clear as computers get faster.” — Paul Graham (http://www.paulgraham.com/hundred.html)
My whole point is that C, as a language, may be (not is, but may be) hindering our understanding of what an Operating System is and what an Operating System should and can do.
Python performs on the order of 30 times slower than C for repetitive iterative computations with base values (things like calculating prime numbers).
I'm sure with some recompilation genius and magic Python code can be coaxed into similar speed, but by the time you get it there I think you've lost Python's "everything's a reference" model, because that's my best guess as to what makes it so much slower than C.
I think if we want to get away from C we need to get away from C machines. That will probably happen, but not in the next 5 years.
C really is a nice language. It's gotten a bad rap for no good reason. Is it complicated? No. Does its lack of complication make it feel more complicated? Yes.
I do find some of these managed code OS’s intriguing, although I’m still of the opinion that hairy kernel development is not so bad as long as development for the code running on the kernel isn’t hairy. It’s almost like automatic policing for who does kernel development: If you’re not able to clean up your own memory, do we want you writing disk drivers that might destroy our hardware?
Python performs on the order of 30 times slower than C for repetitive iterative computations with base values (things like calculating prime numbers).
What percentage of general computing is doing these sorts of repetitive tasks? For general computing, the bottleneck on a computer is not in computations, but in I/O (network and otherwise). By the way, I wasn’t talking about Python when I said that some of these languages come close to C in performance, Python just happened to be one of the implementation languages of one of the projects I was linking to (see: http://openmap.bbn.com/~kanderso/performance/). Fine tuning the performance of Lisp has been an area of considerable research in the last 40 years. O’Caml is another language that has generally high performance (on par with C++), but it is a fully garbage collected language. (Did I link to an O’Caml project? I can’t remember. Here you go: http://dst.purevoid.org/) The point is that there are other languages out there that offer more safety and flexibility than C, but are not “30 times slower”.
I think if we want to get away from C we need to get away from C machines. That will probably happen, but not in the next 5 years.
Yes, I agree, getting away from C machines would help a lot, but this is probably not something we will see for quite some time. Perhaps one day we will.
> just about every current OS is written in C (for good reason) with no viable competitor in sight.
And every current OS has buffer overflows… There may be a link, don't you think?
There are competitors: I seem to remember that there were some kernels written in Ada, and some are written in variants of C++ (L4, BeOS too I think).
Now be honest: does their viability depend on the merits of the language or on their ability to run existing applications?
I thought so! So it is hardly an endorsement of C…
There are competitors: I seem to remember that there were some kernels written in Ada, and some are written in variants of C++ (L4, BeOS too I think).
BeOS was written in C++, with drivers written in C.
Yee haa! I’ve never been so thoroughly trolled! I feel strangely honored.
For the record, I’m a Mac guy that has to do Windows applications for a living (almost exclusively in C++ the last 11 or so years.) And yes, COM as done by MS is on my short list for “All-Time Worst Implemented and Utilized Idea In Software Engineering.”
I stopped reading somewhere on page 2.
What an idiot, ssh supports tab completion?
Get a real job.
What an idiot, ssh supports tab completion?
What he actually said was that zsh provides tab completion for scp. And it does, I just checked.
Type: scp localfile user@remotehost:
Then press tab. It prompts you for a password and then lists / on the remote host.
So who’s an idiot?
hrmph, it looked like ssh until I copied and pasted it and saw that it said zsh.
He’s still an idiot.
“He’s still an idiot”
Agreed.
Just to point out, it actually says zsh, but the text is almost too small to read properly and zsh does support all the features he talks about.
“in the end it comes down to evolution rather than revolution. take what works and then replace what does not. this is what’s happening on both the bsds and linux, and is a great strength in my view. ”
are you sure? I don't agree with all the points of the author, but it may be necessary for UNIX to be completely replaced/rewritten in the long term in order to achieve new goals. This doesn't mean one has to abandon all the good properties of Unix.
For example, it seems to be nearly impossible to transform a monolithic kernel into a microkernel. Regarding stability, security and (possibly) scalability over multiple nodes, microkernels could be the superior solution. Even MS seems to realize this now (see Singularity), and their new approach stands in stark contrast to the Windows/NT kernel line (which is obviously even more bloated than Unix(-like) kernels, as it includes parts of the GUI).
sure, you can't change a monolithic kernel into a microkernel. but replacing one kernel with another doesn't change the overall unix philosophy. where does it say that a unix has to have a monolithic kernel?
didn't debian start releasing debian hurd distros a year or two ago? most of the distro was basically the linux distro but with a new kernel and some new kernel tools, right?
again, it's evolution, but inside the unix parameters.
to me it sounds like the author wants to throw away unix concepts, something totally different from throwing away code and binaries. ie, replace the code, maintain the ideas.
so in my view plan9 is basically unix, only that it's a unix for the workstation age by the looks of it.
give linux fuse and you will see that some smart coder makes a plugin that allows a user to see and control his processes using a mount under his home dir and so on. this is one of the few things that right now is not exposed as part of the file system and therefore breaks with the basic unix idea of everything is a file.
still, it boils down to what one sees as evolution vs what one sees as revolution.
hmm, it would be interesting to have the different windows of a desktop exposed as files so that one could in theory pipe the content of one over into the other
like say: cat window1 > window2
question is, what would be piped over, and in what sequence…
This guy says that Unix sucks, and next to the summary you get an advertisement for the book "UNIX System Administration Handbook, 3rd Edition".
1) Why put up an advertisement for a Unix book if those systems suck?
2) Why buy this book when you see the technical level of the author of this piece of crap?
First the article complains about how everything in UNIX is a file. Then someone says Plan9 fixes UNIX, and the first article he links says something like ‘Now we’ve made even more stuff into files’.
“Anyone who has used zsh will be familiar with the incredible auto-completion features it provides. For a great many programs, it can auto-complete command-line options.”
He said it, he’s an idiot.
The shell does the autocompletion, not SSH.
He said it, he’s an idiot.
The shell does the autocompletion, not SSH.
Sir, put on your glasses; I read "zsh" there, not "ssh".
Who gives a shit about that guy's view on Unix?
No one.
Let people choose. If Unix is to be retired, and if something better comes along, people will move to it. The same way people moved away from proprietary Unices to BSD and Linux.
No one cares about that guy's view. Whoever he is or thinks he is, he's just a voice and I prefer people to have the choice.
He wants to leave Unix? Have him do so and tell why.
But he should not believe he can talk for everyone but his own self. And people propagating his view should also keep this in mind.
We just don’t care about him or what he’s saying. It’s just nonsense to propagate this waste of bits.
It’s just nonsense to propagate this waste of bits.
Actually it might be important. It’s common for people to not be able to see outside of the box they are used to. I think it is important to be reminded from time to time that there are other ways to do it and that the ones we have might not be the best.
true, but this article is more about replacing one box with another rather than stepping outside of all boxes…
” rather then stepping outside of all boxes…”
errr, that’s impossible anyway from the philosophical point of view… which is a kind of a box, too 😉
true. but still, my point is that he sounds like he wants to throw away unix for windows or something, rather than throw away unix and start from scratch in the thinking.
maybe he should take a look at skyos or syllable?
Absolutely. This is why I’m glad the author is writing these sorts of things even though I disagree with him on probably 60% of his items.
There are things like meta-data in filesystems and more I/O streams in the shell that I'd like to see!
Not to mention some sort of awareness that types of data other than text exist. This is, IMO, probably the worst problem with Unix today: it is only aware that text exists. Simple things like a stronger clipboard in X11 would go a long way toward fixing this for 99% of the people it affects (not many people are going to be worried about demuxing video over std streams, and about three guys will want to send their demuxed stream straight into a video player and audio player).
“Who gives a shit about that guy’s view on Unix?”
Who gives a shit about your view on the article?
“No one.”
Right!
“Let people choose. If Unix is to be retired, and if something better comes along, people will move to it.”
Let people choose. If the article is crap, and if something better comes along, people will read it.
“No one cares about that guy’s view. Whoever he is or thinks he is, he’s just a voice and I prefer people to have the choice.”
No one cares about your view. Whoever you are or think you are, you’re just a voice and I prefer people to have the choice.
“He wants to leave Unix? Have him do so and tell why.”
You want to forget the article? Do so and tell why.
“But he should not believe he can talk for everyone but his own self. And people propagating his view should also keep this in mind.”
But you should not believe you can talk for everyone but your own self. And people propagating your view should also keep this in mind.
“We just don’t care about him or what he’s saying. It’s just nonsense to propagate this waste of bits.”
We just don’t care about you or what you’re saying. It’s just nonsense to propagate this waste of bits.
I just love recursion….
>”Who gives a shit about that guy’s view on Unix?”
“Who gives a shit about your view on the article?”
Who gives a shit about your view on the comment?
>”No one.”
“Right!”
Right!
>”Let people choose. If Unix is to be retired, and if something better comes along, people will move to it.”
“Let people choose. If the article is crap, and if something better comes along, people will read it.”
Let people choose. If the comment is crap, and if something better comes along, people will read it.
>”No one cares about that guy’s view. Whoever he is or thinks he is, he’s just a voice and I prefer people to have the choice.”
“No one cares about your view. Whoever you are or think you are, you’re just a voice and I prefer people to have the choice.”
No one cares about your view. Whoever you are or think you are, you’re just a voice and I prefer people to have the choice.
>”He wants to leave Unix? Have him do so and tell why.”
“You want to forget the article? Do so and tell why.”
You want to be childish over others comments? Do so and tell why.
>”But he should not believe he can talk for everyone but his own self. And people propagating his view should also keep this in mind.”
“But you should not believe you can talk for everyone but your own self. And people propagating your view should also keep this in mind.”
But you should not believe you can talk for everyone but your own self. And people propagating your view should also keep this in mind.
>”We just don’t care about him or what he’s saying. It’s just nonsense to propagate this waste of bits.”
“We just don’t care about you or what you’re saying. It’s just nonsense to propagate this waste of bits.”
We just don’t care about you or what you’re saying. It’s just nonsense to propagate this waste of bits.
heh. waste of bits indeed!!
Most of his whining is about things that can be addressed or are, in one way or another, already done.
And as for everything being a file, well, it is not so bad, as a previous post noted.
If he really wants to develop good criticism, he should follow more closely the criticisms described in the Plan 9 papers (as someone else posted).
Anyway, in the end, as with most refining processes in human activities, it is the old evolutionary method that will prevail, because people are not going to throw away everything they know and make a living with. They will seek improvements. If new things fix the process without being disruptive, wonderful. If the bill is a little adaptive effort, good. If the cost is complete retraining without a really huge reward, dump it (hey, almost like the golden rules of optimizing 😉 ).
acobar
1.) He makes a good point. The current Unices don't deal well with anything but text as far as piping and directing to devices goes (except /dev/null). Also, there are meta-data systems for some Unix filesystems.
2.) Nope. I've yet to see a Unix where fopen("file.bin", "wb") doesn't work.
3.) Ok? So, what's the problem? Being able to autocomplete things whose meaning you don't know is *not*, I repeat *not*, a good thing.
4.) If the same sort of thing didn't exist on Windows and Mac, boy, Java would be stuck now wouldn't it? Network transparency is not a problem, buddy, it's a wonderful and heavily used thing. Have you ever met a Unix user in your life? The trouble with things like NeWS and AJAX is that they've yet to provide a transparent way to work well remotely and locally, whereas X applications work exactly the same locally as they do remotely, and locally incur no humanly noticeable performance penalty. (See the sketch after this comment.)
5.) This is true, I’d love to see something more like “output levels 1-9”.
6.) Mmmk, haven't really noticed this one yet. But I'll just go ahead and take your word for it. Maybe it can be even faster… I don't see what this has to do with *the* Unix design, and I don't see it as unchangeable. I sort of doubt that POSIX says write must run in kernel space.
7.) Alrighty… So, inotify isn't as snappy as it could be?
8.) C is fine, leave it alone, please, before you screw it up for the rest of us. C can't support strings, yes, it can't. You could add an ANSI lib for it, a better one, but C cannot support them: C only supports scalars.
9.) I'm with you here. This annoys me daily; try sending mail to GNU, they've so far been the only ones to care to add features to the core utilities.
He really needs to go ahead and provide us with something better before he runs off and says "retire it." We can get the integration for modern data people want from the 'doze object clipboard system, except it's closed source and thereby non-existent on real OSes. But we probably don't want to jump on board the monolithic nightmare of Explorer, go to a CP/M shell, lose all our core utilities, break thousands of applications that depend on POSIX functions, etc. etc. etc.
I think it’d be a better idea to fix this wheel rather than reinvent it. The modular nature of Unix systems allows for plugging in fixes for a lot of the things he mentions which aren’t already fixed…
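For the network-transparency point above, everyday usage looks something like this (build-server is a placeholder host, and -X assumes the remote sshd allows X11 forwarding):
$ ssh -X build-server xterm       # the remote xterm draws on the local display
$ xterm &                         # the same client run locally; no difference in how you use it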
Of course one cannot handle multimedia streams in the way he imagines with traditional UNIX shells. That's what GStreamer is for…
or maybe videolan?
He is introducing Monad…
The author of this article is.
Ok, I have to agree somewhat that his criticism of C is unfair. It would be nice to have a better language, but that is hardly Unix's fault. Also, just because the OS is written in C doesn't mean that the APIs have to be exposed in it.
But as for this comment:
– one big weakness he finds is Unix’s reliance on text streams, largely (as best I could figure out) because they don’t support unicode. His proposed solution: switch to binary data!
I've got to tell you that you completely missed the point. He doesn't advocate Unicode or binary data. He's talking about metadata. Supplying some extra information would allow easy internationalisation and more powerful interactions with data. Done in a standardised way, it would allow non-English speakers to have default translations for column headings and better ways of manipulating those columns. Unicode might be needed to really make it work globally, but that's a side effect, not the root of his argument.
Quote from the article:
Consider the ls command, which lists the contents of a directory. If you wanted the contents sorted in a case-insensitive way, you would pipe the output into sort. Now imagine that you want the output sorted by file size. You can make ls display the file size, and then tell sort to sort it by that column. This is fine, except that then the file sizes are all in bytes (or sometimes allocation units, usually of 512 bytes, depending on your UNIX variant). This is not very human-readable, so you tell ls to output the size in human-readable format—in bytes, kilobytes, megabytes, etc. Unfortunately, sort doesn’t understand that 1MB is bigger than 6KB, so it sorts everything into a silly order. On the other hand, if ls would output a header defining its output as a set of columns with names and types, then you could tell sort to sort by the column called size, and tell your shell to translate the size into a human-readable form.
Yes, that would be so if you are a n00b and try to use something like
$ ls -lh | sort -n -k5
Whereas an experienced UNIX user would use something along the lines of
$ du -s * | sort -n | cut -f2 | xargs ls -flh
which gets around that problem quite nicely, actually.
Message to the author: get a good book on UNIX and study some more.
Those who do not understand Unix are condemned to reinvent it, poorly. — Henry Spencer 1987.
How about :
$ ls -Shl
(S = sort by size, non-POSIX, works with linux/freebsd)
> du -s * | sort -n | cut -f2 | xargs ls -flh
So you either have to be a master of shell-fu or use non-POSIX commands (which doesn't fit with the one command == one usage mentality) to be able to display files sorted by size with the size shown in a user-friendly way?
Somehow this makes me agree with the author of the article that the shell could be improved…
I think that it may be possible to use both intelligent binary data (IBD) and text: executables would work with IBD and at the end (or for an executable which doesn’t understand IBD) the IBD would be serialised into text.
I think the problem with executables generating text is that either the text output is easy to parse but hard for a human to read (think XML), or it is easy for a human to read but hard for executables to parse: for example, parsing 'ps' output is quite hard…
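To make the ps example concrete: the default output is column-aligned prose, so naive field splitting breaks on commands that contain spaces; asking ps for explicit, header-less columns is the usual workaround:
$ ps -eo pid=,comm= | sort -k2      # two well-defined fields, no header line to skip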
try this
ls -lhS
ls -h is not POSIX either.
GNU fixed a lot of problems with Unix's CLI.
The guy who wrote the article either does not know Unix or does not want to know Unix.
hmm, it would be interesting to have the different windows of a desktop exposed as files so that one could in theory pipe the content of one over into the other
like say: cat window1 > window2
question is, what would be piped over, and in what sequence…
I think if you check back to the article that is the underlying theme running through it. Window!=text so pipes become useless except for the most basic of tasks.
now that depends on the way the fuse plugin used to list windows is done. maybe all that is presented to whatever tries to read the window "file" is the content of the different text boxes, dumped in a left-to-right, top-to-bottom way.
still, that would just be silly really. the thing about having windows listed as files would be that you could do stuff like "cat kill > window1" and have it interpreted as trying to kill the underlying process that "owns" window1.
from what i understand, something similar can be done in plan 9.
Actually, wouldn’t that try to overwrite the window buffer with the contents of file ‘kill’?
heh, maybe. i guess it really depends on how the "file system" is implemented
I’m all about the alternatives, but the alternatives have to prove themselves first, don’t you think? And that may take another decade or more. So good luck to the alternatives. I’ll stick to Unix in the meanwhile.
– that's the problem. UNIX did its share of work, it pushed OS/IT development in the 70s, 80s and 90s; I have nothing but warm thanks for it.
it's LINUX that is THE CRAP: a copy of a system that was already 20 years old at the time Linux was written, made by half-educated nerds who have never witnessed or understood what an OS is for in real life, other than on an undergrad campus.
the LINUX flood has effectively stifled all OS development since the mid-90s.
He suggests trying diff on two PowerPoint files. The thing is, PowerPoint's file format is binary. If PowerPoint files were XML documents, then diff would do exactly what's expected.
What did he expect? For diff to call up a user interface that would be able to show two powerpoint presentations side-by-side with differences? diff never claims to be able to do anything but produce the difference between two files at the bits-and-bytes level.
You could surely produce some GTK/QT app that reads binary diffs stored as patch files to show diffs between powerpoint documents, if you knew how to easily decode Microsoft’s PPT format.
Plus many people have already made filters to handle binary data.
For ppt files:
$ ppthtml first.ppt |html2text > first.tmp
$ ppthtml second.ppt |html2text > second.tmp
$ diff first.tmp second.tmp
4a5
> I added this line
David Chisnall:
1) Is only trying to get some attention and traffic
2) Would like us to believe he is knowledgeable regarding UNIX
3) Would like us to believe that he has credibility regarding UNIX
But, posting articles like this won’t achieve any of these goals.
It is a pleasure to see you, and many others, calling an idiot and an ignoramus, and pretending he isn't knowledgeable or credible, a university professor who teaches operating systems, networking and logic programming, and the author, among other works, of several portions of "A Practical Guide to Red Hat Linux", a tutorial about the Bourne-Again shell, and Project Evil (using Windows drivers for network cards under FreeBSD).
I have no doubt that you, and the others, have better credentials than David Chisnall…
Some point by point details:
1) It's more "everything is a stream", and once set up, write(2) and read(2) handle the vast majority of the communication from there. With this common interprocess communication system, you get a high level of composability and integration with even the simplest of tools (cat(1) anyone?). Programs that embrace this philosophy interact with and can be leveraged by other programs much more easily than those that do not. Systems like Windows and Mac OS X rely on a richer component model than simple streams, but even then those components are merged and integrated together through a simple stream-based system via scripting languages. Finally, these streams let programs intercommunicate without imposing a system-wide object model upon them, making even the most arcane of modern systems work with legacy code.
2) Only programs that produce human-readable output rely on text; the fact that most programs do this is a detail. But when I pipe gzip output into a remote shell session to stream compressed filesystems to tape storage, there's very little text involved. Again we see the fruits of having a stream-based system. Not having to incorporate code and logic within gzip to know anything about tape systems, much less remote tape systems, yet still being able to leverage it in that context, is a big win. (A small sketch of this pattern appears at the end of this comment.) If the author wants an ls(1) that outputs structured binary data, or XML, he has all of the tools to write one. Perl is a particularly well-suited tool for scripting and exposing the Unix object model, but most any tool can do it.
3) Introspection would simply give parameters, but not use cases. Most every command will display options when run, and minimally you (should) have the man page for the command. Agreed there is a lot of “lore” used to communicate command information in Unix, but even introspection wouldn’t fix that. It just makes some of the documentation more handy, but doesn’t solve the base problem overall.
4) This is a criticism about X11. X11 is not Unix as witnessed by NeWS (written on Unix), SunWindows (written on Unix), NEXTSTEP (written on Unix), and now Mac OS X.
5) stdin/stdout are limited solely by the shell. There are far more than three streams available to any Unix program. You’ll note that you can write “cmd > file1 2> file2” in a typical Unix shell. You can also write “cmd > file1 2> file2 3> file3 4> file4 5< infile5” and so on all day long (see the second sketch after this list); that has been around forever. Where the shell breaks down is that you can’t easily build a more complex network of pipes beyond the simple utility of tee(1). But that’s the shell’s problem. The programs don’t stop you from doing that; the shell just keeps you from expressing it easily. Oh, and what’s the difference between reading and writing the same channel and reading on one and writing on another?
6) Unix has asynchronous calls and mechanisms to hurdle the kernel/user space boundary on system calls where these inefficiencies really affect performance. There’s little reason for EVERY system call to be burdened with the complexities of asynchronicity.
7) We seem to be bouncing all over the map. So, a Unix on top of a Mach kernel (ala Mac OS X), this Unix is OK, it’s the OTHER Unix which is bad??
8) This is a language flame fest and has little to do with Unix. All of the mechanisms he complains about are necessary at some low level within the system. You do not want every buffer to have overflow checking. You do not want every integer to be silently upgraded to a Bignum. Since he mentioned C as a portable assembler, there’s no reason you cannot write kernel code in any language that produces C code compilable within the kernel linking environment. Most, however, don’t bother. Were we running a Lisp OS on a Lisp machine, we’d have the same complaints from the folks wanting to write “down to the metal” C code on that system as we have from the Lisp hackers complaining about it on Unix systems.
9) This is all about mv(1)? Oh dear. One tenth of the entire case for the damnation of Unix is mv. Well, the entire purpose of NOT creating an expansion library was to ensure that each tool was simple and straightforward. If the shell can do all of the chicanery for filenames, including creating them from a wide range of sources (i.e. not just wildcard expansions), why include all of that code, all of that weight and bulk, in the actual utility? ESPECIALLY a statically linked utility?
10) I guess he’s never seen the Unix signal(3C) functions.
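On point 2, a minimal sketch of the kind of pipeline meant above, assuming a reachable host called tapehost and a SCSI tape device at /dev/nst0 (both names are made up for the example):
$ tar cf - /home | gzip | ssh tapehost 'dd of=/dev/nst0 bs=64k'
gzip never knows the data is headed for a tape, or across a network; tar, ssh and dd each handle their own piece of the stream.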
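On point 5, a quick experiment you can run in any Bourne-style shell (the file names are invented): the brace group writes to descriptors 1 and 3, and the shell routes each one to its own file.
$ { echo to-stdout; echo to-fd3 >&3; } > out.txt 3> three.txt
$ cat three.txt
to-fd3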
The criticisms are all over the map, from user space to kernel space to specific application implementations.
I wonder what his philosophy is on things like Web Services, since that’s where a large amount of integration is happening today, and they’re simply streams of data as well. HTTP is becoming the stdin/stdout of the internet age.
I want to thank you for your intelligent post. I can’t believe the noise is so bad in this discussion (“the author is an idiot” is not an interesting post!).
For your point two: I think the author’s point is that a command must output either structured data or text, and both are problematic. Structured data (XML or binary) is not human readable, and human-friendly text is hard to parse (as evidenced by the ls -h | sort mismatch).
So I think he advocates ‘intelligent’ data, which is either seen as structured data by applications able to understand it or as text in the final output. That is maybe interesting, but somehow I expect it would be hard to achieve without destroying the human readability which is a big plus of Unix’s shell command output.
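For what it’s worth, the mismatch is easy to reproduce, since sort compares the human-readable size column as text rather than as a size (common GNU/BSD flags assumed):
$ ls -lh | sort -k 5    # "1.1G" sorts before "20K": a text compare, not a size compare
$ ls -lhS               # letting ls itself sort by size sidesteps the pipe entirely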
For point 3, I agree with you: adding introspection doesn’t ensure that executables use it consistently, which is the real problem here.
For your point 10, I disagree. His point is that metadata is mixed with data in Unix, which is true, and has nothing to do with signals. As an aside, I think Unix signals suck, as they are not real messages (no payload).
I think the author likes web services very much, as they are made with XML; so while HTTP is the stdin/stdout, the information transferred is structured, which is not the case for Unix shell utilities.
Me, I think that XML readability sucks for humans and that some type of S-expression would have been a better compromise (and the ultimate LISP revenge ;-)).
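A made-up record, just to make the comparison concrete:
XML: <track><title>Blue Train</title><artist>Coltrane</artist><length>643</length></track>
S-expression: (track (title "Blue Train") (artist "Coltrane") (length 643))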
My view is that criticism is easy and doing better is hard! But his criticisms are not without interest, if a bit ‘all over the map’, as you said.
It’s a shame when guys like David Chisnall write articles like this.
All he does here is use “technical terms” to sound credible while pointing out trivial nuances.
UNIX = POWER
UNIX = FULL CONTROL
UNIX = SECURITY
Anyone that advocates putting out to pasture an OS with these features in today’s world is a fool.
This guy is an idiot, but if you want a serious read on the subject read http://primates.ximian.com/~miguel/bongo-bong.html
UNIX (Linux and BSD) has its strengths and weaknesses; however, is anything really better?
You can implement anything on top of “everything is a file and a file is text”. You can’t implement simple things on top of complex ones. Simplicity is hard to beat.
So UNIX is old and simple, but are the new, complex things any better? If only Microsoft hadn’t needed to re-invent the wheel, we would have been spared decades of pain.
“C was designed when the procedural paradigm was only just gaining acceptance, when Real Programmers used assembly languages and structured programming was something only people in universities cared about”
Give me a time machine!
C as a language is a really bad choice for just about anything. BUT the fact that Unix (and, by the way, Windows too) is made in C is positive in an indirect way. C is so primitive that anything can use its libraries. C has no objects, and THAT is a blessing. Why? Because objects are not interoperable between languages and sometimes (C++) even between compilers. The reasons are manifold, from VMT layout to RTTI info and more. I also do not agree with the “OOP will save us” approach of today. I AM a programmer who uses OOP, but I also use procedural code. In some places OOP is simply stupid.
“All is file” is both a strength and a weakness.
His rant about *.exe is stupid (use quotes).
X11 is a bloated old superfat frog which should have been killed long ago. Sadly it stayed, and it’s impossible to go without it, or at least without a fully compatible alternative. The problems are manifold, from shared memory to eating file descriptors, but at least it’s getting modular in the latest Xorg.
Small tools for the job are good tools for the job.
Dave Chisnall makes a lot of good points, even if the majority opinion in here wants to rip him a new one. Apart from trying to sound superior, what does this achieve? Not a lot. UNIX and C are pieces of shit, given what we know today, and will get retired one day, like it or not. The success of Linux doesn’t prove UNIX and C are wonderful any more than the success of McDonald’s hamburgers proves hamburgers are good for you. Damn, the IQ of this place is dropping fast…
You are wrong!!1
Unix is bad byt Linux is not and has Freedom superior and made by Linus and it has Kernal!! Plan1&sun&bsd does not have kernal from linux and they are bad and steal and rob from Linus who made Linux!
Other than Open Source Unix Like Linux Made By Linus dies because they have no KErnals of Freedom like Free!!1
For example linux unlike unix scales to many many Kernels!
LOL!
heh, is this guy for real?
Many of the things he pointed out as “the things he hates” are the things that make Unix so robust, and also the things that make Unix more usable to me than other OSes for most tasks.
He makes a couple of relatively good points, but it is very clear to me that he knows very little about what he is talking about; many of the things he listed as bad are in fact the things that make Unix superior in many respects.
As this may have been lost amidst righteous wrath and flaming, here it goes again:
http://herpolhode.com/rob/ugly.pdf
Written by Rob Pike, it has some notes of interest about what went right and what went wrong. He concludes that what went right is about the same thing as what went wrong.
WTF is he talking about? First off, in-band signaling, as I use the term, has to do with telecommunications, not filesystems, and is very appropriate for some situations. Sure, ESF/B8ZS is better than “robbed-bit” for most stuff, but that’s not a rule. Second, metadata, unless stored in an XML file somewhere (which is obviously a hideously bad idea), *IS* in-band signaling for a file system. Trying to implement any sort of “SS7 filesystem” would be insanity. Oh, wait, that’s what he’s ADVOCATING?!?!? Resource forks, anyone? Asshat.
This story makes it really easy to separate out the script kiddies (i.e. those calling the author an idiot) from those who have more than a “cargo cult” level understanding of UNIX.
UNIX Sucks
Windows rules forever
So say UNIX developers.
The people criticizing the article clearly lack basic knowledge about software engineering. Most UNIX operating system designers agree with the article. Ironically, only the “new UNIX” users don’t. There is a reason why Plan9 exists, and it was developed by the same guys who did UNIX.
There is more to an operating system than hosting your php and mysql databases.
Remember: The guys who developed UNIX think UNIX SUCKS! Do you think you know more about UNIX than them?
I’m not trolling… Just pointing that out.
“The guys who developed UNIX think UNIX SUCKS!”
Where does Ken Thompson say that UNIX sucks?
Please provide proof to your claim.
Ken Thompson was one of the major contributors to Plan9, as was Rob Pike. Rob Pike has already expressed his feelings about UNIX. Ken has too. They both think it sucks.
Do you think Kenneth would work on Plan9 if UNIX was any good?
Many things start being used by the industry before “ideal time”. I think that UNIX is one of those things. Plan9 is the natural evolution of UNIX.
Smalltalk’s creator Alan Kay also saw his language being used “before its time”, and he too said that Smalltalk was no good and lacked some design evolution that would have made it more usable. Yet people used Smalltalk anyway.
Don’t get me wrong… Smalltalk and UNIX are two fine products. But they are stuck in time not because the creators like them that way, but because the outside world doesn’t like changing.
Ken Thompson was one of the major contributors to Plan9, as was Rob Pike. Rob Pike has already expressed his feelings about UNIX. Ken has too. They both think it sucks.
Where did they say that UNIX “sucks”? Your reasoning is flawed. Ken and Rob working on Plan9 doesn’t imply that they think UNIX “sucks”.
Do you think Kenneth would work on Plan9 if UNIX was any good?
Yes, I do. Just because they worked on Plan9 doesn’t mean they think UNIX is no good. I can think of a very good reason why they would work on Plan9 regardless of their opinions of UNIX: because they want to make something better.
Example 1: Paul Graham really likes Lisp a lot. If you have ever read any of his essays or heard any of his speeches, you would know this, yet Paul Graham is working on a language called Arc. This language is, in his opinion, going to be a better Lisp. As much as he likes Lisp, he feels, in retrospect, Lisp has some suboptimal design. By your reasoning, because Paul Graham is working on another language, he must think Lisp “sucks” and is no good.
Example 2: An author writes a story. Later, the author revises the story and rewrites it. Now, does the author think that the original story “sucked”? No, they wanted to improve the story, make it better. In fact, it’s obvious they rather like the story if they wrote a better version. They just wanted to fix the problems the older story had.
Now, Plan9 is very much a derivative of UNIX. It shares a lot of the ideas and philosophies of UNIX. Plan9 also fixes a lot of the problems that many people felt UNIX had. The fact that Plan9 shares so much in common with UNIX shows that the authors of Plan9 don’t think UNIX “sucks” and is no good. It just demonstrates that they feel UNIX can be improved. (Who, in their right mind, would think that UNIX is perfect, or even close to perfect?)
Don’t think for a minute that just because somebody criticizes something that they think it “sucks” or is no good. I’m critical of various things in UNIX too, but I don’t think that UNIX “sucks”. I just feel that there are better ideas yet to be realized and we need to be exploring those.
In http://www.paulgraham.com/popular.html, he does say that Common Lisp sucks.
Does this guy even know how to use Unix?
I wonder
the ls problem..
ls -lhS
YOU don’t know how to use unix.
Just try this in AIX or Solaris…
It’s not POSIX compliant.
I have Solaris installed here and it’s working, so I do not know what you are talking about.
Could it be that you have not installed the GNU tools?
hum..
GNU’s Not Unix (c)
>GNU’s Not Unix (c)
Indeed. That is why you can install the GNU addons on almost every Unix.
It does not matter how you solve a problem; it matters whether it gets solved.
GNU is Not UNIX!
I should have thought about this sooner.
Is the article talking about UNIX, as in the grandfather of all we see around us, unix as in the concepts, or *nix as in all the BSDs, Linux, Hurd and whatnot that are based on the concepts of Unix?
As I look at the points, it seems more and more like he is doing a bit of all of them…
Kinda frustrating.
Still, I’m wondering: if Plan9 supposedly fixes what’s wrong with Unix in all its forms, why has it not been picked up as the greatest thing since sliced bread?
This entire argument is pointless, since it does not address any pressing problem in the REAL WORLD. He never shows you a real-world problem he cannot solve using UNIX. WHAT CAN’T YOU SOLVE?
If you had problems with character encoding, then solve them yourself with XML and UTF-8 Unicode. Everything is evolving and improving because we all have to adapt and come up with solutions so we can make a living!
You can solve most problems, and many times you have to find workarounds to solve them. This is the real world, David, not university.
Most of my university professors didn’t know what the hell they were doing. The author recently graduated and has a PhD, which proves nothing more than his lack of real-world problem-solving experience. David, please don’t use UNIX. Use something else.
We aren’t retiring UNIX just because you can’t solve some annoyances.
J.O. – BSCS
Whether the language VMS is implemented in is a good alternative to C is something I cannot answer.
On the VMS Usenet groups, people are always arguing about why BLISS is better than C, and C coders always have an answer in reply.
Anyone know anything about BLISS? VMS is hardly that well known considering its age; according to the HP website it’s used in financial institutions and hospitals. Why is that?
The only `thing’ that needs retiring is him.
Go find something else to complain about, moron.
Download Windows Vista: http://windows.czweb.org/show_article.php?id_article=173
it’s quite good, actually
http://research.microsoft.com/~daniel/unix-haters.html
http://research.microsoft.com/~daniel/uhh-download.html
Well… I’m sure a lot of people could disagree with the article, but how is this guy “wrong” if the article is named “10 Things **I** Hate About (U)NIX”? Really, can’t we just say “OK, this guy has some points, but I disagree with him because…”? Just because it’s a different opinion, this guy ‘has’ to be “clueless” and other similar adjectives… There’s a lot of good information and discussion in this “thread”, but somehow it feels like a place with a lot of inquisitors looking for capital sinners and witches…
…please, can’t we just respect other people’s opinions? Even if they’re different from ours?
IMHO, it’s a requirement for deserving some respect too…
Some people can’t take *anything* against their own personal beliefs without attacking it and the person who posted it. But if we just accepted the post at face value we wouldn’t be having a (somewhat) intelligent conversation about it whether we agree with it or not.
I think the art of discussing different ideas is slowly being lost in this world, if it ever existed outside of academia to begin with.
Recipe for disaster:
1 Handful of money
1 document signed with the beast of redmond
Masturbate with money until your soul has vanished. When the flames close in and the mad chuckles of the convicted monopolist begin to sound more like pleasant circus music, you have reached orgasm.
Thank you for the people who linked the “ugly” article by Rob Pike. I found it much more useful than Mr. David Chisnall’s.
I want to discuss a couple of his points.
One, text is the lowest common denominator between humans and computers. Almost all human-computer interfaces include text of some kind, from the fanciest GUIs to the most restricted CLI. Text is the junction point where computers and humans meet.
The author’s problem with this is an issue of implementation, not with the idea itself. He has a problem with the fact that text is traditionally represented as 7-bit ASCII. Slowly this is changing in the computer world as we move more towards Unicode formats. That doesn’t make text itself any less important; just its machine representation.
Two, “everything is a file” provides a minimal known interface for dealing with API or OS objects.
The author again identifies a problem with the implementation rather than with the concept itself. Some things, like sockets, don’t respond to the basic open/read/write/close methods that files do. But there is no reason why they couldn’t, except for the fact that they were implemented that way.
In later points the author does identify some opportunities for improving the basic UNIX approach, such as metadata, that can be learned from. Message-passing and microkernels are also important, but the problems with their implementation (or lack thereof) are not limited to UNIX-like OSes.
Two points:
One, if the programming language doesn’t handle strings natively, then maybe the OS should. Some aspects of string manipulation, memory management, and garbage collection should be handled at the OS level; especially when programming in a language as primitive as C.
Two, because of the text-based nature of human-computer interaction, metadata and objects should be easily serialized by the OS into text that is both human and machine readable.
Of course we should look at better message-passing between the kernel and user space, and I believe that traditional hierarchical filesystems should be replaced by database-style filesystems. But neither of these ideas is excluded by the UNIX ideals.
I think it is summed up best by saying that the UNIX ideals are sound (pretty good actually for an OS designed to run a computer game on [at the time] low-end hardware), but that the implementation needs work. I also think that we have much to learn from other projects such as Plan9, BeOS, AmigaOS, and others.
Yes. Text is a good way to achieve communication between computers and humans.
The problem with UNIX is that not only do you use text for communication with humans, you also use it for INTERNAL COMMUNICATION BETWEEN SYSTEM COMPONENTS. Pipes and text parsing have limits…
The problem with UNIX is that not only do you use text for communication with humans, you also use it for INTERNAL COMMUNICATION BETWEEN SYSTEM COMPONENTS. Pipes and text parsing have limits…
In a word, no.
To use the author’s example: ls is a program designed to give a formatted, human-readable listing of file statistics. While you can slice its output apart using text extraction tools, for “INTERNAL COMMUNICATION BETWEEN SYSTEM COMPONENTS” you are better off using a call to stat(). (Conveniently provided as stat() in Perl and os.stat() in Python; equivalents probably exist in many other languages.) This rich set of interfaces into the underlying data of the operating system is just as much a part of UNIX as ls and sort.
Which is what I see as one of the major flaws in the article. Command-line shell languages derived from sh, csh and ksh were designed to make heavy use of pipes and text parsing. However, the fact that you can use pipes and text parsing in a command-line shell language does not mean that you must or should. Almost always, you get better performance writing the whole shebang in a language with direct access to system calls that give you typed and structured representations of data. Sometimes it is easier to use lisp/perl/python/ruby/tcl/rexx/emacs than to slice apart output from ls or ps.
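A tiny sketch of the same idea while staying in the shell, using stat(1) instead of scraping ls output (GNU coreutils syntax assumed; BSD stat spells its flags differently):
$ stat -c '%s %n' * | sort -n
Size and name come out as two clean fields, so sort -n does the right thing with no column guessing.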
While some criticisms in the article are good, other criticisms conflate UNIX with shell languages.
The best Unix without being Unix!
In search of a GUI endless clicking loop.
You know what is good about clicking? It’s like sending ones and zeroes in a binary fashion to the PC. Gotta love that human interface.
mv *.exe *.bin
Sorry, but this is a really crappy example.
1. There is no point in renaming all files ending in .exe to .bin to begin with.
2. We have seen what this leads to in MS-DOS: stupid tools that cannot handle wildcards properly.
3. The author should learn regular expressions. Two “*” wildcards lead to ambiguity in more complex cases (e.g. matching and swapping two filename fields).
4. If mv is his only problem, the author should simply demand an “mv-improved” with regular expressions. I would be perfectly willing to use quotes in those < 0.01% of cases (and a plain shell loop, sketched below, already covers the common one).
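For the record, the rename the author wants needs no new mv at all; a stock Bourne-style loop does it (a sketch, not hardened against every odd filename):
$ for f in *.exe; do mv -- "$f" "${f%.exe}.bin"; done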
re: mv-improved
see: mmv(1)
In Windows filename globbing is handled by the FS driver (!) and in *nix by the shell. Nowhere is it ‘standard’ to have the application handle it, nor should it be. That’s too much work to pass off to each programmer. This is one area where pushing decisions out toward users is definitively bad.
Concerning X: X is brilliant at what it does. NeWS would have been better. Don’t throw the baby out with the bathwater. At this point any better network-transparent GUI would have to build on the existing X drivers at the very least. You can now probably extend or modify X to have everything that’s good about NeWS more easily than you can build a new system from scratch.
Metadata: see Reiser4. Metadata can be represented as files. This plus some Plan9 stuff would solve a lot of my *nix gripes.
Introspection is good, but there’s no reason to force it (MSH is the suck). Mechanism, not policy. Invent a schema and add a --xml-help option to your apps; that does most of what you might need.
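A hypothetical sketch of what that could look like; the application name, the flag and the schema are all invented here, not an existing convention:
$ myapp --xml-help
<command name="myapp">
  <option name="--verbose" type="flag">Print progress information</option>
  <option name="--output" type="path">Where to write the result</option>
</command>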
The limitations of pipes and stdin/stdout are annoying, but they aren’t a *nix flaw; they’re more of a shell flaw. I think multiple (named) pipes would be useful, but the syntax would likely be horrible.
A lot of the stuff the guy says is quite valid, but his conclusions are way off base. Nothing he cites is unfixable; it’s just unfixed. *nix has its flaws, but it is a great platform onto which to build fixes.
The NT kernel is also brilliant, but it’s saddled with Win32. Win32 is hard to fix. POSIX is much easier to fix because it demands so much less. The kernel isn’t the flaw here (or, to the extent that it is, the Hurd is the fix); it has nothing broken that can’t be incrementally fixed. A fresh start (as appealing as it is) isn’t the best answer.
Or, maybe it is. I’ll be watching Haiku and AtheOS/Syllable.
“In Windows filename globbing is handled by the FS driver (!) and in *nix by the shell. Nowhere is it ‘standard’ to have the application handle it, nor should it be. That’s too much work to pass off to each programmer. This is one area where pushing decisions out toward users is definitively bad. “
Each program does globbing by itself on AmigaOS. Fortunately, globbing is part of the standard library used for parsing command line arguments. (think: libgetopt). The benefit of applications having control over command line parsing is that the app knows when and when not to expand wildcards. For example, you can write this and it works the way it should:
find . -name *.jpg
--
Simon
And unless I’m mistaken, dropping the star gives the same result in *nix…
Try for yourself:
$ mkdir test
$ cd test
$ touch .jpg test.jpg
$ find . -name .jpg
./.jpg
ah, my bad
edit:
did a bit of searching and came up with this:
find -name '*.jpg'
those single quotes make all the diff
Edited 2005-11-08 12:36
For example, you can write this and it works the way it should:
find . -name *.jpg
How it should work is a matter of definition. In the time before bash and tab autocompletion, “*” was used extensively to expand a name in cases where you know there is exactly one match and you do not want to write out the full name (e.g. if it is excessively long).
And for find, even today autocompletion won’t necessarily help, because the initial directory is not always “.”.
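A trivial illustration of that habit (the directory name is made up); the wildcard expands to the single long name so you don’t have to type it out:
$ cd /usr/src/linux-2.6.*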
Frankly, I’m rather surprised by the tone of most of the comments… many people don’t seem to even understand the points highlighted by David, and behave just like Linux fanboys. Just one thing: nowhere in the article is it implied that the answer is MS Windows!
Actually, David’s other article on InformIT (http://www.informit.com/articles/article.asp?p=420287&seqNum=1) is a plea for open source…
So basically it’s impossible to criticize Linux/Unix? What do people think, that Unix’s design is the pinnacle of software evolution? Gosh…
Frankly, all the points he highlights ARE valid. And they are there as examples; there is no need to answer each example point by point with a very specific solution, they are just there to point to bigger problems…
I find it quite incredible to have people 1) not understanding the problems, 2) not able to understand that criticizing Unix doesn’t mean you advocate for Windows, and 3) making personal attacks on David…
Disclaimer: I’m in the same lab as David, and I can assure you he knows Unix extremely well, and he’s actually a BSD lover, not a Windows fanboy (duh). That doesn’t mean that he can’t see flaws in Unix’s design; on the contrary, it’s when you know your subject well that you can understand its flaws well…
We can make a better system than Unix instead of being satisfied with the status quo. That’s all he was saying.
Edited 2005-11-07 14:01
Frankly, I’m rather surprised by the tone of most of the comments… many people don’t seem to even understand the points highlighted by David, and behave just like Linux fanboys.
Well, yeah. There is a lot of that going around. However, some of his examples (ls | sort and MIME type mapping) are less than enlightening about the points he is trying to make, because there are abundant examples of attempts to solve those specific problems, and some of the problems are debatable. Given that it is trivially easy to get typed and structured data about processes and files, the “everything is text” criticism seems to be less about UNIX and more about shell languages. Likewise, the desktop toolkits have methods for mapping file types to a list of applications, and most of what I see on UNIX, including OS X, has adopted filename extensions as a signal for file types.
The proposal for introspection is interesting. AppleScript has methods through which an application can expose interfaces to other applications. This has the potential to be a security problem, but it seems like an idea worth pursuing.
I see the problems he raises. However, about half of those problems are limited to the lowest common denominator of Bourne shell + POSIX command-line tools. I’ve never seen or experienced “UNIX” (in 15 years of using it) as just that collection of tools. And this is further confused by the fact that he never really defines exactly what UNIX is. Is X11 really UNIX? Are the userland utilities UNIX? Is OS X (which has the filetype-to-application introspection described) UNIX?
BTW, I find the “of course, he is right and you are just a fanboy for not seeing the problem” position to be just as conservative as the “he is an idiot and must be in favor of windows.”
Edited 2005-11-07 15:21
It seems that this site is mostly visited by teenaged Linux fanboys. You can’t have a sensible discussion with a teenager whose frontal lobes are going through the natural reshaping from childhood to adulthood. After a few years they start to understand again that what other people say could hold some validity even if it goes against their personal beliefs. But then they don’t come to OSnews anymore.
This site’s ‘discussions’ would be amusing if they weren’t anti-intellectual to such uncanny extremes.
The proposal for introspection is interesting. AppleScript has methods through which an application can expose interfaces to other applications. This has the potential to be a security problem, but it seems like an idea worth pursuing.
Heck yeah! IBM mainframes can do this kind of stuff (whole applications written in XEDIT and Rexx), the Amiga implemented application scripting, and Mac OS has been doing it for years. The idea wouldn’t still be around if it weren’t genuinely useful.
Think of it this way: making applications scriptable without having to implement a script interpreter or ‘client’ program for each application.
Sometimes evolution needs to make a jump, and it probably should now. Go and write a better OS? At least I’ll try …
But I guess most are unable to distinguish between potential and implementation anyway…
“It seems that this site is mostly visited by teenaged Linux fanboys.”
Uh, just for the record, this website also has plenty of Windows, Solaris, BeOS and Mac fanboys (and fangirls like the former webmistress) as well. Some of you must be new here…
“We can make a better system than Unix instead of being satisfied with the status quo. That’s all he was saying.”
So, Nicolas, both of you are advocating that Sun simply drop what they’re doing, remold Solaris in the shape of BSD, and kiss off decades of research and development? Retiring Unix means retiring Solaris, after all. Do you plan to explain all this to Sun’s engineers?
At least it is wrong to say that it couldn’t be better …
He mixes in some good points about problems with calls to the kernel and the need for improvement (fundamental changes) in X11, and even makes a damn good point that we need to progress to something better that is just as fast and portable as C promised to be.
Then he totally disses some of the fundamental points of Unix, such as the use of pipes and the text and everything-is-a-file models, missing the point that many of the limitations he cites are not necessarily insurmountable problems.
There is nothing he listed that could not be fixed, from asynchronous system calls to improvements in the use of pipes, right down to a universal VFS layer that improves on the everything-is-a-file model to the point where it can still work for many more decades to come.
It is somewhat tiring to hear people talking about re-inventing the wheel over and over again.
Those who do not understand Unix are doomed to re-write it poorly.