“By now, Windows and Unix are functionally more similar than different. They both support the same major programming metaphors, from command lines to GUIs to web servers; they are organized around virtually the same panoply of system resources, from nearly identical file systems to memory to sockets and processes and threads. There’s not much about the core set of services provided by each operating system to limit the kinds of applications you can create. What’s left is cultural differences.” Read the article by JoelOnSoftware.
I implore all *nix users/programmers to read “The Art of Unix Programming” and “The Unix Haters Handbook”. You’ll get a deep understanding of why *nix users/programmers are anti-GUI, anti-binary formats and anti-closed. On the whole, this is an excellent article.
*Oh, and I still think Unix programming and philosophy are superior to all others.* 😀
It seems the purpose of that article is to attack ESR rather than to compare the differences. In my opinion, Windows and UNIX are quite different from both a technical and a cultural point of view.
Even if two OSes support many of the same mechanisms, that doesn’t make them similar. They are only similar if they are similar in _how_ they do it. For example, even though BeOS and Windows offer many of the same commonly used mechanisms, they are totally different.
Also, the author of the article should realize that the file systems of UNIX and Windows are quite different (UNIX uses a single global namespace, different locking, devices and other special files, only / and the NUL byte are special characters in file names, and the culture around the use of the namespace is totally different).
Also, processes are quite different on Unix (see fork()).
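As a quick illustration of my own (not from the article), here is a minimal sketch of the fork() model, which has no direct Win32 equivalent; CreateProcess() always launches a fresh program, whereas fork() duplicates the calling process:

#include <stdio.h>
#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void)
{
    pid_t pid = fork();                 /* clone the calling process */
    if (pid < 0) {
        perror("fork");
        return 1;
    }
    if (pid == 0) {                     /* child: runs with a copy of the parent's state */
        printf("child, pid %d\n", (int)getpid());
        return 0;
    }
    waitpid(pid, NULL, 0);              /* parent: wait for the child to finish */
    printf("parent: child %d finished\n", (int)pid);
    return 0;
}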
The big cultural difference between Unix and Windows is that Windows people believe that policies should be the same for everyone, while Unix people believe that _mechanisms_ should be the same for everyone. This is actually a huge difference, and I think it’s good that the writer of the article brought it up. It can be seen in practice when two different UNIX OSes do the same thing in different ways: each offers some mechanism for the same job.
“anti-binary formats”
Nah… Windows programmers are anti-binary formats. Just look at all the VB.
And I’m not really sure about the anti GUI thing. I may be new to this (about a couple of years into Linux) but even the most diehard commandline folks use X with a terminal emulator.
Oh and the “The Unix Haters Handbook” is wildly out of date.
It seems the purpose of that article is to attack ESR rather than comparing differences.
ESR is the Rob Enderle of the Free Software world and deserves to be attacked. I haven’t read his new diatribe, and probably won’t, but what the author says in a roundabout way is that ESR is representative of the folks on Usenet who will tell you to “RTFM” instead of venturing to answer your question.
That is…if there is a FM these days, as many programmers have abandoned them altogether, which sort of throws the whole argument out of the water. Sometimes, the only help a package will have is the command-line output. It’s fun installing a Debian package and finding that template man page just to satisfy the Debian handbook.
I agree with his overall assessment of programming within frameworks being quite different. Look at qmail! No windows programmer in his right mind would create something that is so ignorant of the tools available on the host platform. (I’m not being critical, really, just pointing it out as a prime example….I use qmail on some of my servers)
That said, there are ways Windows programmers can pretty much be comfortable on Unix platforms nowadays. There’s Cocoa on OSX. There’s QT/KDE on others…. And you can do things pretty much the same way you would.
It’s just personal preference.
“and repeated efforts to make a pretty front end for Unix that Aunt Marge can use have failed”
MacOS X
Ah, perhaps my thoughts were poorly communicated.
“anti-binary formats”
Nah… Windows programmers are anti-binary formats. Just look at all the VB.
By anti-binary formats, I meant Unix coders/users prefer their files (configuration files come to mind) be stored in formats they can read and edit, textual formats. Not the same in other OSes.
And I’m not really sure about the anti GUI thing. I may be new to this (about a couple of years into Linux) but even the most diehard commandline folks use X with a terminal emulator.
Most hardcore Unix users don’t believe you need a GUI for every single task. They are great and helpful for basic and straightforward tasks, but ultimately, they prefer the command line interface with a powerful and expressive language like Bash. “Uh, what is a file manager?”, type of users.
Oh and the “The Unix Haters Handbook” is wildly out of date.
Yes, but it’s still a good read, it mentions various weaknesses or flaws inherent in the design of Unix, most of which I disagree with though.
“ESR is the Rob Enderle of the Free Software world and deserves to be attacked. I haven’t read his new diatribe, and probably won’t, but what the author says in a roundabout way is that ESR is representative of the folks on Usenet who will tell you to “RTFM” instead of venturing to answer your question.”
That has nothing to do with Unix. That is just human nature. Do you think all windows “gurus” would go out of their way to answer every question you might have? They are all nice wonderful people who never tell you to go to hell?
“Unix culture values code which is useful to other programmers, while Windows culture values code which is useful to non-programmers.”
That is because UNIX culture has mostly been “by professionals, for professionals” and Windows has been about selling stuff to “consumers”. UNIX values people who do it themselves and Windows values people who pay others to do it.
That has nothing to do with Unix. That is just human nature. Do you think all windows “gurus” would go out of their way to answer every question you might have? They are all nice wonderful people who never tell you to go to hell?
No, of course not. But that’s not the point, either, and you conveniently ignore what I said. There’s a difference between being unwilling to help, and showing your superiority by being rude — and that’s something I’ve seen from holier-than-thou Linux users for years now. And I don’t see it elsewhere as much. I don’t remember it in the OS/2 community. I certainly don’t see it in the Macintosh community. In the Windows community, they’re less likely to help gratis.
And there are places where there is good help for people trying to learn in the free software world. My local Unix Users’ Group http://www.twuug.org is very good for that. But then there are the people in serious need of an attitude adjustment.
But it goes back to the central theme of the author’s article. There are tangible cultural differences between the Windows world and the Unix world. To ignore the positives and negatives of each is to ignore reality.
What I gather he is saying is:
Unix: small program | small program | … -> result
Windows: large program -> result
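To make that diagram concrete, here is a tiny sketch of my own (not from the article) of what one of those small programs usually looks like: a filter that reads stdin and writes stdout, so the shell can chain it with anything else, e.g. cat notes.txt | ./upper | sort.

#include <ctype.h>
#include <stdio.h>

/* upper: copy stdin to stdout, uppercasing as it goes; it knows nothing
   about the programs on either side of it in the pipeline. */
int main(void)
{
    int c;
    while ((c = getchar()) != EOF)
        putchar(toupper(c));
    return 0;
}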
I think the difference is as simple as this: Windows is Windows-only; Unix is not. That comes from an age difference of 20-30 years, from technology advancement, and from inertia. You can’t just go back and retrofit 20-30 years’ worth of programs into a windowed format.
Technology advancement timeline (1960 …)
1. punchcards. They are very hard to do any windows with.
2. scrolling text terminals. Also hard to do windows with.
3. IBM 3270 full screen window/page at a time. Now prompts and answers are no longer a serial stream but randomly accessible.
4. Multiple movable windows not necessarily the same size as the physical screen and with scrollable contents. Just like papers on a desktop at tax time. This is the time when Windows came about.
“In the Windows community, they’re less likely to help gratis”
And that is why you simply don’t SEE all the Windows snobs. All the Windows power users who would quickly tell you to RTFM. They simply don’t show up in the first place. But they exist as much as in Linux. They exist as much as in any hobby. Audiophiles, anyone? This has nothing to do with *NIX. It’s just that, since it happens to be more open, you see the ugliness more.
I was a Windows programmer once, working on a Fax Server project, and I still know that UNIX is superior in almost every way.
DG
I was a Windows programmer once, working on a Fax Server project, and I still know that UNIX is superior in almost every way.
No, you don’t. You believe UNIX is superior.
If one believes there are only cultural differences between Windows and Linux – or even between Unix and Linux – then one hasn’t got a clue about operating systems.
/stone
I really like most of Joel’s articles, and I also liked this one. However, what is not very clear in this article is the (missing) distinction between Unix and GNU/Linux.
It is true that GNU/Linux was developed with Unix in mind, but it’s on its way to inventing its own culture. Yes, a lot of GNU/Linux tools are still Unix-like and its usage is often command-line driven. But more and more Windows-orientated folks take part, and they are driving GNU/Linux development and culture in a more consumer-friendly direction.
The diversity of the GNU/Linux tool- and GUI sets stems from the ‘cultural’ value of freedom. This isn’t part of Unix.
And I’m not really sure about the anti GUI thing. I may be new to this (about a couple of years into Linux) but even the most diehard commandline folks use X with a terminal emulator.
And for a significant proportion of unix users, that’s it – all X is good for is getting a bunch of xterms on the screen. That’s not a GUI, it’s just multiple concurrent CLIs. For “hardcore” and “traditional” unix users, any “GUI” more functional than twm is just a waste of resources – even on multi-Ghz machines with gigabytes of RAM.
clausi wrote:
> The diversity of the GNU/Linux tool- and GUI sets stems from the ‘cultural’ value of freedom. This isn’t part of Unix.
No. Freedom _is_ a part of the UNIX culture, and part of the GNU/Linux culture too.
Also, see http://www.opengroup.org/pubs/catalog/n900.htm , even if it is a little bit of joke
Linux is a free and modern implementation of Unix, just like the *BSDs. Although it is not entirely correct to say it is UNIX, the words Linux, Unix and Unix-like are often used interchangeably.
Oh, and UNIX had always been about freedom before it was hijacked by commercial interests. Umm…hmmm, you know who they are.
Mac OS X shows that this guy just has an axe to grind against Unix and Linux. It is a horrible article that does not say anything. KDE is also very easy to use.
Hey, if Windows is so damn easy, why are there a billion books like “Windows for Dummies” and a billion books about how to use every Windows app? Why? Because they are hard to use and the software fails a lot.
Unix is about software that works, Windows is about software that sounds great and is easy to sell.
That is the biggest difference.
I have read and own the UNIX-Haters Handbook, and it is totally irrelevant to the state of UNIX and GNU/Linux today.
Unix application developers care as much about the end user as Windows developers do. The big difference was that all “end users” until recently were using Windows.
What linux is bringing to unix is a new user population which wants mp3 players and jukebox, spreadsheets, word processor, graphics programs, themable frontends, accelerated 3d for futile reasons (as opposed to CAD). And guess what, you now get a large number of applications that cater for this new user population, with applications that are trying to be easier to understand and to use.
But the unix concern about integration meant that most applications use clear protocols to talk to their host desktop environments.
Also, the taste for client-server architecture among a lot of Unix developers (take giFT, most CD burning apps, MP3 rippers) shows that Linux developers want to be able to improve the user interface without breaking the application’s logic.
So much about not caring about end users.
C’mon, all you (we) *nix lovers; Joel is not being anti-Unix here. Why do so many of us have to feel threatened by this stuff?
I say this as a complete FreeBSD-lover: Joel has some good points, and not just anti-Unix points. His points are more on the order of “to a man with a hammer, everything looks like a nail.”
I would say one large point Joel misses is that most of us modern *nix lovers have actually cut our teeth on Windows, only to move to Linux or FreeBSD out of *intense* frustration. He fails to answer the question of why someone who was not “raised on the command line” would gravitate toward it. The answer, I think, is one of personality. If you were the sort of kid who preferred playing with Lego and the Meccano set instead of the G.I. Joe Attack Helicopter, then you are a candidate for *nix. In other words, with Lego and Meccano, you made your own toys, or could at least take apart the toys made by others, while with the G.I. Joe attack copter, you could buy extra pieces, but you couldn’t really make your own, unless you carved extra pieces out of wood or plastic. And yes, I know this is not a complete analogy, and of course you can make many toys in Windows, but the cultural differences are there. For example, I make tons of shell scripts in Unix to do little repetitive tasks. I could do the same with Windows batch files or WSH components, but I have hardly ever done so, even during the times when I was mostly a Windows user; the culture just doesn’t push you in that direction. Generally, you just accept making a few extra repetitive clicks to get the job done.
Yes, there is a certain pragmatism to the Windows approach; no denying that. The real thing is, what can we as programmers/tech_gurus learn from that? There is also a definite plus to the textual-oriented input-output nature of Unix. When I want a number of things to just “happen” in the background, it is often as easy (and as robust) to script a few shell tools together as to write a complete Perl/Python/whatever application, including 10 different modules for system interaction, etc…
The question of useability is another interesting one. IMHO Windows is more useable (as long as you stay on a certain level) for the semi-techie end-user. Mac is more useable for the non-techie. But once you graduate into Power User, or general tech kind of person, *nix is more useable. That’s right…MORE useable. For example, in the whole *nix world, there is no simple standard way of autodetecting hardware and popping up a simple reassuring message that the new webcam has been found and you should now insert the floppy with the drivers. But, by the same token, if Windows somehow fails to properly recognize the webcam, or the drivers don’t install for some reason, the Windows user generally either calls tech support or gives up. In *nix, it might take a smidgen of reading to get your webcam recognized, but the process is not opaque; you can dig into your system at any level to understand why something isn’t working.
To me the more interesting question (which Joel leaves unanswered) is “where do we go from here?”. As an application developer, I treasure the idea of a development platform that allows me to write software on my favorite (FreeBSD), but deploy it anywhere. Mozilla/XUL, for example, allows me that luxury, as well as scripting languages like PHP. To that extent, I personally don’t care whether *nix “wins” the desktop, but whether the operating system itself can be made irrelevant to the tasks I want to accomplish.
There are situations where I would definitely spec a FreeBSD/Linux desktop, such as a unified business office network, where one competent administrator can take the place of 4 harried Windows support people. But, I would still hesitate to tell a not-tech friend to just start using Linux, because that person will suddenly have to face the fact that all those neat little programs he/she likes to use, such as the Hallmark gift card creator, etc… are suddenly out of the question. (Yes, all those things can be dealt with, and everything has its alternative, but in the end, I know that it will involve much hand-holding, until my friend is ready for a brave new desktop)
Also, however, there _is_ one interesting Linux challenge to the desktop, in the form of Lindows and the Koobox. At least Michael Robertson understands the useability problem and has attempted to solve it with a subscription service that installs all that crazy open source software FOR you, charging you not for the software but the service. Smart, I have to say.
But really, I think we *nix users have to realize that the answer to the Windows/Mac/*nix question is quite complex. Exactly what IS the desktop these days? Is the desktop going to become more, or less, relevant (I say less) in the next few years? On the desktop side, is there a chance for cross-platform development to actually produce something decent? I say “yes”, if we look at OpenOffice, Mozilla/XUL, PHP, Python, QT, etc…
This is the part where Joel misses the boat the most. In his mythical comparison of a Unix-vs-Windows developer, he says “The Unix programmer will create a command-line or text-driven core and occasionally, as an afterthought, build a GUI which drives that core”. It really depends on the developer. In the end, I think there are GUI programmers and non-GUI programmers. Each has their place, and in the future, they will have to worry less and less about exactly what platform they are deploying on. That is because the non-GUI programmers are working hard behind the scenes at making those things work cross-platform. Hmm…
…it wasn’t Apple that rearranged the directory structure before overlaying its gui… this was all done by NeXT in NeXTSTEP.
Apple just sucked it all up and repackaged it (with some changes of course).
And I am glad to see someone mentioned BeOS way up at the top. We have to mention BeOS in every post.
Mike
It’s rather rare to find such bigotry among Windows programmers, who are, on the whole, solution-oriented and non-ideological.
You got me, Joel. I’ve been laughing at this so much. Do you really believe what you wrote here?
Let me say that, IMO, you have it wrong, for a simple reason: you mixed up Unix and GNU/Linux. As said before, Linux and *BSD are both modern implementations of Unix. There are unfriendly GUIs (who is still using twm?) and nice, usable interfaces (IMO WindowMaker and Mac OS X). And the Unix (cultural?) development model made it all possible.
Two final notes:
– Remember that 10 years ago, no one was speaking about the end-user interface in Linux. Everything was missing. Think how many years were needed to move Microsoft from DOS 1.0 to Windows XP….
– Usability über alles in Windows? You must be kidding again.
When I hear people talk about Unix (Linux in particular), they talk about free software, security, multiple file systems, bash, etc.
The typical Windows user will say, “Yeah, but can it run app xyz” ? Which leads me to believe that Unix users care more about the power of the OS while Windows users tend to be more ‘application-centric.’
The way I see it, I don’t really care what OS I’m running just so long as it stays out of my way so I can run the apps I want. Windows (2k/XP), once properly ‘hobbled’, does that for me. Linux does too, but the latter is still missing about a half-dozen apps that I run almost daily, so its power is of little use to me.
Actually there are tons of books on Windows, such as the Dummies books and books on just about every application, because most people use Windows (on the desktop) and there are tons more applications on Windows than on Unix/Linux.
Also you have to remember… these books are mainly targeted at people who can hardly turn on the computer. So don’t say this crap about Windows being hard to use, because you know it’s not… and if you think so, I really question how much of a professional you are.
However, Joel needs to realise that there is a right way and a wrong way of constructing a program, and I am sorry, but opening up the VS IDE without thinking about the whole picture is a stupid way of writing a program.
The UNIX way, creating a small core and then building the interface on top of it, is the correct way of layering functionality, whereas Microsoft and Windows programmers have this addiction that they *MUST* ram every piece of functionality into one big uber-executable.
For example, we have Gecko, which was developed this way: a small core with an interface on top. What is the net result? The core can now be re-used in numerous other products. That is how software should be written.
By taking this approach you don’t limit yourself later on. For example, Microsoft now chants about the merits of CLI tools, but what do they need to do? They have to re-write all the tools again for the CLI, when, had they started with a CLI version and built up from there, they would have an evenly balanced CLI/GUI operating system.
I use a US government Informix-based NASIS database that has to allow thousands (and growing) of unique data fields open at any one time per data set, allow thousands of data sets to be accessed on demand, and allow hundreds of users (and growing) to read and write data and execute scripts on that data as needed, all at the same time. The OS has to support objects, addition of future modules, data read/write and script running, group and user permissions, individual script writing, script data cross-checking and report writing, fuzzy math, GIS, GPS. This OS has to remain up and running 99 percent of the time for the world’s soil scientists and only be shut down for upgrades.
The USDA had to use Nix to operate the NASIS database itself. To my knowledge Windows could not deliver all that we need, and it was dropped from consideration in its development early on. We use Windows to access the system on a remote basis. This tells me there IS a difference between Nix and Windows. This tells me Nix IS needed to operate our database, and any GUI-friendly OS is fine for us to access the NASIS system from an office. In most offices NASIS itself is not our frustration; it’s keeping Windows up and running/connected to access the NASIS database.
I’m a user and I care a lot more about useful software than technical expertise. Why would I use an OS that offers technical superiority but few applications that I want to use?
Spolsky is right, as most of the comments here illustrate.
— and if you think so I really question how much of an professional you are. —
Hmm. “easy” is a subjective thing, is it not?
Does it make me a less competent IP network and UNIX guy that I have used no Windows whatsoever in the past 1.5 years?
That is quite a shock to me.
Why does this type of debate always end up in insulting each other?
In the beginning there were quite a few good points.
Since I arranged all the buttons and their functionality in my WM, Windows is a hard to use environment to me.
Probably quite a difficult desktop for anyone not used to it.
As to what drives Windows world and UNIX world: Money. All big corporations care ONLY about getting enough money to keep their investors happy.
This statement excludes all GNU and BSD people of course
Somehow I always get more attention for my problems from open source projects than I do from commercial support.
For example, we have Gecko, which was developed this way: a small core with an interface on top. What is the net result? The core can now be re-used in numerous other products. That is how software should be written.
You mean like the IE core, which you can use in your own apps? There are far more IE-based browsers than Gecko browsers out there, so what exactly is your point? Not saying that IE is better than Gecko, but it’s not like there aren’t about a million different kinds of components you can drop into any Windows application and use immediately.
Huge uber-applications are certainly not perfect, but I believe that is a better option than building a GUI ‘frontend’ over a lot of CLI tools; the end result feeling like a sloppy hack job.
IMHO, the CLI is great for doing system administration tasks and things where very small utils are involved. Past that point though, if you’re going to go GUI, then don’t do it half-assed.
I’m a user and I care a lot more about useful software than technical expertise. Why would I use an OS that offers technical superiority but few applications that I want to use?
Applications and operating system technical superiority are worlds apart. Why bring up something that has NOTHING TO DO WITH THIS ARTICLE? If the operating system is poorly designed YOU the END USER suffer as a result, either in poor stability, poor security or simply your applications will not run nicely because the programmers from your favourite software supplier spend more time working around problems rather than trying to stabilise and perfect their product.
So yes, technical superiority of an operating system IS important. Instead of ranting like a loonie, how about getting out there and talking to programmers about how a crappy operating system can result in a crappy product for the END USER. Yes, that’s right, the END USER does actually get affected by a crappy OS, either directly or indirectly.
“However, Joel needs to realise that there is a right way and a wrong way of constructing a program, and I am sorry, but opening up the VS IDE without thinking about the whole picture is a stupid way of writing a program.”
He does realise this. Take a look at the other articles on his site which relate to software design. He’d be the last one to advocate you just opening up VS and beginning to code.
“The UNIX way, creating a small core and then building the interface on top of it, is the correct way of layering functionality, whereas Microsoft and Windows programmers have this addiction that they *MUST* ram every piece of functionality into one big uber-executable.”
That happens in the Unix world as well (e.g. some CLI based programs are like this and probably should be broken down into separate executables which have more specific functionality). Just because you are in one camp or another doesn’t automatically mean that you will be able to properly design a program.
“For example, we have Gecko, which was developed this way: a small core with an interface on top. What is the net result? The core can now be re-used in numerous other products. That is how software should be written.”
IE is based on a separable core engine as well, which a lot of programs embed these days. Same with the Microsoft Jet DB engine.
It is interesting to me that Windows people do not realize that Windows is a nightmare for a lot of people. I use Linux and KDE almost exclusively (I only use KDE/Linux at work and I have a Mac & Linux box at home). Fixing problems in Windows is hard because Microsoft assumes that the user is an idiot and should not be allowed to know what is going on on his system. Windows hides everything, and when something goes wrong, whammy!! The user has no clue.
I have set up a Linux computer for my 70-year-old mother-in-law and she has no problems using it. It looks a lot like Windows 98/XP and it allows her to connect to the internet and do e-mail. Hey, it works and she has no problems. This is what owning a PC is about. I think that KDE provides a very easy interface to learn and work with. All of the KDE programs do what I want and provide the functionality I need. I think the writer of this article is ignoring all the work in Linux in writing this article. If you only look at UNIX and only UNIX, then his argument may have some legitimacy. I have not seen AIX, Solaris or UnixWare lately, but they were bad before.
//Since I arranged all the buttons and their functionality in my WM, Windows is a hard to use environment to me.
Probably quite a difficult desktop for anyone not used to it.
//
Yah, it might be difficult for maybe <5% of the world’s PC users.
A miniscule and unimportant minority may find it difficult.
It seems that all the people bringing up the Unix Haters Handbook seem to totally misunderstand it. Please, read it again, especially the foreword(s) and the history.
I believe we all use what is most familiar to us. This is why you can use MKS or Cygwin on Windows boxen (I’d be lost without them!).
It is the small, modular, tools based approach that UNIX culture takes to heart. I agree with Joel in that we value the developer audience the most. Since I’m a developer, I prefer UNIX (where regular expression support in many tools is common)… but when I’m just playing or doing “consumer” stuff… I love my Macintosh (I fell in love with Lisa back in 1983.. but couldn’t afford her).
There are definitely cultural differences between UNIX and Windows programmers. After all, you’re talking about a bunch that put up with Hungarian notation for more than a decade. Only god can fathom the mind of people like that.
A Windows programmer is stuck at work until late at night until he can barely see or think straight, trying to remember whether the variable is called “lpszString” or “lspzString.” Meanwhile, the *NIX programmer, able to leverage the sheer power of her svelte and simple API, manages to clock out at 3:30 in the afternoon, having plenty of time left in the day to go clubbing in the city. The next morning, the Windows programmer drags himself in at 8:30 in the morning only to find that he can’t understand any of the code he wrote the day before, and has to scrap it and start over. Meanwhile, the *NIX programmer saunters in around noon, gets herself a coffee, chats up the hunky secretary, and sits down to see what new coding adventures await her today.
So it’s obvious that *NIX programmers are so much better. *NIX programming is not only fun and easy, but whitens your teeth, improves your love-life, and gets rid of troublesome cellulite! Who wouldn’t want to be a *NIX programmer?
“The UNIX way, creating a small core and then building the interface on top of it, is the correct way of layering functionality, whereas Microsoft and Windows programmers have this addiction that they *MUST* ram every piece of functionality into one big uber-executable.”
That tends to be true anytime you start getting into GUI applications. And no, we Windows programmers do not ram everything into a single EXE. That’s why we have DLLs.
Of course, with careful C programming and using the API directly rather than going through a toolkit, it’s possible to ram everything into one executable. But it will be very small, NOT big. It might even fit on a floppy. And it will even run from that floppy because it doesn’t have any dependencies.
One really nice thing about this article… The author actually stated in writing what most of us already know: Eric Raymond doesn’t know what he is talking about when he spouts off about Windows. But cultural superiority ideas based on ignorance seem to be common in the GPL and FSF camps.
I really think that his holier-than-thou approach to using a specific OS needs some toning down. The majority of computer users just want to get a job done with the least amount of hassle. Like most people, they consider whatever platform they are familiar with as being the best. However, for a professional to do this indicates not only a high level of ignorance but also a high level of arrogance. Yes, each OS has its strengths and weaknesses, and I do have a preference for the one I like best. That does not make me or the OS superior to all others.
I do not know any person who is a good ‘Generalist’, meaning that they are reasonably knowledgeable in all fields of human endeavor. The fact that a user is ignorant (note: different from being stupid) of the inner workings of an OS does not automatically place them in the ‘not-so-bright’ category.
Can any of you super experts who claim superiority over users who are not interested in the details of an OS take apart your car’s transmission and replace a broken band? Or can you buy the really cheap chemicals for proper lawn maintenance or do you rely upon a company to prepare what you need (and pay them well for making that correct mix)?
Yes, a car driver is a better driver if they know something about how the car really functions. It can also dramatically reduce their dependency upon ‘experts’ to fix even the minor problems and save tons of money. The fact is that a large number of drivers just want to get from point A to point B safely and do not have the desire/time/money to spend understanding how cars function.
We all have a finite amount of time to spend doing and learning things. Each individual must decide how to best spend this time. Many people opt to devote most of their time to their respective jobs/professions. Just because that does not include knowing the inner workings of an OS or programming does not make them ‘lesser’ people.
I enjoy reading comments where a possible problem or flaw is noted but no one is ‘put down’ because of it. I also like to hear other’s opinions on OSs that I do not routinely use. Let’s hope that the discussions can reflect tolerance for differences and some human understanding.
It is interesting to me that Windows people do not realize that Windows is a nightmare for a lot of people. I use Linux and KDE almost exclusively (I only use KDE/Linux at work and I have a Mac & Linux box at home). Fixing problems in Windows is hard because Microsoft assumes that the user is an idiot and should not be allowed to know what is going on on his system. Windows hides everything, and when something goes wrong, whammy!! The user has no clue.
Windows can be a nightmare for people that don’t use it very often. It can also be a nightmare for people that aren’t familiar with fixing its problems. These people happen to overlap, in that people that don’t use it very often tend also not to be familiar with fixing its problems. It goes both ways; an experienced Windows ‘power user’ or administrator that doesn’t spend much time with Mac OS or *nix would find it hard to fix things on those systems for similar reasons. Windows doesn’t hide anything any more than any other operating system.
I have set up a Linux computer for my 70-year-old mother-in-law and she has no problems using it. It looks a lot like Windows 98/XP and it allows her to connect to the internet and do e-mail.
98 and XP look so different that I have problems diagnosing problems on pre-XP systems after 2 years of using XP at home, and occasionally have problems finding particular interfaces in XP because I have so few problems with it and tend to only remember the interface I’m looking for from 2k or 98.
Hey, it works and she has no problems. This is what owning a PC is about. I think that KDE provides a very easy interface to learn and work with. All of the KDE programs do what I want and provide the functionality I need.
It does, for some people, and part of that is taking cues from other GUI systems. In the most basic sense, KDE doesn’t work much differently from older GUIs for UNIX (or Linux), but what KDE does well is provide an environment to build on, giving you a series of GUI apps and a toolkit to work with (or for others to work with), essentially doing what Windows and Mac OS had already been doing for years. As for getting common users to use a particular interface, I find that anyone that isn’t afraid of computers can pick up any decent interface (even CLI) if someone is willing to help them. My grandparents (when they were alive, especially my grandfather who was older and had more problems with computers than my grandmother) tended to keep notes near the computer that detailed, step-by-step, how to do particular tasks, and they always knew that if something was wrong they could call my father or myself for help, and that because we helped them choose the computer we could do that, over the phone. Overall, when they did call, it usually meant they needed to write out another set of notes to perform a task they hadn’t before, or they needed simple advice (‘place your hand, flat with palm facing the side of the computer, about the same height as the CD drive, now smack it’). My grandmother was an amazing typist, and rarely needed help with the computer, regardless of which OS it was using (as we went through 5 or 6 different computers over the years, running DOS, GeOS, Win 3.x, and Win9x). I have no doubt that if we wanted to support a Linux system, we could put it on a computer for her and she could have used it.
I think the writer of this article is ignoring all the work in Linux in writing this article. If you only look at UNIX and only UNIX, then his argument may have some legitimacy. I have not seen AIX, Solaris or UnixWare lately, but they were bad before.
His argument is equally legitimate with Linux, as it still has the same culture. Many forms of Unix have been adopting the various graphical environments common in Linux, including Gnome for Solaris in the near future iirc. The way *nix users handle GUI apps is certainly drifting around the center of the way applications were built before GUIs became popular, but there’s still far more focus on developing for developers than developing for users, even from people that claim to be developing for users.
“After all, you’re talking about a bunch that put up with Hungarian notation for more than a decade. Only god can fathom the mind of people like that.”
We still use Hungarian notation, and the Windows API functions are still documented using Hungarian notation. And once you get used to it, it is great. (It requires a little memorization, but after using it for a while, it becomes automatic. So no, you do not end up sitting up late at night trying to remember what the variable is called.)
The NIX programmer, on the other hand, sits up late at night trying to figure out whether “mystuff” is a char, a wide char, an int, a long, or a byte, because he doesn’t have the variable declaration handy (it’s declared in one of 28 different header files), and the variable itself gives no clue what type it is.
Hungarian notation greatly reduces type mismatch errors because it is immediately obvious when looking at any variable, what kind of variable it is, even if you don’t have the variable declaration handy.
It has been modified somewhat and simplified in recent years, but it is still there.
For example, we don’t use szCmdLine anymore to indicate a null-terminated string that holds the command line arguments. Now we use lpCmdLine, which indicates a pointer to the command line string.
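For readers who haven’t seen it, a small sketch of the convention being described (the variable names here are invented, but the prefixes and Win32 types are the standard ones):

#include <windows.h>

/* Classic Win32 Hungarian notation: the prefix encodes the type. */
DWORD  dwFileSize;    /* dw   = 32-bit unsigned value (DWORD)            */
LPSTR  lpszFileName;  /* lpsz = long pointer to a zero-terminated string */
HANDLE hLogFile;      /* h    = handle                                   */
BOOL   fReadOnly;     /* f    = boolean flag                             */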
Even Microsoft realized that Hungarian notation was a stupid idea. The whole “type-mismatch” thing isn’t a big deal because the compiler will catch type errors. Anyway, the type of an API variable in UNIX is implicit from the context. If we’re talking about a string, it’s always of type char*, because we use UTF-8. If it’s a single character, it’s always an integer. Longs aren’t used unless you know you want a 64-bit quantity on 64-bit machines. There is no such thing as a byte. And usually, we don’t need to know the type at all, because we don’t have to do stupid things like “foo.dwSize = sizeof(DDBLTBATCH);” Face it, you’re alone!
I like a quote about HN I saw on the ‘Net. “It helps prevent any accidental abstraction from creeping in!”
I don’t consider having a computer crash due to a full c: drive from log files, vm, and user files as cultural differences. Neither do I consider having to reboot my OS because I install new applications/patches as cultural differences.
I know you were half being sarcastic..
But come on… “lpszString” or “lspzString”? Even if I have been writing code for 16 hours and can barely see straight, that one is obvious.
lpszString = Long Pointer to a Zero-terminated String.
lspzString = Long String Pointer Zero terminated.
Zero-terminated pointers and long strings don’t make any sense, even if I am working on 2 hours of sleep in the last 48 hours.
They might make sense to that NIX programmer however. The one that got out of work at 3:30 the previous day and went clubbing all night. :p
Oh, as for 28 header files? It’s called modularity! Modularity means something that is *not* like Windows’s giant “windows.h” POS. Windows is the only system I’ve seen that does something that stupid in its API. Oh, and if you’re having trouble finding something in the UNIX API, you call “man <symbol>”. Way easier than digging around in a single giant header file. If it’s not a standard UNIX API, just do “fgrep -R <symbol> /usr/include/*” to see where the symbol is declared. This works because, unlike in Windows, 3rd-party libraries have the sense to install their headers in a common directory.
“Zero-terminated pointers and long-strings don’t make any sense”
—————
Neither do long-pointers. What the hell is a long-pointer?
“The whole “type-mismatch” thing isn’t a big deal because the compiler will catch type errors.”
It will only catch errors if there is a possibility of loss of precision and you didn’t perform an explicit cast. Take a look at this code:
#include <stdio.h>
int main()
{
    int i = 'j';
    printf("%c", i);
    return 0;
}
I guarantee that this code will compile and run correctly on any ANSI standard C compiler. So much for the compiler catching logic errors that result from assigning the wrong type of data to a variable.
“There is no such thing as a byte.”
There isn’t? What C or C++ compiler are you using?
ANSI standard C and ISO standard C++ both define a byte as an 8 bit signed integer.
“Even Microsoft realized that Hungarian notation was a stupid idea. The whole “type-mismatch” thing isn’t a big deal because the compiler will catch type errors.”
This wasn’t the case 20 years ago, when Hungarian notation was invented. C compilers were very permissive about type checking.
“There is no such thing as a byte”
Yes there is. It is the unit in which the sizeof operator measures sizes: http://www.parashift.com/c++-faq-lite/intrinsic-types.html#faq-26.1… In C++, a byte is the size of a char.
“If we’re talking about a string, it’s always of type char*, because we use UTF-8. If it’s a single character, it’s always an integer”
Well, in C maybe, but in other languages it is not. In C++, a string is of type… string. And there are Unicode characters.
“Neither do long-pointers. What the hell is a long-pointer?”
They are a relic from the days of 16-bit Windows, when Windows used a segmented memory model instead of a flat memory model. In those days, you had near pointers and far pointers, also sometimes called short pointers and long pointers.
They were a major headache to deal with, and 32-bit versions of Windows eliminated them by switching to a flat memory model.
But you will still see them show up in documentation sometimes.
You won’t be using any Windows API anymore in the future because Microsoft doesn’t want you to build systems, instead you will build solutions that are integrated into a product line.
The only developers building systems are the programmers at Microsoft; that point was made in the article, and it focused on source code. However, maybe this fact is diluted: even though a Windows programmer has no control over the operating system layer, and therefore his investment (in code) is volatile since he is at the complete mercy of Microsoft (one entity with absolute control of their product), I didn’t see that point being established; instead I saw some argument about the source code’s accessibility and openness making a difference. This is not a strong argument in my opinion, because open source development does not yet gain a strong advantage from the open code; it is led more by commercial interest.
ESR made a valuable contribution to the open source community, and I thank him for that because I would like to read the book. It saves a person a lot of time to be able to leverage the perspective of a Unix programmer when trying to understand the motivation for Unix. I’m glad that ESR has made this information available to everyone and I expect to gain some key insights.
I agree with those who pointed out that GNU/Linux is not Unix. There are other cultures growing in the free software world and just because it’s hard to avoid the “classical Unix culture” when toying with Linux, doesn’t mean that there are no other cultures.
This is the typical mistake “analysts” make when talking about Linux or free software in general: They assume that things don’t change.
“Oh, as for 28 header files? It’s called modularity! Modularity means something that is *not* like Windows’s giant “windows.h” POS. Windows is the only system I’ve seen that does something that stupid in its API.”
I didn’t say there was anything wrong with including 28 header files. Only that without the variable name indicating its type, it can be hard to find out what type it is if that variable is declared in one of those 28 header files, and you don’t know which one.
And as far as windows.h goes, you do realize that windows.h is NOT an autonomous file, right? It references many other header files. It’s a convenience for programmers since it means I don’t have to remember the header file that defines every single API function. But most of those functions are not actually prototyped in windows.h. They are prototyped in a different header file that windows.h references.
“You won’t be using any Windows API anymore in the future because Microsoft doesn’t want you to build systems, instead you will build solutions that are integrated into a product line.”
Meaning what? That I can’t build “solutions” using the Windows API? Sure I can. I can build anything with the Windows API that I could build using a toolkit like MFC. It might require more work using the API directly, but I can do it.
“Windows programmer has no control over the operating system layer, and therefore his investment (in code) is volatile since he is at the complete mercy of Microsoft (one entity with absolute control of their product), I didn’t see that point being established, instead I saw some argument about the source code’s accessibility and openness making a difference.”
The same is true with Linux: if they change the API, your code breaks. Sure, since you have the source code for the API, you can change the API itself so that your code works again. But wouldn’t you be better off changing your code to work with the new API? Either way, you have to re-program something.
And as far as the Windows API and my investment in code being at the mercy of Microsoft: the Windows API has remained incredibly stable throughout the years. Sure, tons of new functions have been added, but most of the original functions are still there too.
1) I meant that there is no type byte. You were referring to bytes in a list containing other types: char, long, int, etc. byte isn’t a type in C, but BYTE is a Win32-ism.
2) When you circumvent the type system, you’re going to get type errors. Unfortunately, printf() circumvents the type system, but a *NIX compiler would have caught that, because they usually do type-checking for printf(). In the end, it’s best to make sure your C code typechecks properly in a C++ compiler (see the sketch after this comment). That’ll get you compiler-enforced type safety without any silly naming conventions.
3) Windows.h is still a monstrously huge header file, and it drags in all its sub-headers at compile-time. The fact that nobody else does something like that should clue you in that it’s a bad idea.
@David: In UNIX, in any programming language, a string is the moral equivalent of a char*. Even when you’ve got Unicode characters, it’s still a char*, because the Unicode format in *NIX is UTF-8, which uses 8-bit code units.
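As promised above, a minimal sketch (my own example, not from the thread) of the kind of code a C compiler accepts silently but a C++ compiler rejects, which is why compiling your C code as C++ tightens the type checking:

#include <stdlib.h>

int main(void)
{
    /* Legal C: malloc() returns void* and C converts it implicitly.
       A C++ compiler rejects the implicit void* -> int* conversion. */
    int *p = malloc(10 * sizeof *p);
    free(p);
    return 0;
}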
“1) I meant that there is no type byte. You were referring to bytes in a list containing other types: char, long, int, etc. byte isn’t a type in C, but BYTE is a Win32-ism.”
What do you mean there isn’t a type byte? There most certainly is. And it has been available on every C compiler since K&R C.
“2) When you circumvent the type system, you’re going to get type errors. Unfortunately, printf() circumvents the type system, but a *NIX compiler would have caught that because they usually do type-checking for printf().”
It’s interesting you should say that, because the compiler I actually tested that with was GCC. No error, no warning, nothing. Compiled just fine. And it will on any other compiler as well.
Basically, there is nothing illegal about assigning a char to an int, and there are cases where there is very good reason to do so.
“3) Windows.h is still a monstrously huge header file, and it drags in all its sub-headers at compile-time. The fact that nobody else does something like that should clue you in that its a bad idea.”
The only real drawback is that it increases the possibility that I am going to have a name clash with one of my own functions, because I am importing a lot of Windows function prototypes when I include windows.h.
However, in C you usually define functions starting with a lowercase letter, and almost all of the Windows API functions are defined starting with an uppercase letter. So that eliminates most of the possibility for a name clash, as long as I stick to the convention in C of defining my functions starting with lowercase letters.
“What do you mean there isn’t a type byte? There most certainly is. And it has been available on every C compiler since K&R C.”
Oops. Nevermind. I take this one back.
I was mixing up C and Java.
Java includes a type byte to make up for the fact that type char is 16 bits instead of 8 bits.
You’re right, there’s no type byte in C.
Applications and operating system technical superiority are worlds apart. Why bring up something that has NOTHING TO DO WITH THIS ARTICLE?
This has to do with Windows vs. Unix on a cultural level, so I talked about it from that standpoint. As your comments illustrate, Unix users don’t emphasize applications nearly as much as the actual OS.
If the operating system is poorly designed YOU the END USER suffer as a result, either in poor stability, poor security or simply your applications will not run nicely because the programmers from your favourite software supplier spend more time working around problems rather than trying to stabilise and perfect their product.
So yes, technical superiority of an operating system IS important.
Actually, the technical superiority of an OS is only of secondary importance. Let me explain:
Just for the sake of the discussion, let me use a simple & dumb analogy to illustrate my point. Let’s say there’s an app that lets me measure the distance between my ass cheeks that I use on a daily basis. This is my ‘killer app’ and basically defines the reason why I use computers. Now, the OS I use to run this app is horribly unstable (crashes once an hour, causing me to save my work every 10 minutes) and I have to work for half an hour each day to plug the security holes.
So, my friend tells me about this other OS that has none of the problems that my current OS has. But, upon doing some research, I find that there are no apps for this OS that’ll let me do the ass cheek measuring. So, even if the technically superior OS is 10x more stable, 10x more secure, and 10x faster, since it doesn’t support the app I require (or has a version of the app with only half the features I need), the benefits of this OS are completely irrelevant.
This is something that a lot of Linux zealots simply do not understand – they think “Well, if only Gnome or KDE were a little more refined or if the package managers were just a little better, Linux will be the ultimate Windows killer!” They fail to understand that most of us on the Win32 platform care about none of that, for the reason which I described above. If Linux had better apps for making music than Windows did, I’d be happy to compile every single one of my apps from source and use TWM as a window manager if I had to.
For this reason, I say it is impossible to have a complete discussion about the merits of an OS without talking about the apps it runs. Now, the technically superior OS might have apps that are much better than the less technically superior OS in certain categories, so the degree to which this is a benefit will differ from person to person. If you notice that two OSes have the same level of functionality in their apps in your favorite categories, only then would I look at the technical benefits of one OS over the other. However, I will always choose the OS with the better genre of apps that I use, no matter what the technical disadvantages are.
“3) Windows.h is still a monstrously huge header file, and it drags in all its sub-headers at compile-time. The fact that nobody else does something like that should clue you in that it’s a bad idea.”
Actually, though, I think a lot of other people do that. For example, if I include GTK.H, I’ve imported a truly massive number of function prototypes. (The same is true with WX.H.)
As a general rule, when circumventing the type system, you are only going to get errors when there is the possibility of loss of precision, and you don’t perform an explicit cast.
None of the following will produce errors:
int i = 4000;
double d;
d = i;
int i = 'd';
char a = 10;
In all these cases, I do not risk losing precision when doing the assignment, and the compiler automatically promotes the value I am assigning to the type of the variable that I am assigning it to. This won’t cause compiler errors, but it can cause logic errors, because you can assign the wrong kind of value to a variable (logically) and the compiler will not complain.
This however, will produce an error:
double d = 5.5;
int i;
i = d;
I can’t do that because I will lose the decimal portion of “d”.
So I have to cast it:
i = (int) d;
But the only time the compiler will complain is if I attempt an assignment that will result in loss of precision. If there won’t be any loss of precision, the compiler will silently promote the value to the type of the variable it is being assigned to.
You might be able to use -Wall and change this behavior so that the compiler will warn if it performs an automatic promotion, but it still won’t generate an error, as this is not an error. It will just print a warning. But by default, compilers, including GCC, do not even warn about this type of thing.
byte is defined in the C standard, 3.6#1.
5.2.1 has all the definitions that are relevant to bytes and character sets.
6.2.6.1#3 and 6.2.6.1#2, along with note 40, define more precisely the relationship between bytes and C types (essentially, an unsigned char is a byte).
Bytes aren’t guaranteed to be 8-bit; they are guaranteed to be at least 8-bit. There is at least one embedded RISC CPU whose compiler defines bytes as being 32-bit (and all other basic types too, BTW).
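A tiny sketch, for the curious, that shows what “byte” means on the machine at hand; sizeof(char) is 1 by definition, and <limits.h> reports how many bits that byte actually has:

#include <limits.h>
#include <stdio.h>

int main(void)
{
    /* CHAR_BIT is guaranteed to be at least 8; it can be larger on some DSP/embedded targets. */
    printf("sizeof(char) = %zu, CHAR_BIT = %d\n", sizeof(char), CHAR_BIT);
    return 0;
}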
A few comments have been made saying the user doesn’t care what operating system is being used as long as work can be accomplished. Well, everyone wants to get their work done without hassle. Programmers, business people, scientist, artist, and all the other people that use a computer would like to get their work done.
It takes stable and efficient applications to get the work done. But those stable and efficient applications won’t do you much good if the operating system is unstable and not efficient. So, in some respects, computer users should care about the operating system their computer uses.
Some people think of Unix/Linux/BSD as having to always use some command line text editor to change some configuration file just to use an application to get work done. It is not true. That is like saying you have to first go into the Options menu of a Windows based application to make changes before you can use it. I have found that Notepad works just fine without changing options and the same goes for Unix/Linux/BSD applications. Windows and Unix are the same in that you only have to configure the operating system once, after that you just get your work done. It is just as easy to do word processing on Windows as it is on Unix.
The biggest problem Windows has is it is self centered. Unix seems more friendly to other types of environments.
Unfortunately, computers are not mind readers. Anyone using a computer will have to learn how a computer does things. A computer needs to be told what to do, and whether you tell it through a GUI or through a CLI doesn’t make much difference. But the computer user will have to know the commands the computer uses in order to tell the computer what to do.
If you want to copy files, you have to tell the computer what files, where those files are, where you would like to copy them to, and you have to use whatever command the computer uses for copying. This doesn’t happen by magic. Some new computer users do not seem to know this.
Win32 has been stable, but I don’t know if it is going to be accessible anymore, because MS wants you to program solutions through a new software layer (I’m sure you know which one). You will not be able to implement systems, but rather solutions that target a flexible product line.
If Win32 is stable, which I agree it is, how is X not the same? With Linux I will always have control over the operating system layer, from which I can implement a system. There is no vendor that can close off the system interface and force me to be a solutions developer or a myriad of other things.
The solution on Linux is no more stable than on MS Windows, but that’s fine because this is the commercial domain and a vendor has to profit by locking in the customer just the same as they have to do on Linux.
Open source should focus much more on the source code. I admit that it is not a leader yet, because it needs to discover its competitive advantage, through which it can lead in a way that a vendor cannot compete with. The advantage of open source code has hardly been realized. I know right now, though, that I do not want to be controlled by one entity; I want a decentralized platform which encourages competition through specialization and rewards the innovators rather than consuming their ideas. As a programmer I want to be rewarded for my ideas and I want to have the control to realize them.
…is just about the stupidest thing I’ve ever heard. Why not “everything is a data-stream”? Much better, don’t you think?
1) @Anonymous, @JBQ: Byte is most definitely not in C. I can’t do “byte i = 0;” unless I “typedef char byte;”. It might be defined as a concept, but there is no type ‘byte’ like there is ‘int’. Or did I misunderstand the statement?
2) @Anonymous: None of your examples are type errors. That’s not a circumvention of the type system, but rather an automatic conversion. Those things are legal in C++ too. The main problem with your original printf() example was that printf() is not type-safe at all. No notation will help you when you’re dealing with type-unsafe features.
3) Did you try compiling with -Wall?
It takes stable and efficient applications to get the work done. But those stable and efficient applications won’t do you much good if the operating system is unstable and not efficient.
It would be much better than using a more stable OS that does not have the applications you need to get work done. Some people require more/different things than others, so having the mindset (whether using OSX, Windows, Linux, whatever) that since it has everything you need to get work done, it’s also sufficient for everyone else, is the wrong kind of assumption to make.
I mean, some people are having a discussion about what a byte actually is. Great for programmers, but the rest of us could give a rat’s ass.
In order to have more applications for Linux, and more specialization, Linux has to work on improving the foundational architecture, so that more vendors can base their product lines on the Linux platform and we can have a competitive environment.
I think that byte is a Java type. And in Java, a byte is the same size on all platforms for which a JVM is implemented. This is not true for C built-in types, because their size depends on the processor family. It is possible, though, to use the byte type on Linux (through Java).
On Windows you couldn’t give a rats ass even if you wanted to. You could only give a rats ass if Microsoft allowed you to do so.
“2) The main problem with your original printf() example was that printf() is not type-safe at all. No notation will help you when you’re dealing with type-unsafe features.”
Yes, it will, because it will keep me from inadvertently doing something like this:
printf( “%d”, myVariable);
Suppose that myVariable holds a char value, and I intended to use a different variable that held an int value. This will still print a numeric value, but will not print what I want. It will print the ASCII value of myVariable.
But now try this one:
printf( “%d”, chMyVariable);
Thanks to the Hungarian notation, I am more likely to catch the significance of what I am about to do, because I know instantly that chMyVariable is not an integer, but rather a char variable.
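To make that concrete, here is a little sketch (the variable names are just made up for illustration):

#include <stdio.h>

int main(void)
{
    int  nScore  = 95;   /* "n" prefix: holds an integer   */
    char chGrade = 'A';  /* "ch" prefix: holds a character */

    printf("%d\n", nScore);   /* prints 95, as intended                 */
    printf("%d\n", chGrade);  /* compiles and runs fine, but prints 65, */
                              /* the ASCII code of 'A' -- the "ch"      */
                              /* prefix is what tips you off when you   */
                              /* re-read the line                       */
    return 0;
}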
“anti-binary formats and anti-closed”
Aren’t you mistaking UNIX and Linux ?
“Win32 has been stable, but I don’t know if it is going to be accessible anymore, because MS wants you to program solutions through a new software layer (I’m sure you know which one). You will not be able to implement a system, but rather solutions that target a flexible product line.”
True. But MS is not going to drop the Windows API, because that would break millions and millions of lines of existing code, and it would stop people from upgrading Windows because none of their existing apps would work anymore. And businesses aren’t going to spend the millions and millions of dollars it would require to reprogram all of the applications that use the Windows API.
Visual Studio, for example, still allows you to write unmanaged C or C++ code if you want to.
“If Win32 is stable, which I agree it is, how is X not the same? With Linux I will always have control over the operating system layer, from which I can implement a system. There is no vendor that can close off the system interface and force me to be a solutions developer or a myriad of other things.”
I’m not saying that Linux is not stable. I’m saying that realistically, you are in the same boat as Windows programmers. You are at the mercy of Linus Torvalds, Red Hat, and other such entities that make decisions about the direction of Linux. For example, suppose, for the sake of argument, that Linux decides to drop the existing Linux API and completely rewrite it with something that resembles a .NET system, and that customers all upgrade to this new version of Linux that uses the .NET-style programming. You are in the same boat as the Windows programmer. You have to rewrite all your source code. The only difference is that you have the source code for the Linux API, so if you wanted to, you could rewrite the API so that it would support your existing apps. But that would mean your apps would only run on your system. And it would also probably be more work than simply rewriting your application.
So my point is that if something like this happens, having the source code for the API doesn’t really do you much good unless you want to re-write the whole thing.
LOL, I’m hoping that you will remember what you just said.
“”rerturn 0;
}
I gurantee that this code will compile and run correctly on any ANSI standard C compiler.””
Err…no.
Unfortunately no amount of variable type-checking can eliminate typos :>.
On another note, my opinion on Hungarian notation is that it is an okay solution for the domain in which it was intended. The problem is that the Hungarian notation will be different in each domain that the aliases are specialized for, because they try to describe the behavior (the functions) and not just the type.
Microsoft wouldn’t use Hungarian notation if they had to do it again. They would use objects implemented in an object-oriented language, rather than object-based structures and aliases that describe the objects and the type of the object.
He seems like some sort of troll to me (with his own view BLOG, hehehe).
He seems very intent on bashing UNIX (and UNIX-like operating systems). Are there command-line geeks out there? Yes, of course; all OSes have them. Are they the majority? In previous times yes, today no. The proliferation of various Unices (Linux distros, Mac OS X, Solaris, IRIX, etc.) is due to (1) stability and (2) usability. Do you think that people would flock to Linux as an alternative to Windows if they had to use the command line all the time? Absolutely not! The improvements in GNOME and KDE are good examples of the fact that most mainstream Unix users can’t go without a good GUI. You can accomplish much with a command line, but you are dependent on (1) memory and (2) good typing skills ;-). GUIs make things faster and more intuitive, and Unix-like OSes joined in (a long time ago).
What seems to me to be the dividing line between Windows and Unix IS indeed bi-culturalism. UNIX people want freedom to experiment and tinker if they wish. They want to not be shackled by monopolies such as Microsoft, and they want stability, which is what is offered by the Unices of the world. Windows users, on the other hand, are comfortable where they are; they are content with having one provider, and this lessens the choices that they have to make in order to maintain their system (either for personal use or server use), and when something hits the fan (i.e. viruses or worms) they can blame M$ for not releasing patches fast enough or for making vulnerable systems.
Unix…Windows.
Each one is a toolbox.
Unix has its box filled with a socket set.
The Windows toolbox is filled with wrenches.
So I have a car to put together.
I use both toolboxes.
The other guy down the street only has a Unix ToolBox, so that’s what he uses to put his car together.
People seem to bicker no matter what the dividing line is…nations, religions, color, etc. Now the OS is included in there.
Sheesh. Quit trying to defend one against the other as better. They are simply different ways of doing things, each one probably better at some tasks than the other, but both are useful in their own respect. This bickering is getting old and seems very immature.
And I’m sure the bickering will continue
“Win32 has been stable, but I don’t know if it is going to be accessible anymore, because MS wants you to program solutions through a new software layer (I’m sure you know which one). You will not be able to implement a system, but rather solutions that target a flexible product line.”
I think this is sort of like saying that Sun Microsystems would eliminate the Solaris native API in favor of forcing programmers to use Java. That’s obviously not going to happen. And it’s not going to happen with Windows and .NET either.
However, I do think that Java and .NET will collectively end-of-life C++ as a general applications language. This won’t be because Sun or Microsoft force the issue, but because programmers can be 5 to 10 times more productive in Java and .NET than in C or C++. So I think that the natural course of evolution suggests that programmers will write more and more new applications in Java or .NET, and that eventually, we simply won’t see C++ being used as a general applications language anymore.
I rarely agree with Eric Raymond, but I do agree with him when he says that the days where it makes sense to manage your own memory are gone. It’s far better to give up a few CPU cycles and save months of far more valuable man hours on programming.
Java has already pulled off an extremely successful coup at displacing C++ as the king of programming languages. Java is more popular and in greater demand than C++ these days. And I think that trend is only going to continue.
As far as .NET, Microsoft may have done too little, too late. I’m not convinced that .NET can break the power hold that Java already has in Windows, as well as other platforms. And Microsoft can’t make .NET the only platform that will work on Windows unless they want to end up back in court.
Language developers learned an important lesson from C and C++. And that is that for a programming language to really survive and become dominant, it has to be cross-platform. Java is very cross-platform, and that is one reason it has become so popular. I want to be able to share the same code base between my Linux server, my Windows desktop, and my Palm PDA. Java lets me do that.
.NET, on the other hand, is very NOT cross-platform. The cross-platform capability of .NET is vaporware, and I think it will remain that way for the foreseeable future.
So basically, I have my doubts about .NET. History has taught us that programming languages need to be cross platform to survive and prosper. And .NET is about the least cross-platform framework I can think of. That and Java is so firmly entrenched already, having even displaced C++, that I don’t know if .NET can break the power hold that Java has.
Windows users, on the other hand, are comfortable where they are; they are content with having one provider, and this lessens the choices that they have to make in order to maintain their system
Dude, how do you say this lessens choice? Sure, I don’t have 2,000,000,000 distros to choose from, but I see this as a good thing.
As for apps, right now I have 60+ apps installed that I use at least once a week, and perhaps 10% of them are actually MS apps. Need an office suite? Hell, there’s gotta be at least three dozen of them, including the precious Open Office. Same with web browsers, email, FTP, whatever.
However, I do think that Java and .NET will collectively end-of-life C++ as a general applications language. This won’t be because Sun or Microsoft force the issue, but because programmers can be 5 to 10 times more productive in Java and .NET than in C or C++.
I’ve been hearing this for the greater part of a decade now (at least in regard to Java), yet we continue to see the majority of code being written in C++ and C. Certainly there were performance issues for a while, but now Java and .NET applications no longer run obnoxiously slowly. So why do we see them limited primarily to internal programs used by businesses, as opposed to widely consumed retail software? I’m not going to venture a guess myself… but I’d simply like to point out that I don’t see C++ in any danger of going extinct any time in the near future.
It’s possible for MS or Sun to EOL C++ on their products (Windows and Solaris) however this will not happen on Linux because it is impossible, no one vendor controls it.
Actually, though, MS is making a Standard-compliant C++ to run on .NET. At any rate, I myself couldn’t care less!
The other theory and the one that I agree with, is that Standard C++ will not be replaced until a language comes along that is more powerful and feature rich. The product line (Java and .Net) is very productive for a certain type of application (solutions) but it is not ideal for systems implementation. The core languages of these middleware platforms are not as powerful as the C++ core language, but certainly the supporting libraries are far more extensive. I discussed this with Bjarne Stroustrup and he expressed some agreement that the Java goal was to destroy C++ or make it a fringe language, yet comparing Java and C++ will always leave one unsatisfied. Java is tied to a product-line strategy, and Standard C++ is a lightweight language definition which the original author looked at as a systems implementation language. Give me a Standard-compliant C++ GUI library or middleware system and I’ll use it any day over anything, but I want more than just a wrapper, even though Stroustrup claimed that there was no evidence that a wrapper degraded performance over the native toolkit being implemented in C++. But I disagree with him, and he doesn’t understand Linux.
Anyway it appears that most of the things that you value are more easily accomplished with Linux rather than Windows. Linux needs to make the right promises. Linux could easily tell everyone: okay, if you want an API to base your solutions or even systems on (or one for each) and you don’t ever want anyone to take that away, then we can give you an architecture that will make that possible. MS Windows, on the other hand, can’t do that, because they have a product line that they have to push in order to drive sales. Do not look to the vendors’ past deviations from the ideal as your foundation. It is not logical.
…in other words, MS does not want you to use their system interface; they want people to use .NET. The reason for this is that .NET is more flexible in its support of a product line, and it allows the vendor to retain control over the factors of production (the source code and the derived research and development of the systems). With your .NET solution you can achieve productivity, generic data, and an integrated specialization; however, you cannot become a software vendor outside of the control of the vendor that you rely on (Microsoft).
“So why do we see them limited primarily to internal programs used by businesses, as opposed to widely consumed retail software? I’m not going to venture a guess myself… but I’d simply like to point out that I don’t see C++ in any danger of going extinct any time in the near future.”
But remember also that the vast amount of programming projects are internal. There is far more internal software out there than retail software.
But part of the reason you don’t see a lot of retail software being written in Java yet is because most of that software is in maintenance mode. New versions are based mostly on old code, with changes, additions, etc. Rewriting from scratch is expensive. Old applications will continue to be upgraded in C++ and such, but new applications likely won’t be written in C++.
…obviously when the API was still open we had QT, GTK+, Apache, etc, etc, etc, that were outside of the control of MS. Not any more!
“you are dependent on (1) memory and (2) good typing skills ;-)”
This is absolutely *not* true. In fact, the opposite is true, because if your programs support being called from a command line, you can write a simple script that calls the application with the settings you usually use.
For example, I’m using a tool named unison ( http://www.cis.upenn.edu/~bcpierce/unison/ ) to sync my laptop and my company’s server. If I use the GUI version, I have to manually start the GUI and make at least a few mouse-clicks to get things going.
Since I’m using it to sync different directories, there are a few settings that differ per directory. If I were to use the GUI, I would have to open a different config file for every directory I wanted to sync.
Enter the command-line. I wrote a simple script “sync_server” _once_, where I had to figure out all the command-line options. And yes, that took a few hours.
But now, I can just enter sync_server and off it goes. Once I’m convinced I have all the right options, I can easily automate the process by letting cron sync my stuff every night.
So, in the end, I don’t have to remember anything, and I don’t have to type anything either, since cron handles that for me. _That_ is the power of the command line & scripting.
And there we’re back to the cultural thing. Unix culture tends to slightly adapt the system to your needs, while Windows culture tends to accept the way the system works and adapt your way of working to match the way the system happens to work.
Both are fine, as long as you get your work done. For me, it is just more effective to use the command line, so I can automate my tasks to a much further degree than I could with a GUI.
And yes, I’m the kind of guy that as a child always opened my toys to look how they worked inside. Drove my parents nuts. When I was 14, I was able to just about completely open & rebuild the engine of my Solex ( http://www.gironet.nl/home/jspiertz/historie.htm ).
And yes, I know how to re-install a gearbox in a car. I completely restored the 1970 VW Beetle my wife drives; rebuilt the thing from the ground-up.
And yes, I fix the dish-washer and washing machine myself because it is much cheaper.
I just like to know how the things I use work inside and to be able to fix things if I need to. And that’s not just limited to computers.
I always like the example of my father’s ice-cream machine.
My father owned a restaurant, now run by my brother ( http://www.koprensmorre.nl ), and there we had an ice-cream machine. At some point, it broke. Called the supplier. “Sorry, that type is over 5 years old. We don’t service those anymore”. And then you’re talking about a machine that costs around 1000 Euro to replace.
So, he just went ahead and opened the thing. It turned out that it only needed a new bearing, and the machine happily worked for a few more years, costing no more than 50 Euro and a few days of fiddling. And of course, you now know what to look for if you want to buy a new ice-cream machine… (i.e. bearings don’t like a regular supply of sugar water in the long run)
That’s the same thing. Most people would have just bought a new machine. But then again, there would also be a lot of people that would have tried to fix it too.
Both approaches are fine. It just happens to be that the Unix culture seems to fit me better.
On those products (Solaris and Windows), Standard C and Standard C++ will continue to be used to implement the systems (by the R&D teams at Sun and MS), however all of the solutions that are specializations of the Java and .Net libraries are only written in Standard C and Standard C++ by extension (through Java and .Net).
Java will never dominate Linux because it can’t. Linux is not a product, it’s a platform. The factors of production are decentralized across the community. No one entity can control the direction of a platform, especially with a decoupled product line. On the Windows product however, you can be sure that MS will dominate it and whoever uses it will be dominated.
“It’s possible for MS or Sun to EOL C++ on their products (Windows and Solaris) however this will not happen on Linux because it is impossible, no one vendor controls it.”
This change will happen in Linux too. It won’t be forced on anybody. It will occur as a natural process of evolution because of the amount of time that can be saved using Java, .NET or even Python.
In Linux, for those who have philosophical problems with Java, I expect to see more and more apps written in Python, which is one of the languages often compared to Java.
“The other theory and the one that I agree with, is that Standard C++ will not be replaced until a language comes along that is more powerful and feature rich. The product line (Java and .Net) is very productive for a certain type of application (solutions) but it is not ideal for systems implementation.”
For systems level programming, C++ will not be replaced for some time. But for general application development, it is well on its way out.
“The core languages of these middleware platforms are not as powerful as the C++ core language, but certainly the supporting libraries are far more extensive.”
I think that depends on how you define power. The core Java and C# languages handle memory management and garbage collection for me. I don’t have to mess around with pointers and such, and Java will automatically take out the trash during periods of relative inactivity. To me, that makes it more powerful than C++, because it can do things for me that C++ can’t do. And in so doing, it allows me to focus more on the problem domain of my application itself.
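Just to illustrate the kind of bookkeeping being traded away, here is a plain C sketch of the manual memory management that Java and C# handle automatically (nothing more than an illustration):

#include <stdlib.h>
#include <string.h>

/* In C, every allocation is the programmer's responsibility: whoever
   calls this function has to remember to free() the result, or the
   program leaks. A garbage-collected language does this for you.     */
static char *copy_string(const char *s)
{
    char *copy = malloc(strlen(s) + 1);
    if (copy != NULL)
        strcpy(copy, s);
    return copy;
}

int main(void)
{
    char *msg = copy_string("hello");
    /* ... use msg ... */
    free(msg);  /* forget this line and you have the classic C/C++ leak */
    return 0;
}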
“Anyway it appears that most of the things that you value are more easily accomplished with Linux rather than Windows.”
To me it doesn’t really matter. What I value is being able to solve problems in my specific problem domain. I am not a programmer by trade. I am an ecologist. I develop specific applications within my problem domain of solving ecological problems involving animal behavior, or population biology.
Whether I do that on Windows or Linux isn’t really important to me. Both platforms will perform the tasks equally well. But Windows happens to be the dominant platform in my field. Most of the people who will be using my applications are running Windows. So I develop for Windows. If that changes in the future, and Linux becomes popular in my field, then I will start developing for Linux instead of Windows. It doesn’t really matter to me. I develop on whatever platform my applications need to run on. And I don’t really have a lot of control over what platforms my applications need to run on. So I don’t make the decision. I just go with what’s popular, and don’t have a choice in the matter.
So you said that MS will never undocument the API and effectively close it and force people to upgrade? Are you Bill Gates, then? Do you have the authority to make that decision, or are you just guessing?
“Java will never dominate Linux because it can’t. Linux is not a product, it’s a platform. The factors of production are decentralized across the community. No one entity can control the direction of a platform, especially with a decoupled product line”
But you completely fail to take into account that people are going to use the tool that makes them most productive. Yes, a language can dominate Linux. And it can do so because it allows programmers to obtain very high levels of productivity.
Unless you just program for a hobby, and productivity and time (which translates into money) are not important to you, then the usefulness of a computer language very much can make it dominate a platform.
“So you said that MS will never undocument the API and effectively close it and force people to upgrade? Are you Bill Gates, then? Do you have the authority to make that decision, or are you just guessing?”
No. I’m saying they won’t because they cannot afford to. Microsoft cannot afford to obsolete literally trillions and trillions of lines of existing code. If they did so, corporations would not buy the upgrade to the next version of Windows, because it would break all of their existing applications and they would have to rewrite them. Do you know how expensive that is?
Remember that the average corporation is still using 1970s-vintage COBOL code for many applications. Why? Because it costs too much to rewrite complex applications in a new language.
“Microsoft will discontinue the Windows API and not document it anymore.” is just open source FUD. Microsoft has not made any statements to the effect that they intend to phase out the existing API. Download the latest Microsoft Platform SDK, and you will find that the API is alive and healthy.
I think C++ will never completely vanish, but it might lose ground on the application level.
However, I think that will only happen if you don’t have to throw your perfectly good and debugged code away.
Now I don’t know much about Java, .NET or even Python, but I know that Python interfaces quite nicely to C++, given that bindings to, for example, wx and Qt exist.
Therefore, I think Python has a very attractive advantage.
As I said, I don’t know the details of .NET, but I do remember that the CLR had some limitations that, if I remember correctly, make writing templates and multiple inheritance quite difficult, which is why these are not in C#.
That means that the C++ version will also be crippled for the time being, so it probably is not a real alternative for some time to come.
(1) comment about scripting
The comment on scripting is well received. Bear in mind, though, that scripting is something you need to learn in order to use. A GUI is more convenient for the average joe who cares not about the internals of a computer (should users know more? Yes, but they don’t).
(2) comment about choice (by the dude?)
Choice is more about having your choice of OS, your choice of window manager, your choice of service. My comment was not going towards what apps are available to choose from.
“Now I don’t know much about Java, .NET or even Python, but I know that Python interfaces quite nicely to C++, given that bindings to, for example, wx and Qt exist.”
Python has an excellent interface to C and C++. In fact, one can write an application in Python, and then, when they find bottlenecks, simply rewrite the specific module that is causing the bottleneck in C or C++ and drop it in. Since Python treats Python modules and C/C++ modules the same, no modifications to the code that calls the module are necessary. The C/C++ module is basically a drop-in replacement.
But like Java, Python compiles to byte code, so it is not an interpreted language in the traditional sense. So often, one finds that the routine written in Python is fast enough that it does not need to be rewritten in C or C++.
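For anyone curious what such a drop-in C module looks like, here is a bare-bones sketch against the current CPython extension API (the module name “fastsum” and its function are made up purely for illustration; the exact boilerplate differs between Python versions):

#include <Python.h>

/* A C replacement for a hypothetical Python hot spot: sum the integers
   0..n-1. Python code calls it exactly like a pure-Python function.    */
static PyObject *fastsum_sum_range(PyObject *self, PyObject *args)
{
    long n, total = 0, i;
    if (!PyArg_ParseTuple(args, "l", &n))
        return NULL;                     /* wrong argument type, etc. */
    for (i = 0; i < n; i++)
        total += i;
    return PyLong_FromLong(total);
}

static PyMethodDef fastsum_methods[] = {
    {"sum_range", fastsum_sum_range, METH_VARARGS, "Sum the integers 0..n-1."},
    {NULL, NULL, 0, NULL}
};

static struct PyModuleDef fastsum_module = {
    PyModuleDef_HEAD_INIT, "fastsum", NULL, -1, fastsum_methods
};

PyMODINIT_FUNC PyInit_fastsum(void)
{
    return PyModule_Create(&fastsum_module);
}

Once compiled, the calling code simply does “import fastsum” and uses fastsum.sum_range() exactly as it would a pure-Python version, which is what makes it a drop-in replacement.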
The main reason I don’t use Python a lot is that I don’t like the syntax. Old C and C++ jockey that I am, I need my braces and semicolons. Python uses indentation for delimiting code blocks (it does it in a natural way, though, unlike the nightmare that was FORTRAN 77).
There’s nothing wrong with Python’s indentation for delimiting code blocks. I just don’t like change. :p
I happen to like Java, but garbage collection is not part of the core language. I am talking about the features of the Standard C++ (ISO 1998) language definition. It is a broader definition than the Java language specification.
One of the largest contrasts is that C++ is multi-paradigm and Java is focused on OOP.
Of course Java is more productive and less error-prone for solution implementation than C++ (??? makes no sense though ???). In addition, the vendor solution that you create integrates with the rest of the vendor product line (for example the services and servers of the vendor). It appears that one of their goals is to make data generic between your Java solution and mine or the one of the server and the one on the workstation, it’s one of the things that will drive web service interoperability.
I see no conflict between Linux and Windows here. I would like to see a thriving competitive environment exist as companies base their product lines on the platform (Linux). It is to a company’s advantage to adopt a software layer above the operating system layer to base their product line on, because it is more flexible and allows companies to have more control.
Think of the operating system layer as the melting pot. This is the conflict between Windows and Linux, or else Solaris and Linux, or else Unix and Linux. Since Linux is an open source platform rather than a product, control over the operating system layer is decentralized. In other words, it will be a foundation for system implementation. A foundation or infrastructure from which ANY company can base a product line, so that the company that innovates will be rewarded for their innovation rather than absorbed by a monopoly that controls the product.
It is important to understand why people defend Linux. Well, most people don’t know why, but they feel something, and it’s their freedom that they are defending, freedom that they lost and want back; there is a broader issue. Freedom is the only standard that allows the reward of innovation to be returned to the innovator, because it is out of an idea that the innovator will form a market and become a vendor if it comes to that; but their idea will be a unique system, and not just a collection of specialized vendor objects (inherited class instances) for a product line. That unique idea will be a generalization born out of the architecture that promotes organic development.
“As I said, I don’t know the details of .NET, but I do remember that the CLR had some limitations that, if I remember correctly, make writing templates and multiple inheritance quite difficult, which is why these are not in C#.”
I don’t know much about C#, but Java doesn’t allow multiple inheritance at all. However, Java has interfaces, which get around this and provide the same functionality in a way that is type-safe and doesn’t have all the problems that multiple inheritance has.
You should move your programs to Java, because the Win32 API is only going to be narrowly accessible in Longhorn. There have been numerous articles on this already here on OSNews. The API will only be about 20% of what it was in XP.
Me neither. That’s probably why I never liked IDEs, too.
I can get my job done perfectly and cross-platform with vim, gmake and gcc. Why change?
Oh, and btw, I do agree with you on the Python syntax.
While it does force a sort of standard look for the code, which makes it easier to grasp, you can’t use indentation to clarify your code as much.
For example, I like to write simple error handling and other checks on a single line like this:
if ( !ok() ) { printError( “oops!” ); return; }
This way, it is clear that it is just something unimportant for the big picture, but it has to be there.
If you’re forced to use indentation, you can’t do this anymore, which would make the code a bit less readable.
“It appears that one of their goals is to make data generic between your Java solution and mine or the one of the server and the one on the workstation, it’s one of the things that will drive web service interoperability.”
Well, that’s what XML is for, which even Microsoft is adopting. In fact, Microsoft has plans for moving to an XML-designed user interface system, where you will basically design the user interface of your application by writing XML code. (See http://www.thinlet.com for how such a system exists for Java.)
Of course, if I develop my solutions in Java, then I don’t have to worry about a lot of the “will my code still work in the future?” questions. Even if the pendulum in my field does swing towards Linux (which is a very real possibility because UNIX is traditionally strong in scientific environments, and also because many projects operate on tight budgets), then I will have a very easy transition to the new platform since my apps will already run on Linux, and I will be able to continue to develop new apps the way I have always done, without having to learn new APIs and such.
“It is important to understand why people defend Linux. Well, most people don’t know why, but they feel something, and it’s their freedom that they are defending, freedom that they lost and want back; there is a broader issue.”
I don’t have a problem with Linux, or free software. And I understand why people defend Linux.
What I do have a problem with, is the people who think that Linux can meet every single need of every single user, which right now, it simply can’t. I also have a problem with the extreme versions of free software philosophy which want to basically make closed source commercial software illegal.