Once upon a time, operating systems used to matter a lot; they defined what a computer could and couldn’t do… Today, there’s only one operating system: Unix (okay, there are two, but we’ll get to that). This is why I contend that the OS doesn’t matter – or that we need to take another look at the word’s content, at what we mean when we say ‘Operating System’.
Maybe you meant this one?
http://www.mondaynote.com/2010/10/03/the-os-doesn’t-matter/
So we can close OSNews now?
What matters is the experience you get, the high-level stuff, not what’s under the hood. The OS, as a part of the system that a consumer needed to know about before buying, died a few years ago. As we move towards a new age, these things won’t matter. Integration with Facebook will matter more.
Blah, blah, consumer. No, nothing else matters. Really.
As a user, this makes complete sense. However, JLG seems to contradict himself when he mentions the steady movement towards a particular “under the hood” implementation – if what’s “under the hood” really didn’t matter, why would Apple or Palm incur such disruptive expense to change it all?
One could suggest that what’s “under the hood” matters a whole awful lot, and individual companies can no longer compete on building a solid OS implementation, which forces them to use technology developed externally. User expectations about the capabilities of an OS are just too high to have a zillion competing reimplementations. The UX story, valid as it is, relies on having a complex OS with a large number of functions to expose to the user.
The underlying complex OS has pretty much been conquered in terms of the needs of a complex UI/UX. Take Android: it’s based on Linux, but assuming the drivers were there, would it be a different platform if it ran on top of FreeBSD? What about iOS versus Android? Threading, schedulers, I/O and so on: that’s all pretty much solved. What low-level feature does iOS have that Android doesn’t, and vice versa? What about Windows 7? It’s all the higher-level stuff that’s the big differentiation.
It depends entirely on the perspective.
As a media consumer, the OS underlying the applications means nothing. Beyond the original activation/configuration and the feature/security updates, the only visible layer is that of the applications.
As a social networking butterfly, as long as browsing and Java applets are kept constrained for the security of my machine/network, the underlying OS is also secondary.
As a thought experiment, one could imagine the same applications we like today running on DOS, if the graphics driver and file system support existed to enable a similar and secure user experience. On the other hand, the complexity of achieving this would be fully entrusted to the application developers, a point alluded to by JLG in reference to the APIs and development tools.
The suggested return to the UNIX roots (whether via QNX, OS X, or even its distant half-cousin Windows) whenever a specialty OS can no longer leap past its legacy to enhance the user experience is what disturbs me. It severely limits the freshness or wow factor of the applications created. Retrofitting the use of metadata to organize the bits of information on a drive and the web on top of a hierarchical folder structure leads to inconsistencies in intent and use, often frustrating users.
I simply wish a new foundation could be established, one that would lead to greater consistency and simplification.
True and false.
When running OS X, I’m often hit by the poor I/O scheduler. I assure you that it DOES matter that my multitasking operating system can’t write a big file and switch windows smoothly at the same time.
But when I’m back on Linux, I miss tons of small details that make OS X what it is. For some reason I’m more productive in OS X. Is it emotional, or all the small things that get in the way? I don’t know.
Sure, when I’m on the web, what matters is the browser. And it’s so complex, it’s almost an operating system :-p
The first sentence of your comment pretty much sums up what I think of the article: I read the first part…
…and thought it was a good start. After all, an OS is an interface between human users and a complex machine. Then something horrible happened: the second part.
The implicit transition going on there basically says that user experience is all about high-level components. And THAT is shocking.
Because saying that is basically saying that…
-Responsiveness of the system (depending mostly on low-level components and especially the scheduler) does not matter to the user. Clicking a button and waiting for minutes before something happens because a random background backup task eats all I/O and CPU time is okay.
-The security model (enforced at kernel level) does not matter either. The user/admin model is perfect for desktop and mobile applications. If an application has access to all of the user’s precious data, or only has to ask for an extremely vague “privileged access” to get access to it, it’s okay as long as the UAC window is nicely designed.
-Reliability doesn’t matter. Losing all your data when a random, awfully written low-level layer crashes, as on most Linux desktops with Xorg, is a fact of life. So is enduring a lengthy reboot as a consequence of a random crash, like on many smartphones today. Spending hours of work making self-healing OSs like MINIX, which can survive almost any low-level component crashing without the user noticing, is a waste of time.
-Flexibility doesn’t matter. Portability of an OS across multiple platforms, taking the needs of each platform (power management, imprecise pointing devices, screen size…) into account while only having to recode a small part of the OS and no applications, and while keeping performance near-optimal, would surely not give manufacturers a major advantage over their competitors.
-Performance does not matter. Phones that take minutes to boot and last less than one day on battery are perfectly normal. Why waste time optimizing low-level code that gets called all the time when we can spend that time implementing laggy kinetic scrolling (laggy because we didn’t take the time to fix our touchscreen input driver’s insane latency)? What’s the problem with people needing top-notch hardware in order to use Word and the file explorer? Or with waiting so long before applications dare to start up after being clicked (because very poor performance hurts responsiveness too, in the end…)?
It’s true that people don’t *see* OS layers other than the UI and programming tools directly. But they experience their successes and failures every day, without being able to tell exactly what went right or wrong inside “the computer”.
Because of this, and because low-level layers affect the whole system, it’s also very hard to diagnose a low-level problem before fixing it. If button positioning is not done properly by the GUI library, you know right away who the culprit is and where to look in its code. But if a deadlock between the mass storage driver and a graphics driver, occurring only when launching a specific game and pressing a specific button at a specific time, causes the whole system to crash, good luck reproducing the bug and then finding out what’s going on.
Work on low-level OS components is hard, and not very rewarding, because people only see it indirectly, and only geeks will ever tell you “thank you for fixing file I/O”. Higher-level components are much more fun to work on. But no matter how interesting ecological house design is as a subject, building houses on solid, flood-protected ground remains important, and it’s the same with personal computers…
A warning is due: be prepared when discussing application capabilities (rights) in the churches (certain channels) of IRC servers, as any idea deviating from the traditional model of applications (processes) having unrestricted access to the home directory of the associated user is treated as the sign of the antichrist.
Galileo, watch out!
Heh, physicists are doomed to get persecuted and burnt anyway, even when they only learn about CS as a hobby ^^
“The future of the browser wars is he who integrates with the OS best.”
Shouldn’t that give Apple and Microsoft a huge lead over Google and Mozilla?
Actually, Apple will have an advantage, but only because it is funding the infrastructure: LLVM. LLVM is used by several cross-hardware Java VMs as well as Mono. PNaCl will be a thin LLVM layer wrapped around the hardware-specific version of Google’s NaCl native client, a browser-based VM that runs native machine code.
http://nativeclient.googlecode.com/svn/data/site/pnacl.pdf
No, it gives Google and Apple a huge lead over Microsoft. They have kick-ass OSes and browsers. Mozilla does not have an OS; they are not in this game!
Cannot believe what I read.
The author is right: the browser, the applications built on top of it, and the games I can buy from an app store are what matter to people today… but saying that the foundations do not matter anymore sounds risky and very, very inaccurate. Everything depends on the layer immediately below: a Java app depends on its virtual machine, and the virtual machine depends on the underlying OS that hosts it.
Saying the OS does not matter anymore is saying that teaching how the computer works (memory management, processes, caches, thread schedulers and all that very low-level stuff) is not needed anymore; a future full of ignorant programmers depending on software written by their ancestors (because they will not have the knowledge to write or improve it) is not something I would want to see.
Would it sound reasonable to say that LEDs are not needed anymore because our TVs (built using LEDs) are too evolved for us to think about such basic parts?
Is that the guy who fails everything he touches? Got kicked from Apple and then failed with BeOS.
Yeah, he seems to have taken the life lesson that stuff he sucks at must therefore not matter.
Let me guess, he is starting or financing some cloud-based startup…
I can’t agree with the author. OSes may serve the same purpose, but they definitely behave in different ways. We have good and bad OSes, we have poor and good code, we have good and poor security mechanisms and implementations, etc.
The author’s point of view may be valid, to some extent, in the case of a computer novice who doesn’t actually see the differences between various OSes, but that doesn’t mean the differences are not there… does it?
Gassee is pretty far off track if you ask me.
It’s probably not his fault… he’s just ignorant about the importance the OS plays.
It’s kinda sad: Apparently we’ve done such a terrific job in the OS space that the work we do makes the OS seem transparent.
I suppose Mr. Gassee sounds somewhat different when he encounters an OS security problem. Then he probably claims the OS is *the* problem… and how we’re delinquent for not fixing it.
The dude is lost, if you ask me.
DD
I agree. The OS landscape is quite mature. It’s the userland that makes the difference. Take any platform and give it a user experience that makes the most sense and is friendly enough to mimic how we think, and that would make a lot of difference. I use Windows 7 (happy with it) but I pine for a Mac, more for its aesthetics. Should someone put a better experience on a future Windows or Linux or any other OS for that matter, then it is moot.
The OS matters if…
…your friend wants to play a multiplayer game against you, but the game company runs separate servers for each OS and your friend isn’t running the same thing you’re running.
…you want to legally obtain copies of a new OS, but you don’t have a lot of money to spend.
…your machine slows down because your filesystem needs to be defragged. Again. WTF?
…your machine has an application going haywire and sucking up all your machine’s resources and you can’t get to a task manager or process manager to kill off the offender. This happens to me all the time with Firefox 3.6.x in Windows XP!
…you remember the good old days running some other platform, and it frustrates you to no end that the machine you’ve been given at work still can’t do X after over 15 years!!!
The author claims that the only exception to the rule that there is only one OS (and that’s UNIX) is Windows. It may be the only major non-UNIX OS that is doing more than subsisting, but there are other OSes that used to be big and aren’t anymore.
BeOS (and its open-source counterpart, Haiku) is much better than UNIX ever will be. Sadly, since it was built on GCC and other UNIX-like infrastructure, it doesn’t cut the umbilical cord enough.
AmigaOS and MorphOS (and the open-source counterpart AROS) are both pretty fast and light compared to UNIX due to their custom kernels, but unfortunately, as time goes on, they need to run more Linux code and less native code and are succumbing to the peer pressure.
Singularity was designed to be different. But according to their article in the Communications of the ACM, they are going to try to integrate those features into the existing OS over time. Look forward to Windows 8 or 9 when they finally ditch hardware-based memory management techniques and adopt software-based ones that have finer granularity and can be optimized away at the compiler level when not needed. Oh well, at least they put some of their research money to work instead of feeding only hype like most of their bosses do. Sadly, this software may only see the server by the time it comes out.
“BeOS (and its open-source counterpart, Haiku) is much better than UNIX ever will be AT DOING NOTHING”
There, I fixed that for you.
How exactly is BeOS, an operating system that has been dead for a decade, better than UNIX? It had an awful networking stack, almost no drivers, no multiuser support, basically no application base to speak of, and no real value proposition other than that it could boot up “real fast.” So on what metric exactly do you base your utterly qualitative and arbitrary statement?
Granted it had a filesystem which supported metadata. So you could boot up real fast, and do queries on useless data since there were no apps. That makes it “better” than UNIX how exactly?
Since when is the quality of the OS dependent on the apps ported to it? The OS could be great but if there are no apps it is useless, agreed. But I plan on writing more software in the future for Haiku than for SCO UNIX. UNIX is dead, long live UNIX.
Linux I plan on supporting because there are a lot of users of it but UNIX is designed for server usage and multiuser supercomputers. Haiku is a single-user OS and designed for an entirely different market.
C++ kicks ass against the C underpinnings used to create UNIX but there are very few OSs built on object-oriented architecture. Haiku and BeOS are written in C++. MacOS X, as you mentioned, has many of its internal apps written in Objective C.
Also, see what I said about Microsoft Singularity. It’s written in an excellent programming language (Sing#) which defines it as something different and special from plain UNIX. I can foresee a time when a derivative from Singularity kicks UNIX off the servers for good. Then you’ll wish for the day that Haiku was on your hard drive, because then UNIX will truly be dead.
My point is that, IMHO, UNIX is the tried and true but been there and done that architecture that people only come crying to when all other options have failed. It is the least user-friendly OS that has ever been conceived. The sooner it dies the better. Same goes for Solaris and the parts of Linux that date back to the original GNU projects such as GCC. I hope that more streamlined approaches dominate the world someday and UNIX is the second-biggest obstacle to that occurring.
So basically your answer is “no” you do not have any actual quantitative argument.
There are interesting things in this post, but I think you misunderstood the original comment and went a bit too far in UNIX hate to make a good point.
The OP complained about several things. Apart from the lack of applications on an OS that has been here for a long time, he complained about the low-level layers (networking stack) and about the lack of vision, of plans for the future. You did not answer those.
Agreed, it’s good to point this out. UNIX advocates’ point is that it can be adapted to just about anything, but I’m more skeptical about this myself. It remains an OS based on text files and pipes, with a user/admin security model, which is not exactly the best start for a single-user OS design.
This is highly debatable (and I’m writing a kernel in C++). Several features of C++ (exceptions, RTTI…) require runtime support and can be considered dead for kernel development. Others, like templates, are nice but better suited to API coding than to low-level OS code. When writing code at the kernel level, it’s mostly classes (for separating code into independent blocks) and operator overloading (for debug code that’s easier than printf) which prove useful. Where most of C++ really shines is on the interface side, and you can very well put a C++ interface on a C kernel.
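To make the operator-overloading point concrete, here is a minimal, hypothetical sketch (all names invented for illustration) of the kind of printf-free debug stream a C++ kernel might use. It assumes a low-level character-output primitive is implemented elsewhere and uses no exceptions, RTTI or libc:

```cpp
// Hypothetical sketch: a printf-free kernel debug stream built on operator
// overloading. Assumes a low-level character-output primitive (debug_putc)
// is provided by the platform layer (e.g. writes to a serial port).
class DebugStream {
public:
    DebugStream& operator<<(const char* s) {
        while (*s) debug_putc(*s++);
        return *this;
    }
    DebugStream& operator<<(unsigned long v) {
        // Print as fixed-width hex: "0x" plus one digit per 4 bits.
        char buf[2 + sizeof(v) * 2 + 1];
        char* p = buf;
        *p++ = '0'; *p++ = 'x';
        for (int shift = int(sizeof(v) * 8) - 4; shift >= 0; shift -= 4)
            *p++ = "0123456789abcdef"[(v >> shift) & 0xF];
        *p = '\0';
        return *this << buf;
    }
private:
    static void debug_putc(char c);  // implemented elsewhere in the kernel
};

// Usage sketch: no format strings to get wrong, types resolved at compile time.
// dbg << "mapping page at " << (unsigned long)phys_addr << "\n";
```

Nothing here needs the C++ runtime; it is plain classes and overload resolution, which is exactly the subset that stays usable at kernel level.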
Yeah, it’s written in managed code, like those experimental OSs written in Java we’ve had for ages. And?
This is ridiculous. Singularity has been canned by Microsoft and doesn’t have the tools which server users use daily. Plus, being fully written in managed code, it will have comparatively weak performance, maybe suitable for the desktop but not for the power-hungry server market.
Oh, come on ^^ Aren’t you burying it a bit fast? Look at the current state of Haiku: it’s not desirable for anyone other than BeOS fans, and afaik there are no plans to change this.
User-friendliness is about UI, and Haiku’s is nowhere near KDE’s for that matter. It’s for devs that the UNIX APIs are terrible, and they use extra layers like Qt anyway.
This is just ridiculous. A compiler is a compiler, no matter how you see it. It has a well-defined purpose, so there’s not much room for improvisation. GCC compiles well-optimised code from many languages to a very wide range of binary targets, so it’s a good compiler; there’s nothing else to say about it.
Re: Singularity
If you had the option of fine-threaded managed code over coarse-threaded memory protection, the fine-threaded version would be more stable. The memory protection would be just as slow. With optimization, the main processor could be redesigned with a less obnoxious and invasive MMU to allow more cores and speedups for the same production processes that are used for existing processor models.
UNIX will only win the PC market in time for Windows to come up with something different in time to make the PC obsolete.
Re: death to the original GNU software
Clang/LLVM is also a good compiler, if a bit young. It is more modular than GCC. It optimizes nearly as well (with the single exception of autovectorization). It is being designed from the ground up for C++0x as well as Objective-C, and is used internally by Mono and several JVMs. It avoids RTTI in its code base by using templates internally to achieve the same results faster. It is used to optimize shader programs in OpenGL drivers, and is used as a backend by several functional languages as well, namely GHC Haskell and OCaml. GCC is only keeping up with the modular C++ code base of LLVM by sheer brute force of the development staff devoted to it. Put simply, GCC != LLVM. A compiler is not just a compiler if it is also a compiler framework.
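As a rough illustration of what “compiler framework” means in practice, here is a hypothetical sketch of how a frontend (Clang, GHC, or a hobby BASIC compiler) might hand a function to LLVM as IR through its C++ API. Class names follow LLVM’s documented API, but exact headers and signatures vary between LLVM versions, so treat this as a sketch rather than copy-paste code:

```cpp
// Rough sketch of a frontend handing code to LLVM as IR instead of
// generating machine code itself.
#include "llvm/IR/IRBuilder.h"
#include "llvm/IR/LLVMContext.h"
#include "llvm/IR/Module.h"

// Emits the IR for `int add(int a, int b) { return a + b; }`.
llvm::Function *emitAdd(llvm::LLVMContext &ctx, llvm::Module &mod) {
    llvm::IRBuilder<> builder(ctx);

    llvm::Type *i32 = builder.getInt32Ty();
    llvm::FunctionType *fnType =
        llvm::FunctionType::get(i32, {i32, i32}, /*isVarArg=*/false);
    llvm::Function *fn = llvm::Function::Create(
        fnType, llvm::Function::ExternalLinkage, "add", &mod);

    llvm::BasicBlock *entry = llvm::BasicBlock::Create(ctx, "entry", fn);
    builder.SetInsertPoint(entry);

    auto args = fn->arg_begin();
    llvm::Value *a = &*args++;
    llvm::Value *b = &*args;
    builder.CreateRet(builder.CreateAdd(a, b, "sum"));
    return fn;
}
```

From this point on, LLVM’s optimization passes and target backends (x86, ARM, and so on) take over; the frontend never has to touch machine code, which is what makes it a framework rather than just one compiler.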
What exactly is “obnoxious and intrusive” about an MMU? And what part of the memory management are you referring to (the memory controller, the TLB, what)?
And why are you mixing apples with oranges? Managed code/coarse vs. fine threading/etc. What are the actual justifications for your claim regarding “fine grained” managed code?
That is funny given all the talk about cloud-based computing making the PC obsolete. And most cloud systems are running under Unix-like OSs.
The point is that you are claiming something to be dead, while at the same time proclaiming the future to be something that is based on an even more dead OS (BeOS).
I don’t think “keeping up” means what you think it does, since LLVM just started compiling reliable C++ code and it is at a beta level, with a lot of stuff missing. But as of this point, most of the effort in Clang land is to just be at the same level of features as GCC, so technically LLVM is “catching up.”
I’m not talking about threading at all. I’m talking about the fact that managed code manages memory down to the byte, while traditional hardware-based memory management only manages it down to about a 4K page due to MMU limitations.
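To illustrate the granularity gap being described, here is a small hypothetical C++ sketch (names invented for illustration) contrasting a byte-exact software bounds check, the kind a managed runtime enforces and a verifying compiler can often prove away, with the roughly 4 KiB page granularity an MMU works at:

```cpp
#include <cstddef>
#include <cstdint>

constexpr std::size_t PAGE_SIZE = 4096;

struct CheckedBuffer {
    std::uint8_t *base;
    std::size_t   size;  // exact object size, e.g. 64 bytes

    // Byte-granular: base[64] on a 64-byte object is rejected here, and a
    // smart enough compiler can prove many of these checks away entirely.
    std::uint8_t read(std::size_t i) const {
        if (i >= size) { /* a managed runtime would trap here */ return 0; }
        return base[i];
    }
};

// Page-granular: the MMU only faults once an access leaves the page, so a
// 64-byte object still "owns" a full 4096-byte protection unit.
constexpr std::size_t mmu_protected_span(std::size_t object_size) {
    return ((object_size + PAGE_SIZE - 1) / PAGE_SIZE) * PAGE_SIZE;  // 64 -> 4096
}
```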
What makes an MMU so obnoxious is that it takes at least one stage in the main processor pipeline to implement, thus making the processor less agile. A short pipeline means that it can branch more readily without relying so heavily on branch prediction logic. This means it would be possible to build a faster processor if the MMU were lighter than it is.
Actually I favor even deader OSs. I’m an AROS programmer. But with the advent of the Gallium 3D drivers, getting cross-platform drivers is easier than ever. I’m hoping that this will liberate the desktop from the big-3 of OSs and get some better-written ones into circulation.
re:LLVM
I said Clang was a bit young. Look up at my previous post if you don’t believe me. They used to use LLVM-GCC for a while before the FSF tightened the screws on the GPL version 3. I like LGPL 3 but GPL 3 is pushing it. The reason for writing Clang is to make a lighter compiler with a lighter license.
As for the features of GCC, just compare the performance of the JIT compilers. Oh wait! GCC doesn’t have a JIT compiler! Personally, I prefer static compilers over JITs anyway but it’s nice to have the option there.
No doubt about that, but there’s room for several compilers around, isn’t there? Especially since I’d rather not see my main compiler owned by Apple; it seems a bit unsafe, to say the least.
GCC is modular afaik. You can choose which parts of it you want at compile time.
Yeah, and for those who don’t use draft standards it still has to provide full and stable support for C++98. As you said, it’s a young project. The fact that several projects jump on alpha code like that, rather than impressing me, reminds me of PulseAudio…
Sounds interesting indeed. But if GCC implemented this feature too, like LLVM implements much of GCC’s optimization technology, what would be the problem with GCC?
That doesn’t sound like a sensible accusation. As I said before, GCC has some modularity too. And if its developers are many and talented, how is that a problem? As a user, I benefit from that anyway (just as I thank LLVM for existing and making GCC become even better).
Could you please go into more detail about that? I’m not sure I understand this part.
Did you read the whole paragraph I posted? Granted, I should have said that Clang and LibC++ are being built from the ground up for C++0x standards but the rest of them are using LLVM also. OpenGL drivers aren’t immature code nor are some of the other compilers listed.
re:LLVM!=GCC
The difference isn’t all in the code. It’s also in the license. I didn’t get into it earlier because I didn’t see the need but the UIUC license is more BSD-like than the GPL 3.0. I like my licenses short and easy-to-understand. GPL is neither.
For years I was trying to create a new BASIC compiler. My partner looked at GCC first and decided it wasn’t well-documented enough to be of any use to us even though it was already ported to all the platforms we wanted to support. We opted to go with LLVM instead.
LLVM’s code is better documented, so even though GCC has many programmers, their numbers will not grow as quickly as LLVM’s because of that documentation problem.
As far as LLVM incorporating GCC’s optimizations, it’s actually a two-way street. GCC has been implementing LLVM’s features all along as well. Competition drives the market.
The statement that a compiler framework is not just a compiler was intended to convey the idea that modularity and ease-of-reuse play a role in frameworks. A generic compiler tooled to just one processor is not a compiler framework. GCC and LLVM are two frameworks used to make many other compilers. I felt your comment that “A compiler is just a compiler.” was overly broad. The term compiler covers a lot of territory. Once I wrote a compiler that converted music files into C code for a self-hosting player. That was a compiler but was not a compiler framework.
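As a purely illustrative toy (not the commenter’s actual tool), a single-purpose “compiler” of that sort could be little more than the following: it translates one input format into C source, with nothing reusable or retargetable behind it, which is exactly what separates a compiler from a compiler framework:

```cpp
// Toy single-purpose "compiler": turns a raw data/music file into C source
// that a self-hosting player can compile straight in. Nothing about it is
// reusable or retargetable.
#include <cstddef>
#include <cstdio>
#include <fstream>
#include <iostream>
#include <iterator>
#include <vector>

int main(int argc, char **argv) {
    if (argc != 2) {
        std::cerr << "usage: data2c <input-file>\n";
        return 1;
    }
    std::ifstream in(argv[1], std::ios::binary);
    if (!in) {
        std::cerr << "cannot open " << argv[1] << "\n";
        return 1;
    }
    std::vector<unsigned char> data((std::istreambuf_iterator<char>(in)),
                                    std::istreambuf_iterator<char>());

    // Emit a C translation unit that the player links in directly.
    std::printf("const unsigned char track_data[%zu] = {\n", data.size());
    for (std::size_t i = 0; i < data.size(); ++i)
        std::printf("0x%02x,%s", static_cast<unsigned>(data[i]),
                    (i % 12 == 11) ? "\n" : " ");
    std::printf("\n};\nconst unsigned long track_size = %zuUL;\n", data.size());
    return 0;
}
```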
So we’ve got ten dozen “user experiences” built on one reasonably consistent base, but next to no interoperability between most of them (X11-based MeeGo excepted) at the “UX” level.
Well, unless you want to buy ‘an app for that’ to migrate.
It’s kind of odd how everyone but MS has made a go of MS’s own open source strategy, negating the user-visible benefits of a lot of open source code by burying it under a tonne of vendor-specific APIs and functionality.
…people paid attention to JLG but that was before he screwed up Be.
Huh? I thought Microsoft’s monopoly and marketing power, their can-you-say-antitrust “deals” with OEMs, as well as various other things were some of the biggest factors in the demise of Be. Now you’re saying its failure was caused by one man alone? Is there a good explanation to support this claim?
Of course MS had an iron grip on the market back then and that didn’t help things, but ultimately JLG bet the farm on becoming an Internet Appliance developer with BeIA and that backfired. JLG was just too far ahead of his time. No one had broadband and Facebook addiction wasn’t invented yet.
Yeah, he was so “ahead of his time” that BeOS did not even bother to have a proper networking stack, even for its “internet appliance” OS iteration. Sounds more like jumping the shark to me.
The whole BeIA effort was an attempt at staying afloat by throwing a bowl of spaghetti at the wall and seeing what stuck. I wouldn’t call that being “ahead of one’s time” but rather an act of desperation.
Especially if you consider that Be put a significant amount of effort into BeFS, touting their filesystem as one of their main value propositions, only to end up targeting the BeIA platform at basically diskless clients. Brilliant!
tylerdurden, maybe Be wasn’t ready technically to deliver an Internet Appliance platform, but that doesn’t take away the fact that back then the Internet, as a platform, hadn’t arrived either.
Of course it was a last-ditch effort; that is what “bet the farm” alludes to. You back a strategy that could sink you if it doesn’t pan out. The desktop market proved to be unassailable and providing an Internet appliance platform was premature, so yeah, Be sank.
I wasn’t saying anything about the internet being a platform or not at that time. Whatever “platform” means anyway.
My point is simply that Be missed the boat monumentally. They focused on a value proposition that was basically almost 10 years too late by the time it became usable; Moore’s law had made most of Be’s selling points (small footprint, low latency, etc.) irrelevant. Meanwhile they missed things like proper networking, which is where the market was going. In the end it was a solution to a problem most people had long since stopped caring about.
I read a few articles about how JLG conducted negotiations in the days when Apple was looking for an outsourced solution for Copland, and it is clear that he is/was as daft when it comes to business decisions as he seemed to be when making technical ones. He simply seems to be totally disconnected from reality.
And btw, I really liked BeOS. I still have my copies of DR8 somewhere. But in the end, it was an OS really going nowhere.
Which makes the appliance move doubly stupid, and is exactly the kind of mistake you pay CEOs big money not to make.
No, people stopped paying attention to anything JLG said after he screwed the Mac. Which is among the reasons why Be failed miserably…
He calls OSX “another Unix derivative” which implies that QNX was derived from Unix.
POSIX compatibility in QNX was only added after it was designed as a microkernel, which is far different from the traditional Unix monolithic kernel.
The OS doesn’t matter unless you want to run this specific application or hardware or plug-in or… how did he reach his conclusion again?
You do realize that QNX was originally named QUNIX, right?
And being a microkernel has nothing to do with whether or not it is Unix; after all, OSF/1 (Tru64, for example) was a microkernel-based Unix.