What will future operating systems look like? How will the user interface, the inner workings, the security policies and the networking interact? Whatever the answer, innovation is the key.
If you visit OSNews once in a while, you will of course know everything about the present and future of operating systems. Somewhere between 2005 and 2007, Microsoft will release Windows, codename Longhorn, and until that happens Gnome and KDE need to close the gap between themselves and Windows XP. If everything goes well, they will implement some Longhorn features as well. On the other hand, we have the innovative Mac OS X: the most user-friendly computer system on earth, built on UNIX, with OpenGL-accelerated screen drawing.
Wait. Read that again, and think for yourself: how much innovation has there been, and how much will there be? Let’s start with Gnome and KDE. They mainly copy the user interface of Windows. Yes, Gnome places the application menu at the top of the screen instead of the bottom, and KDE has invented KIO. But almost everything else is plain copying. KDE even has the window buttons in exactly the same place as Windows. There is a reason for this, and quite a simple one: most people today work with Windows, and the developers fear that a desktop environment that behaves radically differently would scare people into staying with Windows.
But how is Windows doing? Is Windows innovative? This page says Windows is innovating, and claims Windows is to MacOS what Java is to C++. That’s not entirely true: C++ was a bad fix to C, and Java cleaned everything up. MacOS, on the other hand, was a clean, new implementation of a graphical OS, while Windows was just a way to fix DOS. From that, we can say that Windows is to MacOS as if C++ had been invented as a reaction to Java. And when we look a bit closer: what has Microsoft actually invented? They copied the overlapping windows. The Explorer is a copy of the Finder, while SMB is a copy of AppleTalk. Word was a reaction to WordPerfect, and Internet Explorer is just an improved version of NCSA Mosaic. And there is a reason Windows does not really innovate: Microsoft doesn’t want to lose its market share, so it takes care not to scare users. If the Windows interface were to change radically, users could just as well switch to Linux as upgrade to the new Windows version.
You might have noticed that Windows stole quite a few things from Apple. So, is Apple innovating? In 1984, they were. The Macintosh was a nice new computer: one of the first (if not the first) home computers that was no longer character based and had the mouse as a mandatory input device. Shortly thereafter, they invented AppleTalk, which made networking computers as easy as plugging in the network cable. After that, only minor system updates came out until Mac OS X was released. It was called innovative. But what does it do? It’s effectively a MacOS-like GUI on a UNIX core, so in fact it does nothing more than combine two technologies, both decades old. That has a reason, too: Apple’s market share is small, and in this way they can keep their former customers while also attracting new ones: their OS is now built on the “proven reliability” that UNIX has earned by being 30 years old. Apparently, they have not read the Unix-Haters Handbook, which suggests UNIX was rather unstable even 10 years ago.
Does that mean the current operating systems are the best; that better is simply impossible? Most likely not. The most logical reason for the lack of innovation is the fear of losing market share by inventing something better, er, different. So here is my proposal: if you build an entirely new operating system, why not make it different from the ones that exist? It can then try out ideas that might be better than the current ones, and it might even attract users, namely those who want a different operating system for a change, one with an identity. In the rest of this article, I will lay out such a proposal. I’ll need to see whether I have time to work on an actual implementation, but thanks to the nature of the proposal it luckily isn’t necessary to start with a bootloader :)
1 Virtual machine
Nowadays, new processors are being introduced: the Itanium and the AMD64. To take advantage of these processors, the operating system and all applications that run on it need at least to be recompiled, and parts of them need to be rewritten. That is not very practical, something Sun realized when it invented Java. Microsoft has also seen this and started the .NET project. Both implement a virtual machine that runs binaries specially adapted to it. The advantage is that the same binaries can always run on the virtual machine, no matter what the host OS or hardware is.
As this is very practical, I will take such a virtual machine (VM for short) as the basis of the OS idea. Not very innovative, I know, but rather practical. It makes the OS and its applications completely hardware-independent, and it also has the advantage that the VM can first be implemented on top of another OS, so that work can immediately start on the VM and the OS itself, without needing to code a boot loader and extensive hardware support first.
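The core of the VM idea can be illustrated with a toy sketch: a tiny stack-based interpreter whose "binaries" run unchanged on any host that implements the interpreter. The instruction set here is invented purely for illustration; a real design would look more like the JVM or CLR.

```python
def run(program):
    """Interpret a tiny stack-based bytecode program.

    The same program list runs identically wherever run() is
    available, regardless of the host OS or CPU -- that is the
    whole point of the VM approach.
    """
    stack = []
    for op, *args in program:
        if op == "push":
            stack.append(args[0])
        elif op == "add":
            b, a = stack.pop(), stack.pop()
            stack.append(a + b)
        elif op == "mul":
            b, a = stack.pop(), stack.pop()
            stack.append(a * b)
    return stack

# (2 + 3) * 4, expressed as hardware-independent "bytecode"
program = [("push", 2), ("push", 3), ("add",), ("push", 4), ("mul",)]
```

Because the interpreter itself is ordinary portable code, it can first be hosted on an existing OS, exactly as the article proposes, and only later moved onto bare hardware.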
2 The user interface
The user interface should be friendly and practical, for the newbie as well as the experienced computer user. Therefore, no POSIX compatibility is needed and no GNU utilities need to be ported. And why should they be? In this modern world, we want to use more than text. We want fonts, webpages, flash animations, music, pictures and movies. The command line is not suitable for them, so a graphical interface (GI) is really necessary.
2.1 The general layout
However, this does not mean copying the GUIs of Windows or MacOS, which can be rather confusing. For example, most GUIs have overlapping windows, which are confusing. (The Xerox Star designers already knew this and therefore didn’t allow windows to overlap.) The confusing thing is the following: imagine you have two windows, say a maximized Outlook Express and a normal New Message window on top. When you accidentally click the Outlook Express window, it will look like the message you were typing is lost. Of course, it’s just hidden behind the window you clicked, but that is not obvious. The solution is to take the idea of the original MacOS even further: not only hide other applications when you activate one, but make all windows maximized instead. That solves the overlapping window problem and does away with the title bar taking precious screen space.
Now you will probably notice that drag and drop is not possible anymore, at least not between applications and also not between windows. That is not practical, because it is a much more visual way of moving objects than the copy-paste method Windows introduced. Therefore, the GI should offer a split-screen mode, in which two windows can be visible next to each other.
2.2 Dialog windows
Of course, configuration and property windows don’t need the entire screen. Therefore, they can appear in smaller, document-modal (see below) windows. If you open one, the full-screen view behind it should be grayed out, so that the window containing the things you can do appears lit up and really gets your attention.
The appearance of MacOS 8/9 also has this effect, but it is much more confusing because it makes no distinction between windows that can’t be activated because of a dialog (as in my proposal) and windows that are simply inactive and can be switched to with a single mouse click.
2.3 The widgets themselves
Nowadays everybody points out that Gnome should be used instead of KDE because it looks more polished, that you should use MacOS X because Aqua looks so cool and that Longhorn is even better because it provides hardware accelerated control drawing. Sounds great? It actually isn’t. Those fancy user interfaces waste precious CPU and GPU cycles, making your computer slower than it needs to be, thus making you work slower. In return, you are distracted from your work so that your productivity is even lower.
Looking at future developments, however, it seems rather practical to have a resolution-independent GUI. That way, applications have no problem running on low-res devices such as a palmtop or a TV, while still being able to take advantage of high-res computer screens. To have something to brag about, the VM graphics system should offer nested canvases that remember their content, so that one could say that “the new OS has a GUI in which each control is drawn with hardware acceleration”.
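The "nested canvases remembering their content" idea is essentially a retained-mode display list kept in resolution-independent units. A minimal sketch, with all class and method names invented for illustration:

```python
class Canvas:
    """A retained-mode canvas: drawing commands are stored in
    abstract units and can be replayed at any scale, so the same
    UI works on a palmtop and a high-res monitor alike."""

    def __init__(self):
        self.commands = []   # the retained display list
        self.children = []   # nested canvases with their offsets

    def draw_rect(self, x, y, w, h):
        self.commands.append(("rect", x, y, w, h))

    def embed(self, child, x, y):
        # nesting is what lets every control own its own canvas
        self.children.append((child, x, y))

    def render(self, scale=1.0, ox=0.0, oy=0.0):
        """Replay the display list at an arbitrary scale."""
        out = []
        for cmd, x, y, w, h in self.commands:
            out.append((cmd, (ox + x) * scale, (oy + y) * scale,
                        w * scale, h * scale))
        for child, cx, cy in self.children:
            out.extend(child.render(scale, ox + cx, oy + cy))
        return out

root = Canvas()
root.draw_rect(0, 0, 10, 10)
button = Canvas()            # a "control" with its own canvas
button.draw_rect(0, 0, 2, 2)
root.embed(button, 5, 5)
```

Since each nested canvas is a self-contained display list, a real implementation could hand every canvas to the GPU for compositing, which is what the bragging line about hardware-accelerated controls amounts to.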
3 The document model
After having done away with windows, we should also get rid of applications, because they are confusing too. On Windows, you can open documents in two ways: from an empty application window and from the Explorer. The same applies to creating new documents, but strangely enough not to saving them. On the MacOS it is the same, and even more confusing: an application can be active and running without displaying any windows. That means you see the desktop and the Finder, but the menu bar is different because you are effectively running another application. Those are reasons enough to leave the application model behind.
Instead, the only thing the user should see are documents. Nothing more and nothing less. When the user clicks a document, it is opened, and when he closes it, the document is closed. What software is used to accomplish this should not be visible in any way. The way this can be implemented is by making applications effectively applets (like KParts or OLE objects). When a document is opened, a new full-screen view is created and the document is embedded into it, along with the application.
3.1 OLE!
The advantage of the applet model over applications is that no separate logic is needed to provide document embedding: the parent document’s applet just embeds another applet into itself, and that applet cannot tell whether it runs full-screen or embedded.
For normal documents, data will come from a file, either on disk or embedded in another document. That’s the way it also works on Windows, MacOS and KDE. However, sometimes that is not practical. Imagine that you want to make a chart applet. You will probably want to link its data to the spreadsheet it is embedded in; with static file embedding, that would not work. Therefore, we need to create the GUI equivalent of pipes, so that you can use (a selection of) your spreadsheet data as the input of the chart applet. That can create powerful systems: functions like bibliographies and tables of contents can be placed in separate applets that are embedded in the documents they are used in.
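A GUI pipe can be sketched as a selection that the consuming applet re-pulls whenever it redraws, so the chart always reflects the live spreadsheet. All class names and the callable-selection convention are invented here for illustration:

```python
class Spreadsheet:
    """A toy spreadsheet applet: a dictionary of cells."""

    def __init__(self, cells):
        self.cells = cells

    def select(self, keys):
        # A "selection" is the pipe: a callable the consumer can
        # re-pull, so it always sees the current cell values.
        return lambda: [self.cells[k] for k in keys]

class ChartApplet:
    """A toy chart applet fed by a pipe instead of a file."""

    def __init__(self, source):
        self.source = source  # the upstream end of the pipe

    def render(self):
        data = self.source()              # pull live data
        peak = max(data)
        return [("bar", v / peak) for v in data]  # normalized bars

sheet = Spreadsheet({"A1": 10, "A2": 40, "A3": 20})
chart = ChartApplet(sheet.select(["A1", "A2", "A3"]))
```

Editing a cell and re-rendering the chart shows the pipe at work: no copy-paste step is needed, which is exactly what distinguishes this from file-based embedding.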
3.2 DDF
The documents can be stored in any format the applet chooses. To make embedding more flexible, however, a standard output format should be defined. For this new OS, it should be designed from the ground up, in order to support a few things that are important for embedding to work properly. Besides the well-known object embedding, it should also provide text embedding, to make things like table-of-contents applets possible.
Embedding objects is easy. The host applet defines, within the DDF, a region containing a sub-DDF and pastes the output of the embedded applet into it.
Embedding text is more difficult, for a reason. With embedded objects, the host decides how large the frame is in which the object is embedded. With embedded text, however, the applet needs to decide the size, as you can’t just rescale text. Additionally, it would not look nice if embedded text lacked advanced features like, say, automatic hyphenation. One solution would be to let the host applet decode the DDF the embedded object outputs. That is not a clean solution, however, as the host applet would need to contain a complete DDF interpreter.
I believe the solution comes from breaking the monolithic program into several applets. The usual word processor can be split into two pieces. The first one composes formatted text; let it support features like word wrap and embedding of other texts. The other applet does the layout: it puts the text (and images) into frames, possibly on multiple pages, supporting text flow, page numbers and so on. That way, you can still edit complex documents, but you have more flexibility, and you can use the same advanced text formatting in both the word processor and the spreadsheet program.
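The division of labour between the two applets can be sketched in a few lines: the text applet decides its own height by wrapping words, and the layout applet only flows the resulting lines onto pages. Both functions are invented illustrations (and the wrapper assumes no single word exceeds the frame width):

```python
def wrap(words, width):
    """The text applet: greedy word wrap. It, not the host,
    determines how many lines (and thus how much height) the
    text needs at a given frame width."""
    lines, line = [], ""
    for w in words:
        cand = (line + " " + w).strip()
        if len(cand) <= width:
            line = cand
        else:
            lines.append(line)
            line = w
    if line:
        lines.append(line)
    return lines

def paginate(lines, lines_per_page):
    """The layout applet: flow already-wrapped lines onto pages.
    It never interprets the text itself, only places it."""
    return [lines[i:i + lines_per_page]
            for i in range(0, len(lines), lines_per_page)]

pages = paginate(wrap(["the", "quick", "brown", "fox"], 9), 1)
```

Because the layouter only sees opaque lines, the same text applet can serve the word processor, the spreadsheet, or a table-of-contents applet without any of them needing a full DDF interpreter.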
4 The Network is the Computer
These days, networking has become an essential part of every computer system, be it a standalone PC, a file server or a mobile phone. Well, you are right, the dishwasher has no network connection… yet. So the new OS certainly needs to be network enabled. That does not mean there is no room for improvement, however. We have the static, DHCP and Zeroconf methods of getting IP addresses; NFS and SMB to share files; CUPS, LPR and SMB to share printers; NIS, NIS+ and LDAP to have the same user accounts everywhere; and Remote Desktop, VNC and X for remote logins.
These existing systems work. Sometimes. After editing a lot of settings and configuration files. And that is not how it should be. If networking is to be practical, it should be really practical, for everyone. After all, a home user wanting to take advantage of the network they set up for internet sharing does not want to dive into the world of TCP/IP, DHCP servers, gateways, DNS and so on. They want a network that just works. And what would be rather practical is if you were able to edit the same document no matter whether you are working on the desktop computer, the laptop or the refrigerator.
Such a thing cannot be accomplished easily. Rendezvous is a step in the right direction, but it is still bound to one single computer: you don’t instantly have access to your documents; you need to search through other computers for the resource you need and log in to that computer before you have access.
4.1 The basic idea
This raises an interesting question. Imagine you have a home network with two computers. On the one hand, you want to be able to log in to both of them, even when the other one is down. On the other hand, you don’t want a hacker to be able to enter the network with his laptop and have access to everything. And in a larger network, you don’t want each PC to store all user data; such a network probably has a server running 24/7 anyway.
That already implies there would be two “modes” of operation: one for the home user, where each PC knows all accounts, and another for centralized networks, where a server knows them. In a perfect world, these two could be unified, so let’s look at how that can be done.
4.2 Peer-to-peer and server-client implementation
In principle, each PC operates in decentralized mode. Without a network, that means it has one user (with an associated ID) who owns everything. If two such computers meet each other, both will learn the user data from the other. Now you can log in to both computers with exactly the same result.
In a larger network, a server can be added. Its address (but not the accounts themselves) is shared using the same p2p method as in decentralized mode. When someone wants to log in now, the local user database is checked first, and when there is no match, the computer also asks the server. The server sends the account information to the local PC, and if everything is right you are logged in and have access to the network, most likely including the printers and drive space attached to the server. Additionally, the account now exists on your local PC too, so you can use it even when you aren’t connected to the network.
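The local-first, server-fallback, cache-for-offline flow described above can be sketched as follows. The `Node` class and plaintext secrets are invented simplifications; a real system would exchange credentials cryptographically:

```python
class Node:
    """A toy PC in the proposed network model."""

    def __init__(self, accounts=None, server=None):
        self.accounts = dict(accounts or {})  # user -> secret
        self.server = server                  # another Node, or None

    def login(self, user, secret):
        # 1. Check the local user database first.
        if user in self.accounts:
            return self.accounts[user] == secret
        # 2. No local match: ask the server, if one is known.
        if self.server is not None:
            record = self.server.accounts.get(user)
            if record is not None and record == secret:
                # 3. Cache the account locally so the same login
                #    works later, even without the network.
                self.accounts[user] = record
                return True
        return False

server = Node({"alice": "s3cret"})
laptop = Node(server=server)   # laptop learned only the server's address
```

After one successful networked login, disconnecting the laptop (here, `laptop.server = None`) no longer prevents alice from logging in, which is the offline behaviour the article asks for.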
4.3 Account modification
The only problem left is changing your password, as the new password needs to be propagated through the network without allowing hackers to change it. Luckily, there is a solution for this too, and it is rather easy. The new password carries the old one “within itself”, so that the password change can identify itself. This way, no hacker can change your password without knowing the current one, while you can. To resolve the situation where two password changes meet, the date of each password can be stored in the account. This also makes it possible to remove obsolete passwords after a certain amount of time.
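One way to read "the new password has the old one within itself" is as a change record that proves knowledge of the previous password, with a date to resolve conflicting changes. The sketch below, using a plain SHA-256 hash, is an invented illustration of that idea, not a real security design:

```python
import hashlib

def h(s):
    """Hash a password so nodes never store it in the clear."""
    return hashlib.sha256(s.encode()).hexdigest()

class Account:
    """A toy replicated account record."""

    def __init__(self, password, date=0):
        self.pw_hash = h(password)
        self.date = date  # timestamp of the last accepted change

    def check(self, password):
        return h(password) == self.pw_hash

    def change(self, old, new, date):
        # The change "identifies itself" via the old password:
        # without it, no node will accept the new one.
        if not self.check(old):
            return False
        # Dated records resolve two changes meeting on the network:
        # only a strictly newer change is applied.
        if date <= self.date:
            return False
        self.pw_hash, self.date = h(new), date
        return True

acct = Account("old-password", date=0)
```

A replica receiving this change record can apply it independently, and the stored dates are also what would let obsolete password records be expired after a while, as the article suggests.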
5 The end result
Finally, it might be useful to look at the results of the proposal: is it innovative, and almost more important, is it useful and user-friendly?
I believe the proposed GUI does indeed break with the current tradition, and does so in a useful way. Doing away with the windowed interface seems like a step backward, but it removes something that is rather confusing for new computer users (and that has no advantage over split-screen windowing other than wasting space, because windows don’t fit neatly against each other). Not having too fancy an interface is also a good thing, as it doesn’t distract you from your work and does not scare people away (yes, people fear Windows XP because it is different from 98/Me).
The document format, on the other hand, does not offer much more than Display PDF or something similar. Combined with the linking model, however, it becomes more powerful than what we have today, bringing pipes, famous within the Unix world, into a graphical environment, which greatly extends the possibilities and reduces complexity.
The network model, finally, unifies the traditional server-based systems like UNIX and NetWare with peer-to-peer networks like AppleShare and SMB in one package, allowing one consistent interface for both types of networks, still powerful but also comprehensible for the average home user.
Though this proposal might never see a working implementation, I still believe it shows there is a lot of room for innovation in current operating systems. So I hope that they will not only innovate behind the scenes (SMP support, NPTL, WinFS, …) but that one of them will take the step of breaking with the past to allow new concepts in, so that the end user will finally see improvements as well.
I think this is a seriously flawed article; for example, the following suggestions are obviously backward steps in UI design.
“The solution is to take the idea of the original MacOS even further: not only hide other applications when you activate one, but make all windows maximized instead. That solves the overlapping window problem and does away with the title bar taking precious screen space. “
“Now you will probably notice that drag and drop is not possible anymore, at least not between applications and also not between windows. That is not practical, because it is a much more visual way of moving objects than the copy-paste method Windows introduced. Therefore, the GI should offer a split-screen mode, in which two windows can be visible next to each other.“
what do you do if you have applications that do not process any documents?
e.g. a calculator and i bet you’ll find many more examples…
greets philipp
… with the first post. I think this article is seriously flawed. Maybe it is too radical a change, maybe I lack mental flexibility, but I really think some of the author’s ideas are bad. The first one that comes to mind is this:
“Instead, the only thing the user should see are documents. Nothing more and nothing less. When the user clicks a document, it is opened, and when he closes it, the document is closed. What software is used to accomplish this should not be visible in any way.”
Hiding which software you are using will be a source of confusion… unless you agree to be fed whatever the OS provider wants…
There are other things I don’t agree with, or find outright ridiculous. Example:
“KDE even has the window buttons in exactly the same place as Windows.” Conclusion: KDE is mimicking Windows… Well, I am not saying that KDE cannot be made to look like Windows, but I still cannot swallow the comparison that easily: Gnome, KDE == Windows imitation.
I’m not an expert, so I should be modest. This is just my opinion.
Finally, a last word. The GUI presented here looks a little bit like a “shell GUI” for X. I’m sorry, I don’t remember the name, but the idea is that windows do not overlap and that you can split your screen as you like to accommodate new applications… Does anyone remember the name of this GUI?
I agree with m, this is a bit of a joke.
How many people work with every window on their machine maximised? I know I don’t, as sometimes it is just impractical, such as with WinAmp. I can’t imagine using that in full-screen mode, even if it supported it.
I think the author of the article is pushing more towards task-based desktops. And with bigger screens, and something like the Screen Tiles in Longhorn and KDE/Gnome, it would work better: Winamp, downloads and other non-important service-based programs on a tile where they can be quickly accessed, while the main program being used is maximised.
It is Ion:
http://c2.com/cgi/wiki?IonWindowManager
If I understood it right, in terms of the GUI the author is basically suggesting we take the Mac OS approach to window management, then make every application a seamless part of the OS. Then the user will be able to
(a) work with their desktop more effectively (I’ve seen lots of examples of the Outlook Express email problem, so that’s a good analogy), and
(b) be able to think entirely in terms of tasks, not the current OS + apps = solution model.
Why not a Longhorn style sidebar to hold all the mini-applets that don’t rate a full screen window?
@Barlin – Winamp with its media library thingy takes up most of the screen at 1024×768, so I can see it filling the entire screen
Funny that the author thinks up a “radical new approach” to UI and uses techniques from the past (full-screen non-windows) and the present (dialog non-windows).
Especially when he then notes that to achieve dnd, there should be two full-screen (ummm…) windows visible at the same time.
The GUI pipes are a good idea, but that’s already available via dnd or c&p. All you have to do is make the applications work like that (and that’s not the operating system’s problem anymore).
The general idea in the network section seems to be “replace old protocols with new ones”. I don’t know what good would come from increasing the number of protocols (yeah yeah, the new one will be better than those old ones, right? and everyone and his brother will use the new one) when there are already too many of them.
The only truly valid point in this article is the document-model.
If I were being impolite, I’d say this article is crappy, but I’m not.
I agree that every application should be a seamless part of the experience (maybe not the O/S: an important distinction) for example, we will see blurred lines between local and remote windows, as much as local and remote applications, documents, storage, etc: I tend to agree that application per se should be downplayed: future environments are more about tasks, whether those tasks are documents (in a traditional sense) or interactive windows (chat, e.g.) or so on. I think the author of the article has a few valid points, but the points aren’t well explained, and too mixed up with other incorrectness. Quite simply, if the audience for this article is the “informed technical person” (I put myself into this category), then there are too many mistakes for it to be a worthy article to read, and if the audience is the “lay technical person” (joe six-pack, e.g.) then the article is deeply misleading.
If someone _really_ wants to write an article about the future of O/S etc: then realistically the only way to do it in the current world is to set up a collaborative system (a wiki, e.g.) that many people can contribute to and refine to have a consensus: in today’s world, no single person has the perspective.
Daan,
Thanks for your article. It takes courage to challenge the status quo in a public forum.
Kramii.
you’ve gotta be kidding, there is almost no change suggested by the article. It is still WIMP (windows, icons, menus, and pointers)
Future? Operating system of tomorrow?
My view of the future operating system is a marriage between what can be seen at:
http://www.lcarsdeveloper.com/
and what can be read at:
http://www.seven3.modblog.com/
text rules baby, as long as is used the right way
This is literally just tomorrow. That’s all we get, just nicer, with more eye candy?
Has the author ever heard of ubiquitous computing? Does he think that when your jacket sleeve sports a personal assistant, you’ll still be stuck with these things? The real questions are about interfaces, and they do not necessarily mean a GUI. Heard of Sony’s Qrio? And you are talking about GUIs? Those things need more ideas for OSes than just some filesystem and widgets.
Mitchell wrote 15 years ago that we are stuck with WIMP. And somebody is asking for more? Seriously, a very, very flawed article, lacking any foundation.
You don’t solve the overlapping problem by making everything overlap all the time. You solve it by making it impossible for things to overlap. The problem that overlap solves is depicting an arbitrary amount of window area on a screen of finite dimensions. And unless we use z-axis (overlap), we need to use either a curved plane with adaptive area (non-linear zoom + no overlap), or virtual desktops (replacing overlap by viewport cropping), a combination of those, or something else.
Suppose you go all the way with the document-centric UI. If we take the view that a document == file, we need to cram 200k+ (300k+ on OS X) files on the screen. Which looks like this in a zoomable fs visualizer:
http://fhtr.org/mugen/shots/240kfiles.jpg
Point 1 – virtual Machine.
This is definitely the way to go. You lose performance but gain safety, portability, easier development and much more.
Point 2 – away with overlapping windows
This is a good point too. A taskbar to show which windows exist and an emacs-like splitting (plus d&d) gives 99% of what you need. Anyone working with emacs knows that this is more comfortable than overlapping windows.
Point 3 – simple look
It is true that too many fancy widgets distract you from your work. Platinum uses greys for widgets and one color for text highlights.
Point 4 – only documents
Documents are not enough, sometimes I need to perform “tasks”: Duplicate A CD; Synch Files with My Laptop etc.
Point 5 – software is invisible
Software is important. I want to be aware of using Photoshop and not GIMP, Word and not OO etc. especially if I paid for it. And sometimes I want both installed and to be able to choose.
Points 6 – embedding text
This is, in my opinion, a technical issue. I didn’t fully understand what he means, but I think there are better solutions.
Point 7 – network model
There is little innovation here, although much is needed in this area.
Well, the always-maximised thing may indeed be easier for non-techies; in my experience, most ordinary unsophisticated users simply don’t understand the concept of windows on a screen. Everyone I watch always maximises everything (eg mail composition windows) even though it’s pointless and less productive. They simply cannot understand the concept of sizing windows and leaving other things visible on the side etc. So though it’s less powerful, the always-maximised model is the one that most Joe-Blows already use!
Your networking sounds a lot like something I’m implementing.
I’m not quite there (public key crypto is slowing my progress).
It is very complex matter, and I’m not sure I’ll ever complete it….
OS designers have been using the VM idea for years, especially with WinNT. Here the OS is separated from the H/W through the use of a Hardware Abstraction Layer (HAL), and hence porting WinNT to another platform (e.g. the DEC Alpha) was simply a case of porting the HAL.
My conclusion on this article is that it is seriously flawed. The majority of the points raised are old ideas, and some are considered bad practice (e.g. the points raised on UI design).
The reason why lots of users maximise all their windows probably has less to do with overlapping being confusing than with the non-optimal way Microsoft set up their menus, locked to the top of the window below the title bar. This means that in an unmaximised state the menu can be almost anywhere on the screen, and it is therefore harder to learn the position that menu items will always be at (this has been a criticism of OS X, as the Application menu, with its changing size, can shift the position of the other menu items).
It also makes the menu slower to access, as it will almost always be at a distance from the five fastest pixels to acquire (the four corners and the one you’re over at the moment), unlike MacOS menus, which were bound to the top of the screen and so had an effectively infinite target area (it is impossible to overshoot the clickable area of a menu item, as that would require pushing the mouse off the top of the screen), which is obviously good from the point of view of Fitts’s Law.
I agree that a truly document based interface would be a good idea, preferably one where the “applets” used for editing data can easily be extended and have functionality replaced with plugins and new applets.
The GUI you are looking for is probably ‘screen’, commonly found in /bin/screen
It’s a terminal multiplexer.
On the other side, we have the innovative Mac OS X. It is the user-friendliest computer system on earth, built on UNIX and has OpenGL acceleration of the screen.
If that were so, every living being in the world would be using OS X. In reality, however, OS X is harder to use for 90% of computer users (Windows users).
In fact, it is so esoteric to use that 99% of the time, the Mac laboratories over at the University I work at are 99% empty. I run Jaguar on an iMac, and there have been several occasions where I just break down and boot into Windows XP, or switch over to my Linux workstation. And I’m a self-proclaimed computer geek.
Usability and familiarity go hand in hand. If you are familiar with a concept, its process becomes usable. Apple’s window manager simply sucks! It is absolutely, completely horrendous and just plain retarded. And it is perhaps the fundamental reason why the Mac labs at the University are 99% empty.
When your user doesn’t know how to quit, maximize, minimize or shuffle between applications, there is a problem. How about not knowing where to look for applications? Suppose a user wants to see all applications available to the system, what does he or she do?
In windows, they start at the start menu and then logically, proceed to the programs menu. In GNOME, the “applications” menu is glaringly conspicuous. In KDE, the K menu is an instinctive place to check. In Jaguar OS X?
Uhmmm… you search for it in the Finder. Or, while messing around with the Finder menu, the user inspirationally clicks on “Go” then “Applications”. How innovative. I don’t know if that has changed in Panther, because I haven’t bothered to upgrade and probably will not. Admittedly, Aqua looks sexy, pretty, and polished, even more so than any other GUI environment I’ve laid my eyes upon. But looks don’t automatically translate into usability.
It took me months to learn how to use OS X and unlearn many computer habits. That hardly bodes well for usability, ease of use, and flat learning curves. Even as we speak, I feel a little crippled when I use Jaguar, compared to other OSes. On Linux, I can effectively and efficiently use GNOME via keyboard input, no mouse.
On OS X Jaguar, I can’t remember how to select links in Safari using the keyboard, nor do I remember how to minimize, maximize, resize, or move windows via the keyboard. I don’t even think you can shade windows in OS X. I think you could in OS 9, but I digress. I don’t remember how to expand the Apple menu via the keyboard, nor do I know a quick way to launch programs that are absent from the task bar. Anyone who uses Linux will understand why I feel crippled in Jaguar.
To the best of my knowledge, I have to use a mouse to do the majority of the window management activity in Jaguar. Even cycling through applications is a pain in Jaguar. I understand that has been fixed in Panther, but that means shelling out $129 for a silly feature that even ‘free’ IceWM provides.
I can go on and on about how Windows and Linux are an order of magnitude more usable than OS X, but it is not necessary. To be usable, Apple needs to throw away several of its window management philosophies and present users with ones they are already familiar with. For Christ’s sake, I can’t even maximize windows in OS X. Neither can I set applications to fullscreen mode. I mean, these are necessities.
Enough of my rant. I disagree with you that OS X is the most usable operating system on earth. I think that title should rightly go to GNOME. It is the most usable environment I’ve been in. And I’m willing to bet my sister that Average Joe will have an easier time migrating from Windows to GNOME than he would from Windows to OS X, or Mac.
Yes, I don’t agree with the stereotype, now shoot me.
“The user interface should be friendly and practical, both for the newbie and for the experienced computer user. Therefore, no POSIX compatibility is needed and no GNU utilities need to be ported. And why should they? In this modern world, we want to use more than text. We want fonts, webpages, flash animations, music, pictures and movies. The command line is not suitable for them, so a graphical interface (GI) is really necessary.”
Where did you get the idea that POSIX compatibility means you can’t have a GUI? We ( http://www.syllable.org ) are doing fine with POSIX and GNU tools. If you don’t supply basic tools, your OS cannot function. What will you do without GNU tools such as a compiler and linker? Creating your own tools as well as an OS is going to take a lot of effort, and porting non-GNU tools isn’t going to reduce your reliance on POSIX or Unix-like standards.
Existing POSIX-based code offers almost everything you list as “needs”. If you want fonts, you want FreeType2. If you want webpages, you want Gecko or KHTML. If you want pictures and movies, you want FFmpeg and friends. All of those are much easier to port if you have POSIX compatibility!
Anyone who says that has not discovered the configurability that KDE is so famed for!
KDE is the universal desktop! You can make it look like nearly ANY operating system, or invent your own. KDE can look like Windows, Mac, BeOS, CDE, Amiga, QNX, RiscOS and more. Check out the Keramik and Plastik themes, completely revised for KDE 3.2! As for placing window buttons, right-click any window title bar and select Configure Window Behavior. You can configure it to your heart’s content!
Saying KDE is a Windows clone is just ignorance, and I will come down HARD on anyone who says so.
Does Windows have the versatile KDE Control Center? Panel applets? Built-in wallpaper slideshows (and this was before Mac OS X had them; it was there in version 1 of KDE, in 1999!)? So repeat after me: KDE IS NOT A WINDOWS CLONE! In fact, the Longhorn beta is copying KDE!
The reason why lots of users maximise all their windows probably has less to do with overlapping being confusing than with the non-optimal way that Microsoft locked their menus to the top of the window, below the title bar. This means that in an unmaximised state the menu can be almost anywhere on the screen, so it is harder to learn the position that menu items will always be at (a similar criticism has been made of OS X, where the application menu, with its changing size, can shift the position of the other menu items).
Menus in Windows are relative to the application to which they pertain. As much as people in the Mac OS world once adored the idea of the fixed menu placement at the top of the screen, it was confusing in a completely different way, because you had to be aware of which window it pertained to, and it led to more mouse movement if you used a larger monitor with resized windows (instead of maximized windows).
I find that the reason most users tend towards maximized windows is because they’re running at 800×600 or 640×480 (depending mostly on which OS they’re using), and those resolutions often are barely capable of displaying all of the relevant information on the screen at one time. Most of them use these resolutions either because they’re the defaults or because they find higher resolutions harder to read (because setting font sizes and so forth is not only too complicated for many of them, but is also inconsistent), and the higher resolutions require better accuracy with the mouse in most cases, which can be an issue for many users (that problem could be solved by resolution-independent icon size, but Windows doesn’t currently have that built in).
I, personally, only use maximized windows for applications that convey a great deal of information and, in most cases, can have multiple documents available in the same window area. Most of the time I prefer my documents to be viewed in a window that’s taller than it is wide, as this is a more familiar orientation for documents often viewed in print, and, for me, is often easier to read (as the lines don’t become jumbled 1000 pixels into the 1600-pixel screen). Additionally, any application I keep maximized tends to become background when I start using other applications, so everything I use is constantly displayed with that one application in the viewing area, which is helpful because often what I’m working on in the foreground is related to what sits in the background.
It also makes the menu slower to access, as it will almost always be at a distance from the five fastest pixels to acquire (the four corners and the one you’re over at the moment), unlike MacOS menus, which were bound to the top of the screen and so had an effectively infinite height (it is impossible to overshoot the clickable area of a menu item, as that would require pushing the mouse off the top of the screen). Obviously good from the point of view of Fitts’s Law.
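The Fitts’s Law point above can be made concrete with the standard Shannon formulation, ID = log2(D/W + 1): a target pinned to the screen edge behaves as if it were very deep, so its index of difficulty drops. A small sketch (the pixel figures are made up for illustration):

```python
import math

def index_of_difficulty(distance, width):
    """Fitts's law (Shannon formulation): bits of difficulty for a
    pointing task over `distance` pixels at a target `width` deep."""
    return math.log2(distance / width + 1)

# In-window menu item: 20 px tall, 400 px of mouse travel.
floating_menu = index_of_difficulty(400, 20)

# Screen-edge menu: you cannot overshoot, so the effective target
# depth is huge; model it here as 500 px.
edge_menu = index_of_difficulty(400, 500)

print(round(floating_menu, 2))  # 4.39 bits
print(round(edge_menu, 2))      # 0.85 bits: far easier to acquire
```

The exact edge-depth figure is arbitrary; the point is only that the ratio D/W collapses when overshoot is impossible.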
Unlike MacOS, though, when you have a floating window in Windows with an attached menu you are usually closer to the menu than to any of those 4 other points (though they’re easy to get to if you like to toss your mouse across the room or use a trackball like a 5-year-old playing centipede). The idea of attaching menus to the windows is based on the idea that users will commonly be using the mouse in the window in which they are working, and will focus attention on that window. Personally, I’m more keyboard-oriented anyway, so the only reason it matters to me is because I don’t need another slice of my screen being used up by something that makes the interface harder for me to use, if not everyone else.
I agree that a truly document based interface would be a good idea, preferably one where the “applets” used for editing data can easily be extended and have functionality replaced with plugins and new applets.
This would be great for people that do nothing on their computers but edit and read documents, but computers are general-purpose things, and moving them towards a document-centric model simply alienates the rest of the user base. On the other hand, making document-centric work easier, and making the operating system behave well in a document-centric user’s day-to-day tasks is a good thing. It just needs to be balanced against the general purpose needs of the remainder of the system’s users.
You will be going back to the single static partition of memory if the only application in use at any one time must have the attention of the user.
There is a reason we left that model back in the dust: it was inflexible and inefficient.
…if we all stood up and decided to “invent” our own OS. Wooopeee! I don’t know about you guys, but I seriously think more energy should be focused on improving what we have (good or bad or ugly). Everyone has an opinion and disagrees with others. We don’t need 5000 different OSes. I bet future OSes will simplify and become more transparent. They will fundamentally work the best way (the OS core) and will only differ cosmetically (think OS X: a UNIX core with any GUI you please). Fewer people will care about what the underpinnings of the OS are; it will just work. If anything, forget about reinventing the OS and work on designing X servers and window managers/environments. The rest should be transparent and irrelevant. VMs? Current Linux/BSD porting efforts should be given more credit.
“Imagine you have two windows, say a maximized Outlook Express and a normal New Message window on top. When you accidentally click the Outlook Express window, it will look like the message you were typing is lost. Of course, it’s just hidden behind the window you just clicked, but that is not obvious. The solution is to take the idea of the original MacOS even further: not only hide other applications when you activate one, but make all windows maximized instead.”
yikes! The problem isn’t overlapping windows. The problem is the default click-to-front behavior of the windows. windows should stay put until the user moves them (except for some “emergency” type warning messages.) woot?
Is OS X harder to use for 90% of computer users (Windows users)? Most of them have never used OS X. In fact, I believe most of them have never even seen a Mac for real.
or not?
For the simple reason that it has stimulated much debate over OS design. I, for one, agree with most of the author’s points.
Most windows should be always maximized, as this stops the user spending time fiddling trying to get the screen layout right. As for drag and drop between apps, it is useless; just use cut and paste. In a filesystem environment, your file manager should start with a split screen, just as smart ones like MC (Midnight Commander) do. Drag and drop should only affect the main app, and maybe items in the taskbar or elsewhere.
POSIX has more to do with programmers than with end users.
Windows should never be lost to the user. This is the #1 UI design mistake today. We need ways to see where the data we are working on is.
And yes, not everything is document-centric, but very few people have trouble with non-document-centric apps.
The claim that eye candy just wastes cycles, while true in some cases, is not universal. Drop shadows under menus and windows are a usability plus, as they let you determine more quickly which window presently has focus.
The GUI option that allows only one application to be visible at a time is a very poor idea. Very often I find myself (and I expect that others do too) wanting to have multiple things visible: API docs while coding, an email I am responding to, an article being referenced in something else I am writing. The list is endless. The current system has enough flexibility that everyone can find a setup that works for them. If all-maximized windows work for you, that’s great; it wouldn’t work for me.
Replace documents with files and you have a good step.
I want an extremely integrated environment. I want to click (or double-click) on an icon and launch the necessary app (already done); think of the embedded viewers in KDE. I click on a text document, and in the same window the text editor opens. I click on an image icon, and a small menu opens and asks whether I want to view or edit the image; Photoshop is already a part of the system. If I select multiple images, I can either drag them to another directory or open the entire selection in a slide show. I want tighter integration of software, while using open standards. That way KDE can open stuff, and I can send it to my friend who uses Gnome, who changes it, and they can send it to a Windows user who can change it and send it back.
In short: Microsoft’s tight integration made even tighter, with open standards for file formats. That will be the near-term OS of the future (20 years).
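The “click an icon, get the right embedded editor” idea above is essentially a registry mapping document types to components. A minimal sketch in Python (all names here are hypothetical illustrations, not KDE’s actual KParts API):

```python
# Hypothetical component registry: maps a file's type to the component
# that should open it embedded in the same window. Not a real KDE API.
import os

HANDLERS = {}  # extension -> handler function

def register(ext):
    """Decorator registering a handler for one file extension."""
    def wrap(func):
        HANDLERS[ext] = func
        return func
    return wrap

@register(".txt")
def open_text(path):
    return f"text editor embedding {path}"

@register(".png")
def open_image(path):
    # A real shell would pop a view/edit menu here.
    return f"image viewer embedding {path}"

def activate(path):
    """What a double-click in the file manager would invoke."""
    ext = os.path.splitext(path)[1].lower()
    handler = HANDLERS.get(ext)
    if handler is None:
        return f"no component registered for {ext or path}"
    return handler(path)

print(activate("notes.txt"))   # text editor embedding notes.txt
print(activate("photo.png"))   # image viewer embedding photo.png
```

The design point is that applications register against types once, and the shell, not the user, picks the component; swapping Photoshop for another editor is one registry change.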
KDE already has this potential: it’s called Kpart though I would prefer to call KomponentWare. Such approach are better suited to pen computer. Apple did it with OpenDoc and Newton, but chicken out when Job took over. The big problen with OpenDoc was that it was very purist and did not allow an evolutionary path to this goal.
On the RiscOS platform there was two alternative technology with would achive the same goal in the long term. Unfortunatly this platform has almost been totally crushed be the big giant of America with anti-competitive practise. They are now hanging on at shoestring.
What this mean that A4 sized pencomputer should of been common by now. It isn’t thank to the giant of America. The world computer has stagnated. Japan has realized this along with other asian countries.
Ask 10 people what makes a good UI design and no doubt you’ll get 10 different answers.
Some people like all windows to be maximised, other don’t. Some people like windows to come to the front when clicked while others find that annoying.
I think what makes a good UI is how flexible and configurable it is. The moment the UI forces the user to do things a certain way it loses potential users.
since when is GUI and apps the same as OS design? when i think of OS design, i think of microkernels and exokernels, page replacement algorithms and tcp/ip stacks. shouldn’t this article be called “Designing the GUI and Apps of Tomorrow”. everything discussed here can already be implemented on existing kernels that are out there.
Japan has realized this along with other asian countries. Japanise company has tried introduced a more reliable operating system to desktop and portable computer market. What did the GIANT of the USA do? They blocked such development. This clearly p***ed the Japanise corporation of badly. They are now switching over to Linux. The GIANT of the USA are pretty stupid in my opnion.
Put every document in XML, and then the namespace defines the presentation behaviour/software to use, e.g. SVG, HTML, Word(?). Piping documents would then be easy, and using XPointers or something, remote and local docs could be referenced.
I guess you register different software against each namespace/doctype. So no MS software, if you want.
A VM running multiple browser windows displaying lots of different doc formats -> Mozilla !?!
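The namespace-registration idea above can be illustrated with Python’s standard xml.etree: the namespace URI of the document root picks the software registered to render it. The URIs are real W3C namespaces, but the dispatch table and function name are hypothetical:

```python
# Hypothetical: pick a renderer by the XML namespace of the document root.
import xml.etree.ElementTree as ET

# namespace URI -> software registered for it (made-up choices)
RENDERERS = {
    "http://www.w3.org/2000/svg": "vector drawing editor",
    "http://www.w3.org/1999/xhtml": "web browser",
}

def renderer_for(xml_text):
    root = ET.fromstring(xml_text)
    # ElementTree spells namespaced tags as "{uri}localname"
    if root.tag.startswith("{"):
        uri = root.tag[1:].split("}")[0]
        return RENDERERS.get(uri, "no software registered")
    return "no namespace: fall back to plain text"

svg_doc = '<svg xmlns="http://www.w3.org/2000/svg"><circle r="5"/></svg>'
print(renderer_for(svg_doc))  # vector drawing editor
```

Registering a different program against a namespace (“so no MS software if you want”) is then just a change to the table, not to the documents.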
– – – – – + + + + + – – – – –
Begin Analytical Comment
I think the Main Question here is how to make advanced machines like computers look as simple as a box of cookies (first thing that sprang into my head, just ate one).
The question is not IF it can be done. We’ve seen before that advanced things can become simple to the user. By putting an automatic gearbox into a car, for instance. But you won’t get the same feeling.
It’s just what’s more important to you, and how fast you’re willing to adapt to a new (and simpler) environment. But as said, be very careful when implementing.
Personally, I think that restriction and simplicity lead to creativity, as is proven many times. Give a child a piece of paper and a pencil, and it will start to draw. Give it a full painter’s kit, and it will run away. But that’s just my opinion.
Anyway, it’s good to have a real discussion about this.
End, thank you.
– – – – – + + + + + – – – – –
…reading when the author said, “C++ was a bad fix to C, and Java cleaned everything up”. This statement and most that came after were sheer garbage. Is it too much to ask that an author know something about their subject before writing about it? If I’m going to take the time to read three pages of material, I’d like some confidence that the people asking for my time do some actual, honest research and become educated before establishing a dialog with the rest of us.
“Such approach are better suited to pen computer. Apple did it with OpenDoc and Newton, but chicken out when Job took over.”
“The GIANT of the USA are pretty stupid in my opnion.”
Perhaps you should get a sufficient handle on the GIANT’s rudimentary grammar and spelling conventions before you offer criticism. You undercut your own argument by demonstrating that development continues regardless of America’s presumed anti-competitive practices.
“Apple’s marketshare is small, and in this way they can keep their former customers while they can also attract new ones: their OS is now built on the “proven reliability” of UNIX thanks to it being 30 years old. Apparently, they have not read the Unix-Haters Handbook, from which it seems UNIX was rather unstable even 10 years ago.”
?!?!?!?!?!?
Compared to what?
“For the simple reason that it has stimulated much debate over OS design. I, for one, agree with most of the author’s points.”
It’s much easier to get a lot of discussion out of a bad article than a good one, but at the same time this particular topic on a forum dedicated to operating systems tends to be an attraction for discussion anyway. The article itself is highly flawed, but a point-by-point discussion on a 3-page article is nearly impossible on this forum without submitting a counter-article (due in part to the length limit on posts, and in part to the way posts are displayed).
“Most windows should be always maximized, as this stops the user spending time fiddling trying to get the screen layout right. As for drag and drop between apps, it is useless; just use cut and paste.”
Users that might spend a lot of time fiddling with non-maximized windows would probably be the same users that rely on drag-and-drop. People confused by overlapping windows would probably be equally confused by cut and paste. Most windows have no need to be maximized, and often doing so just causes excess wasted space. Imagine Notepad with a short pre-formatted (with line breaks in the file itself) file open, at 1600×1200, or even 800×600. Even more complex document editors, like Word (or equivalents), tend to waste a lot of space in full-screen mode, because most of the documents they deal with are meant for printing on paper that’s longer than it is wide. Web sites often fall into the same trap because they’re forced to design around the small resolutions most viewers will have, leading to pages wider than they are long even though the information could be conveyed more clearly and with fewer pages in the other direction (then again, if you can span multiple pages you can get more ad hits).
“In a filesystem environment, your file manager should start with a split screen, just as smart ones like MC (Midnight Commander) do. Drag and drop should only affect the main app, and maybe items in the taskbar or elsewhere.”
Why should your file manager be split-screen if you’re not going to use drag & drop? The whole idea of split-screen views in file managers is to make drag & drop easier. This is exactly why I don’t use split-screen file managers, and only occasionally use directory-view options in file managers (primarily if I’m moving things between directories with similar names in different parts of the hierarchy). If I need a second directory open in the file manager to make movement between directories faster, I should be able to open another file manager window and move between them easily, without worrying about the file manager having too much overhead to open quickly and run side-by-side. The options required in a file manager’s menu and toolbars are fairly limited, so the extra screen real estate from a second window should be minimal.
“POSIX has more to do with programmers than with end users.”
This is true, and people should be reminded that Windows NT could just as easily be POSIX-certified as many UNIX derivatives. With SFU there’s a POSIX subsystem in Windows XP that’s perfectly fine for many POSIX-aware apps. It can all be made to look like Windows, or it could be made to look like OS X; and though POSIX does include standards for UI services, the actual look and feel doesn’t matter.
“Windows should never be lost to the user. This is the #1 UI design mistake today. We need ways to see where the data we are working on is.”
This is the type of problem easily solved at the application level, though. Additionally, much of this behavior is user-dependent. Many people would not like it if they couldn’t switch between multiple documents of different types without hassle, or between a document and application windows, simply because some UI designer thought the user shouldn’t be allowed to shift focus away from the document they’re currently working on.
“And yes, not everything is document-centric, but very few people have trouble with non-document-centric apps.”
People have trouble with all types of apps; the document-centric ones simply tend to have the worst interfaces, because the designers are trying to cram features into them while still allowing the user as much space as possible to view the document those features will modify. People on *nix use Emacs and vi not because they’re simpler than applications like Word, but because they have most of the same features (and more, in many areas), yet keep most of the interface out of the way (in favour of obscure keyboard commands that scare many users away, in many versions of each editor). Document work is fundamentally not drag & drop, point & click; it is done with hands on the keyboard, so of course those who know the keyboard commands for their editors can handle documents much faster and with less hassle. Still, how many people would be willing to use (or mandate in their company) Word if the interface were simply the document window, and every new user had a list of keyboard commands taped to their desk to figure out how to do anything more than simply type?
If people more commonly have problems with document-centric applications (as is implied by saying that people don’t have trouble with non-document-centric apps), what benefit is it to make the OS itself document-centric? In many ways, it already is, and it suffers from many of the same problems inherent in document-centric applications: it tries to pack in as many features as possible while still trying to stay out of the way of whatever application you’re trying to focus on. No one has a perfect solution; I don’t have an OS no one’s ever seen before in my back pocket that will solve these problems. However, the points need to be made, and people need to start thinking about these things in different ways. The arguments are really not very different than they were 10 years ago, and at that time Linux didn’t have a GUI worth mentioning, MacOS was unrecognizable to someone first exposed to it with OS X, and Windows was still a DOS application that no one used.
A future computing paradigm like the one described at http://tunes.org/ really looks nice.
“People that don’t learn from the past are bound to repeat it.” There was almost nothing here that is different from what people have tried (and suggested) in the past. Yes, you can do 90% of applications this way, but it is the other 10% that break your model, and they are very difficult to implement. Once you have to ‘hack’ your design to add these in, you will find that it would have been simpler not to have constrained the model in the first place, but by then you are stuck with lots of source that you will have to change.
“Most windows should be always maximized, as this stops the user spending time fiddling trying to get the screen layout right.”
That won’t work very well with multiple monitors; I hate it when apps maximize across monitors. My dual-headed Matrox card makes two of my monitors (I use 3) act like a single monitor, so whenever I maximize an app on either of those two monitors it expands across both, and I hate it when that happens.
Convince Apple to release Aqua/Quartz window manager under the GPL license.
Don’t get me wrong. I HATE Mac OS X. The “one menu bar” system drives me bonkers. The entire system is based on the fundamental principle of ONE FEEKING MOUSE BUTTON, and the basic interface just isn’t intuitive.
But it has serious advantages over all competition in that it’s incredibly responsive, has smooth, flawless eye candy, perfect fonts, etc etc. It’s already based on FreeBSD and XFree86, so compatibility is not an issue.
It would take a matter of hours for someone to port a working version to Linux. You can even download and boot the Mac kernel and Darwin on i386-based machines, straight from the Apple site. Imagine KDE or Gnome running on the Quartz/Aqua platform. I prefer KDE in terms of customisation and usability, but I can appreciate the concepts behind the Gnome project, and always keep an eye on its development. With the Quartz back-end in place, Gnome especially would be a perfect desktop, and support and development would simply explode.
An important issue is convincing Apple that this is in their interest. This is the difficult part. My personal opinion is this: if Linux with Gnome or KDE were as flawlessly responsive as Aqua, it would make Linux undoubtedly the best PC desktop OS. This would not only push the popularity of Linux, but also push innovation in Mac OS X. This in turn would push the G4/G5 processors as a viable alternative to Intel/AMD.
Don’t get me wrong, I’m certainly no expert, and this concept is probably fundamentally flawed in a dozen ways. But at the moment, in my head, it’s a great idea. I just wish I could sit down and have a one-to-one with Mr Jobs.
This would work well for the new computer user as an introduction to computing.
However, this system, it seems to me, means that you are going to lose efficiency in the production of complex models, and may not even be able to achieve them.
Personally, I like having overlapping windows; I find I can do it in a neat and efficient way.
A VM would be too slow, especially on top of modern systems.
Folder navigation will become difficult, if even possible, according to the article. Folder navigation, in my view, is best done as in OS X, with column view.
As for different protocols, that doesn’t worry me; the computer sorts it all out for me.
As for apps, you need to be able to see your different apps properly. How would something like Photoshop work? Tiles on another separate part of the screen? No thanks. I like modern systems, despite the lack of new innovation, because it’s not necessary. People have got used to that way of working, and to change it all now is a little late; it would slow productivity down for some time while people got used to the new system.
If the Author likes it so much he should create a copy for himself and present it to the public.
Personally, I think that restriction and simplicity lead to creativity, as is proven many times. Give a child a piece of paper and a pencil, and it will start to draw. Give it a full painter’s kit, and it will run away. But that’s just my opinion.
I think if you give a child a full painter’s kit, then the kid’s more likely to make one very large mess instead of running away.
Also, how do you restrain the child while still allowing the painter to paint if there’s only one set of tools available? I.e., making the OS simple enough for the novice, while not making it difficult for someone with a clue to use the full power available to them.
<rant>One point worth mentioning is that many of the author’s points have nothing to do with OS design, but rather with UI/application design. This point always bothers me: much of the debate I have seen over which OS is better is really about user interface and user experience, which in reality have almost nothing to do with the OS at all! What is really being argued are the virtues of an operating environment, which is just a large program running on a host OS. As is seen with KDE and Gnome, the operating environment can run on different host operating systems. The only people who really ‘use’ an OS are programmers interacting with that OS’s system call interface, and dealing with the foibles of that OS’s design and implementation. The rest is UI and application design.</rant>
On to the article:
I think an example of the ‘everything is a document’ idea is the Gobe Productive office suite. It has a single unified document format for the word processor, spreadsheet, drawing, and presentation software such that essentially one user interface and document format is used to handle them all, and parts of each type of document can easily be dragged and dropped onto each other. It is a good idea, though as others have pointed out, not do-able across all application domains.
VMs and fully distributed designs are where true OS design is headed, in my opinion. I think Mac OS X is a good example of how the VM idea may work. Currently OS X can run native OS X apps, and OS 9 apps through a module-like OS 9 support environment. This concept could be expanded to include support environments that run applications natively compiled for a larger number of other platforms. Perhaps we will end up seeing a single small kernel (perhaps stored in a ROM) with a group of software VMs. The kernel will be responsible for loading the proper VM based on what type of application is being launched.
A distributed framework would allow different networked devices to be used (hopefully transparently) to access your data, as long as the appropriate VM is available for that device.
These are very general ideas, but I think this is the way things will go.
Brennan
“It’s already based on FreeBSD and XFree86, so compatibility is not an issue.”
Quartz isn’t based on XFree86; it’s based on Display PDF, which is a descendant of Display PostScript from NeXT. While Apple does ship an X server (which, I believe, is based on XFree86), it sits on top of Display PDF.
“Imagine KDE or Gnome running on the Quartz/Aqua platform.”
Why imagine when you can already do it:
http://primates.ximian.com/~aaron/doing/evo-osx.html for GNOME and Evolution, and http://kde.opendarwin.org/ for KDE. Note that the GNOME stuff requires Fink and X, while the KDE stuff is native (though I think a Fink/X Window based KDE is also available). A native GTK on OS X project can be found here: http://gtk-osx.sourceforge.net/
“Wait. Read that again, and think for yourself: how much innovation has there been and will there be? Let’s start with Gnome and KDE. They are mainly copying the user interface of Windows. Yes, Gnome places the application menu on the top of the screen instead of the bottom, and KDE has invented KIO. But almost everything else is plain copying. KDE even has the window buttons in exactly the same place as Windows. There is a reason for this. A quite simple one, actually. Most people today work with Windows, and when they make a desktop environment that behaves radically different, they are afraid they scare people so that they continue to use Windows.”
KDE and GNOME are the most popular (!) DEs and WMs. If you want an innovative DE/WM, you’d better not judge by the most popular ones, or by their default settings, which is exactly what you chose to draw conclusions from.
Instead, try changing the default settings. The default settings are meant for people who are new, and most new people are coming from MS Windows 9x/NT. You don’t want to scare them, do you? KDE is more like Windows than GNOME is, IMO, which is why I believe KDE is more popular.
Want an innovative DE/WM? Try for example Enlightenment and see how innovative the various versions, themes, and features are.
Enlightenment doesn’t exist to “take over the world” either, which you can tell from its BSD license, or from reading what the authors (e.g. Raster, Pixelmonkey, etc.) have to say.
If you want to see innovation, you’ll have to look further than your nose is long, and be prepared to move away from the status quo and the popular. Usually the chances that an innovation will become popular are slim… which correlates with why it isn’t popular, and why you perhaps didn’t know about it.
Quartz isn’t based on XFree86; it’s based on DisplayPDF, which is a descendant of Display PostScript from NeXT. While Apple does ship an X server (which, I believe, is based on XFree86), it sits on top of DisplayPDF.
I knew Quartz had something to do with PDF, which I found quite confusing, but interesting. But I didn’t know XFree was secondary. Is that what gives OS X its smoothness? I always wondered why there was such a huge gap in feel between the Linux & Mac OS X GUIs. Like I say, I’m no expert.
Why imagine when you can already do it:
I’ve never seen these projects before, and although they are interesting, my point here was to port the existing graphical backend & GUI to the i386 platform, through the GPL. That’s where 90% of the market share is, after all, and I think, in the long run, it would be beneficial to Apple. Plus I don’t have a Mac, and crave a major update to XFree (or whatever makes Macs so responsive).
For all its faults, Windows is definitely more intuitive and responsive than KDE or GNOME, and that’s the main reason the majority of i386-based users are stuck with M$. I feel that until this is resolved, Linux will remain a hobbyist’s toy when it comes to home users’ desktops.
In saying that, I also have strong feelings about package management (which sucks on every platform I have experience with). But it looks like a lot more creativity is going into that now (compared to recent years).
… translates to
The widgets themselves
;o)
Cheers!
STIBS
First I would like to congratulate the author on a good effort to come up with an alternative solution to some aspects of computing that have troubled him, and which might lead to some better systems for some people.
However, I can’t help but agree with many other comments here: this is not really very innovative. It is taking some old ideas and throwing them around in different ways. I really don’t see very much that is new or different. It’s more like a rearranging of furniture than a completely new building.
I think the author grasps some of the basic issues in computing today, for example the unification of individual computers over a network, making the user interface easier, etc., but I don’t really see a central core of reasoning or philosophy behind these decisions. It’s kind of just a bunch of pieces thrown together without a whole lot of coherent reason. I don’t see an underlying thread of consistency. I think the ideas here will have to be developed further and in more detail, because at the moment this radical new model isn’t so radical.
Since this author focuses a lot on what is practical, I am thinking they are a practical kind of person, perhaps a Taurus, Virgo or Capricorn, astrologically speaking. Such people are not always able to come up with something new and original; it’s part of their style. But I DO very much appreciate the focus on practicality and down-to-earth simplicity in this article, so thumbs up for that!
A good future os is going to need rather a lot more refinement and VISION, but this is a good step.
Brennan, I agree with your point… a lot of the discussion is about user interfaces and applications rather than operating systems. But here is the point: every user is subjective, and not everyone has a DEEP understanding of what an operating system is. They figure it’s the way they use the computer, whatever image or face the computer shows them as part of the way they interface with it. Yes, we know that there are lower levels and things going on behind the scenes, but for a lot of users an operating system is considered the same thing as their understanding of how the computer works, and if all they’ve ever done is use applications with a limited knowledge of how they really work, as might be the case here, it’s no surprise that they’d be thinking on the level of applications and user interfaces rather than behind-the-scenes technology. I guess it raises an interesting question of what, to the user, an operating system really is. Furthermore, I think an operating system should be transparent to the user, so maybe this author actually has a good point about focusing on what the user experiences rather than on what goes on in the background. Wouldn’t a good operating system be one that does not require the user to know how it does what it does? Isn’t that what makes a good interface? Isn’t the interface the operating system, to the average user? A lot of us know that an OS is a lot more detailed than that, and in that sense this article does fall short, or at least appears to, but maybe in the author’s naivety he actually has the right focus?
Just another two cents… must be racking up the coins by now.
Since this author focuses a lot on what is practical, I am thinking they are a practical kind of person, perhaps a Taurus, Virgo or Capricorn, astrologically speaking.
http://maddox.xmission.com/c.cgi?u=astrology
:oP
I knew Quartz has something to do with PDF, which I found quite confusing, but interesting. But I didn’t know XFree was secondary. Is that what gives OSX its smoothness?
Yep, X is in there only to provide backward compatibility with graphical UNIX apps (and, I suppose, to tempt *NIX users over to the Mac). The X server sits on top and uses Quartz (i.e. DisplayPDF) to draw the X apps on screen. DisplayPDF is what gives OS X its smoothness.
my point here was to port the existing graphical backend & GUI to the i386 platform, through the GPL
Probably the easiest way to achieve this “GPL’d OS X” nirvana would be to use GNUstep (a clone of the NeXTSTEP/Cocoa API) on top of Linux, and to revive GNU DisplayGhostscript (a long-dead clone of Display PostScript). While GNUstep has had some life breathed back into it (probably helped by people wanting to port OS X software to other *nixes; it had languished for quite a while), I don’t think there are enough people with the itch (at least for now) to revive DisplayGhostscript.
As someone else pointed out, the title of the article should have been Future Desktop Concepts – not Designing the Operating System of Tomorrow. I thought this article was going to be about microkernels vs monolithic, and other technical operating system concepts. Not a gui design advocacy.
As far as maximized windows, I like my windows maximized and then I just alt-tab through them, but I wouldn’t want to force this on anybody. I run at 1600×1200 too. I tend to organize my windows into different workspace too, so I don’t have to alt-tab through too many windows.
I don’t think there are enough people with the itch (at least for now) to revive DisplayGhostscript.
I forgot to add: DisplayGhostscript sits on top of X instead of drawing directly (like Quartz does), so you wouldn’t get much benefit from reviving it unless it’s rewritten to draw to the screen directly.
It’s certainly a cool idea, even if it’s not realistic. By sheer coincidence, a big post just came up on Slashdot about the X.Org development team, the breakaway from XFree86, and promises of a “new level” of X server technology.
IMO they need to move very quickly, or this won’t be the “Year of the Desktop” for Linux. If Linus genuinely has home desktop aspirations for Linux, then he has to seriously urge developers into intense and fast work on the X server or its alternatives.
I still believe that in Quartz/Aqua, Apple has the power in its hands to steal a large chunk of the Windows market. It’s even rumoured (and I stress rumoured) that Apple has a working i386 version of OS X.
Think of it this way: it’s much easier to seduce the Linux user to Apple products if Apple is the driving force behind open-source GUI development. And it’s much easier to seduce Windows users to Linux if Linux has a world-beating GUI. The way I see it, Apple has the home computing world in its lap, and doesn’t even seem to know it.
Also, I failed to mention the XFCE4 window manager. Although it is at a basic stage of development compared to the likes of GNOME & KDE, I found playing with this little front-end an absolute joy, in terms of its lightning speed, smooth look and total simplicity. It needs a whole lot of development, but it’s an excellent step in the right direction for Linux. Not something I would use full time, but something to watch, all the same.
A window manager that fits the bill is called ratpoison.
I think this article is totally ridiculous. It basically says: New users expect [situation A] so let’s cater to new users across the board and call it innovation. Back to reality, those of us who use computers everyday are not only used to and comfortable with the current interface similarities, but we also are more productive with these interfaces.
On the comparison of KDE/GNOME to Windows: it doesn’t take a rocket scientist to see that all three are quite different. Each project makes its own UI decisions based on input from its userbase (except maybe Windows). The most notable thing about KDE specifically is that it can morph to practically any behaviour you might want. GNOME shares much of its UI design with that of Mac OS. They may not look the same (those are called widgets), but they act the same in many ways and have thought out virtually every UI design decision, down to whether OK should be on the right or the left of Cancel.
I think some real innovation is happening on the desktop in a few areas. Sun’s Java Desktop is showing a new way of thinking, though it is severely limited in practicality thus far. Sphere3d for Windows XP shows promise, but needs some serious UI work before the benefits outweigh the bad-UI tax it carries. Finally, MS is taking a different tack. They are scarcely changing the shell at all, but boosting and integrating applications in a way that will provide endless complexities for IT staff around the world. The end result of MS’s work looks good when it works, but I shudder to think what it will be like when it doesn’t. Let me paraphrase and reapply Bjarne Stroustrup: with [WinXP] it was easy to shoot yourself in the foot; with [Longhorn] it will be more difficult to shoot yourself, but when you do, it will take your whole leg with it.
I am a total newbie, but even I recognise that the document model is outdated when talking about future OS designs.
I would have thought that task-based OSes would be the way forward, so that you are thinking neither about applications (“what program do I use?”) nor documents (“what file do I open?”); instead you simply think “what do I want to achieve?” and then employ whatever task-based tools are required to make that happen.
C++ was a bad fix to C, and Java cleaned everything up.
I’m sorry, I can hardly contain my laughter.
… just about user UI, mostly GUI – windows, etc.
IMHO, though a GUI could (and probably should) be part of an OS, it is not the GUI that defines an OS.
If anyone is interested in some innovation in OSes, look at Plan 9 (and it _is_ pretty old!).
Also, IMHO the appearance of Linux effectively killed all OS innovation; for the last 10 years everyone seems busy just copying/porting into Linux the whole Win/Unix mess as it was by the mid-to-late ’90s.
Sirs:
I agree that there is NO need for overlapping windows.
The Apple Newton MessagePad had very few overlapping windows, but was still able to function.
How? By having a place on screen to store clippings. Too bad they never saw the true significance of it, nor the next step in functionality.
“what do you do if you have applications that do not process any documents?”
You have them process data (and shove the results in the temp save area).
For those that do not process any data (like screen savers and games), you don’t need to care about these anyway.
Sounds like the author unknowingly fell in love with OpenDoc(tm).
Most people on this forum are throwing the baby out with the bathwater.
GOOD POINTS:
1) data pipes for GUI: abstract production/consumption of data
2) KDE, GNOME, and co. aren’t innovative (gasp!)
3) eliminate overlapping windows: this extra power is not worth the confusion for most users, IMHO
4) VM
5) interfaces should be more data-centric: users care more about their data than their apps
DUBIOUS POINTS:
1) I’m not sold on an entirely doc-centric interface
JUST PLAIN WRONG (or VERY funny):
1) C++ was a bad fix to C, and Java cleaned everything up: this is wrong on so many levels of abstraction (hehe)
To be fair, Daan did not claim that his ideas were original. This is only his vision of his ideal future OS.
Still, he presents a nice synthesis of ideas, some of which are still not perfected in any OS today.
Oh please, the Linux desktop needs more than a new X server.
It’s not enough even if Apple reduces their hardware prices. OS X on x86? Dream on. By the time Apple releases OS X for x86, it will be no match for Windows or Linux.
In every great revolution, you have the conservative right and the radical left. What will result from these ideologies and the way things work now will not be this particular GUI model, but a reevaluation from the ground up of our current GUI systems. The next-generation GUI will be VERY different from the Windows/KDE/GNOME models we have now, but not that different.
The next answer will come from the person who decides to build this GUI, points out the character flaws and the general discomforts, and builds on it with the ideas already in use.
It’s already known that, ergonomically, the Windows GUI sucks. As he said in the article, it’s almost as if they INTENDED to break all the rules. The close, min, and max buttons are in the exact opposite corner of the screen from the menu/quick launch. As my mouse is normally at the top right corner anyway, given the layout of the windows, I’ve moved my menu to the top of a right-aligned docklet, but most people don’t know how to do that in KDE, let alone Windows. All of my utilities are on a bottom-aligned docklet; since I read top to bottom, it’s just in good flow for my eyes. There is no doubt that desktop icons need to be done away with, for boot speed purposes. But something tells me the masses will NOT be comfortable with that.
I think the document-model is decent. Have file associations so that the icon for the document shows what application is going to be used by default, but still allow double clicking to open it in another application.
Let me say that I would not use such a GUI, simply because I know how to manipulate blackbox/kde. But most users don’t.
I think that his concept on the config applets is very valid.
I’ve read a lot of bad reviews of 3D desktops (technically, z-indexing is already 3D emulation), but perhaps a window rolodex of some kind: a portion of each window is visible behind the others; click on a title bar and that window moves to the front, and all the others shift back one to fill the hole.
Just a thought.
Hasta
Oh please, the Linux desktop needs more than a new X server.
It’s not enough even if Apple reduces their hardware prices. OS X on x86? Dream on. By the time Apple releases OS X for x86, it will be no match for Windows or Linux.
I said it was just an idea, which was most likely fundamentally flawed, and I realise this is true. But you don’t present any reasons behind your little rant. Did you even read my post? ;o)
I really believe that all Linux lacks to become a genuine MS-Windows-beater is a classy GUI and a standard package management system. And those things are getting close. I think that with the chaos in the X server community, Apple is missing a golden opportunity to finish the job it started with the original Macintosh.
It’s just my opinion, and I know it’s flawed, but Windows is the single worst operating system on the planet. It is broken down to its very core and philosophy. I have it installed only for 2 games that I play online, which cannot be emulated with WineX 3.
If I woke up tomorrow, and Windows was gone, I wouldn’t shed a tear, because I can do 90% of anything I personally need a computer for with Linux. The other 10% will follow soon. Microsoft had better start believing that.
Also, I said nothing of porting Mac OS X to x86. I said I hate Mac OS X. I just want the graphical back-end (apparently Quartz/DisplayPDF) to be ported. If Apple slapped their name onto that most fundamental part of the Linux desktop, attached to a GPL licence, then it would be in a position to steal back (at least) a big chunk of the market share.
Let’s not forget that without Quartz, OS X is essentially beefed-up FreeBSD. Is it really so controversial to suggest that they might have an x86 port in mind? Especially with the Darwin source code and x86 install openly available on their site.
Okay so it’s a crazy dream, but a nice one.
You’ve definitely got some interesting ideas in this article. The future OS needs people who aren’t afraid to explore modern, innovative ideas in these fields of user interfaces and decentralized computing. Keep up the good work; don’t be shy to stand on the shoulders of giants either. Perhaps look into things like capability security systems, microkernels (L4 etc.), peer-to-peer virtual hard disks, mind-mapping associative file systems, etc.
I think all dem buttons should be pushed and all da stuff work tagedder reel well. And den dah stuff all workz and uzers happi.
(future request to OSnews : please make sure writers have GED or equivalent)
The last page sounds very much like Plan 9. People should take a look at it; the OS itself has some very nice features (though it needs applications and a usable UI).
I agree that there is NO need for overlapping windows.
The Apple Newton MessagePad had very few overlapping windows, but was still able to function.
How? By having a place on screen to store clippings. Too bad they never saw the true significance of it, nor the next step in functionality.
Oops, someone just fell into a big gaping hole in logic. Every OS I’ve seen that works on devices with similar functionality and form factor to the Newton does something similar, and doesn’t have overlapping windows, even when they’re multi-tasking. You don’t need to store clippings on-screen to do this, as you can store them the same way you do in other operating systems (though storing them on-screen can be nice). This method of functioning is primarily determined by the form factor, low resolution, and normal method of use, though, and not some massive UI revolution that’s going to take over outside of the handheld space.
“what do you do if you have applications that do not process any documents?”
You have them process data (and shove the results in the temp save area).
Why? Not all applications have results, and not all applications need to export any results they may have.
For those that do not process any data (like screen savers and games), you don’t need to care about these anyway.
This is very true, and it’s just a matter of not engineering the OS in such a way that it becomes cumbersome to design software in this way.
Sounds like the author unknowingly fell in love with OpenDoc(tm).
That’s quite possible, but again much of what he’s looking for can be done on existing systems without OpenDoc. The big difference is that it’s generally not forced on people by the operating system, and when it is, it’s generally not a current desktop OS.
> C++ was a bad fix to C, and Java cleaned everything up.
Yes. All my Java friends always tell me how they “deploy” everything, but… I ask how to define a “class variable”, and they need their Eclipse workbench running to answer that. In this sense, Java is a big, big playground for… OK, I won’t write it :>
I agree with the comment that Windows was INTENDED to work badly… with icons in weird places and an unergonomic layout, etc. Why? Because they figure most users aren’t smart enough to care or question it. They don’t bother to set an ideal or a high standard; they just give the user the minimum they will tolerate, and they focus on making a system which reflects what the market seems to be, rather than taking a leadership role. They realise that the ideal system, in business terms, is one that looks at the current nature of society and maps itself to it. Win/MS is not interested in inspiring us or truly making our lives better or leading the way with honest practices and a truly pure system. They just want to implement whatever earns the dough. That’s why they never cared that the OS looked ugly. They figured most people are ugly, so they don’t care about looks… I know, I know… sounds kinda goofy, but hey! My point is, MS makes a product that the populace can relate to, in the position they are in, rather than how people COULD relate following a spiritual transformation.
Several of the ideas in this article are very similar to Jef Raskin’s “The Humane Environment”, but not quite as developed.
That is what I see with your “new” model of GUI. I like the flexibility I have to move windows around depending on how I work. No two people work the same, and you plan to force a one-size-fits-all approach on the user. The one thing I truly miss on a Windows desktop, that I have in GNOME and KDE, is the pushpin (always on top). I like the ability to make some app always viewable and yet still work in another app underneath it. I can monitor a download or IRC while typing a document or surfing. I like being able to watch a movie in a small window, or window-shade XMMS (either to desktop or taskbar), and still work or read other stuff.
Please don’t take away what we users have gained. The time for the model you propose was back in the day of the original Mac or Windows 1.0. Heck, when I had DOS I used Norton Commander to manipulate files and launch programs. I had two panes open so I could copy files from one file-tree location to another, but once a program was launched, it took over the screen. Those days were OK, with a slow CPU and little screen real estate, but those days are gone and that 286 is gathering dust.
I know, I know, their UI has sucked since CDE, but Looking Glass looks mighty yummy…
Take a look at;
http://wwws.sun.com/software/looking_glass/details.html
Congratulations, you just invented Emacs. Fire it up! Fire it up!
Enjoy your Lisp VM, non-overlapping windows, simple look, only documents, invisible software sub-components, embedded text, and network-aware file system.
I read the author’s points about 3D interfaces in Mac OS X and the upcoming Longhorn and just laughed.
The points the author made read like he/she had absolutely no idea what he/she was talking about.
I quote;
“Sounds great? It actually isn’t. Those fancy user interfaces waste precious CPU and GPU cycles, making your computer slower than it needs to be, thus making you work slower. In return, you are distracted from your work so that your productivity is even lower.”
As someone who has been a new user of OS X for the past 2 years, I’ve found that there has never been any hindrance to my productivity due to the use of a 3D-enhanced, OpenGL-powered interface. In fact, I would say that my experience and productivity within the OS have been empowered by it.
I also laugh at the idea that the computer is slowed down by these “fancy user interfaces”. Considering that Mac OS X is typically bundled on machines which could run an entire space mission, and still render a decent version of Final Fantasy (the movie), I found this comment redundant.
There is a reason why video cards such as Radeons and GeForces are shipped with the G3/G4/G5: they are OpenGL-aware, and can handle most of the rendering in the GPU. His point about a slower computer is only true on machines that don’t have 3D video cards and are slower than today’s 400MHz XScale CPUs from Intel.
Other parts of the article are also amusing, however, other replies have covered why.
Yes, it takes guts to write and publish an article like this, but it takes brains to make the article a good one.
Why the hell did this make me think of Haystack from MIT?
Other than that, I want to comment that while KDE has the default GUI look and act very much like Windows, it enables you to place the “start” button anywhere on a bar, or have it appear as part of the desktop’s right-click menu. Hell, you can have the task switcher and “systray” on a totally different bar from the “start” button. This can also be done in GNOME.
From what I recall, Windows doesn’t allow this level of customisation of the basic GUI (this can be a good or a bad thing, I know).
Other than that, I kinda like the ideas in the article, most of all the idea of software as applets and a document-centric GUI. While I like the idea of windows never overlapping, I still want to point out that some things work best in a maximized view; therefore there is still a need for a maximize button…
The one thing a lot of people seemed to like was the document-model idea, but I disagree. I personally don’t want a simple note to myself (say: Chrissy’s number, 555-1234) to take up 50 kilobytes of space, just to accommodate all the header info and markup tags needed to make it a generic document format. This is what everyone seems to want to do with XML anyway. Not a new idea.
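To make the size objection concrete, here is a toy Python sketch. The envelope schema below is entirely invented for illustration; real generic document formats carry far more metadata, which is the point.

```python
# Toy illustration: a tiny note wrapped in an invented generic XML
# document envelope. The envelope schema is hypothetical.
note = "Chrissy's number, 555-1234"

envelope = (
    '<?xml version="1.0" encoding="UTF-8"?>\n'
    '<document xmlns="urn:example:generic-doc" version="1.0">\n'
    '  <metadata><created-by>notes-applet</created-by></metadata>\n'
    '  <body><paragraph>{}</paragraph></body>\n'
    '</document>\n'
).format(note)

# The wrapper is several times the size of the payload it carries.
print(len(note), len(envelope))
```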
The idea of a VM in the operating system is an interesting one, but not too feasible. It implies that all software on the OS, including the kernel itself, would pass through the VM. Obviously, that cannot happen: how can you write device drivers that way? You can’t. OK, so maybe he’s saying application software should run on the VM, but the kernel and device drivers should be native code. OK, but then the kernel has to be designed to communicate with application software through the VM. Bottleneck. The author did say that people today want video, music, etc., did he not? Well, how exactly do you propose to get smooth audio and video playback, let alone editing, IF YOU REMOVE THE HARDWARE ACCELERATION FEATURES AND MAKE THE SOFTWARE RUN IN A VM? Ever heard of MMX? SSE? Ever heard of the fact that MP3 audio and MPEG video are compressed with transform math? Have any idea how complex they are to decompress, and how much data is constantly flowing through the bus? And you really think you can accomplish this effectively through a VM?
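The interpreter-overhead half of this argument is easy to demonstrate with any managed runtime. The toy Python measurement below compares a loop executed step by step by the interpreter with the same work done by native code inside the runtime; it illustrates the kind of gap the comment worries about, not any particular OS design.

```python
import time

data = list(range(1_000_000))

def interpreted_sum(xs):
    """Sum the list one bytecode-dispatched iteration at a time."""
    total = 0
    for x in xs:
        total += x
    return total

t0 = time.perf_counter()
slow = interpreted_sum(data)   # every step goes through the interpreter
interpreted_time = time.perf_counter() - t0

t0 = time.perf_counter()
fast = sum(data)               # the loop runs in native code inside the runtime
native_time = time.perf_counter() - t0

assert slow == fast
print(f"interpreted: {interpreted_time:.4f}s  native: {native_time:.4f}s")
```

The absolute numbers vary by machine; the relative gap is what matters.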
Anyway, the author threw around quite a bit of terminology, but did anyone else notice that he didn’t really seem to understand what he was talking about? Or the fact that he invented the abbreviation GI because he didn’t know the acronym GUI? Please don’t tell me he actually got paid to write this load of BS.
I consider myself one of those users who uses the taskbar for almost everything. When I use Windows or GNOME (I don’t use KDE) I look at the taskbar constantly to know what I have running. I don’t see overlapping windows as a problem, because all I have to do when I can’t find something is click its taskbar button. If my windows were always maximized, I think I’d freak. I’d go for more of an “always minimized” approach. I minimize everything, and like to see one or two apps up at a time. And virtual desktops make things even easier, so I don’t see why windows are a problem. If anyone’s used Enlightenment, I think someone should expand on that idea of looking at only half a desktop at a time, so that you can switch between two maximized apps.
I also forgot to point out one thing..
What’s the matter with having your CPU and video card used by the OS? We all know just how much of a waste it is to see your brand spanking new $500 video card sitting there 98% of the time displaying a pretty chick in a swimsuit as your wallpaper. The power is there, so why not use it?
The same can be said of the CPU. Considering that multiple CPUs in one package are in development, why not take advantage of this? Intel’s Hyper-Threading Pentium 4s and Xeons have this ability (in a limited way, however).
The author then doubles back on his prior statements about fancy user interfaces, in this quote (critical error is in bold):
“Looking at future developments, however, it seems to be rather practical to have a resolution indepent GUI. In that way, applications have no problems running on low-res devices such as a palmtop or a TV, while still being able to take advantage of high-res computer screens. To have something to brag about, the VM graphics system should offer nested canvases remembering their content, so that one could say that ‘the new OS has a GUI in which each control is drawn with hardware acceleration’.”
Just what do you think Apple’s OS X does? Use fairies? As I said in my earlier reply, there is a reason why 3D graphics cards are shipped with Apple PCs.
The same is true with Longhorn. Those “fancy effects” are not going to be applied by the CPU, but they are going to be specifically made to utilise the video card’s power. Not only that, but if the author had bothered to read the hundreds (if not thousands) of articles on Aero, he/she would have noticed that there are going to be different levels of the “interface experience” (the user will only see certain effects depending on how powerful their computer is).
Now that Longhorn and Apple’s OS X are utilising the video card, it is entirely feasible to have a resolution-independent interface, with applications and fonts scaling to the resolution natively.
The author makes further comments about removing applications and instead using invisible applets. While this idea is radical, it is flawed. The user must be able to control the applications, see them, and interact with them. How the software is used and integrated should become invisible, but it should retain its usefulness and usability.
I do agree with the idea of DDF. Having the document store the format itself is a brilliant idea, and one seen in Unix, Linux, BSD, and Mac OS.
The idea of splitting a program such as a word processor into two is flawed as well. It is crucial to be able to manipulate the document at all levels, and having the functionality split will only hinder the end user. If one was to split the functionality, it would be to make one half a “reader/viewer only”, and the other the editor. Both these applications would require a seamless link and method of switching between the two modes.
Err, when I say Unix, Linux, BSD, and Mac OS store the document format itself, that’s not what I mean. I mean that they don’t require document extensions, because those are internally stored. That is what I meant, and sadly, DDF is not used by the aforementioned OSes.
I do still think it’s a great idea.
> Put every document in XML, and then the namespace defines
> the presentation behaviour/software to use eg. svg, html,
> word(?); then piping documents would be easy, using
> XPointers or something, remote and local docs could be
> referenced.
I like this comment better than the article.
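A minimal sketch of that suggestion in Python, using the standard-library XML parser: the document’s namespace URI, not its file extension, picks the handler. The handler names are invented for illustration; the namespaces are the real SVG and XHTML URIs, used here only as dispatch keys.

```python
import xml.etree.ElementTree as ET

# Hypothetical mapping from namespace URI to a presentation handler.
HANDLERS = {
    "http://www.w3.org/2000/svg": "vector renderer",
    "http://www.w3.org/1999/xhtml": "hypertext renderer",
}

def pick_handler(xml_text: str) -> str:
    root = ET.fromstring(xml_text)
    # ElementTree spells a namespaced tag as "{uri}tag".
    uri = root.tag[1:].split("}")[0] if root.tag.startswith("{") else ""
    return HANDLERS.get(uri, "fallback text renderer")

doc = '<svg xmlns="http://www.w3.org/2000/svg"><circle r="5"/></svg>'
print(pick_handler(doc))  # → vector renderer
```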
I believe the author has the right idea when he talks about not needing windows or a visible application program. I would compare this with the game of Dungeons and Dragons. Yet it would be in line with the user’s daily living experience.
Take this interaction as an example:
Where am I?
Outside your front door.
Enter house
Done.
Describe
You are in the foyer
Northwest: long narrow hallway.
South: Door to Den.
Southwest: stairway going up.
Go south.
You are in the Den.
Get Address book.
Obtained.
Lookup B.Peren
Brad Peren
123 Anystreet
Anycity, Anystate, 00000
email: [email protected]
Send email.
Is the document already typed?
No
<Bring up box to type email>
*************
This is all command-line, yet it could be graphical, with a picture of a home, place of work, place of worship, desk, address book and the like, all driven by touch screen or mouse. Or a CAD drawing/map. So in general the design should be in line with the user’s daily living experience.
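A toy Python command loop in the spirit of the dialogue above; every verb, location, and address-book entry is invented for illustration.

```python
# Minimal task-oriented command loop, adventure-game style.
ADDRESS_BOOK = {"B.Peren": "Brad Peren, 123 Anystreet, Anycity, Anystate 00000"}

def handle(command: str) -> str:
    verb, _, arg = command.partition(" ")
    if verb == "lookup":
        return ADDRESS_BOOK.get(arg, "No such entry.")
    if verb == "describe":
        return "You are in the den. The address book is on the desk."
    return "I don't understand that."

print(handle("lookup B.Peren"))
```

A real system would dispatch on richer goals ("send email"), but the routing idea is the same.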
Sounds like Microsoft Bob.
My thoughts on this article are that it is well written and makes a good point. For a computer to be worth having, it must not take up more time than it gives, i.e. it must make my life easier without making it harder. For total newbies, the system that he described would be absolutely awesome. It would allow users to focus more on creativity and doing something with the machine rather than babysitting it (as with Windows most of the time, and some Linux and Mac machines sometimes).
However, this is for “normal” computer users. Administration by network and server administrators will almost always be done in the shell, and this is where Unix/Linux shines, allowing them to use redirection and piping to make their jobs easier.
These things are not opposed to each other. The computer interface is only different because the jobs we do with them are different. I’m not going to compare Windows XP to Autozone software running on Redhat 7.3, because their functions are much different. Autozone uses a two-color curses-like interface for their parts ordering system, but it is rock-stable, and does the job it was designed to do.
So, in summary, I like the article. A computer’s purpose is to do a job, and the interface should be designed around that purpose, or around being good at all the functions a multi-use desktop (like the one the author described) needs to be good at.
If Apple slapped their name onto that most fundamental part of the Linux desktop, attached to a GPL license, then it would be in a position to steal back (at least) a big chunk of the market share.
And without any ability to charge for that, how do you think they’ll make money and stay in business ?
Let’s not forget that without Quartz, OS X is essentially beefed-up FreeBSD.
No, it’s not. OS X without the GUI is Darwin, which is basically a Mach microkernel with a BSD personality, combined with a bunch of userspace tools ported from the various BSDs (not just FreeBSD). Under the hood, there’s little resemblance between FreeBSD and OS X.
This is really just semantics; however, the point you’re trying to make is that without the GUI, OS X offers nothing new and is, for all intents and purposes, valueless.
Which is precisely why Apple giving away their crown jewels would be a monumentally stupid thing to do.
Incidentally, I don’t know how you can call OS X’s GUI “smooth”. The main reason I still haven’t bought a Mac is because – even with the latest revision of OS X – the GUI remains chunky and unresponsive. Windows is much snappier and even KDE or GNOME on X are no worse.
Is it really such a controversial thing to suggest that they might have an x86 port in mind?
Controversial ? Only to hardcore Mac zealots. Naive and silly ? Yes.
There were a few years there where Macs based around an x86 CPU were a distinct possibility, due to the brick wall Motorola ran into with their PPCs. Indeed, somewhere inside Apple, there were almost certainly x86-based prototypes running OS X. Similarly, Apple almost certainly keeps an up-to-date port of OS X to x86 (and possibly other platforms) simply to make sure OS X remains a portable OS and platform dependencies don’t creep in (Microsoft do the same with Windows).
However, IBM’s PPC 970 removed any need for – or benefit from – an x86 Mac.
Also, this “x86 port” of OS X would *not* have run on regular PCs. It would only have run on Apple’s x86 Macs, which would not just be regular PCs with a fancy case and a 20% markup. They would have had fundamental architectural differences (eg: Open Firmware instead of a traditional PC BIOS, no ISA bus, no legacy ports, etc). It would probably have been possible to hack x86 Darwin to run x86 OS X on a regular PC – but the EULA would have made it illegal (much like the current OS X EULA says you can’t run OS X on non-Apple systems).
This is also ignoring the fact that none of the important software packages would have worked on x86 OS X.
This is all because Apple doesn’t sell OSes, they sell systems. Their main selling point *is* OS X, true, but OS X’s real advantages stem from the way Apple control both the software *and* the hardware. Standalone OSes (like Windows and Linux) simply can’t match this (and it appears a significant number of people don’t consider that a big problem). Apple knows this, which is why they killed the Mac cloners and why they never ventured into the market of OSes for mass produced, commodity hardware.
Especially with the Darwin source-code and x86 install openly available on their site.
The technical issues around an x86 port – even to regular PCs – of OS X are trivial. Apple could probably have most of it done in a week. The political, practical and business issues are *huge*. It’s not going to happen. Even if it did, OS X wouldn’t run on regular PCs and Macs wouldn’t be any cheaper.
Great! Let’s start all over again with the Open Source movement. Longhorn won’t be here for two years or more. Everybody is afraid of what Micro$oft will produce. Well, I’ll tell you this: it will be a critical cleanup, not to say a rebuild. They have a serious problem and will have to rebuild the entire kernel. When it comes, it will most certainly be bug-ridden and flawed. In spite of all the marketing hype, it will need at least another two years before it is ready to rock ’n’ roll.
We have proven technology, rock-solid, and still plenty of room for improvement. Note that Unix is above all an ARCHITECTURE, and one of the most successful the world has ever seen. You can expand it, plug new stuff into it, and it will continue to run.
Instead of starting at square one, we should use those four years to expand the functionality and add new concepts WITHOUT losing the advantage we already have. Sure, we do need original ideas and new concepts, and we should apply them, but we should not get lured into starting all over.
For all these reasons, there is no need to! We HAVE a good kernel, we HAVE a good architecture. And now we have the opportunity not to imitate Windows, but to surpass it. And once they are the ones with catching up to do, we win.
I’m glad that I can use my Linux without being bothered by alpha concepts like “objects”, “methods”, etc. I never understood OO until I found out that it was just a bunch of structures and pointers to functions. I’m glad I don’t have an operating system that forces this “geitenharenwollensokken” (Dutch, roughly “goat-wool socks”, i.e. tree-hugger) jargon on me, thank you!
I’m sorry, but the article is interesting in scope and badly executed. When discussing things we can’t always pretend to know nothing; we have to use established concepts to communicate. None of your ideas are new. (The object desktop example is a good one; virtually all the major players were heading that way some nine years ago.) Now the interesting thing is: why don’t we have them today? Is it your development model that will be different this time? Or the unique combination of concepts? Pray tell us, because THAT is what would be interesting to read!
While reading your article I had to think of Oberon.
So before you start on your own implementation, have a look at http://www.oberon.ethz.ch/
It runs as its own OS, and the idea of its GUI is quite similar to your ideas.
… that we should continue to work on useability. It is an issue.
What he says about making the installation of a network, an OS, or whatever easier is quite flawed. Any end user who is not able or willing to address installation problems, which invariably may occur regardless of efforts to make installation easier, should not perform the installation himself, but find someone who has the ability and/or willingness to face such problems when they occur. Case closed.
Eugenia, why the hell do you put this kind of crap on your site?
I’m so mixed in opinion about osnews.com. Generally it’s good, and then there are stories like this that make it appear as if anyone simply willing to write an article for you will get it published.
Ugh.
I can’t believe I just spent all this time reading such a useless article!
I did get a little teary-eyed when OLE was mentioned; I remembered the beauty of OpenDoc. I wish that technology had survived.
There are few innovative ideas here. There is not much that I haven’t already heard of.
And for one thing, I don’t like the proposed return of Xerox-style non-overlapping windows. Screen real estate is scarce as it is. Besides, I like to see the cool window semi-transparency with overlapping windows on Mac OS X and Linux. :)
Well, I tried to read most of the comments posted, so I’ll reply to some of them.
I stopped reading at Java > C++ > C. Maybe you are right, in that case I hope you enjoyed the joke. The only thing I “know” about them is that:
1. Java looks like C# (as explained in C’T)
2. Gnome people use C and say C# is lots better than C++
3. C++ is an extension to C.
So how have I concluded the wrong thing? Please explain!
What you write is crap / You know nothing about GUI design. Maybe. I do not claim to be a usability expert, but that does not mean I have read nothing about usability. A quick investigation: I have read about Fitts’s Law and Hick’s Law.
Now that I think about it: scrollbars on the right are useful, as (in maximized windows) they conform to Fitts’s Law. The problem here is the mouse pointer, however, as you can’t see it when you move the mouse to the right -> bad usability.
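For reference, Fitts’s Law in its common Shannon formulation predicts the average time to acquire a target:

```latex
T = a + b \log_2\!\left(\frac{D}{W} + 1\right)
```

where \(D\) is the distance to the target, \(W\) is the target’s width along the axis of motion, and \(a\), \(b\) are empirically fitted constants for the pointing device. A target on a screen edge has effectively infinite \(W\) in one direction, because the pointer stops there, which is why edge-aligned scrollbars and edge-aligned menu bars are quick to hit.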
Also, the OS X menu is a good idea. The *STEP one too, but it would be even better if it popped up at the mouse pointer, like in RISC OS.
POSIX doesn’t mean no GUI. But having POSIX encourages utilities like ls, cp, rm, tar, gzip and gcc to be ported, and those have bad usability.
Emacs! It is confusing. For example, there is no clear way to create a new document. And it has no capabilities for spreadsheets and so on (just like Screen).
KDE != Windows. True. But I looked at the default setup and found that KDE copied the Windows mistake of putting Maximize next to Close.
Overlapping windows are useful. For drag and drop, indeed. IMHO drag-and-drop is a clear way of doing certain things, as it is clear to the user what is moved from where to where. For the rest, overlapping windows are confusing. My mother, for example, had that Outlook Express problem (and related ones) more than once.
By the way, both applications I am running now (Mozilla and Delphi) are maximized.
And now that I think about it: why is tabbed browsing so popular, if windows are such useful things? For normal web browsing, tabs are very practical, but not for file management (at least not in Konqueror, as there you can’t drag-and-drop files between tabs).
Aqua is the best GUI. Now imagine the KDE Control Center were an Aqua application.
Drop shadows behind menus and windows can be useful, I agree.
It’s not innovative. It isn’t revolutionary, indeed, so there is (relatively) even less innovation. It’s still WIMP, indeed, but so are most computers. Additionally, laying out and editing a text document is still easiest on something with a screen and input devices, I think.
Note that I have nothing against doing away with the mouse and keyboard: make a tablet PC with a screen on which you write with a pen using handwriting recognition. Then the PC becomes a kind of A4 paper, but with many more possibilities.
Innovation is difficult. I once wrote an essay about spelling reform, about all kinds of aspects of reform I had never previously heard about. Years later, I read a book about spelling reform, and it turned out that I had compiled a list of about all the arguments other reformers had made, nothing more and nothing less.
IMO, the most “innovative” things are the GUI pipes. They allow things like placing graphs in a document without having a hidden spreadsheet behind it, like in Office. And they make document-based software more modular.
The network thing was mostly practical. I think of the network as a tool that should be as flexible as possible. Work anywhere, at work and at home. Automagically. :)
Your ideas are old and they simply never caught on. That does not mean they are bad. Video 2000 was better than VHS, I have been told; VHS became the standard, however, because it was cheaper. And MS-DOS didn’t beat Caldera’s DOS because it was better, but because Windows 3.1 was made not to work on the latter.
How about CD players/burners, …? Those fixed-size windows that contain no documents: good question.
But then, my proposal was a kind of task-based UI. Editing a document can be seen as a task; playing a CD too. So the CD player should get its own “screen”, just like the calculator. The application will then run as a dialog, with simply no document behind it.
Multiple monitors. If you have two monitors, then you should be able to either have one application appear on both, or have each monitor display one application.
You focus only on UI. Indeed. Technology works behind the scenes, and in the end the user is what it is all about. Users want a system that Just Works, no matter whether EtherTalk or mDNS+TCP+IPv6 is at work. Actually, I already wrote that, at the end of the article.
Behind the scenes, I have no doubt there are innovations happening. WinFS. ReiserFS4. mDNS. Rendezvous. CUPS. IPv6. All great technologies (if I’m not mistaken). But they don’t really ease the life of the end user if they can’t be used easily.
Final note: nice to see it at least raises a discussion.
I think not, not in the way the author describes it. Yes, UIs will continue to be graphical, but not to the exclusion of CLIs.
The CLI is the best user interface for most activities. The mouse is a curse; as most UI experts will tell you, you want to minimize how often the user has to move her hands between the keyboard and the mouse. A predominantly GUI-based interface maximizes exactly that hand movement.
The problem has always been that the CLI is limited in its ability to interpret what the user wants. A smart CLI would allow the user to tell the computer what she wants, and is the step in between what we have now and the future where the computer can communicate with the user with normal speech. Even then, the keyboard will probably lurk around for tasks that would be tedious to dictate, such as editing documents.