Linux will become ubiquitous in the year 3000. Okay, that was a horrible joke. Linux is just a kernel, the engine that runs an operating system. By itself, it is essentially useless. Kernels shouldn’t need to be discussed or even noticed by normal users. As such, when providing these users with reviews, previews and “professional” opinions, computer consultants, reviewers and journalists should not spew headlines like “Linux is not ready for prime time”, “Linux on the desktop by XXX”, “Linux to take over Windows”, “Linux is not ready for the desktop” and so on.
Editorial Notice: All opinions are those of the author and not necessarily those of osnews.com
Linux, the Unix-like kernel, is as flexible, versatile and powerful as any other kernel on earth, proprietary or otherwise. The kernel is not only ready for prime time, it is prime time. It is not only used on desktops; it is used on wrist watches, cellular phones, mainframes, servers and embedded appliances, to mention a few. Perhaps what these computer consultants, reviewers and computer journalists mean to say is, “Are the software applications that run on Linux ready for prime time?”
The response to that last question will vary from individual to individual. Measured against the sophistication of existing operating systems like Windows XP and OS X, operating systems running Linux are ready for the average user. “Really? Then why isn’t everybody switching over to operating systems running Linux if they are ready for the average user?”, you ask. Well, I think the answer is simple.
Operating systems running on Linux do not provide any strong “user” incentives or reasons for people currently running OS X or Windows XP to switch, among other reasons which are the subject of this essay. Computers in general are hard to use. It takes years of training and usage to become relatively proficient with software applications. Many of us were trained to use Microsoft operating systems and applications right from primary or secondary school. We have learned to accept their shortcomings, and we have mastered ways to be most productive on them, even if that means dealing with a multitude of inconveniences.
Some adventurous users have ventured into Apple’s operating systems. Their operating system has a reputation for being attractive, easy to use and coming with applications and services that make computing a joy for the average user. For many normal users, this is the holy grail of computing as it exists today. It is the favorite among multimedia professionals and experts. But all of this comes at a price, and a heavy price for that matter. Being expensive and scarce doesn’t help the pervasiveness of Apple’s operating system. And since Apple management is devoted to quality above all else, we have come to accept that the Apple operating system will continue to be targeted at a niche few.
Linux is one of the most popular Open Source Software projects. Its popularity is heightened by the fact that it is philosophically free, it runs on a multitude of hardware platforms and many software applications have been designed to run on it. Most of the software applications that run on it are provided for free, philosophically, under several free licenses. But, though Linux is widely used on various hardware and for several task-intensive purposes, it has not really garnered much popularity in the desktop market or the average home user market. And, unfortunately, it probably never will.
Let’s forget about Linux for now. Few users are concerned about Linux or care about it per se. Linux, the Unix-like kernel, is doing just fine where it is today and it is advancing at an impressive pace. The home user doesn’t want to know what kernel he or she is running or whether the code in the kernel is cleanly written. The home user doesn’t want to concern himself or herself with hacking drivers to make the Nvidia graphics card work. The home user just wants to turn on the computer and start using it without reading manual pages and online documentation.
Software applications running on Linux will probably never accrue a mass home user share, not because they are inferior, but because users will move to unknown territory if, and only if, the new territory provides them with better, more convenient and easier ways of accomplishing their tasks. Software applications aren’t getting easier to use. They are getting more complex, increasingly dumb, and excessively patched with redundant features that many ordinary users will never use. Add to that horrible user interfaces designed God knows where.
In the open source sphere, things don’t get any better for the ordinary user. Projects are hardly documented, and even when they are, the quality is embarrassing or the documentation is written as esoteric ramblings that only a Ph.D. in Software Engineering and Computer Science, with a Master’s in Electrical Engineering and Computer Forensics, can understand. (Man pages, anyone?)
Joeiamdumb, a young, intelligent person with a genuine phobia of computers, panics when he reads the synopsis on the second line of a man page, and promptly orders the services of the help desk to format his hard disk and replace whatever is on it, an operating system running Linux, with something he is familiar with, usually a Microsoft or Apple operating system.
Users are not resistant to change or to using something new. But if using their new environment is more of an inconvenience than what they are used to, I really don’t blame them for abandoning the alternative. If open source software developers and contributors are even remotely going to steal 50% of the market, then they have three fundamental challenges ahead of them. Let my rant begin.
Design software for the lowest common denominator
If you want your average home user to use your software, it must be designed with simplicity, ease of use and the user in mind. When your average non-technically oriented user says, “Hey, that’s so easy. I could never do that on Windows or Mac without 27 clicks. And they said ‘Linux’ was hard. What’s so hard about it?”, congratulations, you have fundamentally succeeded. You have spurred an interest. You have impressed a non-technical user. He’ll probably send you a check in the mail for your efforts. He will join your forum and mailing list. He will advertise your software, like a zealot, to his friends and family. Others will get to use the software, find it amazing, and do the same. You won’t need to spend millions of dollars on marketing and promos. The word will spread.
Given the state of open source software at present, this is unlikely to happen. Instead, developers follow the horrible design approaches used by proprietary software projects. The average user takes a look at software applications running on Linux and doesn’t find them any different from, or easier than, what they already use. Add to that the fact that their proprietary 15-button mouse will not work, and their surround sound card isn’t yet supported. Their wireless network card module just won’t load. Then you tell them to manually edit some configuration in /etc/X11/XF86Config just to get their monitor resolution right. And then you wonder why your average user says “Linux” sucks.
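To give a rough idea of what that last step involves, here is the kind of stanza the user is expected to locate and edit by hand (an illustrative sketch only; the exact identifiers and values vary between distributions and XFree86 versions):

Section "Screen"
    Identifier   "Screen0"
    Device       "Card0"
    Monitor      "Monitor0"
    DefaultDepth 24
    SubSection "Display"
        Depth    24
        Modes    "1024x768" "800x600" "640x480"
    EndSubSection
EndSection

Mistype a keyword or list a mode the monitor cannot handle, and X may refuse to start at all, dropping the user at a text console.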
If I had to go through all the above, forget about it. I’ll stick to Windows’ insecurities and limitless viruses. And even if I got all of the above to work, there still needs to be a reason to keep me glued to this new environment. How about the open source software community begins to develop intelligent algorithms for our application software rather than copying the dumb approaches used by other operating systems? Let me illustrate with a hypothetical example.
Sending an email with a little intelligence from the computer
Averageuser: Errr…I want to write an email.
Computer: Who do you want to write to?
Averageuser: Jack
Computer: Scans address book (in less than a nanosecond)
Computer: Presents the user with all names and nicknames containing “Jack”
Averageuser: Oh, Jack Wane.
Computer: Seamlessly launches email program
To: [email protected]
Subject:
Hello Jack,
----------------
Sincerely,
Averageuser.
Computer: Proceed to enter the subject and body of your email.
Averageuser: Types away happily.
Computer: Corrects spelling mistakes, suggests alternative sentence structures and provides other suggestions.
Averageuser: Makes necessary adjustments
Computer: Secures the message, via encryption and other means, and sends it automagically.
Computer: Done. Have a nice day.
Averageuser: Proceeds to do something else.
Now if I can do all the above either through voice activation or by simply clicking an icon, which then launches a wizard that takes me through the whole process, I’ll be impressed. Even someone who has never sent an email before could do so easily, with little to no prior experience. The above illustration can be translated to so many other applications.
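None of this requires exotic technology. As a minimal sketch only, assuming an invented address book (the names, addresses and helper functions below are made up for illustration and not taken from any existing mail client), the “find Jack” and pre-filled template steps could be approximated with nothing more than the Python standard library:

import difflib
from email.message import EmailMessage

# Hypothetical address book for illustration; a real client would read the user's own.
ADDRESS_BOOK = {
    "Jack Wane": "jack.wane@example.com",
    "Jackie Chan": "jackie@example.com",
    "Jill Smith": "jill@example.com",
}

def find_recipients(query):
    # Exact substring hits first ("Jack" matches "Jack Wane" and "Jackie Chan"),
    # then fall back to fuzzy matching to catch misspellings.
    names = list(ADDRESS_BOOK)
    hits = [n for n in names if query.lower() in n.lower()]
    return hits or difflib.get_close_matches(query, names, n=3, cutoff=0.4)

def draft_email(name, subject, body, sender="Averageuser"):
    # Pre-fill the template the "computer" hands back to the user.
    msg = EmailMessage()
    msg["To"] = ADDRESS_BOOK[name]
    msg["Subject"] = subject
    msg.set_content("Hello %s,\n\n%s\n\nSincerely,\n%s" % (name.split()[0], body, sender))
    return msg

if __name__ == "__main__":
    print("Did you mean:", ", ".join(find_recipients("Jack")))    # Averageuser typed "Jack"
    message = draft_email("Jack Wane", "Lunch", "Are you free on Friday?")
    print(message)    # a real client would spell-check, encrypt and send at this point

The snippet itself is beside the point; the point is that fuzzy lookup, sensible defaults and a pre-filled template are cheap to build. What is usually missing is the will to design around the user rather than around the program.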
When users can use computers with no need to read a man page, or info pages, or documentation found at www.thisisosmuchstress.org, they won’t care whether they are working on Windows, Mac or “Linux”. All they know and care about is that they sent a secure email, and they did so effortlessly, painlessly and easily. The user probably doesn’t even know the name of the email program he used, or the language it was written in.
If open source software is to be ubiquitous, you need to provide applications that are easier to use, intelligent, and make Mac and Windows computers look like the stone age of computing. And believe me, only open source software has the resources, time and power to make this dream come true.
I’d like to contribute to this project but it doesn’t look professional. No documentation
Developers, please document your projects. And please write the documentation in simple language my five-year-old cousin could read and understand. If you want people to participate in your project, it has to be thoroughly documented and easy to decipher. I personally do not participate in or support projects that are designed for elitists. And I’m sure many other people don’t either.
Coding is an art. Your fans want to know your thought process, your challenges, how you solved problems, how to use your code, how to use your program, how they can participate, how you came up with the project, what plans you have for it, how you designed it, the mistakes you made, and so on. All these and perhaps more need to be documented if you want individuals, other developers, business organizations, sponsors or nations to take an interest or help with your project.
It’s not just about hacking code that only makes sense to you. It’s about being considerate to your users, your potential developers, your sponsors, your supporters, generations of scholars and so on. If Einstein hadn’t documented his work, think about the valuable knowledge we would have lost. Spend as much time documenting as you do coding, or get someone who is interested in doing it for you.
Of course, most elitist developers will argue that they don’t have time for those who don’t understand their code, and that they are doing it of their own free will and for fun. My response is, “Well, if the code was only intended for you to understand, then don’t publish it to the public. Make it proprietary and sell it to some dumb person or something. If you intend to make your code public, then you have a responsibility to clearly document the inner workings of your project, for criticism, assistance or correction.” To cut a long story short, adequate documentation can and will attract a horde of supporters, followers and users. And it’s sometimes even more important than coding.
The state of open source documentation is pathetic. You only need to take a look at some man pages to understand what I’m talking about. No, they couldn’t have been written for the average user. And at least that’s not the worst of it. I know lots of projects that aren’t even documented. No manuals, no developer resources, no comments in the code, nothing. Just a broken program and its free source code, which doesn’t even make sense to me if I wanted to fix it. Given that, in the open source software realm, the average user’s tech support will be some form of documentation, how do you expect him or her to do more advanced tasks without a well-documented manual, FAQ or paper?
The “Who cares, it’s free” attitude
Don’t release half-baked attempts at hacking to the public. If your project isn’t fundamentally complete, if it still contains bugs you don’t know how to solve yet, if it is not properly documented, if you have no intention of completing your project or supporting it in the near future, just be responsible and don’t release it to the public until you are sure of what you are doing. I know several people who just got tired of packages breaking their systems and went back to the safety of Mac or Windows.
Even if it takes five years to get your code and algorithms right and your project well documented, we can wait. Nobody is chasing you. You are not under any deadlines from publishers, boards of directors or whatever. You are free to do as you wish, so do as you wish and do it right. Many of us tend to think that because open source projects are open and free, we have an excuse to submit bug-ridden, poorly hacked, irresponsible code for public consumption. That’s probably the meanest thing any developer can do. If your code isn’t reasonably stable or isn’t properly documented, just don’t publish it; holding it back is the considerate, responsible thing to do. I know a lot of developers who are guilty of this. In the name of popularity, fame and exposure they submit code that can be potentially harmful and say, “Hey, I’m not getting paid to do it. The users can deal with it or not use it.” Well, it would have been better if you hadn’t given it to them in the first place.
When the open source software community has overcome all the challenges mentioned above, then perhaps by the year 2030 we will be purchasing Apple software for $10 and a copy of a Microsoft operating system will be going for around $5. But that will never happen if open source software developers and the community continue in their present fashion. Unfortunately, I don’t share the optimism many open source software and Linux enthusiasts have about bringing Linux to the desktop in the next one to five years, even though I am an enthusiast myself. At least not until it is better, easier and more valuable than Mac or Windows. And not until there is a proper semblance of management, control, documentation, creativity and testing in most or all open source software projects. Open source software should solve problems, not copy the failed solutions of others. (I’ll probably get flamed for that.)
I’d love to see the adoption of a variety of operating systems by users all over the world. But most importantly, I’d love to see computers and software applications becoming intelligent and easier to use for just about anybody. And something inside me keeps telling me only open source software can make this happen. Our proprietary cousins are more concerned with locking their customers into their products, patenting innovations for eternity and keeping the register ringing. Personally, I still think we are yet to reap the benefits of open source software, and overcoming the challenges I’ve mentioned is a good starting point.
About the author:
Mystilleef is a computer enthusiast who believes computers are hard to use and software is diminishing in quality. He will be graduating from university this year. His field of specialization is Accounting and Finance with a minor in Computer Science and Economics. Among other things, Mystilleef enjoys arguing, reading, playing video games and outdoor sports.
I highly doubt humanity will exist in Y3000; surely we would have leeched the earth dry of its resources, driven every species extinct apart from the cockroach and a few rats, and then eventually blown each other up?
It’s human nature.
I’d put money on Open Source technology still being in use at the end though – probably not Linux, but I can’t see all this GPL code nearing extinction.
I often hear people saying that computers should be easy to use for someone who has never used one. The result of this is that we have attempted to make analogies to the real world: ‘files’, the ‘desktop’, et cetera.
It is my opinion that we should not force ourselves to follow real world analogies when designing computer interfaces. The fact is computers are becoming a key part to nearly everyone’s life in developed countries. They are now in the same group as cars, televisions, and telephones. I would not expect someone who has never driven a car to be able to drive one on their first try.
When we make the decision to attempt to model computer user interfaces after real world analogies we are truly making a compromise. We aren’t making this decision because we believe it to be the most efficient method of using a computer, or the most correct; we are making this compromise for the sake of ease of use.
In my opinion the value of this familiarity (through the ‘desktop’ or ‘file’ analogy) decreases as computer usage becomes more prevalent. Most children by the time they are 10 today know how to operate a computer better than the average 50 year old. If computers are going to become this essential to our lives perhaps we should look into designing computer interfaces with principles like efficiency in mind.
If we are all going to be using computers daily for the rest of our lives, do we really need to retain these analogies to make computers usable? I really believe that (so long as it is minimal in complexity) people would be able to learn a new interface which is designed with efficiency of use in mind. Let’s not be weighed down by our desire to model interfaces after reality: face it, a computer is nothing like a desktop or a filing cabinet. It is capable of much more than we have even imagined thus far.
Nothing original here. This is just every tired argument rolled into one nice FUD article.
Now there will be 100 posts of: Linux works on my desktop and 100 of: Linux SUCKS!
The fact is that every one of these suggestions is being taken into account by all the “desktop” programs.
IT TAKES TIME. Just because this guy says something doesn’t mean it’s going to happen overnight.
All these ideas are being implemented, and will be perfected one day. Don’t rush volunteers.
All the stuff in this article is known. Easier said than done.
The problem is users see and demand a “product.” That adds perhaps 2-8X effort to the development, if you believe Fred Brooks. (This is from memory, I don’t have a copy of his book at arm’s length.)
So, the question is, who should do the productization? An aggregator such as Redhat? Or should people pay a consulting fee to someone, ideally one of the project authors?
People who I never would have expected to be interested or even aware of Linux keep coming to me and asking me to install it on their machines.
So say what you like about linux being ready/not ready for the desktop. Whatever its status, it is gaining users and/or mindshare at a frightening (for commercial OS vendors) rate.
I see where the author is coming from: users shouldn’t need to care about their kernel. But unfortunately it’s not practically true. Take an average user I know who wanted to read files from his NTFS partition. I installed Linux as dual boot for him. He insisted on Red Hat, while I wanted Mandrake for him. Now, the default Red Hat kernel could not read NTFS. He didn’t need to know that, but he was forced to, because the kernel supplies some functionality – or a lack of it.
Great, KDE etc. runs on BSD, Linux, Solaris and so on… but some devices don’t work with BSD. Or Linux. Or whatever.
I’m afraid the article didn’t recover from such Islingtonian thinking.
Just read your article; it’s pretty good except for some spelling and grammatical errors, such as mixing up “where” and “were”.
I agree completely with “I’d like to contribute to this project but it doesn’t look professional. No documentation”.
However, your other points, umm… let’s just say that the voice interface isn’t exactly very good usability-wise or productivity-wise. And you have to understand that simplicity and very few features are a bad way to design software. It’s as if, instead of cleaning your house, you just destroy all the rooms that are messy. Yeah, you have a clean house, but would you want to do that?
Anyway, some features do need to be removed completely, or at least reorganized, in a lot of software projects I know; these features just get in the way. But most are useful.
Now the other point you made, “The ‘Who cares, it’s free’ attitude”, is also clearly wrong.
This isn’t proprietary software; you release half-baked projects in OSS so that many developers can help you release a truly good version of your software. This is what version numbers are for. Responsible developers mark these as pre-1.0 and clearly say they are not ready for prime time.
Also, while nobody is chasing OSS developers, deadlines in this market don’t come from your boss, but rather from your competition. I’m afraid you can’t wait forever making your little project perfect, because by that time there will be a clear market leader which has gathered so much momentum, and now includes so many more features than you had planned, that it’s just not worthwhile. OSS developers might also be discouraged seeing that the proprietary alternative is so much better and that catching up is virtually impossible. OSS projects have as much pressure to release as proprietary ones, IMO.
Also, OSS does not copy MS or other companies except when they have come up with a good solution; for example, the taskbar and window handling in Windows are very good, and not copying these would be stupid. You also need to make the 98% of people who don’t use Linux feel at home and not introduce them to something completely different, at least not until Linux has at least 10% of the market.
BTW: I’m rooting for mid 2005 as the date Linux will be ready for primetime and I mean really really ready. Too bad that’s when Longhorn comes!
Just for the record I am writing this on a Red Hat 9 laptop.
Just a quick observation. If open source development is considered to be so much faster (given the resources of many people), then how could something like OS X become so easy, so pretty and so stable so fast?
I only have one answer: VISION. Steve Jobs has it, whether you like it or not. Someone is sitting there with a whip making sure the goals are met: one interface, one file system, one way to install software, and so on.
They have no time or money to argue KDE vs GNOME vs xxxx, rpm vs apt vs tar, etc.
So yes, there is an overwhelming resource base in the open source world, but most of it is wasted. It is one big research project. The only place where it seems to work is, for example, the kernel. Why? Because there is a vision and Linus is the one with the whip.
Back to my original question.
How did Apple manage to create OS X to be so easy, so pretty, so stable and all in such a short time?
Time to upgrade, and get some work done.
Cheers
Do some research. OSX’s entire core is OPEN SOURCE. Hell, even their browser is.
Yes, I know their GUI isn’t, but when all the internal system work is done for you, you have time to focus on a GUI.
And why did OSX evolve so quickly?
Because APPLE is a rich company. Money makes the development go quicker.
We all know what the problems are, no need to spell them out again.
The solution is to become one of the 1% who contribute to Linux, rather than the 99% who just use it. If you can’t code, or don’t have free time, you can give cash.
The time spent writing that article could have been spent writing documentation. Think about it.
Well, let me put up one of my own idiotic comments again:
I’ve got 2 words, and these 2 words can make ‘Linux’ succeed:
– Approximations
– Egos
The former should be embraced, the latter should be dropped.
It’s simple as that.
(and yeah I know that’s hard, very hard, but in the end it’s good for all of us)
My point exactly: we already have all the core stuff (the kernel) but the energy is wasted around the GUI, app installation, etc. Remember, we are mainly talking about Linux on the desktop.
Are you saying that only money can create a single vision for, say, the GUI or simple installation? I think you are right.
If there is money there are deadlines; there is no time for having three solutions and picking the best, and when the money runs out the project is dead. Quite a motivator to get it right.
Cheers
Do you really think that there is a battle between OSS and the commercial OS world? Nope. People in the OSS community are doing it for fun, not for anything else. Some of the code makes it into commercial apps, some of the code makes it somewhere else, and some of the developers get better jobs or build up an impressive curriculum/reputation.
I agree, I don’t like to have Windows pushed down my throat when I buy a Dell or some other brand-name PC, but that’s beside the point.
I accept that Linux distros aren’t quite as polished as OSX or XP. But the linux desktop is far better than win 95 or OS9. The move from Office to OpenOffice is trivial compared with switching from WP5.1 to MS Office.
Installing most linux distros is far easier than installing windows 3.1 from a stack of floppies.
Linux is good enough (for many users) now and rapidly improving.
The author can’t resist a dig at Apple:
“But all of this comes at a price, and a heavy price for that matter. Being expensive and scarce doesn’t help the pervasiveness of Apple’s operating system. And since Apple management is devoted to quality above all else, we have come to accept that the Apple operating system will continue to be targeted at a niche few.”
Perhaps the people at Virginia Tech who decided to put in a cluster of 1100 G5’s failed to do their homework on price/performance?
http://www.walmart.com/catalog/product.gsp?product_id=2233757&cat=4…
Can you please point me towards a similiarly priced new Apple computer? Thanks.
Also, I’ll keep Apple in mind if I’m planning on building a cluster of 1100 computers to type my papers and post to OSNews.
Or maybe Virginia Tech got a HUGE discount for buying so many.
He’s talking about normal users.
And anyway, how many Intel/SGI/Sun/etc clusters are there compared to Apple?
Yeah, that’s what I thought. Even Pixar (owned by Steve Jobs) doesn’t use a cluster of Macs.
Good heavens, man!!! Did you tell this to Microsoft? Are you in fact saying that Microsoft releases its products completely baked and bug-free? Come on… really!!!
I do not use Debian, but invite you to realise that a Debian Linux beta release of that distro’s “half-baked” code is so darn good as regards bug freedom and stability that it is widely considered the equivalent of a Microsoft release after about 3-5 service patches have been applied, and even then I am not sure I am being kind to Debian.
This particular comment is downright stupid as far as I am concerned. As at least one journalist has said, Microsoft is the ultimate solipsistic company. And if you don’t know what that means, it is essentially that everything Microsoft does is designed for their benefit… got that?… THEIR benefit… not the consumer’s. As the journalist said, it may look that way, but every piece of code is designed to lock you into Microsoft and ensure that they have an income. As a result, Microsoft code is released to ensure that consumers are fed more upgrades (buggy or not) based solely on economic principles (or the lack of them… whichever you like) which give Microsoft increased cash flow. And that’s why Linux is being attacked. It actually DOES threaten Microsoft’s cash cow of the desktop.
Last but not least: the only reason that people are able to use Microsoft (as you point out) is that they grow up with it. But we now have experiments which show that if children grow up with Linux, they feel the same way about Linux as the current lot feel about Windows. Why do you think Microsoft is fighting to make sure Linux is not bundled on computers at the point of sale? Hmmmmmmmmm?
I leave it at that.
How could something like OS X become so easy, so pretty, so stable, so fast?
Answer: it didn’t. Mac OS X (NeXTSTEP) has been improved and worked on for over ten years. Its ease of use is attributable to several years and many millions of dollars of HCI research.
I completely agree with the spirit of this article, but implementing answers is a difficult ask.
Most Linux documentation is very hard for mere mortals to understand. Leaving aside the very-hard-to-digest man pages, have a look at this email reply that a sender gets to an email sent with a suspected virus attachment:
============================================================
Our Linux email gateway has detected that your message to
<email@address>
MAY contain hazardous embedded scripting or attachments,
or has been rejected by our site security policy for some other reason. If you have a question, please reply to this notification message.
It is POSSIBLE that your message was infected by a virus.
You should make sure your virus signature file
is up-to-date and then rescan your computer,
especially if you do not remember sending this message.
If the macro scanner score is large yet your virus scanner reports that the document is not infected, try saving it using a different format (such as Rich Text – “RTF”) that will remove all macros.
REPORT: Trapped poisoned executable “bqqep.exe”
REPORT: Not a document, or already poisoned by filename. Not scanned for macros.
STATUS: Message quarantined, not delivered to recipient.
============================================================
I am a sysadmin of a small (~30 user) Windows network, and the network users would have no idea what this meant. All this talk of:
email gateways
site security policy
hazardous embedded scripting or attachments
macro scanner score
macros
poisoned executable
already poisoned by filename
not scanned for macros
message quarantined
is just a foreign language to them. They would just tune out after the first difficult term and stop reading. They may have heard of some of the following, but their understanding would be limited:
virus signature file
up-to-date and then rescan your computer
document is not infected
a different format
This is typical Linux documentation and shows clearly that the writer has little understanding of the technical ability of the average Windows user. Until software and documentation are written for the “lowest common denominator”, Linux will struggle.
I am going to offer my services to start rewriting some of this indigestible stuff into text that people can understand and enjoy reading. However, I feel that I will meet some opposition from hard-core Linux heads who are unwilling to make it easy and will insist that users must understand their techno-babble.
We’ll see.
Raylene.
If Linux takes off on the desktop, it could happen first on college campuses. There would have to be some killer app that many of us might not be too thrilled about, for example it might be a new rich client for file sharing and media playing. The app might be created by some random kid like Fanning. If that happens the VCs and entrepreneurs will move in to take advantage. I suspect this wave will be slick businessmen like Robertson and Lindows – they won’t kowtow much to the FOSS community, won’t be allergic to DRM, etc.
College kids use Windows for the same reason the rest of us do – they have to create MS Word documents, use IE for compatibility with certain courses, etc. But Linux is free or close to it (in price), so a dual boot can be set up for little extra cost. Contrast this with Mac, which would require them to buy expensive hardware over and above their Windows PC.
Assuming that the goal of “Linux” is to compete in the market, please consider that having it available pre-installed in at least 50% of all new computers purchased would completely nullify most of the author’s argument.
And as for calling it Linux instead of the Linux kernel or GNU/Linux, welcome to the wonderful world of marketing! In a world where cognitive skills and attention spans are so low, you take what you can get. Now you know how the Free Software people feel when they get the “how good can it be if it doesn’t cost anything?” reaction.
It says the author is a student who enjoys arguing. I look forward to his progress.
I think the author brings up a lot of valid points. First off, everything needs to work if you’re going to get people to switch.
I know I’ve had hardware support problems here before.
Then there’s the apps. I agree, most either have really bad docs or none at all! And really Linux apps don’t offer anything more than Windows apps.
I’m all for open source, in fact I run a ton of open source apps here, and I have a couple Linux boxes running mostly network services. But for me, Linux is just too much of a hassle to run as my desktop OS.
And when are they ever going to fix the fonts?? Even with all the latest and greatest fonts for Linux they’re still not as good as the fonts in Windows.
So for now I stick with Windows. Even though I can’t stand Microsoft. Ease of use means a lot to me, as it does to most people. I love playing with Linux and I fully support it. But it’s not ready for my desktop…
contrasutra wrote:
“Or maybe Virginia Tech got a HUGE discount for buying so many.”
I don’t know how much Virginia Tech paid, but I doubt they got a huge discount. Just price comparable alternatives to get a sense of the probable price. (See IBM’s eServer p series).
“And anyway, how many Intel/SGI/Sun/etc clusters are there compared to Apple?”
You’re leaving out IBM’s eServer p series (now in use at Celera Genomics – see:
http://www-3.ibm.com/solutions/lifesciences/news/celera.html
).
“Yeah,that’s what I thought. Even Pixar(owned by Steve Jobs) doesnt use a cluster of Macs.”
Well, that was before the G5. Who knows what Pixar will buy next.
Regards,
Mark Wilson
<< Developers, please document your projects. And please write the documentation in simple language my five-year-old cousin could read and understand. If you want people to participate in your project, it has to be thoroughly documented and easy to decipher. I personally do not participate in or support projects that are designed for elitists. And I’m sure many other people don’t either. >>
Not trying to sound disrespectful, but if you need documentation worded for a five-year-old, you probably don’t need to be coding. Projects are not basic Programming 101; major, intensive programming is done by elite hackers/programmers. If you don’t know how to code, you will not be making OpenOffice a killer bug-free app overnight. My advice is, if you wish to code, go with something small: WindowMaker, XMMS, Scribus, something like that.
Linux needs a lot of work; there need to be solid applications with standards. Yes, you will most likely have to pay for the software, but normally when something is ‘free’ it is not worth much. For starters, an office suite that has an email client and so on, with a common user interface.
Another area where Linux needs work is drivers: with Windows, more than likely the driver is bundled with the operating system and it works. With Linux it may be bundled, but it will not work correctly. X-Windows needs a lot of work too; it is a resource hog and is slow.
The beauty of Windows is that the applications follow the same guidelines: point and click. I like Linux but it is going in too many directions and just has an overall lack of organization. So it will struggle on the desktop, and Windows will remain the number one pick for all users in the home and business worlds.
You complained about the fonts. Fair enough, Linux in its old days had font issues. Not anymore: Bitstream fonts look great, and MS fonts can be copied onto a Linux system. I am using Mozilla 1.4 on this laptop running SuSE 8.2. Looks good from here.
All my Linux machines have nice fonts. I run, let’s see, six Linux boxes. My company uses Linux on the desktop exclusively. So does my home computer. My wife prefers Linux over Windows. So what’s the problem?
Well, I’m sure MS does some insane UI analysis and that is what counts. I used to use Trillian as my IM client, but I recently switched back to Messenger. Trillian was amazing. It supported all formats, had skinning, had groups/sub-groups, user naming…
Pardon my non-techiness, but MSN has easy-to-launch checkers and head-to-head minesweeper. It also has the best feature, the most important UI feature: file transfers and games are not presented to the user with in-your-face message boxes, but with inline text in your chat window.
Yeah, I was willing to dump all the great features of Trillian, including subgroups… as I rarely used any of them.
But as to the other aspect of the editorial, about open source itself… coding will never be what Joe Schmoe is interested in. If there is something he wants in a project, he’s not going to take the time to go into the code.
Don’t flame this part; anyone got any real stats for it?
**************************
Most projects, whether Open Source or not, typically relegate themselves to a contained development team. I would imagine the actual number of active, capable developers working on the Linux kernel is about the same as, or fewer than, those working for MS.
**************************
My only real issue with Open Source is that it’s based on a world which no longer exists. Yes, you can make loads of money on services using Open Source. But that really only applies to corporations. It’s the same philosophy as some company selling you a telephone relay: the software is just part of the phone relay, not an entity on its own. Similarly with IBM servers…
But the desktop market is not service-based. When I buy a game or whatever, I am buying the software for its software application. Unreal 2003 does nothing real… but does great things in software. Hence, for it to make money, it has to sell the software itself. Applications like that could never become open source and be profitable. Unfortunately, that is the desktop market.
Yamin
I would say the big challenge is the lack of integration. It is basically a whole bunch of small pieces glued together with gum. A company like MS has one vision, and it codes everything (kernel, desktop environment, GUI, filesystem, management tools) for that purpose. Linux will never be able to do that.
When a new company hires programmers for a completely new product, a lot of the time they will look for fresh programmers who have a clear mind and a fresh attitude. If Linux is to make it to the desktop, its developers need to take on a different attitude.
Sure, but some programs are completely lacking any documentation. Some are so badly documented that only the developers themselves can understand it. While I agree that they shouldn’t target newbies, they should at least target intermediate programmers. It’s their loss, after all. The less accessible their documentation is, the more “elite” the group will be… and the fewer people will be willing to cooperate with them, including people who could help them with user documentation and graphic/GUI design.
Pardon my non-techiness, but MSN has easy-to-launch checkers and head-to-head minesweeper. It also has the best feature, the most important UI feature: file transfers and games are not presented to the user with in-your-face message boxes, but with inline text in your chat window.
Pardon my trollness, but what do background pictures, games and custom emoticons (things that probably made you switch) have to do with “ease of use”, let alone with an IM client? Personally, I can’t show any respect for an instant messaging program recommending a goddamn P3 (I’ve tried it on the minimum requirements and it’s unusable). As for file transfers, I hate the way MSN handles them. The progress bar hardly tells me what’s going on.
Anyway, I guess it depends on our personal preferences… I don’t flame you for your choice. However, don’t say that MSN is easier to use than Trillian just because Trillian doesn’t handle games. Ease of use and bloat are two different things.
Personally, I can’t show any respect for an instant messaging program recommending a goddamn P3
Personally I can’t show any respect for someone bitching because they can’t run new software on their 10-year-old Pentium 1! GMAFB!
A P3 is *old* hardware, man. That’s like the bottom of the barrel these days.
I’m typing this on a dual P3, btw.
http://www.walmart.com/catalog/product.gsp?product_id=2233757&cat=4…
Can you please point me towards a similiarly priced new Apple computer? Thanks.
Also, I’ll keep Apple in mind if I’m planning on building a cluster of 1100 computers to type my papers and post to OSNews.
All I have to say is, what a cheap piece of shit. Not cheap as in low-cost, I mean cheap as in “cheap and nasty”.
No monitor, no modem, cheap substandard speakers, an ugly case, a cheap/crappy keyboard and mouse, and only a normal CD-ROM. About the only things there worth any money are the CPU and memory modules; the rest is pure junk.
Oh, and to top it all off, it is only available in the US. So the other 5.9 billion people that live outside the US are screwed.
b&^%ing. People seem to like doing that these days.
There’s Apple for wealthy people who want everything done for them, Windows for not-so-wealthy people who want everything done for them, and a wide selection of Linux and Unix-type operating systems for everyone from flat-broke college students to multi-billion-dollar companies, ranging from people who want everything done for them to control freaks who want to tweak their boxes until they explode. Where’s the choice? OSS.
You can’t tell me something like Lindows or Mandrake is all that difficult to use. If my mother and my girlfriend (both utterly computer clueless) can use it with less difficulty than Windows XP Home or Windows 2000 Pro, then it’s got to be good at something. Likewise, there are other distros for those who have more technical needs or more technical experience.
And by the way, the BSDs give a coherent whole, all developed as one operating system. Unfortunately, in that regard, Linux does not. But that is a strength or weakness depending on how you look at it. Mandrake Linux my mom can use. Slackware would be more my style. But there’s a linux we can both be happy with. But properly set up, FreeBSD, Debian and the like are just as easy to use.
Bottom line – if you’re not happy about it, then do something about it. Getting on a soapbox and complaining does nothing but make you look like a spoiled whiner. I think a big part of why Windows is so popular is that it lets people get away with not having to learn anything. The Marketing Age has destroyed us and exposed us for the lazy, pathetic bunch we have become at its hands.
If you can’t code, write documentation. Or help someone else understand what you understand. Or learn to code. Or file a bug report. Do something worthwhile – don’t just complain. That could just as easily have been an in-depth explanation on how to use some command – for Windows users. Instead, all it did was waste my time to read.
As always… half for, half against. Linux zealots find it easy enough because it’s free, and if you’re not happy, do better yourself, while the rest of the world does not like using cryptic programs written only for people with a Ph.D. in nuclear physics… without any docs.
Really, you like to use hard-to-understand software? Making software usable by a five-year-old is stupid too; I do not know any five-year-old child who can understand most of the advanced concepts of life.
I’m an advanced user (I can read AND understand some source, and have written some myself), but when I work I don’t want to struggle in the code to make my apps work, nor do I want to hack my graphics card driver so I can maybe see a badly centered 800×600 on my 22″ monitor (the other resolutions didn’t work? Too bad, go back to hacking with vim).
So yeah, my father, my mother, my grandma and my girlfriend can use Linux IF I configure it for them. The same people, in the same amount of time, can install and configure a Mac OS X or Windows 98/2000/XP PC by themselves without a hitch… maybe Windows 2003 too?
Maybe it’s time to educate Linux zealots too: hacking through various sources and config files to finally use an app where all the functions are a mystery, since there’s no proper doc, is not a good way to use a computer. Maybe it is if you want to make some money with support?
Reminds me of a joke I received by mail, with a fake interview of the creator of C++ (not sure of the spelling of his name… :p) where he said he created C++ just to make sure programmers would get good salaries again, because it’s cryptic and hard to use (the joys of virtual? templates? anyone?)
MoronPeeCeeUSR:
“Personally, I can’t show any respect for an instant messaging program recommending a goddamn P3”
Personally I can’t show any respect for someone bitching because they can’t run new software on their 10-year-old Pentium 1! GMAFB!
No, you most certainly don’t get AFB. There is no reason for an application that works perfectly well on a CELLPHONE [1] to require in excess of one billion clock cycles per second.
ILBT,
Good Grief
—————–
[1] viz. SMS
You don’t seem to understand. I wouldn’t bitch about that if it weren’t an instant messaging program. I don’t think it’s normal to need at least a P3 to run an instant messaging program. I’m not even talking about games, I’m talking about just running the program and chatting with your friends… Decoding an MP3, something that should theoretically be harder on the CPU, only needs a Pentium 1 (although it uses ~90% of a P75).
I know that the IT industry must evolve, but gimme a break. A chat program… Oh well, I guess this lack of optimisation from some developers (including, but not limited to Microsoft or Windows) is making hardware manufacturers really, really happy. Anyway, this is a bit off-topic…
I am quite offended by a lot of the lies you have put in your article. First of all, on documentation.
Documentation is hard to write, especially if English isn’t your first language, and man pages are deprecated in a lot of distros. I certainly don’t write man pages; I use XML-based documentation, which ends up as easy-to-read, easy-to-understand, web-page-like documentation. Remember that professional documentation isn’t easy to write. It has to go through extensive revision and proofreading, which is expensive. Most developers would welcome somebody to help proofread their documents, but remember that documentation doesn’t get written overnight.
Remember, it is not “Linux”, it is the distros and software that make up the experience. If you use a distribution such as Debian or Slackware, then your points ARE valid, but not for most distros. Even the powerful Gentoo Linux works well, simply because it has extensive and easy-to-understand documentation on its site.
Have you tried GNOME 2.4 or KDE 3.1.92 yet? They are really easy to use, even easier than Windows XP; they “just work” and they have extensive documentation. My dad can easily listen to his music, my little brother can visit his cartoon websites, my mum can check her bank account and play her favourite games, and I can do my development work too. All without editing cryptic text files or reading man pages! Try it when Mandrake 9.2 comes out; your FUD will be rendered invalid.
As for complaining about stuff like sound cards, wireless modules and screen resolution: they are up to the distros to handle. Again, distros like SuSE and Mandrake will automagically detect hardware, while distros like Debian won’t.
You’re also complaining in the wrong place. Most developers don’t read this site; they are busy making things easier to use. If you want to complain, then complain in the proper places, such as the bug report page for the software you’re using. KDE even makes it easy by including a “report bug” option in the help menu.
Also, be more polite. If you write something like this to the developers you will be ignored.
You are also forgetting one important thing: open source is mostly OPEN. If you have a problem, you can peek inside and see what’s wrong. You can proofread the documentation, you can polish the interface, you can make packages and more. You don’t have to code. I help out a lot, even though my programming skills are poor.
So, in conclusion, instead of posting yet another rant about Linux, check the latest versions and help provide CONSTRUCTIVE feedback. If more people like you stopped complaining and started helping, then imagine what could be possible.
Linux has always been ready for the desktop; it’s just a matter of whether you’re ready for Linux.
I’m a 4th year computer student. I did 1st and 2nd year of my course 6 years ago, took a few years out and now I’m back finishing my degree. In 1995 when I started here all the computers were running Windows 3.11 but we used DOS for all our projects. In 1996-97 the college started to upgrade to Windows 95. Nobody really knew that much about Linux. It was in no way part of the course.
Last year, in contrast, about 70% of our 3rd year projects incorporated Linux. Now in 4th year (IT management degree) Linux is a major part of the course. To my knowledge it didn’t feature here until 2000.
What am I talking about? Basically it comes down to this. The kids that were leaving college in Ireland in 1997 were likely going to work on Windows systems. If they were working as IT Managers or System Admin people and were responsible for installing systems, they were likely going to install what they knew… Windows.
The 54 guys and girls in my class are going to think twice. Perhaps 70% of them will install Linux if given the choice. Let’s give it a few more years and see how much Windows stuff they do when my brother is doing his degree in a few years.
I would disagree with the author that Linux is just the kernel. Linux is an OS. Regardless of how a technically minded person might want to view the use of a word the fact remains that a Linux distro is an Operating System not just a kernel.
Linux is not the greatest OS ever, and something better will come along. But at the moment it is the only alternative to Windows. It’s free to nerds, but to the average user it costs more than Windows. Why? Windows comes free with their new computer (as they see it; go on, ask average users). Linux costs them about $69 for a boxed distro. Come on, you don’t expect the average user to download three 650 MB ISOs and burn them onto CD. Linux will have to get onto the hard drives of new Dells, not just $199 cheap machines, if it wants to crack the desktop market. This will happen.
Anyway, my original point should have been simple, and I could have just said: as college kids start to get educated on Linux as well as Windows, graduates will make their choice. While this may only be the case for computer students, it is fair to say that computer students can play a part in affecting the industry. A lot can change in 5 or 6 years.
Linux, the Unix-like kernel, is as flexible, versatile and powerful as any other kernel on earth, both proprietary and otherwise. The kernel is not only ready for prime time, it is prime time.
All of these points are highly debatable. Every major kernel release sees massive restructuring and reimplementing of core functionality (scheduling, memory management, IO systems, etc). Heck, sometimes it happens halfway through an allegedly “stable” development cycle.
Then there is the disinterest Linux developers seem to have with retaining binary compatibility between releases.
These are not indicators of a carefully designed, mature, easily maintained and stable platform.
Perhaps what these computer consultants, computer reviewers and computer journalists mean to say is, “Are the software applications that run on Linux ready for prime time?”.
Undoubtedly, most of them do – but all they’re trying to do is simplify an extremely complex field for an audience who knows nothing about it, and that should not be criticised in and of itself. However, the more technically inclined ones who actually are talking about the foundations of Linux are justified in asking such questions. The “sudden” replacement of the VM system halfway through the 2.4 series, for example, is going to make a lot of people nervous for some time.
Perhaps the people at Virginia Tech who decided to put in a cluster of 1100 G5’s failed to do their homework on price/performance?
Highly unlikely. More probable is that they have some software – or will have some software – that will benefit greatly from specific architectural features of the G5.
How did Apple manage to create OS X to be so easy, so pretty, so stable and all in such a short time?
Because a significant chunk of the work was already done for them. OS X is an evolution of NeXT, which was around a long time ago.
FUD from someone who wants everything on a silver tray. I am the CTO of a company with 5000 desktops. Most of those were running Windows NT 4. Upon my arrival three months ago, we began a transition to Mandrake 9.1 that consisted of a one-hour training seminar in which the NT 4 computer got reimaged with Mandrake 9.1.
We have had zero complaints. None. We have migrated over 1000 desktops and plan to continue on this path. The most interesting thing is that I also run a local Linux User Group on weekends. Most of the folks at the company have asked me to install Mandrake for them on their home computers.
I told them bring them to the LUG meeting and we’ll do the installation. When they arrive at the meeting. I create a share on the file server for each person and ask them to back up their files to it. Once that’s done, CDs are handed out and I ask people to do their own installations. This is a sort of anthropological exercise for me. 80% of users have no issues, with a few of them asking a question or two.
In my apartment complex, I am known as the computer guy. Most of the folks there run Windows machines and most of the folks there are unable to reformat and reinstall their own OS.
So to sum it all up, this article is one giant pile of FUD. Can Linux improve? Sure, and it’s doing so all of the time. Can better documentation be written? Sure, but what’s available is already excellent. For example, have a look at Mandrake’s excellent documentation here:
http://www.linux-mandrake.com/en/fdoc.php3
Or if you are running Red Hat, look at the documentation here:
http://www.redhat.com/docs/
Want to learn how to use OpenOffice/StarOffice, look here:
http://docs-pdf.sun.com/816-7367/816-7367.pdf
Stop talking from your pulpit and start doing. Teach, enlighten, share, write better documentation, help new users.
Time for me to walk my dog and prepare breakfast for the kids.
I’m not complaining, but when I saw this thread and read the article, I rolled my eyes expecting another stupid flame war. The vast majority of the postings have been very insightful.
The only point I’d like to make (and one of the other posters alluded to it) is that we need to make a distinction between “easy to use” and “easy to learn to use”. We should definitely be focused on the former rather than the latter.
Mystilleef has his points.
If only he actively joined the discussion afterwards, which he doesn’t seem to be doing right now.
Anyway, he has his points. For the masses, OSS just isn’t finished enough. If you’re not interested in computers, then you look at the outside, how it looks and all; you don’t really care about the inner workings.
Would you buy a car that functions really well, with a good, stable engine, but looks like Nixon on wheels? You probably wouldn’t.
Honestly what’s so hard about some of these things you’re talking about? I just sent that “automagic” email. I opened up evolution, typed part of my friend’s name into the “To:” field, selected his name from the drop-down box that appeared, filled in the subject, typed my email, and hit “send.”
There. That wasn’t so hard now was it?
If you want exactly what you’re talking about, throw on Red Hat 9 and go get Ximian Desktop 2. Red Hat’s beautiful install combined with GNOME’s top-grade interface and Ximian’s polish puts Linux at number three among the other heavy hitters (MS and Apple).
God forbid you would have to learn something new! There is always a learning curve with everything. You had to learn how to drive a car to get your driver’s license, right? I started in Linux with Red Hat 4.0. Every version after that which I have installed (5.2, 6.1, 7.1, 8 and 9) has been a quantum leap in ease of use and power! Every time a new version comes out I find myself uttering “Wow! This is cool.” I personally think that the major Linux distros are extremely easy to use, maybe not Windows-easy, but like I said, there is a learning curve with everything. I have a prediction though: once Linux becomes the everyday OS, a lot of us geeks will move on to a more exciting operating system. It has already started with me at least; I have found my flawless gem of an OS in NetBSD (shameless plug).
Using and administering are two different things. GNU/Linux is ready for the desktop; it has been for ages now. The problem is that it is not ready for luser administration. Face it guys, configuring GNU/Linux is not for your mom/girlfriend/baby son, etc. With every major distro release we get closer to that, but we are not there yet. As it stands, GNU/Linux in the office, where there is a competent tech department on hand, is not only doable, it's preferred. A fully configured Red Hat setup is just as usable for your average luser, saves the company money, saves the user time in productivity, and saves the admin time, as Linux is far easier to administer (if you know what you're doing). I have gotten friends hooked on Linux, but I always have to be "on call" to fix their screw-ups.
An example of Windows vs. Linux on the desktop: /etc/X11/XF86Config vs. right-click on the desktop and click the last tab on the right. Now the Linux camp will start on and on about how you have more power, you can tell what goes wrong, you have the ability to fix almost everything, etc. Don't bother, I agree; that is why I use GNU/Linux. However, if you don't even need the little power Windows offers you, then why would you want to go through the pain and agony of learning something like XF86Config?
One final note: the author talks about the kernel and userspace software. There is something in the middle, the OS. The OS is called GNU; Linux (the kernel) sits at the heart of GNU. Linux was written by Linus Torvalds and co.; GNU was done by the Free Software Foundation. One thing that we have lost in the vast rush of new users over recent years is the knowledge of where we came from. I heartily suggest that everyone read both Eric S. Raymond and Richard M. Stallman, because quite frankly, if I hear one more mashing of philosophies by people who don't understand either, I am going to scream. At the very least, understand Linux, GNU, and userspace software; the ambiguity in this article is pretty typical. After that, understand the difference between free and open software. Personally, I think ESR is a genius and RMS is a zealot and a jerk, but that doesn't mean that people should not respect him or give him credit when it's due.
By Wrawrat – Posted on 2003-09-29 06:09:00
Pardon my non-techieness, but MSN has easy-to-launch checkers and head-to-head minesweeper. It also has the best, most usable UI features. File transfers and games are not presented to the user with in-your-face message boxes, but with inline text in your chat window.
Pardon my trollness, but what do background pictures, games and custom emoticons (things that probably made you switch) have to do with "ease of use", let alone with an IM client? Personally, I can't show any respect for an instant messaging program recommending a goddamn P3 (I've tried it on the minimum requirements and it's unusable). As for file transfers, I hate the way MSN handles them. The progress bar hardly tells me what's going on.
And hence a perfect example of not understanding the user. Yes my friend, playing checkers is part of ease of use. I like to play checkers over MSN. Now it's much easier to click on a person's name and choose to play checkers on MSN than it is to get their IP address, go to the command line, and type "checkers --join "123.456.789"". It's also much easier than spending a few minutes logging into Yahoo games and finding a channel…
As to the requirements of Messenger: I have a 1 GHz Duron. Not exactly state of the art, but MSN has never impacted its performance. I code, play games, and MSN is nicely doing its thing in the background. I don't care what its requirements are. If I see my system slowing down, then maybe I'll have a look at it. MSN and other 'background' apps are not the bottleneck of my PC.
Yamin
All I will say about Linux is that the config-file-hacking shite has got to go. I'm not saying to remove it altogether, but make it an option … don't FORCE me to do it until I'm ready. For example, if you're going to create a program that has an editable file (myapp.conf), there's really no reason why you can't include a small frontend for somebody who wants to use it. This is especially true for apps that have multiple config files … having a frontend with tabs (even if it doesn't contain all the advanced options) is much easier to deal with out of the starting gate. Later on, once I've got things set up and running, THEN I'll start to tinker with the config files, but not before.
I don't really care how good the app is … if it's gonna take me 3 years to set it up, I'm just not interested.
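To make that suggestion concrete, here is a minimal sketch of the kind of frontend I mean, assuming a hypothetical myapp.conf in plain INI format (the file name and its options are made up for illustration, not taken from any real app). It just gives each config section its own tab of editable fields and writes the file back out:

    import configparser
    import tkinter as tk
    from tkinter import ttk

    CONFIG_PATH = "myapp.conf"  # hypothetical config file, purely for illustration

    def load_config(path):
        # Read the file; seed one section with defaults so the sketch still
        # runs when the file does not exist yet.
        cfg = configparser.ConfigParser()
        cfg.read(path)
        if not cfg.sections():
            cfg["general"] = {"port": "8080", "verbose": "no"}
        return cfg

    def main():
        cfg = load_config(CONFIG_PATH)
        root = tk.Tk()
        root.title("myapp settings")
        notebook = ttk.Notebook(root)  # one tab per config section
        entries = {}                   # (section, key) -> Entry widget
        for section in cfg.sections():
            frame = ttk.Frame(notebook)
            notebook.add(frame, text=section)
            for row, (key, value) in enumerate(cfg[section].items()):
                ttk.Label(frame, text=key).grid(row=row, column=0, sticky="w")
                entry = ttk.Entry(frame)
                entry.insert(0, value)
                entry.grid(row=row, column=1)
                entries[(section, key)] = entry
        notebook.pack(fill="both", expand=True)

        def save():
            # Push the edited values back into the parser and rewrite the file;
            # power users can still open myapp.conf in an editor later.
            for (section, key), entry in entries.items():
                cfg[section][key] = entry.get()
            with open(CONFIG_PATH, "w") as f:
                cfg.write(f)

        ttk.Button(root, text="Save", command=save).pack()
        root.mainloop()

    if __name__ == "__main__":
        main()

Nothing fancy, but it's the difference between "open a window and type a value" and "go read the file format first".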
Then there is the disinterest Linux developers seem to have in retaining binary compatibility between releases.
What do you mean here? If you are talking about drivers, then that is not too big an issue; I personally think people should not have to upgrade their kernels all the time anyway. Unless you run a webserver or some internet thing, there is absolutely no issue with kernels. Heck I can use ten different kernels with the same Linux install. So compatibility is not really an issue.
Good points in the article, though nothing really new.
Open source software is usually developed by volunteer developers for themselves and their kind, i.e. for expert users. Usability, from a non-expert point of view, is only a secondary goal. That has been changing a bit with OSS projects like Gnome, KDE, Freedesktop.org, OpenOffice.org, Mozilla etc. (which deserve all the support they can get), but it is basically still the case that OSS is not usually developed for non-expert users, and maybe that's just the nature of open source in general(?).
I doubt that such classic open source OS projects as FreeBSD, Slackware or Debian could ever produce a mainstream OS for the masses; that is just not the nature of those projects, however excellent the software they produce for experienced users may be. It seems that there often needs to be a commercial company that takes open source software, modifies it and makes it friendly enough for the masses.
Commercial companies like Xandros, Ximian, CodeWeavers, Lindows, SuSE, Vector, Libranet etc. take usability and ease of use – from a non-expert point of view – seriously. "Geek users" should really appreciate the value of those commercial companies more. Even though those companies may (have to?) use some proprietary components, may have to charge money for their products etc. (just to make some profit), in the end they usually benefit the open source world and developer community too, by introducing more people to open source software.
If, for example, Linux has a future as a mainstream OS, it is companies like the ones mentioned above that make it possible. The same goes for open source software in general.
Heh, good points except one: Lindows. I have no problem with making a buck off of open source; if this model is going to work, that is what will happen. But I do have a problem with charging a yearly rate for apt-get with a pretty front end. I mean, there's a difference between making a buck and fleecing your customers.
While I agree with the fact that we need to be more accepting of the more capitalist OSS companies, we also have to remember what sets us apart. As it stands, 95% of commercial Linux companies are companies operating with honour. While that may seem to be an insignificant moral abstraction for some, for others it isn't. Why does virtually every distro provide free downloads of not only the source, but complete working copies of their products? Because they have to? No. Because by doing so they give back to the community that creates what they make a living off of. That I respect a great deal, and will continue supporting. Lindows is fantastic for newbs, probably one of the best newb distros. But I will never support them or recommend them, because they are applying the same "get ahead, no matter what the cost" principles to the GPL world as are normal in the proprietary software world.
they are talking about!
There may be some issues with Linux, but if you'd ever had to use the Microsoft documentation/help, you'd know that the Linux documentation is among the best available. At Microsoft you get strange error messages with idiotic troubleshooting help (like: "Is your monitor plugged in?"), but I never saw a useful help entry. Usability, another topic: WinXP may be more suitable than any free desktop for the very first use of a computer (but how many of us are using a PC for the very first time now?), but by at least the third time you do the same task, you should know how it works. But then those so "user-friendly" long lines of underlined text ("links") instead of short, concise buttons in WinXP will annoy you: they're simply wasting your time! And even worse: you'll have to live with them the rest of your life, because you can't get the buttons back, not if you stay with Microsoft!
Or, what about integration? One of Microsoft's weakest points: completely different shortcuts, menu structures and behaviours for similar tasks under Word and Excel, both part of the same suite. The same applies to Outlook and IExplorer. The difference between Gnome and KDE applications is also much criticized; hey, look at WinXP: every application comes with its own skin and behaviour! But this is cool, because it's WinXP!
Other points:
- In the last 6 years, I have never had to edit my XF86Config file by hand.
- How many of you (Linux bashers) have successfully installed Windows as your second OS on your HD and are still able to boot both OSes with the Microsoft bootloader?? Or access the other file system?
- In the last two years I have never had any issues with hardware and Linux, but many with hardware and WinXP!
It's a subjective world, and your reasons not to use Linux could turn out to be only excuses for your comfortable life. And therefore your article may turn out to be only FUD!
The problem with Linux applications is that the developers seem hell-bent on copying Microsoft's UI designs to a T. Microsoft's UI sucks. That's why I use a Mac. Why would I want to switch to Linux when most of the desktop environments and apps look like half-baked Microsoft clones? The really bizarre thing is that the latest distros, Red Hat in particular with the Bluecurve theme, look *BETTER* to me than Windows at this point. They look good, but the usability is still bad, because the "let's copy Windows" mentality is blinding everyone to the truth: computer interfaces need to be redesigned nearly from scratch. The current metaphors are breaking down; complexity and feature bloat are making it more and more difficult to use computers. Something, somewhere, has to give.
Jared
Yeah, you go for it!
Dumb down the documentation in Linux for the average guy in the street.
If you need someone to join in as a partner, give me a shout.
Too many of my friends/colleagues will not even try Linux after they have had a look at it, mainly for the reasons you gave. Too complicated for them.
Not that they are dim… well, not "too" dim, hehehe.
But you are right.
Who cares?
I couldn't care less if Joe User is using Linux. Let them continue to use Windows if they want to; what difference does it make to you and me?
More Linux appreciation would be nice. (Like Tech Support people supporting Linux and not just hanging up on you when you tell them you’re not running Windows.)
I’ve used Linux 99% of the time for the past 6 months. I love my Linux system and wouldn’t trade it in for anything – this doesn’t mean that everyone should use it if it means dumbing down the interfaces, having one standard desktop, etc etc.
If you don't like Linux, don't bitch and whine, just use Windows and get on with it.
There may be some issues with Linux, but if you'd ever had to use the Microsoft documentation/help, you'd know that the Linux documentation is among the best available.
This point is made moot by the fact that most Windows users don't even bother with the documentation – they'll bother the closest person to them who 'knows about computers.' For example, if my parents have problems with their PC, they call me.
And hence a perfect example of not understanding the user.
AFAIK, I am a user, too.
Yes my friend, playing checkers is part of ease of use. I like to play checkers over MSN. Now it's much easier to click on a person's name and choose to play checkers on MSN than it is to get their IP address, go to the command line, and type "checkers --join "123.456.789"". It's also much easier than spending a few minutes logging into Yahoo games and finding a channel…
Of course, I can't disagree that it makes cheesy games a bit easier to play, but tell me… what does it have to do with an instant messaging program? It could burn CDs remotely, give food to the remote user's cat while he's away, cook turkeys in remote ovens… but it would still have nothing to do with "ease of use". It might be easy to use right now, but it's still bloat, something that usually makes software harder to use after a while.
As to the requirements of Messenger: I have a 1 GHz Duron. Not exactly state of the art, but MSN has never impacted its performance. I code, play games, and MSN is nicely doing its thing in the background. I don't care what its requirements are. If I see my system slowing down, then maybe I'll have a look at it. MSN and other 'background' apps are not the bottleneck of my PC.
If you don’t care how crappy a program is as long as you can run it nicely, good for you… Obviously, I don’t share that opinion.
Anyway, it’s useless to argue because we simply don’t have the same expectations and don’t see the current situation the same way… and I sadly know that I’m in the minority.
If you are talking about drivers, then that is not too big an issue […]
It is a big issue if every time someone has to install a new kernel for a security update, they also need to recompile and/or reinstall their hardware drivers. It either means that a) all drivers have to be available in source form or b) any new kernel release must also be accompanied by new driver releases. Neither of these alternatives is especially attractive to hardware vendors.
It also applies to low-level software like VMware. I imagine that the kernel-level HTTP server needs recompiling when the kernel version changes, as well.
I seem to recall Linus and other kernel developers saying on several occasions not to rely on kernel interfaces remaining stable between releases – including ones in the “stable” development tree. This is not the sign of a mature and stable platform ready for “prime time”.
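To illustrate why this bites ordinary users, here is a rough sketch of my own (not from either poster): out-of-tree modules get installed under /lib/modules/<kernel version>/, so a driver built against the old kernel simply isn't there once you boot the patched one. The module name nvidia.o below is just an example, not a claim about any particular vendor's packaging:

    import glob
    import os

    MODULE = "nvidia.o"  # example out-of-tree driver name; substitute whatever binary module you rely on

    def report(module=MODULE):
        # Walk every installed kernel's module tree and note whether a build
        # of the module exists for it; the running kernel is flagged.
        running = os.uname().release
        for moddir in sorted(glob.glob("/lib/modules/*")):
            version = os.path.basename(moddir)
            found = glob.glob(os.path.join(moddir, "**", module), recursive=True)
            tag = " (running)" if version == running else ""
            status = "present" if found else "MISSING - needs a rebuild/reinstall"
            print(f"{version}{tag}: {status}")

    if __name__ == "__main__":
        report()

Every security-patched kernel gets its own module directory, so in practice either the vendor ships a fresh binary for it or the user rebuilds the driver from source.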
I personally think people should not have to upgrade their kernels all the time anyway.
Ah, when there are security issues fixed that require kernel patches, "people" don't really have much choice. This is the situation I am thinking of, not the compulsive download-compile-install activity of the average Linux geek. People in that latter category have the expertise and/or the spare time to deal with problems. Other users do not.
Unless you run a webserver or some internet thing, there is absolutely no issue with kernels.
There’s even *less* incentive to mess around with different kernel versions on production servers than there is on end-user desktops. Changes on such systems should be few and far between.
Whilst I am sure there are lots of people out there running their production servers on Gentoo or Debian boxes who have their systems set up to [semi-]automatically install updates whenever they are available – and getting away with it – this is *not* good practice.
Similarly, there are lots of people out there running production FreeBSD boxes on -STABLE. Again, not good practice, but probably somewhat less risky than doing the equivalent with Linux.
Heck I can use ten different kernels with the same Linux install.
And how many closed-source drivers do you use ? How much closed-source software that might need to interact at the low levels of the system ?
When I installed my first Linux distro, a couple of years ago, the most intimidating part was partitioning the hard drive. Luckily the distro in question was Red Hat, and at that point it already had a graphical partition manager. I can only imagine the horror of people whose first experience of Linux is cfdisk. Linux really needs better partitioning software. I've heard good words about QtParted, but at the moment its NTFS support is allegedly not that good.
Of course, there are live CDs from most leading distros, and these may be a good introduction to the GNU/Linux world. It's asking people quite a lot to make them install an OS that they haven't tried out before. With a live CD people can play around with Linux, discover its capabilities, and if they like it they may even decide that it's worth the trouble to install it permanently on their hard drive. 🙂
Good documentation is always a positive thing but my experience is that people usually decide whether they want to use a specific piece of software by first launching it and just playing around in the hope that they find out how it works. The time for reading documentation comes when people want to use some advanced features. Programmers, who have no idea how people actually behave, expect that users first read carefully all the available documentation and then launch the program. This, I’m afraid, is wishful thinking. 🙁
Wrawrat,
I'll agree we come from different angles, but if I may: just as there are car enthusiasts and sound enthusiasts who wonder why the general public doesn't appreciate their market, there are those in the computer market.
Playing CDs has as much to do with driving a car as playing cheesy games has to do with instant messaging. I don't know about you, but I would like my car to have a nice MP3 deck nicely installed. Metaphors aside, I think playing cheesy games is something many people would consider an IM-related activity.
As to efficiency: I stopped counting startup RAM use and other such issues a long time ago. I bought the cheapest new system I could at the time I bought my PC, and it runs Windows XP and the like just fine. I guess the difference here is that I don't consider Messenger inefficient or wasteful. I haven't noticed any performance difference between Messenger and Trillian. Messenger may even be faster. But I'm not going to touch that one.
Sadly, I cannot say the same about Windows Media 9. It's really slow on my system and has some full-screen control bar issues. Hence, I don't use it. I've been thinking of dumping Winamp3 as well for Winamp2, but that's another issue.
Yamin
Spot the user who doesn’t know what he’s talking about.
It is a big issue if every time someone has to install a new kernel for a security update, they also need to recompile and/or reinstall their hardware drivers. It either means that a) all drivers have to be available in source form or b) any new kernel release must also be accompanied by new driver releases. Neither of these alternatives is especially attractive to hardware vendors.
This is complete crap. The only recent security hole in the kernel that I'm aware of is the ptrace bug – and that only applied to local users. If, as a sysadmin, you have local users you can't trust, so you have to upgrade, and you can't recompile the kernel, you have no business being a sysadmin.
It also applies to low-level software like VMware. I imagine that the kernel-level HTTP server needs recompiling when the kernel version changes, as well.
I run VMware on a custom kernel. Guess what, they handled it. As did nVidia. It works.
I seem to recall Linus and other kernel developers saying on several occasions not to rely on kernel interfaces remaining stable between releases – including ones in the “stable” development tree. This is not the sign of a mature and stable platform ready for “prime time”.
Link?
Whilst I am sure there are lots of people out there running their production servers on Gentoo or Debian boxes who have their systems set up to [semi-]automatically install updates whenever they are available – and getting away with it – this is *not* good practice.
Correct – this is why my Debian production boxes are upgraded by hand. Every week. If I had a lot of machines, I’d make damn sure they all had similar configs so I could run a repository, and get them to automatically update from that.
Similarly, there are lots of people out there running production FreeBSD boxes on -STABLE. Again, not good practice, but probably somewhat less risky than doing the equivalent with Linux.
You are confusing stable kernels with stable software.
And how many closed-source drivers do you use ? How much closed-source software that might need to interact at the low levels of the system?
See nvidia & vmware. Both have managed it. And don't give me crap about hardware manufacturers taking a long time to update interfaces. The whole point of an interface is to make it easy to update without touching the core code. IF the hardware manufacturer doesn't come out with updated interfaces when a major security-patched kernel comes out, then they're a crap supplier and you shouldn't use them. If you do, then that's your own stupid fault.
…people like him appear on heise.de quite frequently…
IMHO, he understands neither the aims, nor the reasons, nor the thinking behind OSS development.
The OSS dev community is very well aware of the lack of documentation. Things are improving constantly in terms of ease of use, and at what a pace, I have to add. And dammit, man pages are the full reference manuals, so don't complain about the load; do <command> --help, for Pete's sake, for a shorter parameter overview. Try some googling, or use GUI tools, BUT HECK, STOP THE HELL RANTING about people who really contribute to this great step in history! Doh, you try coding a new app from the ground up. Yeah right, it starts with ideas, and _right_ after that you do the docs and – maybe – later implement the features.
Heck, if you do think GNU/Linux/OSS sucks on the desktop nowadays, come back in a year and see if you can handle it.
And *I* think programming in the OSS world is generally more efficient, more creative and organisationally more advanced than proprietary software development.
cya Marcel
P.S.: I think Linux on the desktop might even get some market share by Xmas. Or, if not this one, the next one, definitely!
This is complete crap. The only recent security hole in the kernel that I'm aware of is the ptrace bug – and that only applied to local users. If, as a sysadmin, you have local users you can't trust, so you have to upgrade, and you can't recompile the kernel, you have no business being a sysadmin.
Firstly, I’m not just talking about things that have happened recently.
Secondly, for the home user who *is* their own “sysadmin”, it’s an important issue. Centrally-managed systems with dedicated support staff are not, and never have been, a real problem.
The important issues I raise remain – and you have not addressed them.
I run VMware on a custom kernel. Guess what, they handled it. As did nVidia. It works.
But you have to manually recompile the relevant kernel modules. Yes, it's mostly scripted, but a) it's still too technical for the average user crowd and b) if the platform were mature and stable it simply wouldn't be necessary.
VMware and nVidia's solution doesn't "work", it hobbles by with an ugly kludge. I shouldn't need to go around manually updating drivers and the like just because my kernel might have been patched – I don't need to every time there's an update to the NT or Solaris kernels.
Link?
Here's a quick example of how Linus feels, which I pulled from a Google search:
http://www.cantech.net.au/plug/1999-02/msg00181.html
Trawl through lkml and you’ll find a lot of similar sentiments echoed.
Basically, a lot of people think it's a good idea because they somehow think it will "force" hardware developers to open source their drivers. Linus doesn't – his reasons aren't philosophical – but nevertheless it means that the interface to the kernel is inherently unstable.
Correct – this is why my Debian production boxes are upgraded by hand. Every week.
Every *week* ? I suspect you don’t have many boxes and aren’t running anything that’s mission critical. In other words, you’re not in a demographic that’s relevant.
You are confusing stable kernels with stable software.
No, I’m not. -STABLE does define more than *just* a kernel, but the kernel is part of it. -STABLE, despite what it seems to imply, is *not* stable and should be treated as if it will break after every cvsup.
A kernel on its own is completely useless. Trying to have a discussion about managing system updates talking *only* about the kernel is meaningless.
See nvidia & vmware. Both have managed it.
No, they kludged around it (in the time-honoured Unix tradition of avoiding the problem instead of fixing it). Having to recompile drivers or some other thunking layer is not a “solution”, it’s an ugly hack.
The whole point of an interface is to make it easy to update without touching the core code.
Which is why the lack of any guarantee that kernel interfaces will remain stable presents a problem, doesn't it ?
IF the hardware manufacturer doesn't come out with updated interfaces when a major security-patched kernel comes out, then they're a crap supplier and you shouldn't use them.
You completely and utterly miss the point that they shouldn’t have to.
Basically, a lot of people think it's a good idea because they somehow think it will "force" hardware developers to open source their drivers. Linus doesn't – his reasons aren't philosophical – but nevertheless it means that the interface to the kernel is inherently unstable.
I just realised this might not be completely clear, so I’ll rephrase.
The sentiment that Linux shouldn’t try to do anything to help binary modules (ie: no stability guarantee for the kernel interfaces) is fairly widespread. The reasoning behind it varies, however – some people think it will “force” developers to open-source their drivers. Linus, OTOH, doesn’t want to commit to a single specification so he can change it on a (technical) whim without having to give warning.
As I said, this is *not* an indicator of a well-designed, mature and stable platform.
There’s also a fair share of people who realise the problems with this attitude, but they’re in the minority.
I can’t comment on anything except ptrace. It’s the only one I’ve ever dealt with. Any comments about kernel bugs before either didn’t affect my boxen, or was before I was playing with Linux. In either case, I’m not qualified to comment.
Local exploits (at the kernel level) by and large, don’t affect home users. Since this is the only one I can comment on (see above) I’m stopping here on that point.
Re: Weekly Upgrades.
Yes, I have relatively few (six) boxes that I need to worry about, geographically and functionally disparate. If I had large numbers, as I said before, I would ensure they ran the same, or close to the same, software. I would then maintain one box for upgrade testing purposes and, having tested the packages, roll them out to a central repository that all boxes would automatically update/upgrade from. This is one of the points of Debian's stable tree. Functionality does not change.
Stable BSD – Outside of my knowledge. Will not speak further on that one.
Unless I’m mistaken, kernel interfaces (at the program level) change very little. I fail to see this as a problem.
Agreed. VMware and nVidia have kludged it, because they have no choice. They wish to supply drivers to the Linux market. That means they play by the Linux community's rules. They can try and push for a standard interface, but if the Linux developers don't want to, that's the solution provider's problem, not ours. It would be a problem if they're the only supplier of whatever widget they happen to make, but mostly, they're not.
I will also point out that if a vendor is stupid enough to only supply modules for a given version of whatever distro, most security fixes are backported – thus keeping the magical binary compatibility that vendors seem to be up in arms about.
When it comes down to it, I'm not letting binary-only drivers anywhere near my servers (as opposed to my desktops). I've used closed source software before, and it almost always turns to shit, and when it does, I can't push manufacturers around to fix it. Why? Because I'm too small. If I have the source code, I can look at it myself, and if it's really important, I can hunt down a programmer to fix it.
I realize that nVidia et al. have things like patents to worry about. Good for them. I have things like stability to worry about – so closed source drivers have no place on a stable system.
As for well-designed, mature, and stable: compared to what? Windows? Solaris? Perhaps SCO, maybe?
As I said, this is *not* an indicator of a well-designed, mature and stable platform.
Well I see, you have programmed SO many kernels/OS that you can just spot a badly designed unstable one from a mile away.
If you had jack shit of a clue, you would understand that Linus doesn't like sticking to doing things one way because he, like other people, knows that there could always be a better way to do it, and setting rules in stone is a real quick way to fuck yourself and your users when you realize that something is a piece of shit system. Take for example module handling from 2.4 to 2.6. Drastic infrastructural changes happened between those 2 kernels. Not because it was "badly designed and unstable" but because they found and developed a better system to handle that process. As someone said earlier, Linux and GNU/Linux is more of a giant lab project. People are just trying to make it the best it can be. That is the entire point and philosophy of OSS, to make something the best it can be. Your definition of the best is different than theirs. They look at it from an engineer's perspective, not your gui user point of view. And no, the kernel and applications are not perfect. But that's why they are developing OSS, to make it the best it can be. To make the technology the best it can be. Not to take 50% of this market or that market. And it's obvious that you simply don't understand the motivation of the *real* people involved with OSS. I'm talking about the developers.
This idiotic "it needs this and that" bullshit is just completely retarded. Quit saying it here where no one gives a shit; tell the people in the driving seats. Your opinions DO matter. They do help the community, but saying crap about how it's poorly written is just plain bullshit. If it's worth complaining about then it's worth improving. And all this crap about how it needs these step-by-step instructions is complete FUD. You obviously haven't used Linux in the last 1-2 years. Some points are valid but you act like nothing is being done about it. Yes, drivers need to be handled differently; guess what, it's called HAL. I could counter every point you make but frankly it's not worth the effort. If you won't take the time to open your eyes and see what the hell is in front of you, why would anything I say matter to you?
I can’t comment on anything except ptrace. It’s the only one I’ve ever dealt with. Any comments about kernel bugs before either didn’t affect my boxen, or was before I was playing with Linux. In either case, I’m not qualified to comment.
You shouldn’t need to have dealt with bugs to be able to consider the implications.
Local exploits (at the kernel level) by and large, don’t affect home users.
Eh ? They affect home users at least as much as other users.
Unless I’m mistaken, kernel interfaces (at the program level) change very little. I fail to see this as a problem.
The issue is that they can. Without warning. The problem is that this (usually) breaks things compiled against the older kernel version unless they can be recompiled. This is bad.
That means they play by the Linux community's rules.
Here’s the problem. The tail is wagging the dog.
Never forget that at this point in time the Linux community is a minority market. Nvidia and probably VMware could pull out of it without a huge problem. A (relatively) niche OS needs hardware vendors' support a lot more than hardware vendors need the (relatively) niche OS's support.
They can try and push for a standard interface, but if the Linux developers don't want to, that's the solution provider's problem, not ours.
When you’re a sysadmin, you *are* the “solution provider”.
It would be a problem if they’re the only supplier of whatever widget they happen to make, but mostly, they’re not.
Yes, you’re really spoilt for choice with Linux hardware virtualisation software, aren’t you ?
When it comes down to it, I'm not letting binary-only drivers anywhere near my servers (as opposed to my desktops). I've used closed source software before, and it almost always turns to shit, and when it does, I can't push manufacturers around to fix it. Why? Because I'm too small. If I have the source code, I can look at it myself, and if it's really important, I can hunt down a programmer to fix it.
And I’m sure hardware vendors will be quite happy to ignore you.
I realize that nVidia et al. have things like patents to worry about. Good for them. I have things like stability to worry about – so closed source drivers have no place on a stable system.
This is just open source rhetoric. Open source software is neither inherently more reliable nor more stable than closed source software.
As for well-designed, mature, and stable: compared to what? Windows? Solaris? Perhaps SCO, maybe?
SCO Unix I have no experience with. NT and Solaris are certainly better designed, more mature and easier to maintain than Linux.
Well I see, you have programmed SO many kernels/OS that you can just spot a badly designed unstable one from a mile away.
I know enough about operating system design to know what’s good and what’s bad.
Have you programmed many kernels, or are you being hypocritical as well as obnoxious ?
As someone said earlier, Linux and GNU/Linux is more of a giant lab project.
No arguments there. It’s a very successful one as well.
The trouble is there are some people trying to sell this “lab project” as a designed, mature, stable and finished *product* (note that Linus is not one of these people).
Your definition of the best is different than theirs.
I have never said anything about “best”. I am discussing good features and bad features and how they are relevant to the posted article.
In any event there is no “best”, there is just “sucks” and “sucks less” (or, if you’re an optimist, “good” and “better”) – and even those are subjective.
From an objective point of view, Unix is a freakin’ nightmare. However, it fits into the “sucks less” category, so it is successful.
They look at it from an engineer’s perspective, […]
No, they don’t. If they looked at it from an engineer’s perspective things like kernel interfaces would be carefully defined and remain consistent throughout the product cycle. Activities like replacing VM systems would be carefully planned beforehand and carried out in the *development cycle* with plenty of forewarning.
They look at it from a *scientist’s* or, if you prefer, “hacker’s” point of view. Constant change and refinement. No specific design or goals. No quality guarantees. No timeframe guarantees.
There’s nothing wrong with this in and of itself, by the way, and I’ve never tried to say there is. It’s just not indicative of a mature and stable *product*.
[…] not your gui user point of view.
I’m looking at it from system administration, implementation and maintenance points of view, with a dash of third-party developer.
And it's obvious that you simply don't understand the motivation of the *real* people involved with OSS. I'm talking about the developers.
The motivations of the developers are wide ranging. Some of them I can appreciate and understand (like Linus – he’s a good bloke), others I cannot.
I would be highly sceptical of *anyone* who tried to paint even a small part of the OSS community – like the kernel developers – with a single broad stroke.
Your opinions DO matter. They do help the community, but saying crap about how it's poorly written is just plain bullshit.
At no stage have I said – or even implied – the Linux kernel is poorly written.
I have merely pointed out that some of the development methodologies indicate claims of a product ready for “prime time” are highly questionable.
Incidentally, "stability" is more encompassing than just how often the kernel panics – I'm not sure if you realise that.
And all this crap about how it needs these step-by-step instructions is complete FUD.
Huh ?
You obviously haven't used Linux in the last 1-2 years.
Obviously. The numerous Linux boxes I am responsible for are just figments of my imagination.
Some points are valid but you act like nothing is being done about it.
That’s because the article and discussion are about what’s available *now*, not what might be available in two years if it doesn’t get changed around again before then.
Remember, it usually takes *years* after a finished product hits the market for it to become widespread – and that’s in fields that move quickly, like end user desktops.
I could counter every point you make but frankly it's not worth the effort. If you won't take the time to open your eyes and see what the hell is in front of you, why would anything I say matter to you?
If you can politely conduct reasonable discussion, I’m more than happy to participate, I fail to understand why you would think otherwise. Certainly none of my posts in this discussion have been even half as abusive or misleading as yours.
If all you can do is spew vitriol, lies and insults like you have here, because you don't like my opinions, I can't see why anything else you have to say is going to be particularly interesting.
Consider the evolution of software. We’ve come a long way from poking holes in punch cards. Many of us are with you when we hear your inspiration that “OSS can make a better computer system.” But that kind of reality needs direction and resources to achieve. What you see these days is created by people with direction and/or resources.
If you want to build a luddite-friendly OS, of course you’re not going to add unfinished, difficult to use software. But saying that unfinished software should not be released is absurd. In the spectrum of Computer Dummy to Software Developer, you find people capable of using all states of released software. Unfinished software is necessary and even useful. Not all users are Computer Dummies.
To actually make software as easy to use as Mac software (for example) takes a lot more time, study and effort than your typical volunteer can put a finger on. Major institutions are dedicating themselves to Human Computer Interface (HCI) design issues. Such futuristic interfaces might be so advanced that we "archaic power users" might find them completely unusable because they would be so unfamiliar; likewise, a luddite user knowing just how to get around Windows might think the same thing… it's different… don't touch it. Major interface change is just as risky to your current user base as it is appealing to your Computer Dummy user base. Such things have to happen through evolution.
What you’re saying now is “it’s not perfect–make it perfect.” Who wouldn’t say just that? Guess what: we’re all trying to make it perfect. We’re witnessing one of the greatest democratizations of software development the world has ever seen…and we’ve discovered that we still need to create the technical underpinnings of a perfect operating system before we get onto the job of creating the perfect interface. Nothing will be perfect for a long, long time.
However, I must mention, just to frustrate you: the nature of computer users is changing as well. Kids that grow up with today's UIs, or even command lines, or even with access to read online software development sites like SourceForge, are quickly learning to become more proficient computer users than their parents. When my son turns 15 in 2018, will he NEED the Perfect Interface? He'll say, "don't be st00p1d, D6D, I'm 2l33t for that!" And proceed to bust out of my firewall again just after I said, "stop it!"
It’s people who didn’t grow up using computers as kids who have the hardest time using them. I believe society will find an equilibrium between “Luddite Interface” and “Average User Sophistication” that will satisfy 90% of computer users and obviate the need for a “perfectly non-technical” interface.
Unfortunately, many of us adults living today are unlikely to see a lot of advance in intrinsic computer interfaces until we're in our autumn days.
I was also commenting on the article as well as your post.
Well, I did have a long post written out. Then I just came to the realization that I really don't give a shit. Use what you want to use. Don't use what you don't. I could give a shit less. Linux is what it is, and saying anything in here isn't accomplishing anything. Nothing is gonna change because I say something about this or that. Your complaints sound like complaints with Red Hat or SuSE and their "products", as you put it. If you had a real gripe with the kernel, take it someplace where it will actually do some good.