A reader asks: Why is Linux still not as user friendly as the two other main OSes with all the people developing for Linux? Is it because it is mainly developed by geeks?
My initial feeling when reading this question was that it was kind of a throwaway, a slam disguised as a genuine question. But the more I thought about it, the more intrigued I felt. There truly are a lot of resources dedicated to the development of Linux and its operating system halo (DEs, drivers, apps, etc). Some of these resources come from large companies (IBM, Red Hat, Novell). Why isn’t Linux more user-friendly? Is this an inherent limitation of open source software?
I won’t pretend to give an authoritative answer to this question. All I hope to do with this article is posit a few possibilities and open the topic for discussion. First I think we should try to clarify the question by defining user-friendliness. Often, user-friendliness is conflated with beginner-friendliness, and this is a grave error. Just because the use of an object isn’t immediately obvious to a new user doesn’t mean that it’s not well-designed, or even that it’s not easy to use. I have a nail-pulling tool that’s unquestionably the most effective nail puller you can buy. But whenever I hand it to someone who’s never used one, even an experienced carpenter, they give me a blank look, and it usually takes them five minutes or so to figure out how it works. There are nail pullers that are much easier to understand, but they don’t work as well: they take longer and damage the wood more. The thing I like about the slide-action pullers, though, is that even though they take a few minutes to learn, once you know the trick, they save you several minutes with each nail. Over a lifetime, a tool like this might save a professional carpenter several entire workdays. You could say the same for computer users.
Point-and-click interfaces are much easier for computer neophytes to understand. You can click and hunt through the menus to figure out what to do, and once you have the basic concepts of moving the mouse and navigating menus and buttons figured out, you can muddle through most tasks. Most importantly, you don’t have to memorize any arcane commands. But for all-day computer users, mousing your way through menus wastes a lot of time and makes things hard that could be easy. Learning a few keyboard shortcuts for the things you do every day (opening windows, saving, etc) and learning to use the command line, or command line-like tools (such as Quicksilver or Colibri), will give you a major productivity boost. In short, the attribute that makes a tool easy to learn often makes that tool less useful to an expert.
In some cases, “easy-to-use” tools aren’t better even for newbies. Training wheels for bicycles don’t really help kids learn to ride a bike properly, and in fact can easily delay learning. A CLI can be superior for neophytes in some cases because it can be much easier to do complex procedures by copying and pasting strings of command line code from a tutorial or forum posting than following a circuitous walk-through of the same procedure (with screenshots) using a GUI method.
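A concrete, if hypothetical, illustration of the point: a one-line command pasted from a forum post can do a batch job that would take dozens of clicks in a file manager. The directory and filenames below are invented for the sake of the example:

```shell
# Create a few sample files to stand in for real data
mkdir -p demo
touch demo/report1.txt demo/report2.txt demo/report3.txt

# One pasted line renames every .txt file to .bak -- in a graphical
# file manager this would mean renaming each file by hand
for f in demo/*.txt; do mv "$f" "${f%.txt}.bak"; done

ls demo
```

A newcomer doesn’t need to understand the `${f%.txt}` parameter expansion to benefit from it; they only need to paste the line, which is often simpler than following a screenshot-laden GUI walk-through.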
Conventional wisdom would dictate that the menu-driven GUI of a Mac or Windows machine is the major determinant of its user-friendliness, but I disagree. I actually think that the time-worn “just works” measuring stick is a much bigger factor. Even if you have to learn a few conventions or [gasp] memorize some basic commands, if your computer operates in a reliable and consistent manner, it’s easy to use. I would hasten to add that installing new hardware or software and configuring your preferences should also be as simple as possible and should not break the computer in the process. The time when using a computer becomes most challenging for any user is when something goes wrong, so the easiest-to-use computer is most certainly the one that never fails for any reason. But since we don’t live in fantasyland, a computer should minimize opportunities for either spontaneous or user-caused failure, and should fail in a way that minimizes collateral damage and guides the user through the proper steps to bring the machine back to proper operation.
This is where we can start to look at the relative user-friendliness of Mac, Windows, and Linux. Traditionally, Macs have been seen as the most user-friendly, and have adhered to the “just works” philosophy. In some cases over the history of the Mac, this has been taken to the extreme, where simplicity has actually stood in the way of power users’ productivity. But “just works” comes at a price, and that is the vertical integration of the Mac marketplace and the dearth of hardware options. In its current incarnation, the Mac OS is reliable, fails gracefully when it fails, makes installing hardware and software easy, and makes configuring features such as WiFi and Bluetooth quick and easy. This user experience exists in large part because of the high priority that Apple puts on user-friendliness and the tight control that Apple maintains over its hardware ecosystem. But it’s not all Apple. Ironically, one of the most important aspects of Mac OS X’s user-friendliness, its reliability, comes from its BSD-based underpinnings. The pre-OS X versions of Mac OS, particularly those after version 7.6 in the late nineties, were not very speedy or reliable, despite being easy to use in other ways. It was only by harnessing some of the inherent advantages of the open source development model that Apple brought its OS to where it needed to be.
Windows is generally thought to be pretty user-friendly, but even a quick survey of the average computer user’s experience with Windows will show you that most people are not at all comfortable with it, especially once something goes wrong. Using Windows for everyday tasks is quite easy, and seems to get better with each version. In fact, I’d say that in some ways Windows has equaled or surpassed Mac OS X when it comes to basic usability like application launching and file management. Installing software and hardware is not as elegant as on the Mac, but doesn’t pose a problem in most cases. And a clean Windows install on high-quality hardware will run extremely reliably by the standards of the nineties. The problems arise when poor-quality software or hardware is added to the mix, because troubleshooting stability or performance problems in Windows can be very difficult. Also, doing advanced configuration, such as networking, can really be a brain-breaker, even for experienced users.
And thus we come to Linux. Once you get over the initial learning curve, Linux makes a very powerful tool, and its flexibility and stability can even make it a superior one. But Linux suffers from the same problems as Windows when it comes to obscure or poorly-built hardware or software, and troubleshooting and advanced configuration can likewise be difficult for non-expert users. Linux has the advantage of superior reliability, but this gap has narrowed substantially over the past decade as both Windows and Mac systems have made major strides in this area. But where Linux really falls down is in “operating in a reliable and consistent manner.” Getting new hardware or software to work in Linux can be easy, or it can be hard. And the methods that you might use to make something new work in Linux are varied, and in some cases quite a drawn-out process. You might need to install frameworks or libraries, and in each case there might be more than one way to do each thing, with no clear indication of which method would be best or why. The hardware you want to use might not be supported, or might only be supported by way of complicated hacks. If you want to change a configuration, there may or may not be a user-friendly graphical config tool, and if there is one, it might not work the way it should, depending on other factors. I won’t even start into the whole X Server and DE conversation.
It seems that the most user-friendly OS is also the most tightly-controlled. Microsoft brings a lot of its troubles on itself by providing so much backward compatibility and making it easy for its developers to be lazy with cheap hacks and old APIs, but a Windows developer has more freedom than a Mac developer does. Linux developers, of course, have even more freedom, and that’s one reason why Linux is all about options. There are at least two ways to do everything. In large part, that freedom is a contributor to Linux being difficult for non-experts.
But that’s not the whole story. There’s also the decentralized nature of Linux development to blame. Both Apple and Microsoft can rely on a central authority for user interface conventions, and of course can focus their efforts on a single desktop environment. The Linux community is riven by fundamental philosophical disagreements on how to do things, GUI-wise, and even where there’s agreement, the various players are still largely acting independently, or at best are organized into a loose confederation of interested parties.
Then we have the “geeks” angle, as posited by our questioner. Because so much Linux development is driven by advanced users’ imperative to scratch a particular itch, the areas of Linux development that receive the most and best attention are the areas that are of particular concern to its most highly-skilled users. That’s why stability and performance are Linux’s strong suits. Most Linux alpha-geeks are quite content with the everyday usability of their particular setups, and their advanced skills make things like troubleshooting and configuration pretty easy for them. So yes, the fact that “geeks” are developing Linux for themselves is a major contributor to its user-friendliness deficit.
But what about the big companies working on Linux? Don’t Novell or Red Hat have some interest in improving Linux’s user-friendliness? Well, yes, but not really. First off, no company working on Linux has nearly as many engineers working on GUI issues as either Apple or Microsoft. Second, they never will, because the Linux companies aren’t very interested in Linux as a desktop or workstation OS. The big money in Linux is in servers and in “enterprise” applications. Linux as a server is at least as easy to use as Windows or Mac. Both Microsoft and Apple make great server OSes, but in some ways their adherence to desktop conventions puts them at a disadvantage against Linux and all of its UNIX server heritage. So Linux companies are putting a lot of effort into making Linux more user-friendly, but more user-friendly as a server. Companies like IBM, Red Hat, and Novell do some business setting up big networks of workstations or even desktop PCs, but these tend to be relatively locked-down, centrally-managed systems where professional sysadmins wouldn’t want the users mucking around too much anyway, and the available DEs and apps for Linux really do get the job done in that setting. Or they’re workstations for “geeks,” so the previous point applies. So I wouldn’t expect much action there.
I think the best hope for a sea-change in Linux usability would have to be an initiative like Google OS, where Linux is chosen to be the underpinning of a new, user-friendly OS by a large company that will unilaterally undertake to create an elegant user interface on top of Linux, much as Apple did from BSD. And just as an Apple user can launch the terminal and get all UNIXy, such a project would still be Linux underneath, and would hopefully still accommodate those people who treasure having half a dozen ways of doing everything.
So this question of Linux user-friendliness can be attacked at its roots, by debating what really constitutes user-friendly, or it can be addressed head-on, by examining what could be done to improve Linux’s accessibility by new users. I’m sure there’s a lot more to say. What do you think, OSNews readers?
Nice dig.
I doubt it was meant as a dig; more of an observation, really. By its very nature, open source software tends to meet the developer’s needs as opposed to what works for the masses. I’ve noticed that in general, projects with the widest audiences (Gnome, KDE, Amarok, Firefox, Thunderbird…) tend to have the most user-friendly interfaces. Less popular or lesser-known projects tend to be more esoteric, and sometimes make no sense at all (RipperX, Ardour, JACK, etc).
For the record, that wasn’t a dig from me either. Personally I love Linux, and I’m eagerly awaiting the release of Slackware 13. Regardless, I still can’t wait for the day Linux has a UI as good as Aqua. From what I’ve seen lately KDE is on the right track though.
the answer to his question is yes
Sorry, I can’t see an inherent limitation. Three things are true:
Linux isn’t supervised by non-technical people, so geeks rule.
Open source development is likely slower (more projects, fewer resources), so it takes more time.
Open source has a long tradition of copying existing technologies (OpenOffice, Mono, GNUstep, etc.), so development often stops as soon as it’s on par with the other OSes.
Companies like Mandriva, Canonical and Novell focus on non-technical-user distributions, as those are core goals of those product lines. I see anyone being able to take the commodity parts and Lego together their own distribution based on different goals as a strength, though. You have non-geek-focused and geek-focused distributions, and this is how it should be.
OSS development speed depends on the project, but in general it seems to be a bit faster. Windows Vista did things that Linux-based systems were doing years before, with lower hardware requirements. The kernel began publicly in 1991 and has exceeded Microsoft’s 30 years of experience in platform design in terms of stability and hardware support. The last development report showed that the kernel is accelerating its development rate each year. OpenOffice went from a twinkle in the milkman’s eye to something quickly matching Office function for function. Creative started drivers for the X-Fi line, then handed source and specs over to the ALSA project; X-Fi drivers are pretty solid in the testing build I’m running, and that’s for all platforms that happen to use ALSA, not just one. There are examples of slowly-developing projects, of course, but in a meritocracy, the better projects tend to have a much faster development rate.
To look at your examples of existing technologies specifically: every office suite has been a copy since the first word processor and spreadsheet applications, the latter being one of the first applications ever developed. Mono is an implementation of .NET, so obviously it’s going to be limited to keeping up with MS’s development direction. It’s not all copy-and-wait, though. USB 2.0 was implemented natively in the kernel before other platforms caught up. Aero came second, and a distant second at that. Bluetooth was standardized across adapters within Linux platforms, yet I still can’t get either a D-Link or Motorola re-branded Bluetooth thumbdrive working right on Windows. The flexibility and open license of the platform make it a preferred environment for developing new technologies.
The platform has its issues, but I see these as having more to do with things like the forced need to reverse-engineer hardware support due to vendor-imposed conditions than with slow development or mere copying of existing technologies.
Then things like VLC and Firefox would be just as user-unfriendly as Linux.
So why isn’t Linux user-friendly? Linux is just a combination of too many different projects, many of them competing with other alternatives, and not all of them intended to primarily cater to end-users. “KDE or GNOME? Let the end-user decide, and throw in XFCE and maybe even E17 in too for good measure!”
All we need is one distribution with the courage and resources to infuriate the geeks. Too many distributions just put together everything, make sure they work well with each other and nothing horrible goes wrong, and then if they’re hardworking they will change the look of the default DE and add some GUI control panel and package manager.
If distributions were done just like Linux is used for PDAs and smartphones, the world would be a better place.
To expand on this idea:
To me, user-friendliness to a large part comes from the computer acting as if it had one voice and one mind, i.e., no matter what the user is trying to do, they know where the computer would put it, in the same way that they would know how late their roommate will be in paying the rent.
Linux, being made by many groups that all come together with an established history of what “user friendliness” means to them, lacks that overarching voice. There are dozens of different ways of doing anything, some of which may or may not work in any given circumstance.
Assembly and customization are the responsibility of the distribution; same with car manufacturers assembling commodity parts and customizing the appearance; same with soup vendors taking the same vegetables and putting their own spin on the cooking and flavour; same with toothbrush manufacturers customizing the look and rigidity of a stick with bristles on one end; and so on.
I just don’t get why it’s somehow special when it comes to Linux based platforms yet no one has an issue with choosing a breakfast cereal or any other product category that offers more than one choice. If one doesn’t like how one manufacturer assembled the car or Linux based platform, look at another manufacturer.
User friendliness?
It depends: who are the “users”?
What do we need more: user-friendly tools, or user-friendly interfaces? Do (should, might) they exclude each other?
Friendly means nice and easy clickety, or friendly means powerful, flexible, lots of options, high customizability?
Are users developers? Are developers users?
Friendly to me, to you, to newbies, to devs, to pros, to grandma, to whom? Here flexibility and versatility can and can’t [at the same time, even] mean friendly.
Friendly as flexible, or friendly as tight-wrapped with limited usage and options? Some can feel cozy moving around in a limited environment, some won’t.
If I am the user, then I say it’s friendly enough. But I’m not the user, I’m just a tiny portion of the user, if we think of the user as a humongous entity comprising all computer users on this planet.
A hope would be that convergence is part of the process; development, polishing, and time may provide a result that most would call user-friendly. And again, it might not. But during that time even the meaning of user-friendly might change enough (given enough improvement and change in user behavior, knowledge, and expectations) that convergence happens faster.
Thanks for saying that for me. ‘User-friendliness’ is not quantifiable because it is very, very subjective. There is absolutely no way to claim a piece of software as universally ‘user-friendly’ or not.
“Often, user-friendliness is conflated with beginner-friendliness, and this is a grave error.”
Yes, but that is exactly what the initial question was pertaining to.
I use Linux BECAUSE it is user-friendly to ME. I use Mandriva w/ GNOME and Fedora 11 w/ GNOME because they are user-friendly to ME. I would never claim that they are user-friendly to anyone else.
I would also like to add a philosophical point. The old saying, “Give a man a fish and feed him for a day; teach a man to fish and feed him for a lifetime,” holds weight for software too, in my opinion. Dumbing down user interfaces only helps new users, and only for a finite time. Eventually even new users become proficient, and holding them back with lame hand-holding doesn’t make sense in the long term.
Make interfaces logical, consistent and efficient. That is the way towards user-friendliness in my opinion.
This holds true for a certain set of applications (esp. server-side software). However, some software is meant for “one-off” operations where you are not all that interested in learning to fish in the first place. X configuration with multiple monitors is such a thing, as is setting up your WLAN (or the rest of the network). Incidentally, this is not a problem of UI design but rather your usual buggy software, which can be corrected by finite manpower.
Exactly, but for most “users” that is “user-unfriendly”, since they don’t see and don’t care how or why it doesn’t work, and they don’t want to spend any time or effort finding it out. That’d be ok, but instead of using whatever they’d find more friendly, they come out and start complaining about how bad it is, and wondering how come this sw, app, os, etc. doesn’t live up to their expectations. It should be understood that if you get something for nothing, then your demands should be requests, your complaints should be reports, and lacks in usability should be amended with your own friendliness and willingness to contribute – if not with dev work or support, then with patience and some effort.
No problem with that; you can make your choice. The fun explodes when people come out with demands based on their own user-friendliness criteria, thinking they can somehow represent the general user public. Also, for a lot of people user-friendliness means that everything works (and sometimes looks) the same as things used to work on some other OS they are accustomed to. Ehh, I can’t see an end to this question, no light at the end of the tunnel. All devs can sanely do is work towards consistency and logical behavior and hope for the best.
User friendliness comes from uniformity. Stuff fitting together, with a familiar feel and interface to all the parts. Simply, if possible.
The problem is that a software “ecosystem” will always be ill-fitting and messy; all that you can expect is that it will work.
All the polished, easy-to-understand computing platforms I can think of have been “cathedral” efforts: NeXT, BeOS, Apple (and for industrial users OpenVMS, BSD).
This is a great article I often refer to:
http://www106.pair.com/rhp/free-software-ui.html
Catchy quote:
I agree. It is a common problem in open source that developers do not even want their programs to work correctly.
As a real-world example: XFCE has a bug, developer confirms that it’s a bug, knows how to fix it (it’s easy, even I could do it), but DOESN’T WANT TO!
The reason for this madness is that the current design is clean and elegant, and any way of fixing it would make the design less elegant and be a “hack”.
Linux suffers from this old Unix idiom (quoted from The UNIX-Haters Handbook):
• “Being small and simple is more important than being complete and correct.”
But worse, Linux also suffers from the new Linux idiom:
• “Since everything is already so big and kludgy and full of useless features, one more hack doesn’t matter for the obscure feature that I, the developer, want.”
It really is the worst of two worlds. No one wants to make programs behave correctly and be done with it because the programs would become too complex. Everyone wants their silly feature in regardless.
So we have complex, big programs that don’t behave correctly.
As someone who doesn’t use KDE (or Gnome or other such GUIs), I would prefer Emacs didn’t make any assumptions about the environment in which I prefer to manage the resources of my computer. I’ve got no problem with it presenting the option to integrate with them, but to turn it on by default? Emacs already provides a paradigm for killing and yanking text (a far superior one: the kill-ring), the fact that it ignores the gnome or kde clipboards should illustrate how silly it is for a DE to provide its own version of one.
Yes, there are users that don’t want stuff fixed because they like the broken version more for (questionable) philosophical reasons. Offering stuff like this as a “preference” absolutely sucks; the Gnome community got that one right. I recommend you read the rest of the linked article as well.
I don’t consider changing Emacs to correspond with the latest popular DEs to be “fixing” anything. Emacs isn’t broken; it just happens not to have enabled by default some UI behavior expected by a certain version of a certain DE that some users have installed.
Emacs has been around decades longer than KDE, and will still be around after KDE has been long forgotten. I won’t speak for the entire Emacs community, but I can say that most Emacs users recognize the latest ephemeral UI notions as the fads they are. Who knows what will be the latest craze a few years from now (though I would wager a guess that it’ll keep getting more Windows-like). This isn’t to say better ideas won’t arise, but if the goal is to change something to fit an established standard, well, the standards had already been around long before these newer projects started.
Clipboard isn’t just a DE thing, it’s an X thing.
My theory is that emacs is/was just too stagnant as a project to make “brave” moves like this. Though times may be changing – they just added anti-aliasing. Party like it’s 1999! You can easily consider the current clipboard contents as last entry in the kill ring, I think win32 version of xemacs does this at least.
If all of us remained sceptical of all new software, nothing would improve. It’s safe to wager that the clipboard is mature technology by now, don’t you think?
I don’t think so.
My missus (who is completely non-technical) runs Linux on two out of three of her laptops:
* Xandros on her EeePC (she loves it because it’s quick-loading and very simple to use for her basic internet needs: web surfing, Skype, MSN, etc)
* Xubuntu + Compiz (she used to have Vista loaded, but after I loaded Xubuntu, she finds her laptop now runs quicker, is more user-friendly and looks “prettier” than before)
* XP (this is a work laptop, so she doesn’t have a say about the software loaded)
Now I admit that the missus wouldn’t have been able to load Linux on either of those machines herself; however, she wouldn’t have been able to load XP either – so we really are talking about someone who just knows the bare basics.
Plus, the fact that I’ve not given her any lessons in Linux and yet she already finds Xubuntu more user-friendly than Vista says a hell of a lot.
So I think the biggest problem facing Linux is:
1/ the myth that Linux is an elitist’s OS of choice and thus isn’t designed for “ordinary” folk.
2/ lack of applications (true, some commercial apps don’t exist, but for *most* PC usage there’s usually a FOSS application).
3/ lack of end-user support (people like to be reassured that there are other people who can help with their “stupid” questions. While there are plenty of mates of mates, or friends’ kids, etc, who can help with Windows queries, there’s not much visible / tactile support for Linux out there).
4/ the fact that it doesn’t come pre-installed (a lot of people struggle with installing Windows, so they’re not about to go installing a whole other OS they have no experience with)
5/ lack of uniformity (do users opt for KDE3, KDE4, GNOME, XFCE, *buntu, Red Hat, OpenSuse… etc? While I personally love the vast array of choice Linux offers, there’s no denying that it confuses many who are not so clued up and/or who don’t have the time/inclination to learn which distro/DE is best for them).
So, to summarise:
Linux has its faults, but no OS is bug-free.
Linux just needs “critical mass” amongst the computer-illiterate. By that, I mean there isn’t currently the demand from end users to push OEMs to pre-install Linux, yet end users need to be exposed to Linux before that demand appears (the catch-22 situation).
So, to summarize:
You make your wife use three laptops, when one powerful laptop running Windows 7 RC1 would do everything she needs.
Gotta love waste.
I don’t make her do anything.
She had a Vista laptop originally. Then she went back to uni and wanted something portable to carry to lessons (hence buying the EeePC), and then got the third laptop as part of her job.
Also, we’re not married (yet).
I think the saying about assumptions is very relevant here (“assumptions make an ass out of you, me and uptions”)
So, instead it sounds like her single Vista laptop could’ve done the trick. But, I guess some people like making excuses for buying more stuff.
And if she ever left you, how much would you bet she sticks with Linux (assuming her next boyfriend isn’t another Linux fanboi)? Face it, the only reason she runs non-Windows machines is because you installed it.
This is a typical “Look at granny using Linux!” type story, where some nerd installs Linux for grandma but without him there to play help desk, granny would fall flat on her face.
You’re making an awful lot of assumptions about people you know nothing about.
Hey, that’s almost like how most people use Windows because it came with their computer.
Just as she would if Windows fails and her favorite child/grandchild isn’t there.
No because (and as stated before):
1/ it wasn’t portable enough to cart around to university lessons
2/ it wasn’t responsive enough (took too long to boot, apps ran too slow, etc).
Hence her buying an EeePC.
Plus, if Vista was really “doing the trick” (to use your words) then she wouldn’t have asked me to wipe it and “put something else on” (to use her words).
Granted she didn’t specifically mean Linux, but I’d have just as quickly wiped it and stuck XP on if she wasn’t completely happy with Xubuntu (a point I repeatedly told her).
After all, there’s no point forcing an OS (Linux or otherwise) on someone if they can’t get along with it.
Isn’t that one of the foundations on which western society is built?
Most people don’t need x, y or z but like shiny new toys.
Besides (and as stated already), one of her laptops is a company laptop (ie she didn’t buy it)
Her EeePC came with Linux preinstalled. So I’m betting she stays with Linux on one of her 3 machines.
How many grannies do you know who can install and troubleshoot a Windows install without a nerdy grandson to play help desk?
At the end of the day, most users choose their OS because someone else chose it for them (be it an OEM or be it their nerdy grandson)
For the record, I’m an OS-geek and not a Linux fanboy.
I just believe that the OS should work for the users rather than the users having to adapt to an OS – which means, for example, for my music production I use Windows and for backing up my data I use OpenSolaris (for ZFS).
So the only thing I push on other people is that same choice. If they install, for example, Ubuntu and hate it, then I’m not going to moan when they switch back to Windows, because they were open to and aware of their choice but preferred Windows.
In this instance, I gave the missus the same choice and she preferred Xubuntu to Vista and XP. I may have physically installed the OS, but she had (and has) the choice to switch back to Windows (and I would have readily done this for her), but she chose not to. Her choice, not mine.
What a huge, unfocused and rambling article. If you’re going to write something that long, it could at least grant some new insight, but instead it relays the old “Linux is less polished and less consistent due to a lack of strong guidance & holistic design” sort of line.
Which isn’t to say there’s nothing to that argument, just that you don’t need to use 9 bazillion pages to make it. The article even contradicts itself quite strongly: first it seemingly asserts that Linux nowadays sees a comparable amount of corporate investment to Windows or Mac OS, which is a very dubious claim to begin with, but later on it acknowledges that much of this investment is targeted at the server side of things, and as such is hardly relevant to a comment on desktop usability.
Anyway, in my opinion there have been some great strides made in Linux usability recently, from Ubuntu most significantly but from others as well. KDE 4, for example, seems to represent a renewed focus on this front for that project, albeit with mixed success so far. I am sure that with Moblin and Chrome OS and others the good work will continue. Where I really want to see improvement is with integration into hardware: to see Dell or whoever really do some hard work to deliver a machine which is well-integrated, with, e.g., DVD playback, 3D, Bluetooth, and printer support working well out of the box.
I don’t think that user-unfriendliness and open-source software necessarily go hand-in-hand. Firefox and the Geany IDE are two counter-examples that I’d point to.
If anyone wants to see “user-unfriendly”, try tweaking the settings in Word or IE – you’ll be tearing your hair out in no time.
Having said that, I’d agree that Linux could be a bit more user-friendly in some ways. For example, getting desktop search sorted out would be good.
On the “plus” side, my 80-something mother uses Linux almost exclusively now, and is really sold on it!
So, it’s getting there.
How can the most widely used OS, with the most diversity and choice, be unfriendly?
Put it in the hands of someone who has no idea what they are doing, who is used to buying computers with Windows or Apple pre-installed, or with a rescue/OEM DVD. Then it becomes so-called unfriendly.
But then the same happens with cars, bicycles, boats, electric tools, cooking instruments, etc…
Almost all the time there is a comparable option staring the user in the face, but since they don't know it and are looking for something else, they go: "well, it's less friendly than the others because it doesn't have the thing that the others have," when it's >> RIGHT THERE <<.
Or they start comparing a game with a 300 million budget to a 2D game someone made for fun in his spare time…
Or you have the others who ask for software from other platforms; they should be intelligent enough to know it won't be ported to GNU/Linux because its developers are incompetent or stealing.
Geeks use and work on and for all OSes; there are more normal users in GNU/Linux than there are real geeks, even in the developer ranks.
Let's not forget that since it's the #1 OS by usage, that makes other OS makers jealous, and some of them, their marketing, and their users are known to make up stuff and lies about others.
Finally, you can't make an OS that is perfect for everyone, nor should you try, because you can try to make something stupid-proof, but the stupid will reinvent itself as more stupid, or change its goals or wants.
//How can the most widely used OS, with the most diversity and choice, be unfriendly?//
What? Linux is *hardly* the most widely-used OS. Outside of supercomputing and web servers, it’s way behind.
"GNU/Linux" the OS and "Linux" the kernel are the #1 most widely used OS and kernel.
Well, not in your false and erroneous data they're not, but in reality GNU/Linux dominates every domain it's in.
Where it’s #1
Nope:
It's #1 on the desktop. (You're only watching sales, not reconversions.)
It's #1 on cellphones.
It's #1 on integrated devices.
It's #1 in communications.
It's #1 in banking.
It's #1 on real trading platforms.
It's #1 in movie rendering.
It's #1 in search engines.
It's #1 in CRM.
It's #1 in e-commerce (eBay, Amazon, etc…)
It's even #1 in game development and servers.
Etc…
GNU/Linux is distributed legally in more countries and in more languages than Windows and Apple combined with Solaris and all the BSDs and every other fringe OS combined… The only segment that is seen as problematic is when you add in all the pirated Windows desktops; then it's still superior, but Microsoft is seen as a lot closer.
It's portable and adaptable and Free Software, and people support and like it and can do as they want with it.
http://www.linuxfordevices.com/
Not like anyone has to inform you of anything or send you a memo about it…
Linux is so incredibly not #1 on the desktop, that I have to think you are just making a perverse argument for the sake of trying to win a debate despite being completely wrong about reality. You can invent whatever statistics you like, but the whole truth remains the whole truth.
Linux is not the most used desktop OS.
Keep telling yourself that. Real data by country and language and usage, and adding all distributions together, show otherwise.
Must be why Apple is diversifying into other areas and Microsoft keeps firing employees… Your definition of desktop, and how you add them up, is inaccurate.
There is nothing to argue about with things like you…
There is no debate with things like you…
No, I am real, I use my real name, and I use real global numbers.
That's your thing: inventing statistics that are erroneous and show whatever.
Truth in the writing of a lying anonymous coward has me so scared… NOT.
GNU/Linux the OS and Linux the kernel are the #1 desktop OS and #1 desktop kernel by usage.
I’m not arguing with you because I don’t believe you actually believe what you are arguing.
No, you're too cowardly to argue with me using your real name, because then you would have to face the consequences of your lies. You don't believe your arguments are worth putting your real name beside them.
So, where’s the proof? You claim Linux is number one everywhere, but you need to back that up.
Let's start simple for you. Show me proof of either Linux being number one in phones, or being number one on desktops.
Good luck!
In all the stories per week you personally refused to publish and discarded without reading them…
I don't make claims, and you know that very well.
I use real global data, confirmed by at least 5 different real sources, that say it's #1.
No, it's already done.
But I don't need to do the simple research you can't do, or show you anything… Just reach into the archive of submissions.
What's the #1 phone by usage?
What's the #2?
What's the #3?
What's the #4?
What's the #5?
What OS do they all run?
In what countries do they ship?
http://www.infoplease.com/ipa/A0933605.html
I don't need luck with hamsters 😉
So you have no proof, and you’re talking out of your bum.
News at 11!
http://www.infoplease.com/ipa/A0933605.html <<<
You're the one with all those ready-made proofs that invalidate me and my comments in one single slide.
I showed one link…
You wanted to join this futile discussion as Thom_Holwerda; show your proof that invalidates it…
We've got all-day news channels and the internet now…
Did you know that Rotterdam is considered the 6th largest port:
http://www.infrastructurist.com/2009/08/11/ranking-the-worlds-large…
A list that shows worldwide cellphone usage per country is your proof that Linux is number one in cellphones?
You’ll have to do better than that.
On top of that, remember that YOU are the one making the claims, and as such, YOU will have to back them up. The burden of proof is on you, not on me. And remember, cell phones is just one market you claimed Linux was number one in. If you can’t even prove one, what about the countless others?
You said I was wrong. YOU said I made claims and that they were out of my behind…
So far you're wrong. YOU are the one making claims, and you are the one with no data to back up your own side.
It’s your turn to show your data.
I never said you were wrong. You need to read more carefully.
All I said was that I wanted to see some proof backing up your claims – nowhere did I say you were wrong.
But it’s pretty clear now that you have no proof.
You're right; I read back and it's my misunderstanding. Sorry.
I don't make claims; replace that word with "sentences" and we will be in agreement.
No, what is clear is that I have no intention of sharing any of them at this moment in time. I am impressed by the numbers I have seen outside the US, in South America and in Asia particularly, but they are quarterly numbers; they could be a one-time thing, and Symbian is still #1 overall. But GNU/Linux was #1 in sales last quarter globally by usage. If you don't have numbers of your own that contradict them, I don't see the point in showing them.
I stand by my sentence: "How can the most widely used OS be unfriendly?"
Someone said: "Linux is way behind."
Behind what?
An insecure OS? An OS that is only allowed on certain hardware? What do you mean by "behind"?
User-friendliness starts with "user", a human, and humans happen to be very, very diverse, so we can discuss this matter for the next 10 years to come and it will not bring anything.
Win/Lin/OSX: there is not that much difference once one gets used to one or the other. Pick the one that does the job.
“Someone” … me, a few posts above? Lintards can’t pay attention to save their lives. Thus, invented dreamland statistics.
As far as Linux on the desktop … please:
http://linuxhaters.blogspot.com
I am 100% sure your not "someone", that your name is not rockwell or duke leto (a fictional character in the Dune universe).
I guess we don't pay attention to lies, slander, libel, and defamation with no connection to reality.
What statistics would that be? I personally spoke of rankings by usage. What, your usual scripted nonsense is not adapted to my real comment yet? What a surprise.
It's GNU/Linux; you're discussing the OS.
More libel, slander, and defamation; hater nonsense, hosted on a blog running on GNU/Linux.
I guess I missed your very large submission/accepted contribution here on OSNews that shows your point:
http://www.osnews.com/user/uid:3004/submissions
>>> No data found. <<<
Figured as much…
First, retard, it’s you’re, as in “you are.” Second, oooh, you called me out for not using my real name on the Internets. You’re such the detective. Bravo.
Classic freetard response. MyStatsAreTrueYoursAreFalse(TM)
Um, the bullshit “leading OS on the desktop” statistic? The most blatant lie of them all? What a surprise that you didn’t notice that.
Sorry, dipshit, 99.9% of technical folks mean the OS when they say Linux, except for you and Richard Stallman's ballsack.
Um, it’s a blog about Linux ON THE DESKTOP. Not on servers. So, hosted on a GNU/Linux server is pointless to the content of the blog. What a surprise that you didn’t notice that. Freetard.
You got me. My self worth is not connected to any submission/accepted contribution here on OsNews. Sad that yours is. We don’t care.
You're really sure that matters? Not for me.
No your the retard who think I am gonna put any value behind anything you have to write or say.
You keep talking about statistics…
No need to notice it on my part; I wrote it. Since you are the anonymous liar who uses statistics and scripted answers, your usual answers don't work on rankings by specifics.
Another invented statistic, with more of your usual personal attacks, with libel and slander, as an anonymous coward… I am sure you would not repeat those words, or admit they are yours, in a court of law.
I dare you to repeat them using your real name under oath…
No, it's a blog about GNU/Linux hatred, filled with lies, libel, slander, defamation, and fake and false technical data, due to the instability of the main writer. Sounds like you.
No, what is pointless is pointing out the obvious Comedy Central, Stephen Colbert, Colbert Nation style of writing.
That's because you're nothing, hence you can't be worth anything… You have no respect for your family and your own name; why would what you put your own self-worth into be of any value to anyone at all?
Who is this "we"? Your multiple personalities, your other fake anonymous nicknames/accounts, or are you under the impression that because you're on drugs and inebriated you're more than one individual writing at its lonely keyboard?
You already lost; you just don't know it:
” But my point is merely that they rounded every pup up into that schoolhouse because they fancied that everyone should think and talk the same free-thinkin’ way they do with no regard to station, custom, propriety. And that is why they will win. Because they believe everyone should live and think just like them. And we shall lose because we don’t care one way or another how they live. We just worry about ourselves. ”
You have nothing of value to say and are not someone of value. This ends my reply to you under this thread, under this article.
//No your the retard who think I am gonna put any value behind anything you have to write or say. //
Again with the “you are” vs. “your” but whatever.
Yet you took, again, several minutes to respond. You’re an unbelievably hypocritical dipshit. Have fun living in your parent’s basement with your 1970’s OS.
If you dedicate a blog to your hatred of Linux (or Windows or OSX or…) you're a moron with too much time and too little life.
Who said it's my blog, moron? I have nothing to do with it. Oh, and the 3,000-plus comments on a blog article that is WEEKS old say nothing about the utter failure of Linux as a desktop OS. Got it.
The fact that Linux can raise the ire of 3000 Windows fanboys speaks volumes about the "failure" of desktop Linux. "First they laugh at you…"
I’m sometimes a bit baffled why Windows users bother about Linux in the first place. If they are so content with their OS, why bother flaming others that choose otherwise? I recall we had various MSX/C64/Amiga “factions” back in the 80’s, but we were in elementary school back then. Seems the childish spirit lives on in the Wintendo scene…
Not me. It seems you need to brush up on your reading skills.
It says exactly as much about Linux's failure on the desktop as all the thousands of anti-MS comments by Linux fanbois ever said about Windows' failure on the desktop. That is to say, absolutely nothing.
(Note: My post might anger some people. I apologize in advance.)
I had a similar intensive discussion with my computer tech buddy. We all agreed on one thing: operating systems with Unix or Unix-like architecture are more suitable for workstation/server and industrial embedded equipment applications than ordinary home desktop application.
Methinks the next user-friendly open source OSes will not have a Unix(-like) architecture. But Linux and BSD have indeed contributed a great deal by enriching a firm foundation for the open source movement and its projects. That's for sure.
Linux and BSD are perhaps the indirect motivators for open-sourcing more mainstream Unix systems, or similar Unix-like ones, in the future; possibly enriching a brand new generation of Unix(-like)-inspired operating systems when new computer technologies arrive.
I currently have mixed feelings about today's open source home desktop: it's good that those projects grow firmly and steadily, but the fact remains that Linux and BSD are better suited to workstations/servers.
Oddly enough, what you’re talking about is BeOS. Unix inspired and very consistent.
Moving on…
A couple of points:
1) The people funding Linux/*BSDs are funding developments in the server, workstation, and embedded spaces, so yes they are better in those areas. If someone made desktop utilization a priority you would see an improvement. For example, Apple has taken a bunch of open source technologies and made a very polished OS.
2) Hardware manufacturers don't necessarily play ball with the FOSS community, or they release wonky implementations of confusing standards. There are two great examples: ACPI in laptops is notorious for being wonky, incomplete, or both, and Broadcom blindly hates FOSS OSes. Once laptop manufacturers get their product running with Windows, they're happy. It takes a lot of time to document hardware, so most companies don't. (Broadcom doesn't see the profit in helping out FOSS OSes, or they just do it for fun. I'm not sure which.) The amount of stuff that does work with Linux or BSD really is a superhuman accomplishment.
3) X11 was designed for remote dumb terminals. X windows gets the job done, but it could be better designed for single user desktop use. It also has a lot of momentum, so it’s not going away anytime soon. Finally, the kernels themselves have deficiencies, like the DRI situation in FreeBSD, that other kernels, such as NT, don’t have, which goes back to point number 1.
BeOS was more of a POSIX-supporting operating system with command line tools. BeOS and Haiku are not really Unix-inspired per se; more like POSIX-respecting.
From a Haiku development member: BeOS was only “unix-like” in that it shipped with a bash shell and a full complement of commandline utilities. It also sported a relatively good POSIX compliance layer (Haiku is much better even)… but these do not make “UNIX”, and that’s where the similarity to UNIX pretty much ends.
http://www.haiku-os.org/community/forum/what_makes_beos_so_special_…
Linux is user-friendly, but IT IS NOT WINDOWS.
Newbs automatically try to do things in Linux the same way they did in Windows, which of course doesn’t work. Then they look online for a HOWTO, which might be outdated or written for Gentoo (or written for headless servers), and it involves use of the terminal.
Case in point: Samba. There’s always people asking how to set up file sharing in Ubuntu. Even the Ubuntu documentation goes through opening up smb.conf and making manual edits, but in reality all that’s needed is for you to right-click the folder you want to share, go to Properties, click the “Sharing” tab and fill in the details. Easy. But people seem to think it’s hard to set up Samba even when you run Gnome.
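For what it's worth, the share definition that right-click dialog generates behind the scenes boils down to just a few lines of smb.conf anyway (an illustrative fragment; the share name and path here are made up):

```ini
[Public]
   path = /home/alice/Public
   read only = no
   guest ok = yes
```

Point being, the GUI path and the config-file path produce the same thing; the terminal HOWTOs just make it look scarier than it is.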
Maybe it’s because newbs get told that Linux is hard, so they don’t bother trying the easy way? It’s like when they ask “How do I set up my HP printer on Ubuntu” – they’re expecting that it requires them to compile a driver from source code. They’re not expecting it to be as easy as “Press the ‘on’ button and you’re ready to print”. I’ve even seen newbies download the HPLIP source code and ask how to install it, without even trying plug ‘n’ play!
That, a million times that.
Hell, I picked up my friend's Mac and I was daunted at first, because even though I was using an interface with a few elements of similarity (KDE 3 with the menubar on top), it still wasn't the same; and then when I started digging, I stumbled through the mix of BSD and Apple utilities making up its userland. Then I got a Mac for my laptop, and whenever my brother (who has only ever used Windows to any extent, and could probably deal with the professional side of it even on WinServer, where I get lost in Windows) looks at it: "why do you keep complicating your life?" Same reaction as he has with my Linux "desktop" box.
Linux's unfamiliarity wasn't a problem for my completely computer-dumb grandparents; they had struggled to get used to Windows too, since that was just as new to them. There's no such thing as a friendly computer to them. I doubt anything is inherently user-friendly; we just get used to it.
I spent a while trying to figure out how to install downloaded software on an OS X machine before double-clicking the .img and copying the contained icon to the Applications folder. It was too simple where I was expecting some kind of more complicated install wizard. Nothing but a slight speed bump the first time, though; once you get it, you're all set.
The reason for this is that Linux distributions are collections of different, autonomous pieces of software that don't always work well together, or don't want to work together. That's not their fault.
It's the fault of the people putting it together: the distributors. They don't seem to realise that they have to put in something more than just compiling and packaging up the latest releases. They have to keep a list of what gaps need to be filled, have a long-term technical vision for how sound will work better for people and what to use, create a unified set of graphical administration tools of the kind Windows and Mac OS users take for granted, and start bringing things together.
Administration tools are a good example because they bring different parts of an operating system together in a way where it becomes obvious if things don't work well together. We've belatedly had things like NetworkManager, but quite frankly, taking command line output and parsing it is never going to bring the integration and reliability required. YaST is possibly the best administration tool around on Linux, but it is incredibly slow, and why its GUI interfaces aren't integrated properly with KControl or Gnome's Control Centre I have no idea.
The secret, once again, is developers. There needs to be a large set of easy to use and reliable APIs that developers can tap into from installing a service to reading log file output to enumerating network interfaces and settings. I did think D-Bus would have largely filled this gap by now but nothing has happened. Once software components start talking to each other things get easier and you can start actually using a GUI to manage your computer.
I don’t think that the average Linux distro anno 2009 is user unfriendly anymore. The biggest problem facing new users is not the way Linux works. It’s the way they think it should work.
Linux newbies (I once was one of them) try to assail Linux with Windows expectations and when that doesn’t work, Linux sucks. It’s true that you have to factor in stuff you don’t need to do on Windows. With Linux you need to make sure the hardware you are going to buy has an in-kernel driver. You have to make sure that software runs on your distro. You have to trust the package manager and stop trying to find installers. You have to let go all those programs that don’t have a Linux port. But if you do accept these different ways, Linux works hassle free.
The problems arise when you try to get that almost-supported hardware to work. Or when you don't want to give up that piece of Windows software and start mucking about with Wine and its derivatives. Or when you can't find a deb or rpm for a package and you get cocky and, despite lacking the depth, delve into the ./configure, make, make install magic.
Linux is easy when you don't leave the ordinary path. When you decide not to heed that advice, you are probably going to find yourself deep in the woods. That might be the biggest problem with Linux: nothing in Linux puts up roadblocks if you try to get into trouble. Linux willingly gives you rope when you want to hang yourself.
Users with a pretty good sense of their capabilities don't have to fear Linux. It doesn't simply go poof when you use it in a normal way. It's when you are inexperienced and try to hack on stuff that isn't ready yet that things get ugly. Especially when you also expect that hacking to work flawlessly.
Unfortunately, my needs require that I use a Tablet PC. I’ve been working with Linux off and on for well over 11 years. I always end up going back to Windows for one reason or another.
This weekend, I decided to try Linux again. I wiped my Tablet PC of Windows 7 RC and installed Fedora 11. I went through the now-familiar routine of setting up the machine to recognize my Wacom digitizer, etc. Then I found that the digitizer calibration is not remembered after suspending and resuming (I knew about this bug; I submitted a bug report to Ubuntu when I first discovered it, and I was hoping it had been fixed). It worked in a previous version of the wacom-tools, so I installed Fedora 10 and built a custom RPM of the version that works. I also installed custom RPMs for GDM (from Fedora 8) so I can log in with the stylus. So far so good.
Then I get to audio recording. Right now I lecture using MS OneNote, and I record my notes and the audio. I thought I could do the same with Linux, using Xournal and a generic audio recording program. No go. For some reason, my audio chipset will not record audio. Sound works fine, but nothing I’ve messed with allows me to get audio recording from the built-in mic to work.
Long story short, I went through Ubuntu 9.04, Fedora 10 and Fedora 11 this weekend. None of them worked for my needs. Probably because my hardware is not 100% supported.
So, so unfortunate. I love messing with Linux. It works great for the desktop, but not for the Tablet PC… And here I am again with Windows 7 RC.
Aargh…. I just found the answer. I’ll post here in case anyone else has the same problem. For the Fujitsu T5010 Tablet PC, to enable recording from the internal microphone (using the Realtek ALC269 audio chip), put the following in /etc/modprobe.d/alsa.conf:
options snd-hda-intel model=fujitsu
So simple. And yet so frustrating.
On what planet is that “simple”?
And how did you come across the solution?
Take two people with zero computer knowledge, give them two empty computers, give Ubuntu to one and Windows to the other. Come back after one week and see that the one running Linux manages to do his stuff, while the one running Windows has not yet got anything to work except viruses.
The user-friendliness of Windows is very fragile. It works only when you have a driver CD, a software shop nearby, a lot of money, and some security knowledge.
Linux might be limited to compatible hardware and software in the repositories, but if you stay within those limits, it’s actually easy to use, and it’s getting easier every day.
Actually, after a week they’d both be getting things done just fine but make them switch and they’ll be like fish out of water.
Wow, you’re an idiot.
Let us all bow to your superior debating skills.
I do think that a large part of the problem with Linux is the lack of consistency.
One commenter mentioned that there are many open source projects (most of which are available for linux) which are feature equivalent to commercial software available for other OSes. However, the problem is that they often end up imitating the other software, which means that a lot of the software for Linux has different ways of doing things, different UIs etc. which leads to inconsistency.
You also have the problem that if a non-technical user sits down and learns how to use "linux" for simple everyday things, they would be quite confused if they were to use a different person's "linux" computer and found that it was quite different. While the great variety of distros is a blessing, at the same time it is a curse.
Lack of consistency is a red herring. On Windows you have little consistency and people seem to get by okay. On OS X, the poster child for UI perfection, you have a few tons of inconsistencies. Consistency is nice but it is not required to be *sufficiently* friendly and it is not what’s holding Linux back.
What hampers Linux:
* Hardware vendors who don’t support it themselves, forcing extra effort by the community.
* No one/few people selling it to people who just want a computer.
* Limited acceptance from joe-average IT guy. Grandma can’t take her Linux box to Best Buy to troubleshoot a problem.
These things are all medium grade issues. The biggest problem is one word: polish.
Linux software is mostly fine. It's as good as or better than the crap you get on other platforms, with a few notable exceptions. Polish and integration, however, are total crap compared even to Windows, much less OS X.
Ubuntu looked at first like it was trying to tackle this issue, but they succumbed to the same downfall as everyone else before them: Just package everything and it will be fine!
What we need is for a distributor to finally say “You know what? Screw Linux. I am not making a Linux Distribution! I am going to make Foobar OS!” And then they package just exactly what they need and they make *everything* they package work with *everything* they package. Make it all smooth and seamless and, if you can’t, don’t include it!
I don’t want to see a dialog window with the same ALT shortcut key for two different buttons (thanks intrepid!) I don’t want to have six different tools for anything. No, wait, to be fair I do want those things, but I want to see a distributor not want those things. Do not give way under pressure from users to add this or that application. You know what? If they need ’em that badly they’ll compile them themselves, or find someone who did. Your job is to make sure *your stuff* works beautifully the first time and every time, and upgrades cleanly, and stays secure. Once you’ve done that perfectly for a year you can start adding more perfect, beautiful software. Or, better yet! Publish a SDK describing how to make an application integrate beautifully with your system and let the upstream developers do it for you, or let some wannabe middleman package it for you.
POLISH! It's all very well and good to create ~/Pictures/ for me, though please don't do that, but it would be nice if picture apps saved there by default. And wouldn't it be nice if I could *change that location* in my preferences, and then picture and image apps switched to saving to the new location by default? I dare you to try this right now, today. Go to your awesome GNOME desktop apps and find out which ones, when first installed, use these fancy folders by default. Hint: On most distributions they won't.
Suppose they did, though. AWESOME! As the system administrator, where do I control whether these default locations are used and or what they are? Can I set a system wide policy that says “Pictures go in ~/IMAGES/ instead of ~/Pictures/”? If I set it, does all image manipulation software that is subsequently installed, or already installed, use that location by default? If I apply the change after the system is up and running for a while will ~/IMAGES/ get autocreated or did someone stupidly rely on skel for that? POLISH!
This is one pithy example here, please do not post replies telling me how my example is flawed and pictures work just fine on this or that distribution. I don’t care! I’m just trying to get you to see the sort of thing which actually hurts Linux user-friendliness and adoption. It’s not ever-more-limited GUIs with fewer and fewer options! It’s not making all the OK buttons have the same icon. It’s not having or not having toolbars. It’s (say it with me now) POLISH!
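Incidentally, the freedesktop.org xdg-user-dirs spec is the mechanism that's supposed to address exactly this: the locations live in ~/.config/user-dirs.dirs, and well-behaved apps read them from there. A minimal sketch of how an app could resolve the Pictures location (the parsing helper, sample config, and paths are mine for illustration, not any particular desktop library's API):

```python
import os
import re

def xdg_user_dir(name, config_text, home="/home/alice"):
    # ~/.config/user-dirs.dirs holds lines like: XDG_PICTURES_DIR="$HOME/Pictures"
    match = re.search(r'^XDG_%s_DIR="(.*)"' % name, config_text, re.MULTILINE)
    if match is None:
        # No entry configured: fall back to a default under $HOME
        return os.path.join(home, name.capitalize())
    return match.group(1).replace("$HOME", home)

# An admin who wants "Pictures go in ~/IMAGES/" would just change this one line:
sample = 'XDG_PICTURES_DIR="$HOME/IMAGES"\nXDG_MUSIC_DIR="$HOME/Music"\n'
print(xdg_user_dir("PICTURES", sample))  # -> /home/alice/IMAGES
```

The commenter's complaint stands, though: the spec only helps if every app actually consults it, and most don't.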
My first reaction upon reading this was: WTF! He must be joking.
On second thought, it occurred to me that he was right, well, sort of. Neither KDE/Gnome nor OS X are perfect GUIs when compared to "what people are used to".
If people are used to the whole Windows 95 – 98 – 2000 – XP – Vista and now 7, the GUI of both linux and OS X will be troublesome to understand.
But hold on! Most people who have used Windows will actually feel more at home using KDE / Gnome than using OS X….
So let’s get it out in the open, the OS X GUI is not user friendly! Biggest problem ever: Finder (press enter to rename a file? You have got to be kidding).
Second biggest problem: Window management. The amount of clutter an OS X desktop has once you have more than 2-3 windows open is amazing.
Oh well, to each their own – I know plenty of people who love OS X and plenty who despise it so much they will never ever buy a Mac – and we are talking people who really dislike using Windows but see no other viable alternative.
Enter Haiku-OS, ya baby! The Be lives.
Canonical (i.e. the company behind Ubuntu). I think we can easily say that they have had the most interest, and have put the most programming hours into making Linux more user-friendly, of any company in the past 2-3 years. I think if there were a few more companies like Canonical, we might start to have some really nice user experiences (without an expert doing the setup).
“They make nail pullers that are much easier to understand, but they don’t work as well. They take longer and damage the wood more. But the thing about the slide-action pullers that I like is that even though they take a few minutes to learn, once you know the trick, they save you several minutes with each nail.”
That’s it in a nutshell. Do we really /need/ our grandmothers to be able to forward LOLcat emails under Linux? (BTW mine does – with LinuxMINT.)
Unless the very idea of proprietary software offends you (and I hope you’re not Mac users), let people use what they’ll use. That’s freedom.
User interface research, design and implementation cost TIME and MONEY, and require skilled personnel.
None of which Linux/open source has in that area. That's not to say the contributors to open source aren't talented; they are… but talented programmers != talented UI designers.
Apple would spend hundreds of millions on UI design and engineers. Open source = a bunch of C/C++ coders.
There's only so much you can do with just code.
Decisions are made by programmers and not by marketing experts.
Here is the explanation.
This is the same for the WordPress developers, who made the foolish decision to enable revisions by default and so annoyed the whole Web (including hosts).
Thanks for this excellent article.
The OS should not matter; it’s the programs a user runs. My wife jumps from Linux to Windows and vice versa because I installed the same apps on both.
Actually, she finds all computers user-unfriendly.
The problem with Linux is that so much choice is not always best; my friend, who is a developer, is always complaining because there is no standard API for Linux. I guess that is the main problem.
Progress has been made – KDE and GNOME run each other’s programs – but much remains to be done.
Installing Linux is much easier than installing Windows (or OS X); there is hardly any pre-installed Linux to be bought, and the install has improved dramatically over the last few years.
Fact remains, what is “user-friendly” depends on the perception of the one called the “user”, and those are all humans with all their differences, so this discussion will never end.
Jan
Have you stopped beating your wife?
http://en.wikipedia.org/wiki/Loaded_question
I have always thought that X was the biggest problem with the *nix desktop. If Chrome OS really is ditching X for a proprietary windowing system, it could be the best thing that ever happened to linux.
As far as X desktop environments go, some of them are ok, but none of them are inspired. Whether you like OS X or not, it is the best GUI for both novice and expert users. It doesn’t get in your way, and if you need something, it’s there.
People fall back to the Windows model, because that is what the majority of users are used to. That’s why we have lackluster desktops from Gnome and KDE.
Every time someone creates a really good GUI, it’s because they broke the mold and tried new ideas. Amiga, NeXT, Be…..all have big followings to this day, because they were great. We haven’t had a great X desktop yet. Hopefully Google gets it right, because if they do, Chrome OS could be a legitimate third alternative to Windows on the mainstream desktop.
I fail to see what X has to do with user-friendliness. If I replaced X with a framebuffer what exactly do I gain that a user will care for?
You seem to think that X dictates something about how apps that happen to use it for a graphics layer look, feel, work, or something. This assumption is at best a mistake and at worst vile propaganda.
If you come back with “3D effects and compositing” or anything related to that I am going to simply hand you a dunce cap and point to the corner. So, don’t.
First of all, X is a hot mess. It’s 20+ years of trying to be everything to everybody, and as a result has way too much code in it for a windowing system. Don’t go on a tirade about how X is not just a windowing system, I know what X is.
A core group of developers starting with a clean slate can come up with something much, much better than X.org. It will be much smaller, much more focused, much more defined by that small group of coders, and will integrate with the host os much, much better than X ever could.
Second, guys like you need to get out of your parents’ basement more often. You take this stuff way too seriously. Really? You get that upset because I made a disparaging remark about your beloved X.org? Do some homework and give me some solid reasons why that aged, fat dinosaur is preferable to a clean, new design. If you’re so worried about your desktop eyecandy, don’t be. Apparently it isn’t that tough. Amiga OS 4 and MorphOS integrate that crap into more cohesive GUIs than anything X provides, with much smaller libraries of code and a much smaller impact on system resources in the host system.
Loosen your underwear and get a life.
I really don’t think you do.
A small group of developers could come up with another graphics layer. In fact this has happened several times already. The reasons we don’t use them are many, including that they mostly have no compelling advantage over X.
Again I ask, what would replacing X actually get you? Please do not throw in generalizations and hand waving about ‘cohesiveness’. X doesn’t have a jot to do with that (nor does Quartz, nor does GDI).
What leads you to believe I was upset?
I will always object to you and other people who frequently and loudly complain about X and cite it as the cause of a wide variety of problems which have nothing to do with X. Here’s an analogy for you: “I don’t like the way cars pollute, I think we should use square wheels instead.” Can you see that changing the type of wheel does not fix the problem that I have, namely pollution?
Why don’t you do some research and give me some solid reasons why new code is better than old code in general. Old does not mean bad. GCC is pretty old, you know, and yet it remains one of the best compilers in existence. There are serious advantages to a well known and well tested codebase with broad support.
Here, I will get the ball rolling: Everyone already supports X. Okay! I have given you one actual, real, practical and important reason not to get rid of X. Can you give me one why we should? If you can I’ll give you another and maybe we can get somewhere.
I don’t give a damn about desktop eyecandy. People who want to throw out X often cite eyecandy as a reason to it, often citing OS X as an example of how a new GUI layer allows for good eyecandy. What they don’t understand is that Quartz does not have something radically special and different that makes its eyecandy smooth and pleasing, it just has a couple of specific features X mostly doesn’t have *yet*, but which are being added.
Why would you want to throw out X in favor of a new, broken, badly designed and ill conceived replacement just to gain a feature you are getting anyway?
Like I said, if you are going to reply and claim that we should throw out X because we need more eyecandy… here’s your dunce cap, go sit in the corner. If you were not and are not replying with that claim then this does not apply.
I will say this again, slowly, so that you will get it: X is just a graphics layer, it is not intended to provide a “cohesive GUI.” Whatever that is. Few people and almost zero applications use X drawing primitives directly. Replacing X with another system that does not enforce policy will not gain you one thing. Not one! Adding policy to X would be a better solution, but nobody is going to agree on that.
Why do you assume that I have no life merely because I don’t agree with you? Since you disagree with me, and replied to me, mustn’t I also assume that you have no life? I think this follows.
I’m still waiting to hear why it is X needs to be replaced.
Ok, besides the fact that everyone is used to X and every distro, every flavor of *nix, has an implementation, what is the problem with starting out with a clean slate? X has code that dates back to a time when C compilers probably didn’t support passing structs as arguments by value, among other things, which contributes to the brain-dead way Xlib handles windows and displays.
Seriously, would you like to tell me that the documentation for X is good, that X provides a clean interface to the coder for working with windows and displays? I think you are just comfortable with it.
Again, why would a clean slate be a bad thing? A linux-based os meant primarily as a desktop does not need X’s feature set. Wouldn’t it be nicer to have something smaller, better documented, more sanely designed, and again, something that has less impact on the end user’s system resources?
I’m sure you’re quite intelligent, and know what you’re talking about when it comes to a discussion of the relative merits of X.org (my apologies for my terse reply yesterday, I had just woken up and was cranky – no excuse for being a jerk though). I’m not saying X has to die. X isn’t going to die anytime soon. I just want to know why a new windowing system designed without 20+ years of legacy hanging from its neck is a bad thing – provided it supplied the features needed for a modern end-user experience, and achieved the goals of being developer, end user, and system friendly, which I argue X really isn’t.
So maybe I’m guilty of blankly stating a preference for a clean slate design over X without listing all the reasons that this would be a good idea. I’ll admit that you are probably in a better position than I am to intelligently discuss the relative merits and shortfalls of X.org. I still ask, what is the problem with a new system that achieves the goals I expressed? OS X certainly provides a nicer end user experience than FreeBSD running X. Why would a linux-based solution (Chrome OS) with a new, proprietary windowing system be such a bad thing?
The fundamental answer goes like this:
feel free to write one!
We have a good starting point already (Wayland), and quite a few are enthusiastic about it. Interested parties just need to put their money where their mouth is and start hacking on it, to at least port *some* toolkit to work on it.
I don’t think anybody has stated we need to stick with X forever. X just doesn’t seem to be broken enough to warrant pouring actual development money on the alternatives.
^ this, precisely.
Of all of X’s inherent flaws (and unfortunate tradeoffs), which no one ever mentions in these little discussions, none of them is so bad that it’s worth it to go back to square one.
I doubt Linux will be the OS we’re all using in 20 years and I doubt X will be the graphics layer we’re using in 20 years. Neither is the be-all or end-all of its field. When they are replaced it will be because there’s something better that’s working, not because we hate them.
Your argument is flawed. Some code in X is old, therefore X is probably bad. Seriously?
I asked *you* for the *advantages* of a clean slate. Go ahead, I’m waiting.
A new system does not automatically have better documentation. If the documentation isn’t good, fix the documentation. That will be faster, easier and achieve the same effect.
Few programmers are required to deal directly with X or Xlib. We can argue about the relative merits of how X does things vs. other systems vs. a theoretical replacement. A different way might be nicer for you, or maybe for everyone, but do we actually gain any functionality? Efficiency?
Besides that, since when were we talking about APIs? I thought we were discussing fundamental architectural flaws that can only be fixed with a completely new system.
Which of X’s features does Linux for the desktop not need?
Wait, let me answer that for you: Network transparency. That’s what you mean by X being a big mess, even if you don’t say it. That’s the first thing all of your ilk complain about. This is the primary way I know you don’t know what you’re talking about.
You imply X has an insane design, but you don’t cite specifics and you don’t offer evidence. You cite documentation again, but a new system is likely to be *less* well documented, not more.
Is it your contention that X consumes more CPU or RAM than necessary and that this cannot be fixed without entirely changing its architecture? I understand that a lighter weight graphical layer can be created, and that’s not a bad thing to want. In fact, so far it’s your only legitimate request. I don’t know that we’d get something with the power of X into a significantly smaller footprint than we can already get X, but it’s a worthy goal. The real question is: Is being a bit lighter (combined with whatever other advantages you imagine we’d reap) worth the trouble?
A new system isn’t a bad idea, but building one would be a bad thing. You decry X’s 20 years of legacy, but in the real world it does very little harm and a whole lot of good. People can, and do, run Linux with a GUI without X right now, today. There are a handful of ways to do it. None of them are as good as X. Maybe one day one will be as good, and maybe then we’ll all switch. If so, it will be on the merit of the new, not any theoretical disadvantage of the old.
Again, you are confusing the issue. I believe now that you are doing this unintentionally, so I will go over this again.
OS X doesn’t get a nicer user experience from Quartz. It gets its nice user experience from a well formed HIG, a good toolkit, good documentation on said toolkit and a single provider integrating the base system components. Quartz contributes two small things:
* Easily make PDFs from anything
* Smooth and flicker free resizing/video playback/etc
The former is a small advantage in overall user experience, though it is nice. X could be made to do something similar, but doesn’t. It’s a small deal.
The latter is entirely possible with modern X. The fact that it has not been doable until recently is irrelevant. This is an example of an improvement that X can grow which does not require a ‘clean slate’. It is my contention that there is no feature of a raster graphics layer which X cannot grow given developer interest, and thus that there is no technical reason to throw it out.
Yes.
X is one of those few things we can agree on. At the moment I have to worry about at least two desktop environments, at least two toolkits (in practice many more), many different filesystem layouts, many sound systems, etc, etc.. But, at the least, I don’t have to worry about the basics. Imagine if when running KDE *no* application not written for KDE would run. This is precisely what you’re proposing: A new graphics layer means apps and toolkits need to be ported. Until they do you have a completely incompatible environment. Once everything is transitioned, if it happened, you would have… the same bad user experience as X, minus (maybe) some little things which can be fixed anyway.
How many years did you just waste?
You keep asking for why a clean slate would be bad. Are you familiar with this?
http://www.joelonsoftware.com/articles/fog0000000069.html
This article is entitled “Things You Should Never Do, Part I” and goes into some of the reasons why you shouldn’t throw out and rewrite from scratch. Start with that as a reason to keep X.
All that said, there *are* some advantages to a completely new approach and design, as Be OS proves. What you must remember is that Linux is not Be OS and never will be. You can give people a new graphics layer that’s better than sex, but it won’t get you a nicer experience if the toolkits and DEs remain fragmented and incompatible. You can try to introduce a new toolkit, too, but be prepared to drag developers kicking and screaming no matter how far superior your toolkit is (and woe betide you if it isn’t clearly superior!)
Chrome might do well, but it will be because of its integrated environment, apps, services, etc. and not because of the minor detail of whose code draws to the screen.
Fair enough. I could tell that I shouldn’t argue with you halfway into this, because you are obviously more knowledgeable than I am in this area, and I’ll just make myself look more and more stupid if I persist.
When I talked about a clean-slate design, I was thinking along the lines of leaving GTK and Qt, and every other toolkit, behind forever. Not for linux, just for Chrome OS. If Chrome OS is just another distro, big deal. If it’s going to be a whole new system that uses the linux kernel and the GNU userland, that could be very interesting. If they manage to get a deal with a hardware vendor, I think it places them in a position to gain users in a way that linux never has and never will. Again, I’m not suggesting that the linux world follow suit, but what is so wrong about a new system that may actually provide a better end-user experience? (And no, I am not blaming X for providing a “bad” end-user experience.)
You mentioned Be. Exactly. Also MorphOS and AmigaOS 4.x come to mind. All three of those systems boot to a usable desktop in like 10 seconds, look good, and have small memory footprints. I find systems that behave like that very compelling. I’m sure that I am not the only person who would find such an operating system attractive. Can we do that with X? I don’t know. It just seems like my linux and FreeBSD systems in 1996 were a hell of a lot spunkier than any Ubuntu distro I’ve tried in the last couple of years, and that was on much slower hardware.
If Chrome does better than Linux in general it will be because of its integration and polish and, as you say, contacts with OEMs. Google could get that using existing toolkits and X. If they don’t it’s their loss, spending a lot of time and effort that they probably don’t need to (but isn’t it neat?)
Look also at SkyOS and Syllable for systems not even using GNU userland but using the Linux kernel. They don’t use X either, but they could. Their pleasant environment is due to having a single vision for how things are put together.
Nothing is wrong with a new system, except that it’s unnecessary. I’d much rather a company direct its employees to work on something missing, like improving existing toolkits, adding points of integration between components of the desktop, and so forth, rather than have them reinvent the wheel. They have to add in all the nice polish that makes the user experience great anyway, so why not do that with existing things under open licenses so that we all benefit? Isn’t that sort of the point?
Additionally, a proprietary graphics layer helps no one. If Google contributes to X and existing toolkits then we all benefit even if their product flops. If they release their own closed-source toolkit and graphics layer and it succeeds, that’s bad because less free software is in use. If it fails that’s bad because a lot of effort was wasted and the community can’t pick up the good bits.
Of all the things that have gotten slower over time X is not one of them (quite the opposite). You can still get that blazing fast speed today if you shut down a lot of the handy but not essential background services and limit yourself to the kind of bare bones window manager you had in 1996.
That is a very good point that I hadn’t considered.
I’m happy right now with Ubuntu, it’s currently my favourite OS.
User-friendliness is basically UI. What’s needed for everyday users is a single way of doing things that’s always valid, with a single look and feel. So, create a repository with only apps that fulfill that. Only “advanced” users need stray outside of the “normal” repository.
The other thing is just make sure everything ties nicely together out the box. Which mostly things do already.
To some extent Ubuntu is already there or nearly there. It’s ready for me and my wife now. I don’t care about Windows. I have to program on it at work, and now that I’m more educated about OSes that’s painful; I’d happily leave it and not look back if the right job came along. I’d be more than happy for Windows to just disappear; it’s of no interest to me.
It’d be nice for more people to move to an open *nix of some flavour, but I see the biggest thing as getting people to even really try other OSes.
Installing any software that’s not in the repositories is often a huge pain in the ass even for experienced users, let alone Windows software.
While this is not entirely Linux’s fault, it’s still a problem for most people who are used to just downloading an installer and double-clicking it.
It’s possible to provide one-click installers for the big distributions (like .deb for Ubuntu/Debian) but the way things currently are that doesn’t help much.
Each distribution manages its own packaging system, which conveniently is incompatible with the rest of them. Even if some use the same system they might still not work well with each other.
Most devs won’t even bother with thinking about that because the distributions’ package maintainers do most of that work anyway and if not you could still build it from source.
However, building from source is a practically insurmountable problem for a beginner, and even for experienced users it creates a lot of unnecessary annoyances. Uninstalling a source build is even worse, and sometimes there isn’t even a rule for that.
So Linux users can neither install Windows software nor their own software. That doesn’t seem very user-friendly to me.
It doesn’t? Strange, I could have sworn that being able to do exactly what you asked for would be a good solution.
So what? It doesn’t matter to Joe User, who rarely, if ever, switches between distros.
As opposed to Windows where there are numerous different systems for creating installers (InstallShield, WISE, NSIS etc) not to mention that some companies even roll their own. Yes yes, everyone should use MSI but that’s not the case in the real world.
Good thing there’s no such thing as a package manager (or Add/Remove Software), eh?
Wonder how I get all these things done on my laptop without having to compile anything for years.
I never said it wasn’t good. It just doesn’t help having a system like that if no one uses it.
To be more precise it’s actually just very inconvenient to use for the reasons I stated in my original post.
The difference with the Windows installers is that they all work for everyone.
That’s beside the point. All the .deb installers in the world won’t help you if you don’t run a Debian based system.
My whole comment was on installing software that’s NOT in the repositories.
This is a strawman argument. Installing software that is not in a repository for any major distro is a rarity. People always trump up the non-repository application installation process like it is a serious problem but it isn’t. There is nothing stopping anyone from creating an installer that acts like Windows installers and statically links everything and then throws it into /opt. There is just little incentive to make this a standard way of installing applications because it is so inefficient and unnecessary.
If installing software outside the main repositories of a distro was such a non-issue, people wouldn’t raise it as an issue.
Repositories are great, but you are at the mercy of the maintainers. They work quite well when you stick with popular software, but you are pretty much left on your own when you need less-known or new programs. Even if your software is packaged in the repositories, it doesn’t mean the packages are up-to-date (take Ubuntu/Debian with its 3-year-old Eclipse).
Of course, you can compile/package the software you need, but how user-friendly is that?
Just because you don’t need something doesn’t mean nobody needs it… If there is one thing I miss from the Windows/Apple world, it’s a package system that doesn’t rely on central repositories.
Ubuntu is not dependent on central repositories either. Anyone can create a PPA where you can build and upload your “unofficial” packages. Or just upload the .deb you built yourself somewhere people can find it.
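For what it’s worth, consuming a PPA is only a couple of commands on the user’s side. A hedged sketch – the PPA and package names below are made up for illustration:

```shell
# Add a hypothetical PPA and install a package from it (Ubuntu):
sudo add-apt-repository ppa:someuser/someapp
sudo apt-get update
sudo apt-get install someapp
# From then on, updates to the package arrive through the normal
# update manager like any other package.
```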
Relying on PPAs doesn’t really solve the issue, as you still depend on whoever maintains the PPA and its choice of distribution. As for building your own packages, you shouldn’t do that unless you want to be their maintainer.
I’m not saying that standalone packages is the way to go. However, pretending that everything is solved by using repositories is burying your head deep in the sand. There is room for improvement.
But you always depend on someone making the package, even on Windows.
Distro dependency is not a real issue anymore; you have a right to complain if you are using Ubuntu (which is the de facto standard now) and you still can’t install the package. People running everything else are on their own, and they know it – it’s the life they have chosen.
No, it’s not the same. On Windows the developers of the software usually also package it while on Linux most of the projects only provide source code access.
The software you get in the repositories is already part of your distribution whether or not you install it, and its catalogue is neither complete nor up-to-date.
Installing anything that is beyond your distribution is a mess.
Another strawman. The only people who raise it as an issue are the anti-Linux crowd, most of whom haven’t spent more than 30 mins using a Linux distribution because if they did they would realize that it isn’t a problem.
Using Debian Stable as the basis for your argument about old software versions is laughable. It’s not an operating system that focuses on having new software. It’s focused on having stable software. It isn’t something a new Linux user is going to be using as a desktop OS. It’s just another bad argument.
Like what? What are these magical missing programs, and on which Linux distro? I find it very telling that these arguments are always generic, without anyone specifying what package is missing.
If there is something I don’t miss about proprietary operating systems it’s the awful security nightmare of decentralized package installation.
Your argument is a strawman, as you assume that only unknowledgeable people would be in favour of decentralized packages while everyone else would agree with you.
Red herring. It was merely an example. Now that you mention it, Ubuntu is not based on Debian Stable, Debian Testing is still packaging that archaic version while Debian Unstable/Ubuntu Karmic got a mix of recent (3.4) and archaic (CDT at 3.1) versions.
Up-to-date packages for Eclipse on Ubuntu. Many console emulators on Ubuntu and Fedora. Hotkey utilities for my previous laptop on most distributions. Hundreds of small libre programs or libraries you can find on the Internet (either on Freshmeat or SF). Needless to say, most proprietary software are not in repositories, even if there is no libre alternative. I could go on.
Now, you won’t have to go beyond repositories if you merely use your PC for mundane tasks. Obviously, some people have different needs. To be honest, not all software deserve to be in a centralised repository… yet, they do exist.
There is no doubt that updating these systems is quite a chore. That said, a centralized system is not necessarily more secure, as it could become a central point of failure.
I don’t see why this is so hard to understand. If you use Linux the way to install software is (mostly) by package repositories and if you use Windows it’s by standalone packages. Which is better is largely about what you’re used to and how ready you are to accept change (no matter if you switch from Windows to Linux, or the other way).
If you don’t like either way you should obviously use the system that does things the way you like it.
I don’t know which is sadder: that this discussion exists, or that I wasted time on it.
I am not debating on which way is the best, as it would be completely futile for the very reasons you have mentioned…
To clear things up, my point is: repositories are great, but it would be nice to have a complementary system for installing foreign packages, as you won’t find everything in those repositories. I am well aware that you can download and install DEB or RPM packages you found on the Internet, but these packages are usually tied to specific distributions.
That’s all, really.
I can see why that may look like a good idea, but it just adds complexity. Instead of having a single coherent system for keeping track of software, suddenly you have X+1 systems that may even break the default one. It’s all about trade-offs. You trade some flexibility for simplicity and consistency when you use a package manager, and vice versa with standalone installer packages.
I could be wrong but I would think that any deb package would work on any deb distro. Maybe the same goes for rpm’s.
Either way, is that really so much different from the Windows apps that have different installers for pre-Win2000, XP and Vista/7?
There is nothing in the deb package format, nor the system of repositories and package managers, that is distribution specific.
Some newer packages may be compiled with a specific distribution and version in mind, however, and hence would introduce dependency conflicts if they are installed on other distributions or even different versions of the same distribution.
However, with a bit of care in specifying what the dependencies are, it is easy enough to make a deb package that will be installable without problems on almost any debian-based distribution.
It is also easy enough to place that package on your server, make the necessary additional descriptor files, and hence create your own repository. You could have your own repository containing just the one package if you like. You don’t have to make a matching source code repository as well, if your application is proprietary then just the binary package in your repository is OK.
End users of your package can add your repository to the list of repositories they use, and in this way your package becomes installable on their systems just as are packages in other repositories they may use. What is more, if you update your package in your repository, then it will be included in the next auto-updates for all your users.
There is no real need for “a complementary system for installing foreign packages” when the existing repository system can handle these nicely (including updates), and it really only requires of you that you compile, package and host a version of your code in a compatible way.
If you are already able to make a .deb file, then making your own repository in order to distribute it to users is just a few small steps away.
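As a sketch of how little a “repository” actually is: a flat one-package repo is just a directory with the .deb and a Packages index beside it. Everything below is made-up example data, and in practice you would generate the index with dpkg-scanpackages (from the dpkg-dev package) rather than writing it by hand:

```shell
set -e
# Build the skeleton of a flat apt repository in /tmp/myrepo.
mkdir -p /tmp/myrepo
cd /tmp/myrepo
# (copy your someapp_1.0-1_all.deb into this directory)
# The index is a plain-text file with one stanza per package;
# dpkg-scanpackages would also fill in the Size/MD5sum fields for you.
printf '%s\n' \
  'Package: someapp' \
  'Version: 1.0-1' \
  'Architecture: all' \
  'Filename: ./someapp_1.0-1_all.deb' \
  'Description: hypothetical example package' \
  '' > Packages
gzip -c Packages > Packages.gz   # apt clients fetch the compressed index
# Serve /tmp/myrepo over HTTP; users then add one line to sources.list:
#   deb http://example.com/myrepo ./
```

Once users add that line, apt treats your package like one from the official archives, updates included.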
I’m well aware of that, lemur2… Actually, I am creating some packages that I might publish on Ubuntu PPA.
However, most users want to use packages, not build them! Not everybody uses Debian-based distributions, either. Personally, I switch between Ubuntu and Fedora quite regularly. Many developers might face the same problem and don’t want to publish packages for multiple distros (or wait for the repo maintainers to roll their own packages), hence my proposal.
Are you listening to yourself? Your argument is that because you claim it is bad then it must be bad. That’s not very convincing. Your argument is even less convincing when you start putting words in my mouth. I never said only unknowledgeable people disagree with me.
Admitting that you can get the latest version of Eclipse on Debian doesn’t support your argument in any way. It does quite the opposite.
You could go on? Then please do, because the only specific package you mention is Eclipse, and no average user is going to be using Eclipse for anything – never mind the fact that the latest Eclipse is available for Ubuntu.
So are you changing your mind now or what? You’re starting to agree with me.
It’s not just a chore. It’s a disaster. There is no central reporting tool to tell you when a new security release is available and little to no package verification when you do actually download an update.
Where did I claim that it is bad? Looks like you don’t even bother to read.
Except that Debian doesn’t have the latest version…
Does it matter that the average user won’t use Eclipse? Repositories are for everyone, from the clueless noob to the developer.
Anyway, I won’t bother to list packages you don’t know, as it would be futile, just like discussing Eclipse. By the way, the latest version is 3.5/CDT 6.0, while Karmic/Unstable got 3.4/CDT 3.1… Fortunately, you can run the IDE from the tarball found on Eclipse’s website.
Actually, I never claimed that repositories were a bad idea. They are quite great.
However, you seem to claim that everything you will ever need is in a repository… From my experience, this is not the case, which is why I’d like to see a system for installing packages outside repositories in a distribution-neutral way.
From what I remember, there is Autopackage, but it never really caught on…
You seem to confuse what you think would be “nice to have” with what you really need.
Autopackage was relevant before Ubuntu came about and cleaned the table.
What you really need is a way to install software by downloading the installer and double-clicking it. And you got that already, the installer is the .deb file.
Yeah, I forgot that the entire Linux community is behind Ubuntu. Thanks for the reminder…
To the extent that it matters for the discussion at hand, it is, for better or worse.
Other distros can cater for themselves well enough; e.g. Fedora users are supposed to be more “sophisticated”, RHEL users don’t need installers for their server-side stuff, etc.
You claimed that the repository system is bad because it doesn’t always have what you want. The point of what I was saying is that you can’t prove something by just saying it and that’s what you’re trying to do.
You’re right about that. I didn’t realize that a new version came out two weeks ago. Calling that a deal-breaker is a bit far-fetched, though, especially since it is relatively easy to obtain outside the repos.
Nice cop out. It just proves you are just repeating BS and don’t have any personal reasons for your dislike of a centralized packaging system.
The only thing you have shown is that you cannot get a two-week-old version of Eclipse. While that may be a tad annoying for someone who has more interest in the latest and greatest than in getting stuff done, it is really a stretch to claim that it makes Linux user-unfriendly.
My initial message was: “If installing software outside the main repositories of a distro was such a non-issue, people wouldn’t raise it as an issue.”
The only claim I’ve made was that a centralised repository system isn’t perfect, as you depend on the maintainers. If the maintainers don’t care about your software, you’re out of luck.
A few months. You don’t even know what you’re talking about, yet you keep telling me I’m wrong?
I never said this specific case was a deal-breaker. However, you claimed that getting software outside repositories is a rarity, yet a significant part of the software on my system was installed manually.
I don’t think I’d represent the average user, but does that make this a non-issue? Should we only consider average web-surfin’ grandmas?
Geez, I’m sorry to be a Linux user who doesn’t think like you… Seriously, why would I bother with such a futile discussion if I had no personal motivation?
Exactly. It’s a strawman. I could just as easily say that it isn’t an issue because I said it isn’t. You don’t offer proof. It sounds a lot like that BS line you hear from cable news anchors trying to pretend to be un-biased when they say “some people say”. It’s a load of crap.
Like I said before, it’s a non-issue. It doesn’t hurt the user-friendliness in any way. The only example you can even come up with is the fact that the latest version of Eclipse isn’t available yet in the repository, which is hardly a deal-breaker.
You keep making this claim, but the only piece of software you mention is Eclipse, and it is available in the repository, just not in the latest version. This entire discussion is about the user-friendliness of Linux, yet your whole argument has nothing to do with user-friendliness and more to do with the up-to-dateness of one distribution. Don’t blame me for misunderstanding what point you are trying to make, because it seems completely irrelevant to the discussion.
It does make it a non-issue. I mentioned before that it is a rarity to have to install something outside of a repository. I didn’t say it never happened. You only proved my point when the only example you can come up with is available, just at an older version number. It’s not ideal, but it really is a non-issue.
Who knows but I’m still wondering why you have so much time to make these points but so little time to prove them by naming the specific issues you are having. I concede that Eclipse isn’t available in the very latest version but it doesn’t change anything I said from the beginning. You are still basing your whole argument on one piece of software, which makes that issue a rarity in my opinion, which is exactly what I said before.
<rant>
You really want examples, don’t you? s1bl (there was another utility for my previous laptop, but I don’t remember the name), fceux (obsolete version in Debian/Ubuntu), gens, PACC, any commercial program (MATLAB, Maple, Mathematica)… Many other smaller libraries that aren’t really worth mentioning, as they are quite specific to my research domain (vlfeat, libsiftfast).
Of course, it probably looks like some meaningless grocery list to you. That’s why I didn’t bother.
Now, I understand why you don’t find some of them in the main repositories. For this reason, I believe there is a place for something like Autopackage, even if it’s not an ideal solution. It could be a nice way to get the latest version of a package while being less of a chore for the developers (a single package to build instead of a package for every major distro).
As people already mentioned before, you can find DEBs on the Internet, but you’re a bit screwed if you don’t use a Debian-based distribution.
I can live with compiling, but many non-developers could have a hard time dealing with compilation. It’s not some hypothetical situation I got out of my ass, either. I remember ditching Linux and going back to Windows when I was a newbie because I couldn’t install the damn packages I wanted due to some compilation error. Eventually, I prevailed (after all, I’m now a computer engineer), but I wonder if I would have come back if smartasses had told me that it was a complete non-issue?
Make it easier to install software you can find outside the repositories. That was my point.
</rant>
If you still believe that it’s a complete non-issue, so be it. I’ll live. 😉
Except when the uninstaller removes shared DLLs that are actually needed by other apps, or when the uninstaller simply doesn’t work, or when you get dead entries in the list of installed software, or when the installer won’t run because it thinks you’re already running it, etc.
Windows’ way of doing things is hardly trouble-free or ideal.
The few times I’ve needed to do that a .deb has always been available. Granted there may not be a package for obscure Linux distro X but that’s not what Joe User will be running.
Your last statement is what I explain to all the Linux gurus that I talk to.
http://stoplinux.org.ru/category/project/FAQ_why_Linux_suks.html
Ah yes, a page on a Russian site filled with porn and “contact” ads is surely the place to go for accurate and professional Linux (or any, for that matter) information.
I think the big stumbling block on Linux usablity can be highlighted with Firefox’s use of GTK.
When someone tries to change the way Firefox opens a file, they are presented with a view of the file system and they need to provide the full path to the app they want to use. So the user needs to know that Adobe Reader is called acroread and is stored in /usr/bin. That is inexcusable behavior.
Conversely using Konqueror you are presented with the application menu and just have to pick the app you want to use. That is intuitive behavior that users can understand.
It is the little things like this that turn peoples’ perceptions, not adding new hardware or configuring dual monitors. Having a menu item called “Add/Remove Software” that calls a terminal window with “apt-get _” pretyped for you is not usable, but that is the level of the behaviors we have.
I really don’t have an idea of how to fix it other than avoiding apps that use GTK, but hopefully someone else has a solution.
I just checked Firefox in Fedora11/Gnome, and it gave me a listing of applications to select. The only time Nautilus appears is when I select the “Use other…” option.
What DE are you running? Firefox integrates with Gnome much better, due to Gconf, then other DEs.
You should move to Fedora, so you can use the wonderful Yumex tool.
THAN!
We use KDE at work for various reasons on RedHat 4. Upgrading to Fedora is out of the question since management wants official support, and there is no budget to purchase licenses for 5.
So it sounds like Firefox needs Gnome to work properly on Linux, which explains why the users find it a pain to use. I guess this underscores the problems with usability in Linux. We need to use FF, but it does not work easily with our desktop, and retraining people to use Gnome is out of the question. Too bad Konqueror does not work better on the sites users have to access.
I was ribbing you about the apt comment; It wasn’t serious in the least.
It really does. I run KDE on FreeBSD, and Firefox really is just a pain to use in that environment. It works, but not as smoothly as it does in other environments. And yeah, there is no good replacement. As much as people complain about Firefox, everything else is either immature (Arora) or not tested for (Konqueror).
Actually, no. I use KDE, and what the previous user stated regarding Gnome holds true for KDE. If an application is registered as the default in KDE, Firefox picks up on it. Firefox from Mozilla lacks integration with KDE file dialogs, but integrates nicely with all other KDE styles and MIME types.
It sounds like the issue is with the relatively old version of Red Hat Linux that you are using (and that KDE is something of an afterthought with Red Hat). For the record, I am using Firefox 3.5 and KDE 3.5.10. I would also add that Konqueror works pretty well for me, but that Firefox with extensions is a better browser.
Unfortunately, Firefox does have some Gnome-based dependencies (at least on FreeBSD). If you’re trying to make a system from scratch, clean of packages from the Gnome and KDE stacks, you’ll want to avoid FF 3.5.x, since it’ll drag in stuff like gnomehier and gconf2. FF 2 doesn’t have this problem, but still uses GTK. Of course, most people could care less about this.
Yeah, Firefox’s lousy desktop integration (mostly bad handling of downloaded files, painfully so with KDE) sticks out like a sore thumb, and because that’s what you use the most, it keeps nagging at you all the time.
The best solution would probably be to remove the download manager from the Linux version of Firefox altogether, and let an external program sort things out.
To author:
I’m a GNU/Linux user and I find my GNU/Linux installation to be just as friendly as I need or want it. I’m not an elitist, neither am I a zealot, and I don’t hate Microsoft either.
GNU/Linux does not need fixing, it’s not broken. It needs development since it’s not done yet (and never will be?). And guess what, development is happening even if you aren’t seeing it!
Do not assume that $some_user == $other_user. Do not assume $your_need == $somebodies_need. Just don’t assume, mkay?
To osnews:
Please stop posting rants as articles, thank you!
I said that one reason that Linux is the way that it is (and I don’t say that it’s bad at all; in fact, the gist of my conclusion about Linux is: “Linux makes a very powerful tool, and its flexibility and stability can even make it a superior tool. But Linux suffers from the same problems as Windows when it comes to obscure or poorly-built hardware or software, and problems with troubleshooting and advanced configuration can likewise be difficult for non-expert users.”)
I’m not sure that you read my article, because I really don’t think it’s a rant.
First, thank you for answering; second, I apologize, I had indeed not read your article.
Articles questioning the fitness of Linux for a particular purpose have become tiresome and annoying. They usually focus on the needs of a specific user (the author, mostly) and how Linux fails to provide, usually coupled with a comparison to OS X or Windows. It’s not so much how or why they were written, but rather the way in which they are presented.
After reading your article, I admit I failed to follow my own advice “Don’t just assume.” 🙂 It wasn’t really a “Linux disappointed me” article, but I have the impression you only had a few points to make:
1. Linux is only “user-friendly” if the user is a “geek” because mostly geeks use Linux and only develop software they need.
2. Complex tasks will not be easy at first (or ever).
3. Sometimes you have to learn/train to be good at a task
4. People are lazy and complain a lot.
I don’t see anything to discuss here, though. To me these are very obvious observations about Life in general (2, 3, 4) and the FOSS culture (1).
Still, I question it being presented here in this way. You seek some form of communication and discussion, but I’m not so sure whether this is the right place or not. A mailing list or forum would be a much better place to start a discussion.
This isn’t any different than Windows. Most software is easy to install but software installation fails enough on Windows to make it a pain in the ass just as often, if not more often, than Linux. It all depends on the software. Hardware is even more of a problem on OSX. Linux supports much more hardware than OSX, or even Windows. Hardware configuration on Windows can be more of a pain in the ass also.
An example of this would be helpful. Exactly what are you talking about? Hardware? Software? Both? It sounds like you’re just repeating yourself to fill up space.
You’re repeating your complaints again.
Again this is something not limited to Linux. Some applications in Windows require that you use the registry to change configuration options.
This is pretty much a re-hash of every “Linux isn’t ready for the desktop” article ever written and it’s just as bad as the rest of them. Linux has its sore spots for sure just as other operating systems do but this article doesn’t even begin to make any valid points about the lack of Linux uptake or the apparent “un-friendliness” of the operating system.
I think the issues Linux faces have little to do with points made in the article. The number one problem is application familiarity. Linux has a ton of good applications but people are comfortable with their brand name apps and are reluctant to give them up even if the alternative is better. The same goes for Linux itself. It’s a good system but there is little compelling the average user to switch.
The biggest software issue with a standard Linux distro itself is X. I’m not going to go on an anti-X rant because it works quite well on my system with Intel graphics. The real problem right now is growing pains. All of the issues people have had with X in the past have been addressed or are being addressed. Right now X is in limbo between old APIs and new ones. From most of the discussions I have seen on the net lately it seems most people, especially critics, are completely unaware of this.
The only other actual software issue that might be identified as a problem is sound. Again this works perfectly on my system but I don’t use pulseaudio and 90% of sound complaints that I have heard can be directly attributed to pulseaudio. This is another case where the software is still being actively developed. From what I have heard pulseaudio is in a much better state than the last time I tried to use it but I don’t have any real personal experience to back that up.
Allow me to give you a glaring example.
In KDE there is a GUI for adding a Samba user, except that it doesn’t work – you go through the motions but no user is added. The user has to be added at the command line (easy once you know how).
This acknowledged bug has been present for years, is not considered important by the developers (even though it would be simple to fix) and seems unlikely to be fixed in the near future.
For people who expect things to work and have to exist in a heterogeneous OS environment, this is a complete showstopper. It is hard to believe that this bug is given no priority; it is indicative of the non-seriousness of the developers in producing a desktop environment.
As a corollary, why should a new user not be automatically added as a Samba user (or at least a dialog asking whether to)? OSX does.
No, no, no. There is far too much of this sort of thing in the Linux environment and until attitudes change it will never be a suitable system for the masses.
The Cutter
I use GNOME so I have never run into that problem. It is something that should be fixed but it isn’t a Linux problem, it’s a KDE problem.
Your example has nothing to do with the “Linux” environment and everything to do with the KDE environment. That bug is probably present on every system that KDE runs on. GNOME doesn’t suffer from this bug and is pretty much the default desktop nowadays anyway. That doesn’t make it any less of a problem on KDE desktops but it cannot be considered a show-stopper when KDE isn’t a necessary component of the Linux desktop.
I don’t know if anybody has mentioned this yet, as there are a lot of comments already but, why can’t linux have a logical file system?
Windows has one, and the Mac can hide its internals to give the illusion of one; why can’t Linux?
It is simple guys:
System files
Program files
User data files
not bin, sbin, opt, boot, usr, mnt, media, etc….
Installing a program in Linux is a guessing game as to where the files actually went.
please someone reorganize the filesystem
It does?
Users generally don’t, or shouldn’t, normally care about files outside their home. I say normally because sometimes you’ll need to go outside, but when you do, I doubt “System Files” is any clearer than “/etc” and “/sbin”. Out of curiosity, what goes in “System Files”? Executables? Only libraries? Configuration files? It’s not as clear as you may think.
Wow, that’s just how I feel when I install something on Windows and the installer puts files all over the place in “C:\Windows” in addition to the folder where I told it to install the application.
The vast majority of Windows programs put the executable in Program Files, include an uninstall executable, and put most configuration files under the same folder. While there are things stored in the registry and under the Windows directory, most everything that is important is in the app-specific folder.
Please explain to me how those are virtual since what you’re talking about is the actual file system layout.
No, it doesn’t.
Maybe it seems to make some sense when you get used to it, but that’s not the same as being “logical”.
/boot
/
/home/user
There you go.
Because, say, %windows%\system32\drivers\etc\hosts makes so much more sense.
Yeah, it sucks that you have to untar your packages and place every file manually… oh wait.
If you feel so inclined, launch gdebi and you’ll get a list of all included files along with their path.
Please: installing a program on Windows = double-click icon; Mac = double-click icon. Ubuntu: if it’s in Synaptic, great, very easy to do, just search for it. However, if it isn’t in there, like Skype or the Chrome beta or numerous other programs, then it is a crap shoot, where it can be as easy as editing an apt source file or as complicated as a never-ending loop of dependencies.
Your programs are installed in /.
If you need details about where did every file go exactly, ask the package manager.
If by “your programs” you mean the executables for user programs your have installed (vlc, firefox, etc…) they are in /usr/bin.
What’s exactly being mounted in /mnt?
Your automount daemon will use /media, /mnt is there to provide a place for your manual test mounting of filesystems.
Most programs install most files under program files? OK, I can feel the consistency right there.
Installing a program on linux = single click icon. There, win
I’ve installed both Skype and the Chrome beta through Synaptic, but anyway… downloading dependencies is not any worse than finding out you need some runtime libraries on Windows.
This might strike you as odd but I’ve done more dependency hunting on windows than on linux, and there was no getdeb to save the day.
I do find it odd. What programs require you to look for dependencies? I use Windows and Linux every day, and outside of a download of .NET, I rarely have to go find any files. Just download and double-click.
You’d still be installing a deb file, which is still tracked by the package manager.
If you are not doing that, you are doing it the wrong and hard way.
Did you add it manually to fstab?
My “shared” (not actually) ntfs drive is mounted on /media/disk.
It’s exactly that way if you download a deb file.
On the other hand if the package is available on synaptic, it’s obviously faster and easier.
Adding a repository takes longer than just downloading the deb, but then you save time every time it updates.
I was talking about the case where you had to download the deb package from the web, as you would do with an exe installer.
If it’s available in synaptic then yes, it’s faster as you don’t even have to hunt it down through web pages.
Last time it was some HP corporate software on Windows 2003. It needed some package I had to search in the MS web just to be able to run the installer.
It’s also utterly fun when some program tells you halfway through the install process that it requires IIS/snmp/whatever, and you have to go out of your way to locate the Windows CD, install the components, and restart the install process. Considering some HP software enjoys wasting 1+ hours of your life just to get installed, imagine how fun it is to come back and realize you have to start again.
Last time it was some HP corporate software on Windows 2003. It needed some package I had to search in the MS web just to be able to run the installer.
Dude, you are talking about some very specific software in a server environment; we are talking about the desktop environment with the common apps the average person uses.
Is it easier to edit a repository file? Sure it is, for you and me, people who understand this. But we are talking about the average user, and for the average user, editing a file is significantly harder than going to skype.com, clicking download, and installing.
I thought we were talking about user friendliness.
It was a windows piece of software installed through msi. The software’s purpose is irrelevant, it uses a “standard” (one of them) windows install process.
You don’t even have to edit anything, there’s a gui for that, but you can also go to skype.com, download the deb and install, anyway.
You can also download a package file from Skype or Opera matching your package system and install it by double-clicking it (it still uses the package system and should keep your system sane enough).
I know Opera does a nice job of distro detection.
It’s still more complicated than your average setup.exe (which can bear the same name for different applications), or roughly equivalent to the nightmarish MSI format (and personally I don’t like macOS’s install system either, with its hidden way of removing applications), but it is not as complicated as you implied.
As for removing applications, package management is still better overall. Windows’ HTML hack to provide a nice interface is painful to use (inconsistent from one app to another), the uninstall button and component selection can be in different places depending on the application, and removing multiple apps for a spring clean-up is long and (yes!) painful. Apple still has a lot of progress to make in that regard (but as most Mac users don’t use that many applications anyway, they should be fine for another decade).
OK, Mac OS X does a nice job of hiding the underlying filesystem from the user, but even then some users (most of the tweakers) would feel frustrated at not being able to access the filesystem directly. Windows’ file layout is the worst one, riddled with legacy from the past and hence not consistent between versions (at least from XP to Vista), with the ability to quickly mess up system files (I remember a game installation that overwrote critical files like c:\boot.ini). True, that could happen on any system (I think Linux package systems all run as root, and a Mac OS X user would happily answer an install prompt asking for sudo elevation).
Did you have to edit Synaptic repository files? I bet you did. In Windows I click download, wait for it to appear on my desktop, then double-click and I am good to go. Not so in Ubuntu or any other distro.
Installing a program on linux = single click icon. There, win
Ok, lets test:
>> apt-cache search acrobat
libpentaho-reporting-flow-engine-java – report library for java
libpentaho-reporting-flow-engine-java-doc – report library for java documentation
balazarbrothers – 3D puzzle game
libpdf-fdf-simple-perl – Perl module to read and write (Acrobat) FDF files
libpdf-reuse-perl – Reuse and mass produce PDF documents
xpdf – Portable Document Format (PDF) suite
xpdf-common – Portable Document Format (PDF) suite — common files
xpdf-reader – Portable Document Format (PDF) suite — viewer for X11
xpdf-utils – Portable Document Format (PDF) suite — utilities
xpdf-chinese-simplified – Portable Document Format (PDF) suite — simplified Chinese language support
xpdf-chinese-traditional – Portable Document Format (PDF) suite — traditional Chinese language support
xpdf-japanese – Portable Document Format (PDF) suite — Japanese language support
xpdf-korean – Portable Document Format (PDF) suite — Korean language support
Ooops… OK, let’s go to adobe.com, download the installer, navigate to the download dir and double-click AdbeRdr9.1.2-1_i486linux_enu.bin. Now we get some “Open with…” dialog asking for an application to handle the filetype “.bin”. Ooops…
At this point any non-expert is simply lost. Of course you all can now start with lengthy explanations why this was the wrong approach, something is not configured, etc, etc,…. BUT, in reality this is a testament to the very MESS that is linux.
– repository has no acrobat (for religious reasons)
– .bin is not known as a filetype -> FAIL
– there is no standard way to mark executable files
….oh wait, there is…*does chmod 775 on AdbRd9-xxx.bin, doubleclicks* -> FAIL
You have to rename it to blabla.sh and the file manager will offer an “execute” option (Thunar, by the way, but still *linux*, right?)
cheers
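For what it’s worth, the “no standard way to mark executable files” complaint boils down to the Unix execute bit. A minimal sketch of what chmod (or the file manager’s permissions dialog) actually does — the installer below is a fake stand-in script, not the real Adobe .bin:

```shell
#!/bin/sh
# Stand-in for a downloaded .bin installer; the real AdbeRdr*.bin is a
# self-extracting archive, but any script shows the same behaviour.
printf '#!/bin/sh\necho "installer ran"\n' > fake-installer.bin

sh fake-installer.bin         # runs even without the execute bit,
                              # because the interpreter is named explicitly
chmod +x fake-installer.bin   # what "chmod 775" / the permissions dialog does
./fake-installer.bin          # now it launches directly, .bin extension and all
rm -f fake-installer.bin
```

The extension never matters to the kernel; only the execute bit (and, for a file manager, the detected file type) does.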
How about going to adobe.com, selecting your operating system (say, ubuntu) and downloading the deb file?
LOL! Have you tried, or are you just trolling? FWIW, there is no “operating system” called Ubuntu at adobe.com (are you really expecting Adobe to ship a native package for the hundred-plus package managers out there?).
cheers
There are basically deb and rpm. That’s 2, not 100.
Oh noes, you have to select the deb package
Too bad for adobe for not providing a short list of common deb based distros inside the deb option, just for reference (namely debian and ubuntu), but still.
They could take a look at skype’s download options.
Guess what: Linux identifies files by their actual file type and not by their extension.
A .bin, though, won’t execute until it’s been marked as executable (which it’s not by default, and that’s a good thing).
Right click -> properties -> permissions? I still haven’t found any file browser that doesn’t let you change file attributes (ok, maybe you can find some obscure app that doesn’t, if you dig deep enough).
Now we’re far away from “single click”… and NO, even with chmod 0755 it didn’t work, because Acrobat happens to be a text-based installer, and if you “execute” it through the file manager there is no /dev/stdout -> HANG!
Maybe this is just to avoid paying royalties to amazon for their “one click” patent?
Have you bothered checking that there’s a filed bug for nautilus regarding that?
For linux packages, sure. It’s a single click.
For binary installers, you have to make them executable. You say you were aware of that, yet decided to turn it into an issue with the “no standard way” crap.
Again, single click install works for linux packages.
Is it so awfully hard to download the deb package?
Is that all you wanted?
Under Linux your executables are installed under /usr/
In some rare cases they may be under /opt/, but I am personally against this and as far as I can see it is not used much any more. (Just like on windows you rarely see C:\APPNAME\ any more).
If you want to be specific, 99% of executables are installed in /usr/bin/. In some cases it’s /usr/local/bin/, as on FreeBSD for example, but this is mostly a per-distribution differentiation.
Your problem is probably that you don’t like seeing ten different directories in /, or maybe you don’t like your files being in /usr/*/package/ instead of /programfiles/package/. Please understand that this is your problem, not a design flaw.
Would it make you happier if I moved /tmp/ to under /var/ and put /boot/ /dev/ /proc/ /sys/ /lib/ /bin/ and /sbin/ under /linux/? And renamed /usr/ to /programfiles/ and /etc/ to /settings/ and /home/ to /users/? Would it? Because now we’re talking semantics not function.
How about this. If you can convince Microsoft to stop using slashes going the wrong direction and also to stop using ‘drive letters’, instead placing things in lettered directories in / (e.g. /c/windows/), then I will consider changing the way Linux lays things out just to make you more comfortable, too. Do we have a deal?
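If it helps, the renaming thought experiment above can be poked at from a shell. The directories on the left are real on any FHS-ish system; the “friendly” labels on the right are, of course, invented for illustration:

```shell
#!/bin/sh
# Map a few standard FHS directories to hypothetical "friendly" names.
# Only the paths are real; the labels are made up for this sketch.
for entry in "/bin:core commands" \
             "/etc:settings" \
             "/usr:installed programs" \
             "/tmp:scratch space"; do
    dir=${entry%%:*}      # part before the colon: the real directory
    label=${entry#*:}     # part after the colon: the invented label
    [ -d "$dir" ] && echo "$dir -> $label"
done
```

Whatever you call them, the mapping is a rename, not a redesign — which is the “semantics not function” point.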
Let me tell you a story.
Once upon a time I was in ##linux on Freenode and a guy came in complaining about Linux being hard and not being able to install some little app he wanted. We went back and forth on the issue a few times and it came out (eventually) that he had done this:
* Gone to download.com and couldn’t find the app.
* Used google to locate the app’s web site.
* Gone to the web site for the app, downloaded the Windows exe. Saved it, double clicked.
* Gone back to the web site, downloaded the source tarball. Saved it. Double clicked. Didn’t know what to do with what he saw, didn’t finish extracting.
* Gone to google and found some generic build instructions. Could not figure out where to type them.
* Went to IRC and asked about unpacking tarballs. Eventually we got him in a terminal and unpacked the tarball.
* Still on IRC, asked what next. We found he didn’t have GCC or a build environment.
* Asked him if he’d tried his package manager. My what?
* Sir, what distribution are you using? GNOME? No. Distribution. Debian? Okay. What application did you want?
* Type this: apt-get install whateveritwas
At this point he went from “Linux sucks balls, why would anyone ever try this junk” to “You have got to be kidding me. It’s really installed now? Computers can do that? That’s amazing.” Followed by “What else can I install this way?” and some comments which lead me to believe that he was grinning like a madman.
This same fellow came back the next day wanting some software not in his distro’s repo. I remembered him, which saved time. This time it took only a few minutes to tell him about .deb files and what to do with them. After one Just Worked, he left and I never heard from him again.
What was the point of this story? I don’t remember.
You know, now that I think about it any good handler for .deb files (for when you double click them in your file manager) should really run them through apt for a depends check and come back with a “foobar.deb requires additional software to be installed.” dialog offering buttons for “Install everything” and “Details…” giving a list of exactly what will be installed and “Cancel install”. That would fix 99% of depends issues.
The other fix would be a little repo stub file which basically says “Download package foo from repo bar.” and a handler app which figures out whether you have that repo and, if you don’t offers (in simple language with a one button accept) to add it and attempt the installation. This would be the most ideal solution for purely third parties.
What was I talking about? I don’t recall.
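The depends-check idea can be sketched without touching apt at all. Here both the “Depends:” list and the “installed” list are fake files standing in for what a real handler would read from the .deb’s control file and from dpkg’s status database:

```shell
#!/bin/sh
# Hypothetical depends check: everything in depends.txt that is not in
# installed.txt is what the "requires additional software" dialog would list.
printf 'libfoo\nlibbar\nlibbaz\n' > depends.txt    # stand-in Depends: field
printf 'libfoo\nlibbaz\n' > installed.txt          # stand-in dpkg status

# grep -vxF: whole-line, fixed-string matches; -v keeps the non-matches.
missing=$(grep -vxF -f installed.txt depends.txt)
if [ -n "$missing" ]; then
    echo "foobar.deb requires additional software to be installed:"
    echo "$missing"
fi
rm -f depends.txt installed.txt
```

A real handler would then hand the missing list to apt with an “Install everything” / “Details…” / “Cancel” choice, as described above.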
Who cares where the files went? The actual executable will be in your $PATH so it can be launched just by typing its name. You can create a launcher if there’s not already one.
It’s not a “guessing game” either – your package manager will have a way of telling you where the package has installed files to.
Just because the sight of the / directory makes you feel like a n00b, does not make the system broken. In fact, the system works extremely well BECAUSE you don’t need to know where things are. Also, it’s a well thought-out system because indexing services like “man” don’t have to scoot all over the filesystem looking for man pages – a place for everything and everything in its place.
Class dismissed.
What is not logical about that? Just because you have not bothered to take the 30 minutes or so it would take to read up on and gain a good understanding of the Linux Filesystem Hierarchy Standard? If they put the binaries in /boot, mounted removable media to /bin and put the bootloader files in /mnt, I might see why that is not logical.
Yes, because ‘rpm -ql <packagename>’ or ‘dpkg -L <packagename>’ is so very difficult – compared to a Windows installer which provides no straightforward means to find out what files it has placed on your harddisk.
The Linux Filesystem Hierarchy Standard and package management tools may not be obvious to use; they may not even be ‘easy to use at first’. That does not mean they are not easy to use, nor does it mean they are not user friendly.
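A sketch of those queries with a portable fallback; it asks each package manager about its own package, which is guaranteed to be installed wherever that tool exists:

```shell
# Ask the package manager exactly which files a package installed.
# Querying the package manager's own package avoids guessing at names.
if command -v dpkg >/dev/null 2>&1; then
    dpkg -L dpkg | head -n 5    # Debian/Ubuntu style
elif command -v rpm >/dev/null 2>&1; then
    rpm -ql rpm | head -n 5     # Fedora/RHEL style
else
    echo "no dpkg or rpm on this system"
fi
```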
What is not logical about that? Just because you have not bothered to take the 30 minutes or so it would take to read up on and gain a good understanding of the Linux Filesystem Hierarchy Standard? If they put the binaries in /boot, mounted removable media to /bin and put the bootloader files in /mnt, I might see why that is not logical.
Typical Linux fanboy. I do know, because I have read. I am talking about the average desktop user, like your mom or aunt, who will not take 30 minutes to read how to use Linux when she already knows how to use Windows or a Mac.
Yes, because ‘rpm -ql <packagename>’ or ‘dpkg -L <packagename>’ is so very difficult
Either you have a very strong sense of sarcasm or you’re a dumb***
What makes you think a cryptic “System Files” (what’s a system file?) or “Program Files” is easier for the novice user? Not to mention the litter under the “Windows” folder. Oh yeah, that stuff is a breeze for the uninitiated.
Maybe you can enlighten us then. How do I find out exactly what files an installer installed and where in Windows?
lol, I like your attempts at making a point.
Points convincingly made, if you ask me.
Nobody was asking you, and even if they were, you would be wrong. But thanks for playing 😉
Postdiction, tell me do you live in a cave or under a bridge?
I like how you can’t respond to the right post with the right comments.
Gee, I don’t know. Maybe a system file is a file that is required by the system and should not be deleted? And maybe Program Files contains the files needed by programs to run. Wow! It is logical.
Browse to \Program Files\Program and there are your files. Your user files are either there or in \Documents and Settings\User\AppData on XP, and in \Users\User\AppData in Vista.
Oh, you mean like any file outside the user’s own file space? Again, once you actually get to the point that you have to or want to screw around with system files, it does not matter if it’s called “System Files”, “Windows”, “/etc” or “/bin”.
People aren’t idiots, they can figure this stuff out when they need to.
Except the ones that go in “Common Files”, “C:\Windows” or one of its many subfolders. This is of course only “documented” in the undocumented file format of whatever installer system the application used, if at all.
Yes, I can see clearly now how that is much better than actually having a built-in system for keeping track of which files go where.
It’s nice to see that Microsoft has learned from *nix to give each user his/her own file space.
How is “Documents and Settings” and “User” any easier to understand than “/home”?
While you and me can figure out what goes into /etc, most people get afraid only by looking at its name. But if the system is cleanly designed, the user will not be afraid to experiment (see Amiga).
Can you tell me which sane piece of software installs things in \Windows? And Common Files are… common files used by the apps?
It’s not. In fact, /home is more user-friendly. Again, I’m not saying x is easier to understand than y. They are all equal. I’m talking about the illusion of user-friendliness (yes, there is one).
Oh come on, don’t pretend that no software installs DLLs and such in System32.
By what apps? Some folders in Common Files are named after companies, some after products and some are entirely generic.
I also have a “System” folder in Common Files. What are system files doing there? Why aren’t they in the “system” folders?
Clearly an excellent system with no room for confusion.
You pretended that you have to hunt for things in Windows and its subfolders, while it’s just \Windows\System32.
By the app X made by company Y. I also think “System” in “Common Files” refers to files shared by multiple apps, but not required by the system to run. Is it confusing? Yes. Is it more “user-friendly” than the Linux file system? Yes.
You’re missing the point. Average Windows users have no clue where things go when they download them, never mind when they install software. There is absolutely nothing inherently superior or easier about Windows’ filesystem layout for the average user. Linux is quite sane, actually, if a little confusing at first.
You’re making a fool of yourself by calling someone else dumb.
This has been brought up and discussed many times. I won’t go into it again right now, but here are the highlights:
* Believe it or not it’s like that for a good reason.
* Some room for improvement exists, but there’s little gain to be had for the minor improvements we might actually agree on.
* The kind of user who cares into which directory the installer put his files ought to take the time to learn why it is the way it is and why that’s a good thing. Everyone else doesn’t care anyway.
That is what it boils down to. You cannot ‘reorganize’ without losing something many of us cannot afford to lose. You shouldn’t ship radically different FS layouts for different people (no server version vs desktop version please!)
Any specific suggestions you have for improvements would be welcome, but please understand that it is a large effort to change things like this, and if you cannot convince people that there’s a major gain to be had they won’t do it for you. That said, you are welcome to do this yourself and make your own distribution.
Or, try GoboLinux.
Meanwhile, I unzip an app on Haiku and move it into /boot/apps.
And maybe I want to delete some preferences… there they are: /home/config/settings
Believe it or not, making the filesystem easier to understand (even if it’s a change from /bin to /apps) makes the user want to understand the system, partly due to its simplicity. Of course, when you have 30 directories the user will get scared and will want to close the window.
I did not say that there was no advantage to other filesystem layouts. I said that the existing layout isn’t purely sadistic. It is like that for a good reason and any attempt to change it will necessarily have to take in to account the advantages of the current system, throwing out as few as possible.
Most people wanting to ‘fix’ *nix directory structures do not understand them and so throw the baby out with the bathwater. Their proposals fail to gain acceptance and they are left wondering why.
I am all for a revised FSH, but you can’t simply throw it all away because it’s ugly or because you found something else that worked for you, once.
To reply to a specific point regarding /bin/ -> /apps/. This may seem like a good idea, but most apps are not in and should not be in /bin/ (thus not in /apps/). So, you see, it is never that simple. You could simply rename all bin dirs to apps or system-apps, which is fine by me. But, as I said, there is only a small gain in comprehension if you do this and there is massive work involved in converting existing software (and people) over to use it. The gains are thus not great enough and people don’t do it.
Not that you couldn’t. I encourage you to start your own distro and do just this, and other things if you think they’re improvements. Maybe it will catch on (but I doubt it).
Another dynamic worth considering here is that application and software developers within Linux are first and foremost competing with similar projects for the attention of the distributor before they can even think about competing for the attention of the end user.
When he writes “Linux”, which part of Linux is he talking about?
Linux the kernel is really user-unfriendly but that does not matter at all, unless you are a kernel-developer or are making your own linux distribution.
So he must be talking about either the “linux distributions” or the “linux software”
If the question is about linux distributions, then my answer would be: “Which one”. Linux distributions are made by different people, and with different goals so there exists no answer that can cover them all.
But having said that, I do think that the question is invalid. I do at least think that the Fedora Core 11 I run right now is far more user friendly than Windows Vista. And here we are talking about operating systems/environments and the software needed to manage and control them. This does not include third-party production applications.
But he might also be talking about the production applications. That is: The software that you use because you need to do something with the computer. But then the question should be rephrased as
“Why does the user interface for some Linux software suck?”, and the answer would most likely be the same as the answer to “Why does some Windows software suck?”: it is difficult to design good user interfaces, and sometimes software changes its focus without changing its user interface.
Let me take iTunes as an example: some time ago a Windows-using friend asked me how to use iTunes (version 6, I think, or maybe 7) to rip the audio CD that was in his CD-ROM drive. Well, after having spent 15 minutes looking around the entire interface, I simply had to give up and said he should use some other software, because I could not find that function in iTunes.
I also tried the windows version of iTunes to see what all the noise is about, and thought that if this is where the “state of the art” for UI design is at, Linux has absolutely nothing to worry about. It was a slow pig with unexciting UI.
For slick UI’s, Qt Creator is the one to learn from:
http://static.kdenews.org/danimo/qt_creator_visualizeqstring.png
Well, it looks good at the first glance… but its UI doesn’t match the theme of your OS and “slick” becomes “cumbersome” when you use the window designer. It’s not a bad IDE, but it lacks maturity. Sometimes, SDI/floating windows are more functional/easier to use than MDI.
Hmmm…
This is a pretty windoze-centric article. Easy to use? Easy to maintain? There are thousands of corporate and independent troubleshooters out there making a living from the fact that Microsoft’s operating system is code-heavy and cumbersome. ‘Nuff said on that!
Otherwise, although NOT so elitist as to write a book about how “in the beginning there was the line command” it’s pretty lowest-common-denominator to refuse to learn ANYthing about the inner workings of software. Most people don’t know anything about how their car’s differential works either! But to assume that people need to rate the operability of any operating system on its “user-friendliness” is pretty damn insulting to anyone’s intelligence!
Lastly, there are lots of Linux distros that aim for use by the more casual user: Mepis is quite nice (if hooked on KDE!) and Puppy is good for beginners, while gOS is targeted from get-go at the user new to any sort of computer. Articles like this one are sort of a waste of bandwidth!
In The Beginning Was the Command Line was elitist? Have you read the essay itself, or just the title?
I won’t say anything new, just a summary of what I think are the deepest technical problems with GNU/Linux, and some hints on what to do about them.
I think the main problem is that the Unix model was developed for a very different computing environment than modern desktops, especially regarding hardware compatibility, graphical environments and third-party software.
In the big servers of old, hardware was rarely replaced or updated, not to mention hot-plugged, so hardware incompatibility problems were not an everyday concern, and high-quality drivers were taken for granted. So it was okay to use monolithic kernels, where all the drivers must be trusted, for higher performance.
An open-source operating system which must support drivers from a myriad of sources, sometimes of dubious quality, some of them FOSS, some of them proprietary, should be especially resilient when it comes to driver failure. A micro-kernel architecture, as in Minix 3 or the Hurd, seems more adequate. Of course those systems also have their own problems, but I think it’s worth exploring any trend towards more modular, resilient hardware support, such as managed kernels like SharpOS and Cosmos, or new architectures like Genode.
Then there’s the fact that GUIs are treated like a luxury, as if anything beyond the CLI were superfluous eye-candy. That may be fine for a server, but when it comes to a desktop, the GUI is a basic service; most modern end-user applications don’t even have a CLI version.
Recent experimental improvements like kernel mode-setting and rootless X look like steps in the right direction, but maybe the X windowing system should be replaced altogether. In any case, recovery from graphical problems should be as quick, automated and incident-free as possible.
Last, but not least, regarding third-party software, it’s my understanding that in the ’70s most software either came with the computer or was directly written by its users, often in the form of shell scripts and small C programs. There was no need to integrate dozens of applications using different versions of hundreds of libraries; package management was not really an issue. Most package management systems today have a strong bias towards memory-efficiency and against stability, I mean, they do a great job of replacing redundant libraries with slightly different versions, but they can’t selectively leave alone a crucial application’s libraries if the user so wishes, so updating an application can always break some other application. This is not frequent in other operating systems, where user apps are clearly separated from system libraries on which they depend.
There is indeed some beauty to the notion of blurring the line between system libraries and user applications, as package management in GNU/Linux seems to imply, but then the system should be much more carefully thought-out, in order to support different versions of libraries and maybe even different system and user configurations. Alternative distributions such as GoboLinux and NixOS, and systems like Zero Install, may be leading the way.
Windows and Mac OS are so “user-friendly” due to proper OEM support. What I’ve heard about OEM Linux is not good news. And an average user’s desire is a computer with a pre-installed OS with full hardware support.
The other point: Linux is freedom of choice for all software. I won’t give that freedom up for anything else. If I don’t need something, I really don’t need it installed with my OS. That’s the Windows way. I had pre-installed Vista on my Toshiba laptop with 1 GiB of RAM, and by default Vista used 850 MiB and the Toshiba utilities used over 100 MiB. So I had 50 MiB left for user apps, and plenty of swapping as a result. After all my tweaks Vista still eats 800 MiB, and I’m sure there are unneeded features, but I have no way to disable them.
With my Arch Linux installation on the same laptop I have 10-13% of RAM used by the OS, so I’m able to run several virtual machines without swapping at all. I’m also able to tweak almost anything, even recompile the kernel without any deep knowledge of the process, thanks to ABS. I’ve also chosen only the software I need, so my ‘ps aux’ list is very small. =)
There is also great community-written documentation, with tips and tricks, on the ArchWiki. And there is a solid number of build scripts (PKGBUILDs) in the AUR for any-purpose software, including packaging tricks for proprietary and commercial products such as NeroLinux.
After all, I’m convinced that Windows and OS X have the greatest usability issue of all: there is no freedom of choice, so I have to spend my computer’s disk space and RAM on unused software. Do I really need that? No.
After all – if you want to use something you need to learn how to use it. Otherwise every little problem will be a great headache. It’s so for some Windows users: “I’ve clicked everything, now I have BSOD. Why?”
So: Linux usability is far ahead. Beginners and end users need a service where they can pay money to have their system configured by specialists. If custom Linux installation becomes a sphere of business, other OSes will have a great competitor in the desktop market. But it may be too expensive for users, I think. =)
This is what happens when you don’t know about SuperFetch. Vista didn’t eat 800MiB, it cached parts of your most used apps to load them faster. When a new application requires memory, Vista gives it up.
Granted, Vista trashe[d/s] your HDD like there’s no tomorrow, but that should be due to the Indexing service.
As pointed out, Apple has an inherent advantage due to controlling both the hardware and the software. Both Windows and Linux can fail to work correctly when faced with certain hardware.
One of the biggest issues with Linux is familiarity: people are familiar with Windows and find Linux difficult because it’s different… People who have never used a computer before, and therefore have no familiarity with anything, will often find Linux easier.
I have introduced Linux machines (cheap/free) to several people in their 60s and 70s who, having no prior computing experience, picked it up easily enough. If you showed any of these people a Windows machine they’d hate it for being different. Several of these people drive, and have been driving cars with manual transmission for years, yet none of them were willing to consider a car with automatic transmission… While there are advantages to a manual transmission, such as greater control and better use of engine power, these people don’t care about that – they just want what’s familiar.
Another thing worth considering is where the article mentions that the CLI is often the better way of doing things… Most inexperienced users will ask for help from someone experienced, and an experienced person is likely to use the more efficient way rather than a slower but simpler-to-learn method – I know I would. Also, when explaining something verbally, a command line is much easier to describe than a GUI.
Commands are not archaic, they are a language that the computer speaks, and mastering it will allow you and the computer to communicate far more efficiently. Communication between two people works best when they both speak the same language, having to draw pictures and point is extremely inefficient and really only useful as a fallback.
Think of a computer like an animal or a small child, it lacks the intelligence to learn to speak fluent english, but it can still understand commands it has been taught and is much better than a human at doing certain tasks.
Well, it depends on what you do. Generally speaking, I always found the KDE UI in particular more sensible and easier to use than Windows XP. Then again, I’ve always used a very wide array of operating systems and user interfaces over the years, and I’m what they’d call an “expert user”. I’m predominantly using Macs right now, but still deal with quite a few Windows XP and Win2003 servers. I still find that there’s functionality of KDE I miss in both, and I find the XP environment actually quite annoying at this point (even in comparison to Mac OS X 10.5).
I wouldn’t say KDE isn’t user friendly so much as that it’s sufficiently different from Windows (yet superficially similar) as to be outside most casual users’ comfort zone.
Aside from the usual argument for powerful but not-easily-discoverable interfaces (such as the Unix CLI or emacs) of “It’s user friendly. It’s just picky about who its friends are.”, I think Linux user friendliness has come a long way.
I think it’s been the case for a *very long* while that Linux has been easily friendly enough for someone to use for work tasks on a daily basis, as long as someone else administers it.
I think it’s been the case for quite a long time that Linux can actually be “friendlier” to both many undemanding users and many “extreme” users. An undemanding, novice user will need a machine with pre-installed Linux and good hardware support (nb. these users would have the same requirements with Windows) but then typically find all the software they need already there and have no wish to install random other things off the internet. “Extreme” users will benefit from the extreme tweaking opportunities Linux provides.
But there’s a large band of power users in between who I think are not sufficiently well served. To a certain extent these users will probably have a lot of power tweaks on whatever OS they had before, which will not transfer, making Linux less attractive anyhow. But I do think that the market of people who, say, want to use a GUI but also to configure conceptually advanced or esoteric things is possibly not so well served under Linux. Also, these are the kind of people who are likely to want to download particular versions of particular apps and install them, yet will see compilation as a waste of their time since they don’t *need* the flexibility of working with source code (let’s face it, most of the time few of us do).
1) Some kind of implementation of the installable (and infamous) .EXEs, able to run cross-distribution.
2) Your favourite nice little app running on Linux.
3) Your favourite hardware coming with a 3-click installation process CD media.
4) UI consistency
EXE’s? Why would we need the Windows executable format? Do you mean installer packages?
My favorite hardware comes with a no-click installation process, “It Just Works”.
Let me introduce you to Office 2007, Google Chrome and iTunes (to name but a few). What was that about consistency?
I think one of Linux’s strong suits has always been that its driver model is much more flexible. Generally speaking you should never need to install a driver for hardware – it should just work. It’s been years since I’ve had hardware not “just work” in Linux.
I think an editor or commenter on this site once mentioned the similarities between app stores and repositories. And it’s true…
The model of “you can get anything within our repository but stuff from outside is hard” has been the subject of critical comments for years that say “But what if you want something else?”. That’s true, it is a disadvantage to some users but to others it’s convenient to have a one stop shop for pretty much everything. Mobile computers, such as the iTouch / iPhone have done very well out of what seems to be an incredibly similar model. Sure, I imagine Apple have done it very slickly but really “There’s an app for that” could have been a Debian (for instance) slogan a decade before Apple used it.
Distros repositories typically have *better* functionality than the app store as you can quite easily add another repository (i.e. multiple app stores). It would be very nice to have friendlier, more intuitive GUIs for repositories presented to users by default on more distros. But the fact remains that we’re seeing in the commercial marketplace that the concept of a centrally-controlled app repository can be successful, even if not everybody likes the particular approach Apple has taken.
Linux is a Kernel. It is hidden from the users. It need not be user friendly.
I assume you are talking about X windows not being user friendly.
I have read the post; you are talking about the other two major OSes, by which I assume you mean MS Windows and Apple. So you are comparing Linux with Windows and Apple.
User-friendliness is subjective. Let me ask you: how many steps (fewest possible) does it take to copy and paste text in Windows? Select the text, right-click, move the mouse to Copy, click on Copy, move the cursor to the desired location (a couple of clicks here), right-click for the pop-up menu, move to Paste and click on Paste.
Under X it is simple: select the text, place the cursor at the desired location (a couple of clicks here) and then click the left and right buttons together.
So you see that X does the same task much more efficiently, with less effort from you, compared to Windows.
There are many more such features where X beats Windows. To me X is more user friendly than Windows.
If you are talking about the nice looks, colors and special effects, then I agree X may not be that efficient. I have a laptop with 2 GB of RAM and I can’t even open MS Word as quickly as I can under Wine on Linux. I have to turn off all the special effects under Aero. So tell me, what is the use of all these special effects if a user can’t use them? (So Windows is not user friendly again.)
I have been using open source OSes since 1992. I can say that open source OS distributors have done a great job.
Can we stop with the “X windows user friendliness” language? X is no more user friendly than GDI is user friendly. Your graphics layer contributes not very much to the overall user experience and zero to friendliness.
OS X isn’t a friendlier system because it does not use X. Not using X allowed them to take certain eye candy shortcuts and (arguably) provided a quicker way to get a smooth (and thus pleasing) graphical experience. None of this affects applications to any significant degree.
If friendliness is the subject, X does not enter the picture. End of story.
Replies about mode switching, flicker, crash recovery and such can go to hell and die. Just because X doesn’t handle these things well right now doesn’t mean it can’t and won’t in the future. These things also contribute little to the user experience and are typically not what users complain (or care) about.
Using CUA keys:
– Select area to copy
– Ctrl+C
– Select target
– Ctrl+V
If you have an app with editable text area, that’s 0-1 clicks optimum.
Everything else feels very error prone and clumsy to me. Especially the X three-button copy-paste, which never worked OK: back in the day, when we had two-button mice, we had to press both buttons simultaneously, and now we have to press down the scroll wheel. Sucks. Luckily this is rarely needed anymore, and as far as I’m concerned it should go the way of focus-follows-mouse…
In summary, I don’t think this is a benefit of X, and not something we should make much noise about ;-).
Unless you need to copy to/from the cmd, obviously, although that’s not a GDI problem, it’s just that the cmd app sucks.
It has always worked for me. Some of us have a three button mouse you know.
You don’t really need the third button in Linux anymore but if focus-follows-mouse is removed I’ll be pissed. Click-to-focus is the most annoying “feature” of Windows to me. I don’t always want to raise a window just to type in it.
It has always worked for me. Some of us have a three button mouse you know.
I see your three button mouse, and raise you a five button mouse…
Well – what can I say?
People keep comparing Linux to Windows, as if Windows has set some standard everything has to comply with. Problem is, that’s not the case! You see, there are different ways to do things. Windows does not always have the best way to do things, and Linux also has some quirks. To say Linux is not user friendly depends heavily on what you expect to see. If you expect Linux to be some Windows “clone”, you will easily say Linux is crap, because of the simple fact that Linux is NO Windows “clone”. If you turn things around, you could just as easily say Windows is total crap because it acts differently from Linux. Windows is a very bad imitation of Linux.
Let me illustrate it with a little story.
A man uses a video recorder and is very happy with it. He loves his movies and uses the apparatus very heavily. One bad day his recorder breaks down and he cannot play his beloved movies any longer. A relative feels sorry for him and wants to surprise him. He buys a top-of-the-range DVD player/recorder and gives it to the man with the broken video recorder. A few days later the relative calls to see if the man likes the DVD player. To his surprise the man is angry and asks if it is some lousy joke or something. The relative is totally baffled and asks what the problem is. The man replies that the DVD player is a piece of junk. It’s total garbage. None of his tapes fit in the damn apparatus!
A DVD player is no video recorder. Linux is not Windows…
Despite all the bashing and bad-mouthing of Linux, it seems there is one thing nobody can ignore: the use of Linux is still growing! If Linux were such a bad and unfriendly OS it would sink away into non-existence, but it does not! A growing number of people are discovering Linux, and they like it! And that’s something nobody can ignore or deny!
Our church took the plunge and switched to Linux. I had already switched from Internet Explorer, and had familiarized the staff with OpenOffice. Let’s just say I keep getting reminded how one needs to refine the definition of “user friendly.”
Linux replaced XP Home. There was one Windows user, the administrator, and there was no login password. Multiple people used the machine and had done something to the OS, so it would no longer accept updates without blue screening. Our secretary had trouble with the concept of user name under Linux. She also had trouble for a few days logging in, as the password wouldn’t work for her (it worked fine when I did it). She also had trouble understanding that the user password and the e-mail password were different things.
From a point-and-click perspective, I made the desktop similar to Windows, except the icons to launch programs were bigger in Linux. I even made sure there was a “My Documents” icon on the desktop. The concept of multiple desktops left the secretary nonplussed, but after some fiddling around she thought that four desktops was too many, though two might be useful (she got two).
Over the years I have learned that program launching from the KDE or GNOME equivalent of the “Start menu” is better than on Windows, as Start menu items tend to be sorted by company on Windows and by category on Linux. That advantage vanishes in the “real world” of converting Windows users; they almost never use the Start menus to launch programs. It’s either on “quick launch” or it doesn’t exist. Setting up quick-launcher-like options on Linux saves a ton of trouble.
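On modern Linux desktops those launchers are just plain text files, so setting them up can be scripted; a minimal example for a hypothetical app named `foo` (the Exec and Icon values are placeholders, not a real program):

```shell
# A minimal .desktop launcher; dropped into ~/.local/share/applications
# (or onto the desktop), most desktop environments will pick it up.
cat > foo.desktop <<'EOF'
[Desktop Entry]
Type=Application
Name=Foo
Comment=Hypothetical example application
Exec=foo
Icon=foo
Categories=Utility;
EOF

head -n 3 foo.desktop
```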
Getting staff used to programs available on Linux prior to the actual move to Linux saves considerable trouble. The switch to Firefox is easy. In our case, changing from Microsoft Office to OpenOffice was another story. Two people were involved.
Our secretary got basic training on text formatting (“no, you don’t just keep holding down the space bar to push text around”). A second person got her own user account. She was a competent Word user, and objected to its absence, because she needed her files saved in DOC format. Here separate user accounts let me set OpenOffice to default to the Microsoft versions of the files for her. She had no more objections.
My conclusion: If you put Windows XP-Pro and Linux side by side, I think Linux would be slightly easier to use from basic user perspective. Neither enjoys a big advantage. The problems that I encountered would have happened had I changed to XP-Pro from XP-Home. I suspect that they also would have happened on a Mac. Increased safety and security impacts user friendliness.
Sounds more like you decided for them.
Sounds more like you’re treating the symptom instead of the real problem: dumb users. But then, this is a church we’re talking about, so the people there are probably used to having their mental faculties permanently set to “off”.
While your comments are a little pointed, it’s hard to argue against them.
Linux needs to be more user-friendly how? The current GUI paradigm is 30+ years old, and some people will just never get it. Short of interfacing via voice activation backed by strong A.I., computing will just remain one of those activities meant for smart or otherwise interested people.
A modern distro like Ubuntu would be no more or less user-friendly than the commercial offerings if it enjoyed the same OEM share.
Let’s get something straight: most people are forced to use Windows, and a large portion of them don’t even know the interface well enough to get around. Windows isn’t necessarily user-friendly; it’s just familiar due to market share, and it still takes a Windows geek to fix co-workers’ and grandmas’ computers. And these dolts are hailed as geniuses for the effort. Just goes to show what perceptions are worth coming from the average user.
Indeed. And at the point where the computer will be doing all the thinking, why does it even need a user? Where’s the value-added on the users’ part, if his only contribution is to point and drool?
Like a lot of the other OS-nerds here currently do, I once would try to install what I felt was better software and OSes on the machines of everyone I knew. But, becoming the neighborhood help desk wasn’t really my goal in life.
Some users don’t know and don’t want to know, and IMO, there’s nothing wrong with that. That’s why communication services are heading to gadget platforms like phones. I say that’s good for them and us. They want the services, not a high-maintenance computing platform. For us, terminally-incompetent end users contribute nothing back to the FOSS OSes and, if anything, just end up making them more like OSX and Windows (and, if I thought those OSes were so great, I’d go use the real thing).
In my case, the issues were the kind common to every OS: passwords, formatting, file formats. These aren’t Linux-specific problems; they come with any computer.
A couple of other points: The competent Word user had no trouble working with OpenOffice Writer. She said there was a lot of overlap in concepts. The church secretary has no issue with a point and click interface. Point and click basically works.
The major usability issue I encountered involved security. XP-Home, when set up as a typical home use system with a single user, is easy to use. Any time you enforce user privileges, things get complicated.
Wow – you insulted non-technical computer users, Jews, Christians, Muslims, Buddhists, etc. all in one shot. Any other groups you want to insult? You could probably take a shot at nationalities or income levels next.
Chronically-incompetent computer users typically want their thinking done for them. People that voluntarily file into a church to receive a monologue about the nature of the universe without demanding anything resembling proof are doing the same thing.
At least a choice of operating systems is something people are less likely to kill each other over, though.
No, the vast majority of computer users do not want to think about the computer at all. They just want to think about their work, the thing they are getting paid to do. I think that is the problem with too many tech folks (of which I am one) – they expect users to take the time to be “power users” like they are. I love fiddling with my computers. This weekend I installed Ubuntu 9.04 32-bit and 8.04 64-bit, as well as Solaris 0906 (gave it a shot on my laptop – but no suspend/resume yet). The fact that someone else spends their time dissecting legal cases – but can’t stand the user interface to their computer – doesn’t say anything about their competence. It could be that our interfaces are needlessly complex.
BTW, I am a senior in Computer Science, I have a 4.0, I scored in the top 1% of the country on the ACT test – and I go to church. That means you can throw out most of what I just said. I’m probably a dim bulb. Sleep well!
Nice, well done!
bm3719>
I questioned the nature of the universe and found a loving relationship with an awesome God.
Doesn’t make me turn off my brain, I’m using Linux everyday for general usage, also while I’m learning to program.
By analyzing the contents of your posts I have come to the conclusion that you too are a member of a church.
Why do folks always think there is something wrong with Linux? It’s capable of improvement sure, but name me something that isn’t (aside from banoffee-flavoured ice-cream).
Is it because desktop Linux has only a small market share? It is always going to have a small market share on the conventional desktop, I suspect. The entire product supply chain in Western economies is based on the idea that something has a price. If something does not have a price, it isn’t supplied, and it is therefore up to the user to get hold of it.
This alone will ensure that Linux retains a small share of the market. In addition, Linux has always been for people who want it and who are prepared to make the effort to learn it. Most folks aren’t, which is fine, and sensibly choose Windows or Mac – and pay for it all the way down the line. That supply line, incidentally, is just as keen on pay-for as are Microsoft or Apple. Without it, hundreds of thousands of people would soon lose their jobs.
If Google wades into desktop Linux, it will be for the sole benefit of Google and its stockholders. That’s what monster corporations do. There may be some trickle-down spin-offs, but there is no guarantee the results will be useful or desirable outside of whatever magic wand Google has in mind to grease the delivery of adverts and streaming media right into your desktop.
I guess it’s just as important to ask where the desktop OS as we’ve come to know it is really going and whether the way we use computers and smart devices generally is going to change the game in the coming couple of decades. Linux may turn out to do brilliantly in a new world, for example, even if it is very different from the Gnome/KDE desktop Linux so many folks like to moan about today.
XP and Vista ? Just kidding
OK, I tried ALL the major distros on a supposedly supported Dell Inspiron (Ubuntu, Fedora and Xorg specifically say so); I tried SUSE, Ubuntu, Debian, Mandriva, etc., and ONLY Knoppix could configure the graphics. Now, that is bog-standard Intel graphics, and if this doesn’t work in the year 2009 it has to be XP again. (Intel 845 – Xorg also wrongly believes they sorted this out, but they did not.) So I keep fooling around with all sorts of distros in a VM, but I don’t see myself replacing Windows 100% any time soon. Until a year ago, NO distro could boot a Shuttle SN25P with the nForce4 chipset – I tried them all periodically for 3 years. Go figure. (But I do run them all, incl. PC-BSD and Solaris, in a VM on that box.)
This article was an excellent analysis, with valid points along the way. However, I feel that the conclusion that Linux-based operating systems are not as user-friendly as MacOS X or Windows is wrong.
From my experience, distributions like Ubuntu, openSUSE, Fedora and Mandriva are *at least* as user-friendly as Windows and lag MacOS X by *at most* a small margin.
The two problems I have found (and these are not trivial problems) are installing Linux-based OSes on hardware that was designed only for Windows, and users trying to run certain “must have” Windows programs and games on Linux-based OSes. People also tend to value the Windows and MacOS experiences highly because that is what their training in school was based on, or because they have become familiar with and heavily invested in one interface over another.
None of these problems are intrinsic to Linux-based OSes or the Open Source development model in general.
I agree with David as I agree with Tristan Nitot:
“Geeks are different from 97 per cent of the population.”
Read
http://www.itpro.co.uk/124832/love-and-usability-drive-firefox-succ…
It has been said here: “see VLC and Firefox”. I also agree with that. VLC and Firefox are pure win. Why? The answer is clear for me: They have been built with 97% of the population in mind.
It has been said here that distros do not try to unify things enough and don’t care about the long term. This reminds me of the answers to
http://lists.debian.org/debian-project/2009/08/msg00092.html
, where, in response to what is for me an extended hand toward collaboration and joint effort with the mid-to-long term in mind, answers such as “there is nothing wrong now” and “upstreams don’t care about what arrives to users and neither should” are given.
Considering all this, the surprising thing would be that Linux ever rose to 3% of desktop usage with this attitude.
I’ve just spent 2 days trying to get two different cellular broadband services working… unsuccessfully… with Linux.
The problem is clear. Linux is badly fragmented. Fragmented enough that 3rd parties just shrug their shoulders and support the targets that are easier to support. Windows and Mac. I’ve been an advocate of Unix on the desktop for 20 years. And I’m about to give up. I hate Linux today. To the point that I considered just buying a copy of Vista and having done with it all. (Those who know me can feel free to be appropriately shocked now.)
It’s the anarchy. The community, a tiny minority of the greater world, is so busy insisting that everything has to be done in their own way… except that they can’t even agree on a single way, or even 2 ways of doing anything, that the greater world just laughs and walks on.
We’re never going to win because we sabotage ourselves so badly, and I’m thinking I should just cut my losses – my 21 years’ worth of them – and join the human race.
Wireless-less and tired in Oklahoma City, and $529 poorer for having made the attempt.
-Steve
P.S. This “reality” thing is really weird.
Steve, take a deep breath and calm down. You seem to have been under a lot of stress lately and you’re not acting rationally.
Seriously, wireless broadband is a pain in the ass on any platform. Even when it “works”, it doesn’t actually work well. I liken mine to having the speed of a 19200 modem but none of the reliability.
$529!? For cellular wireless broadband? Holy crap they’re ripping you off over there. My, admittedly prepaid, wireless broadband cost me ~$50 including the Huawei modem thing and 5 hours of airtime. Works flawlessly with Linux (Ubuntu) I might add.
Well, did you try Sprint? As much as I hate them, I was using the Ovation UV720 (at least I think that was the model) with openSUSE and Fedora. There is unofficial Sprint support for it on Linux. It’s a USB modem, and it worked when I passed through Oklahoma City in September 2008.
Funny that you’re mentioning that. My wireless broadband USB dongle works fine in Ubuntu but not at all in Vista.
A lot of comments, so I’ll just add my points:
1. A lot of the so-called user-friendliness in Linux is not really friendliness but GUI wrappers around good old CLI programs.
2. A lot of the so-called user-friendliness in Linux is pursued only to cater to the newcomer masses, often at the expense of old-timers.
3. A lot of the so-called user-friendliness in Linux works in exactly the opposite direction: what was once a user-friendly CLI interface is now sometimes a complex and morbid mess of GUI interfaces and huge XML files.
4. My hypothesis is that the recent trends in so-called user-friendliness in Linux will lead to the very same symptoms witnessed in the Windows world. As an analogy: once there are enough equivalents of the Windows registry and management of the system can no longer be done in the decades-old UNIX fashion, the system is no longer user-friendly to administer.
Beware of the forthcoming idiot box.
Unfortunately, for most casual users “user-friendly” in fact means “idiot-proof”. They expect a candy-sweet, glowing thing that can also guess their expectations.
I have quite the opposite definition of user-friendliness:
It’s straightforward *simplicity*.
You can build a candy-sweet interface, but unless it’s built on top of a rock-solid, clear and well-structured base system, you get nowhere.
Personally I prefer a more straightforward approach to “user-friendliness”. I prefer simple yet highly specialized system structures able to build that rock-solid base: clear configuration files and directories, simple tools, and “you get only what you need and nothing you don’t need”. That means I prefer to build the OS from *blocks*. I don’t like getting a whole bunch of crap I won’t be using anyway.
That’s the user-friendliness I know, if you ask me.
If you were to give mom a CD of Ubuntu and one of Vista, which would be more friendly? Or make it a fresh install of each. I know from personal experience that people are often quicker to befriend Ubuntu than Windows.
Correct. My mom uses a highly customized Arch installation; same goes for my grandma …
My father uses Debian. My kids use Linux in college. It wasn’t a case of them installing anything. I did the installing.
Had they just purchased a machine, it would have run Windows, and it would have been loaded with crapware. I could have spent the time removing what wasn’t needed and locking down the system, but installing Linux takes the same amount of time. Once Linux is installed, there isn’t that much maintenance needed. Does that count as user friendly?
When it comes to the larger issue of user friendliness, I tend to avoid the “user friendly” distros and go with Debian. I find it “friendlier” to install a base system, then add X and a desktop environment. With KDE, I install kdebase and add other items on an as-needed basis. That tends to produce a lean, responsive system. Yes, such an install routine requires a basic knowledge of what you want, but if you have that knowledge, Debian is one of the most “user friendly” distros out there.
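That routine can be sketched in a few commands. A sketch only: the application package names here are illustrative, and Debian package names shift between releases (kdebase has since been split up), so check the current archive before typing any of this.

```shell
# Starting from a fresh Debian base install, as root.
apt-get update
apt-get install xorg kdebase   # the X server plus the core KDE desktop
apt-get install k3b amarok     # then individual applications, as needed
```

The point of the approach is that nothing arrives on the system unless a line like the last one pulls it in, which is where the lean, responsive result comes from.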
So, God bless “user friendly.” I just wish I knew what people meant by it.
I had to use Windows 7 for the first time the other day. I couldn’t find the control panel that downloads and installs applications (handling dependencies automatically). I actually had to manually go to websites, download ‘installers’ and ‘run them’. Weird. It didn’t come with an office application, and the browser was rubbish. What gives?
I have used GNU/Linux systems almost exclusively for several years now. I recently used Windows on a friend’s machine. The network went down; I found out later it was an ISP problem, not Windows. Still, I found it harder to figure out what was wrong, not easier, even though I had spent several years in the past using Windows exclusively. The point is, it’s about familiarity. Linux-based systems are now easier, and thus more user-friendly, to me.
To say “Why is Windows more user friendly?” is to say “I use Windows easier.” I can do my daily work faster and easier now with Linux than I did with Windows years ago. I’m not saying Linux is faster, I’ve gotten faster myself at typing, finding files, and researching with search engines and databases. Linux is more user friendly to me anyway.