More than a dozen technology companies, including IBM, Red Hat, and Novell, are planning to support a new integrated server and desktop Linux standard to be unveiled at next week’s Linux Desktop Summit by the Free Standards Group. The FSG is a nonprofit organization that has worked for years on a number of open standards, including a server specification called the Linux Standard Base.
But it will be hard work to create a standard that is compatible with both KDE and Gnome, said Bruce Perens… In the end, the Linux providers may be forced to simply choose one desktop, he said.
Considering that UserLinux was supposed to provide something approaching a finished out-of-the-box distribution to do this, I don’t know why they’re talking to Perens. UserLinux failed utterly, and it never even approached problems such as providing graphical front-ends to many of its functions.
Many distributions may well have to pick one desktop to focus on, but the criteria that need to be used certainly aren’t the ones Perens used.
The LSB chose a desktop at the beginning, but after some KDE troll power they decided to include KDE as well, and now they are facing the consequences, just like I predicted.
The LSB chose a desktop at the beginning, but after some KDE troll power they decided to include KDE as well, and now they are facing the consequences, just like I predicted.
Choosing a desktop (or more precisely, a toolkit) based on what criteria? Do those criteria actually matter a damn to enterprises and users in general out there? That’s aside from any LSB politics or trolling from either side.
That’s my point.
Choosing a desktop (or more precisely, a toolkit) based on what criteria?
The no-strings-attached criterion.
Do those criteria actually matter a damn to enterprises and users in general out there?
I guess the answer is this:
Perens…In the end, the Linux providers may be forced to simply choose one desktop, he said.
The no-strings-attached criterion.
Yep – which matters not one jot to enterprises, companies, organisations or users when considering the relative quality of the toolkits they’re using.
I guess the answer is this:
Perens…In the end, the Linux providers may be forced to simply choose one desktop, he said.
Which brings us right back to square one – based on what criteria?
The no-strings-attached criterion
The discussion has shown that no licence other than pure public domain matches this criterion, as any licence has implications for the possible usage scenarios, some more than others.
My guess is that the decision to also include Qt is based on the likelihood of it being used, especially by closed-source software vendors, due to its proven Windows/Linux cross-platform capabilities.
And the article includes flawed, or rather imprecise, sections like this:
“Matters are not helped by the fact that Linux supports two competing desktop environments, called Gnome and KDE, making it hard for developers to create one piece of software that will run on all versions of Linux.”
The article’s target audience might be the cause for this wrong wording, but I am not aware of any software that will not run if its suggested desktop environment is not the active one.
There is usually only a difference in the level of integration, i.e. usage of desktop framework services. But since such services are moving to an abstracted communication model, quite like web services, the most likely outcome is that applications will use some form of high-level API, for example one of the desktop framework APIs, which in turn will delegate the work to the currently active service implementation.
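Roughly what that delegation could look like, as a hypothetical Python sketch; none of these class or function names come from a real desktop API, and a real implementation would more likely discover the active service over D-Bus:

```python
import os
from abc import ABC, abstractmethod

# Hypothetical illustration of the delegation model described above: the
# application codes against one high-level interface, and the backend for
# the currently active desktop is picked at runtime.  All names are invented.

class TrashService(ABC):
    @abstractmethod
    def move_to_trash(self, path: str) -> None: ...

class GnomeTrash(TrashService):
    def move_to_trash(self, path: str) -> None:
        print(f"delegating to a GNOME backend to trash {path}")

class KdeTrash(TrashService):
    def move_to_trash(self, path: str) -> None:
        print(f"delegating to a KDE backend to trash {path}")

def active_trash_service() -> TrashService:
    # Crude detection of the running desktop via the environment; a real
    # implementation would ask a shared freedesktop.org service instead.
    desktop = os.environ.get("XDG_CURRENT_DESKTOP", "").upper()
    return KdeTrash() if "KDE" in desktop else GnomeTrash()

active_trash_service().move_to_trash("/tmp/old-report.odt")
```

The application only ever sees the high-level call; which desktop actually performs the work is a runtime detail.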
I agree; these days desktop integration is easier thanks to FreeDesktop.org. In the end, maybe software vendors will only have to pick which graphical front end to use.
The article’s target audience might be the cause for this wrong wording, but I am not aware of any software that will not run if its suggested desktop environment is not the active one.
Well, in a slight defence of Perens and the article, I imagine they are saying that it may be easy to integrate relatively simple things, but once you get ISV software that integrates more heavily with the desktop, with specific technologies and applications – that’s where things get difficult. Do you replicate an entire API on top of both desktops (in effect a desktop API in itself) just so ISVs have one thing to port to, duplicating effort and software?
That’s where, for me certainly, it gets difficult.
The biggest reason Linux usage has increased in the past several years is due to standards adoption.
Look at all of the amazing work freedesktop.org has done to create truly cross-desktop standards. Look at all of the work Robert Love and the GNOME guys have done for Project Utopia to make hardware that “Just Works TM” out of the box. Now KDE and other desktop environments are adopting these same technologies, and ideas are being shared. Instead of everyone reinventing the wheel, people are using the collaborative nature of open source to their advantage.
Bonobo in GNOME was a great idea with an over-engineered implementation that is being largely replaced by D-Bus. DCOP in KDE also isn’t the best form of IPC, and KDE is using D-Bus more every day. Both projects are improving faster because they are helping each other.
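To make that concrete, here is a minimal sketch of that kind of cross-desktop IPC using the dbus-python bindings (assuming they are installed and a session bus is running); it calls the freedesktop.org notification service rather than going through Bonobo or DCOP:

```python
import dbus

# Connect to the per-login session bus and get a proxy for the
# freedesktop.org notification service.
bus = dbus.SessionBus()
proxy = bus.get_object("org.freedesktop.Notifications",
                       "/org/freedesktop/Notifications")
notifications = dbus.Interface(proxy, "org.freedesktop.Notifications")

# Notify(app_name, replaces_id, app_icon, summary, body,
#        actions, hints, expire_timeout)
notifications.Notify("demo", 0, "", "Hello from D-Bus",
                     "The same call works under GNOME or KDE.",
                     [], {}, 5000)
```

The caller does not care which desktop provides the service behind that bus name, which is exactly what Bonobo and DCOP could not offer across desktops.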
Instead of “Your Linux sucks, mine is better!”, it’s more like, “Your Linux and my Linux are different, but we can help each other out.” The more developers/users realize this, the more of a behemoth open source will become.
I agree.
First there has to be some divergence; some new stuff has to be tried in more than one way.
Then the best concepts and implementations of those concepts survive, and the different platforms converge on these best ones (while diverging at the same time in different, new areas).
I used to believe that the closest possible integration, stabilization, and exposure of as many ABIs as possible was the only way to make Linux a serious contender.
I was constantly disappointed by constantly changing APIs, disregard for backward compatibility, and fragmentation. I was fascinated by the sophisticated infrastructure built around COM and Miguel’s efforts to emulate it. I was fascinated by MS’s way of building complex software on top of complex frameworks. I even had ideas for another component model that would bring unity.
Years passed and nothing happened in this department.
Did any disaster happen? No. We have two slick desktops that are perfectly productive environments. They get more and more integrated and get by without any of those technologies.
Gnome ditched Bonobo and is alive and kicking.
KDE is doing wonders by squeezing the last drops out of much simpler technologies that are proven to work well.
You may not be able to copy/paste between various apps, embed documents, or even copy pixmaps system-wide, but in the long run nobody seems to care.
Generalization of those office-centric technologies enjoyed at most a modest success, even in the Windows world, and even caused some damage (ActiveX).
Now I’m starting to think that the trend to publish every single, tiniest piece of OS functionality as an extension API has some madness to it and will backfire, as those guaranteed binary backward-compatible APIs turn from an advantage into a liability. They drag innovation down and increase bloat. They cover for the most evident disadvantage of being a closed-source OS.
Maybe what suffices is a set of basic integration APIs (mostly in terms of basic user information exchange and synchronization). They will suffice, because most OS-level plumbing and fixing tasks are more than well handled by open source solutions.
How much would be lost if the desktop were monolithic and set in stone as a single blessed technology? All the great surprises like Xfce and ROX, with stunning potential, could not exist. The lack of fruitful competition on ideas between KDE and GNOME would have left us with a perhaps less rough but boring environment that starts to lose community interest.
Choice does not always incur damaging fragmentation, especially if handled by responsible people.
Yeah, but will they do it? Why can’t I freely drag&drop text, or copy and paste non-text, or even do it with file operations? Will I still be complaining about this in 2010? Will people still think they need to work on the applications, that they just aren’t there yet?
For at least the last year, it’s mainly been the little details that OS X and Windows get right, but a typical DE doesn’t, that make it hard to stay on Linux, despite Linux’s superiority in many areas (some of which are possible to get in OS X, but often with difficulty).
All the stuff to enable it is in place. It’s just apps that for some reason don’t bother supporting it.
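As a small illustration that the plumbing really is there: the X clipboard can already carry arbitrary MIME targets, so a rough Python sketch using the external xclip tool (assumed to be installed) can put a PNG on the clipboard today; whether pasting then works depends entirely on whether the receiving app bothers to request the image/png target:

```python
import subprocess

# Offer PNG data on the clipboard under the image/png target via xclip.
# Pasting only works in applications that ask for that target, which is
# exactly the gap being complained about above.
with open("screenshot.png", "rb") as image:
    subprocess.run(
        ["xclip", "-selection", "clipboard", "-t", "image/png", "-i"],
        stdin=image,
        check=True,
    )
```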