Yes it would be nice if X.org could use OpenGL directly for its display and composition, but to date, nobody has made this possible. Is it wrong for a business to make it so? Since when does developing software for GNU products mean that they aren’t allowed to do it privately? If Novell is developing XGL behind closed doors, and paying the developers to build it… Where’s the problem?
Last time I checked, Open Source contributors had the option of making XGL a reality. Numerous different projects have been born of the same idea, and quickly euthanized due to lack of interest, talent, resources, or all of the above. This is not a case of a person or company stealing licensed open source code and re-branding it for profit (ex: CherryOS). This is a company building what is essentially a plugin for an open source project. Hundreds of companies do this as their core business.
The reality of Open Source project management is chaotic. It’s not as streamlined and “open” as proponents of the movement make it out to be. In fact, it is very cluttered with people who want to contribute, but just don’t meet the criteria. A perfect analogy can be made with the recent hurricanes in the States. After the damage had been done, millions of people flocked to the devastated areas to offer help. Now imagine all of those volunteers are developers and contributors wishing to help build XGL, and only 1% of them are capable of actually contributing in a helpful way. Would you:
A) Announce the project, open the project, wait for applicants, scan through the applicants, reject the bad candidates, find the 1% of useful contributors, organize contributor roles, organize the roadmap and start releasing code?
Or:
B) Skip the first 5 steps and get right down to it?
I find it shocking that certain member(s) of projects that depend on X.org would have a problem with this. I haven’t seen one speck of a KDE-mounted movement to develop what they crave. In fact, all I’ve heard from either the KDE or GNOME camps is wishing. Wishing for a more capable and modern version of X, or maybe even a better alternative. Most may remember that Red Hat is doing the exact same thing as Novell right now, only Luminocity is for GNOME. So where does KDE fall into this? Way behind, unfortunately. So far you have two companies who have been very successful in developing Linux-based OSes and products. Red Hat apparently has working demos of a GNOME acceleration technology, and Novell hasn’t specified what XGL is capable of just yet, but it’s no secret that their Novell Linux Desktop brand is centered around GNOME. If both companies make OpenGL acceleration work for their products and exclude KDE, KDE is in serious danger of being overlooked when it comes to choosing a look and feel for your desktop. Would you want the beautiful sight of water rings rippling across your desktop when you move your mouse? Or do you want KDE, with “vanilla” 2D looks and simplistic graphics features?
Please do not mistake any of these remarks about KDE as being discouraging or mean-spirited. I am a KDE fan/user myself; however, I do realize the possible danger that it could be in. I also know that it would be very hard for me to resist an OpenGL-accelerated version of my favorite Linux distribution running GNOME instead of KDE. I am a die-hard Suse user and have been for years, but if Red Hat makes it out the door with Luminocity first, I will be the second or third on that order list to try it out for myself, as I’m sure most Linux users would. So if Red Hat has chosen to work on GNOME acceleration, is Novell doing the same? It seems not. So far it seems like Novell is working on a “plugin”, if you will, to X.org that will enable hardware graphics acceleration on the display server itself. This could be good news for KDE after all, so why would that previously mentioned KDE contributor be complaining about the possible saving grace of X? After Windows Vista is released in 2007, both major desktop OSes will have graphics-accelerated interfaces, leaving X in the dust until someone develops similar technology.
Apple developed the best-looking (and possibly best-performing) graphics display system available to date. I don’t personally use Mac OS X, but I’m not foolish enough to try and argue otherwise when it’s so perfectly clear that Mac OS can graphically do everything X users wish X could do. I don’t think anyone could argue otherwise. That said, Apple does not furnish code for its display server to anyone. It is strictly closed source and always has been, and most likely always will be. People pay for that display when they purchase that operating system as a whole. For those that don’t know, Mac OS X runs on top of a (mostly) Open Source OS called Darwin. Darwin is the bastard of Unix and BSD which was developed by Apple to be the heart of OS X. Darwin was developed in-house by Apple behind closed doors and later released to the public. That is to say, Darwin is Open Source, and available for download at http://developer.apple.com/darwin/. Everyone knows that you’ll never find Apple’s display managers in that code for Darwin, because Apple struck a chord where nobody else had when they released OS X. As it stands, all other display servers are years behind.
So is Apple wrong for doing this? Are they “evil” for making a product which people want to use? Is it bad that they want to sell things for profit that tie into Open Source projects (of their own making, mind you)? I think most people will find that Novell is well into the ethical clear on this issue. If nobody else is going to build an accelerated X server on their own time, and Novell wants one badly enough, they have every right to do it themselves. You can’t complain because your friends built a fort and you don’t have one.
This editorial was submitted to OSNews’ news submission form by an anonymous user.
If you would like to see your thoughts or experiences with technology published, please consider writing an article for OSNews.
I don’t need this stuff. Sure it’s fun to play with, but for day-to-day stuff I find it more annoying than anything.
The desktop used to be a tool to get to other programs, but now we need fancy animations and things on the desktop. Why are people wasting so much time there? And don’t say we need the speed. Anything recent can already handle a variety of graphical special effects.
New 3D effects without a new paradigm for the desktop and how we interact with it is just wasted time.
Maybe you don’t, but the thing that pisses me off EVERY time I use Linux is just how jerky the UI is.
Accelerated X is more than just wobbly windows (and that is something that would be silly to actually keep – it’s likely there just to show what CAN be done, not what WILL be in the actual implementation).
Going from OS X’s beautifully double-buffered, OpenGL-accelerated Aqua, to Windows being lucky not to split windows as you drag them quite as badly as in the past, to some Linux desktops where window movement is just appalling – please.
Linux needs this, and updated workspace managers etc. will also come in useful.
I don’t need this stuff. Sure it’s fun to play with, but for day-to-day stuff I find it more annoying than anything.
If it’s annoying, then the designers aren’t doing their jobs properly. Putting the display server on GL isn’t about fancy animations, it’s about a number of very practical human interface issues:
1) It’s about making anti-aliasing pervasive on the desktop. There is no excuse in 2005 to not have a fully anti-aliased desktop. NeXTStep showed us the way decades ago, and they were running on ridiculously primitive hardware by today’s standards.
2) It’s about polishing the UI by eliminating flicker and tearing. Again, this is a very old concept, but has yet to reach the desktop except in OS X. Flicker-free animation has been the standard for games since I loaded my first NES cart, and probably a good deal longer. (A minimal code sketch of the double-buffering idea appears just below this list.)
3) It’s about enabling user interface designers to come up with new ways of interacting with computers, inside the existing WIMP paradigm. OS X’s animations are actually a huge part of this. Fundamentally, animations are about establishing transitions. Transitions are important in a UI, much like they are important in a story or presentation, because they help the user track the changing state of the desktop. Well-done animations make transitions more natural, and enable the user to figure out where everything is without thinking as much about it.
4) It’s about enabling artists to create more graphically rich user interfaces. I don’t know about you, but all else being equal, I’d much rather read a polished, glossy magazine than a simple, drab newsletter. The computer interface is no different.
Ultimately, it’s about providing the mechanisms to allow designers to make interfaces that are both more pleasant to use, and more efficient to use (through a reduction in cognitive load).
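To make point 2 concrete, here is a minimal double-buffering sketch in C using GLUT (purely illustrative; nothing here is from Xgl itself): all drawing goes to an off-screen back buffer, and glutSwapBuffers() flips it to the screen in one atomic step, so a half-drawn frame is never visible.

#include <GL/glut.h>

static void display(void)
{
    glClear(GL_COLOR_BUFFER_BIT);      /* draw into the back buffer */
    glColor3f(0.2f, 0.6f, 1.0f);
    glBegin(GL_QUADS);                 /* stand-in for "a window" */
    glVertex2f(-0.5f, -0.5f);
    glVertex2f( 0.5f, -0.5f);
    glVertex2f( 0.5f,  0.5f);
    glVertex2f(-0.5f,  0.5f);
    glEnd();
    glutSwapBuffers();                 /* atomic flip: no flicker, no tearing */
}

int main(int argc, char **argv)
{
    glutInit(&argc, argv);
    glutInitDisplayMode(GLUT_DOUBLE | GLUT_RGB);  /* request a back buffer */
    glutCreateWindow("double-buffered");
    glutDisplayFunc(display);
    glutMainLoop();
    return 0;
}

Applying that swap to every window on the desktop is, in essence, what flicker-free compositing amounts to.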
I agree with everything rayiner said, but the thing he missed (sorry if you are a ‘she’, rayiner 🙂 ) is that when most people are using the desktop they have an extremely powerful processor sitting on the graphics card doing nothing.
Xgl will offload much of the graphics processing to this currently unused processor, freeing up the load on the CPU!
1) sorry but i don’t buy the need for that. maybe i’m old hat but i just don’t buy it.
2) can be dealt with without the need to go opengl. what is needed is some buffering of the desktop graphics. going opengl for that is overkill…
3) maybe so. but again i wonder about the need to go opengl to handle animations. 2d acceleration has been around a very long time.
4) just as long as we don’t end up with a polished piece of glass that will shatter if you so much as look at it funny. build it to work reliably first!
as for cognitive load: silly me, but i find that simple recognizable shapes help there more than a million and 1 shades of any color.
having the ability to create buttons in 1001 different shapes doesn’t help much. it just means that i have to remember what the button is in that interface vs this interface…
This is so true and well-written, I had to blog about it. http://djst.org/blog/2005/12/22/animated-desktop-for-the-sake-of-us…
and that is what we call a lack of vision
compositing is designed to speed things up, to make actions possible. these actions can and should serve a purpose in the arena of user feedback.
not everything comes down to useless animations. but you are probably only used to windows so it’s immaterial.
Why move away from the command line? Wasn’t Win 3.1 good enough? What acceleration provides is the capability to innovate in ways that couldn’t previously be done.
While there will always be uses of new technology that have no merit other than looking pretty while taxing your computer, advancements like OpenGL acceleration have given us the tools to create more usable interfaces that would otherwise be impossible to create. And often you need the technology in place so you can experiment with it to find a new paradigm. One area I hope they go to is some standard backend that all programs communicate with, which will resize icons, text, and the like dynamically as you change the resolution. Among other things it would help those with poor sight to easily move to higher resolutions.
Also, as others have mentioned, there are immediate practical benefits as well, with anti-aliasing and the elimination of problems like flickering.
One technology I believe will be key in the future is storing normal (not game) window contents on the graphics card in vector form.
People keep missing the fact that Cairo antialiases objects to a fixed-size buffer. If the window manager transforms this buffer in any way other than a straight copy, all of the antialiasing is lost. Even simple scaling ruins the antialiasing computation.
Storing the window contents in vector form lets the antialiasing computation be deferred (and done on the GPU) only after the window manager has decided on final window placement. Doing this requires GPU-based glyph bitmap generation, but this has already been demonstrated to work.
Yes, all of this can be done on the main CPU and that line of thinking gave us WinModems!
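Roughly, the point in code (a hedged cairo sketch with made-up sizes, not anyone’s actual pipeline): apply the transform to the vector description before rasterizing, so the antialiasing is computed at the final size instead of being smeared by a later pixel-level scale.

/* Sketch: why transforms must happen before rasterization (cairo).
 * Shapes and sizes here are invented for illustration. */
#include <cairo.h>

int main(void)
{
    /* Wrong order: rasterize a shape at 100x100, then let the window
     * manager scale the pixels 2x -- the smoothed edge pixels get
     * stretched, and the antialiasing computation is ruined. */

    /* Right order: transform the vector path first, rasterize once. */
    cairo_surface_t *surf =
        cairo_image_surface_create(CAIRO_FORMAT_ARGB32, 200, 200);
    cairo_t *cr = cairo_create(surf);

    cairo_scale(cr, 2.0, 2.0);            /* scales the path, not pixels */
    cairo_arc(cr, 50, 50, 40, 0, 6.2832); /* circle described pre-scale */
    cairo_set_source_rgb(cr, 0, 0, 0);
    cairo_fill(cr);                       /* AA computed at the final size */

    cairo_surface_write_to_png(surf, "aa-right.png");
    cairo_destroy(cr);
    cairo_surface_destroy(surf);
    return 0;
}

Deferring rasterization until after the window manager’s transform is exactly what keeping the contents in vector form buys you.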
Simple example:
Ubuntu has this gksudo thing that shades the desktop when the user needs to submit a password. The shading is done to make it clear that you can’t do anything else while the dialog is showing.
The problem is that this shading effect has to be done in a way that makes it essentially just a full-screen screenshot, hiding the actual state of the desktop.
With the help of OpenGL an effect like this could be implemented so that you still see the real desktop.
I was thinking that it would be best to put a blurry glass (transparency) effect over the desktop (using shadows to imply height difference) to communicate the input block.
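For what it’s worth, the blended-shade idea reduces to something like this (a hedged OpenGL sketch, not gksudo’s actual code): the real desktop keeps rendering underneath, and a half-transparent quad is blended over it each frame.

#include <GL/gl.h>

/* Sketch: blend an "input blocked" shade over a live scene.
 * Assumes the desktop has already been drawn this frame. */
void draw_shade(void)
{
    glEnable(GL_BLEND);
    glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
    glColor4f(0.0f, 0.0f, 0.0f, 0.5f);   /* half-transparent black */
    glBegin(GL_QUADS);                   /* full-screen quad */
    glVertex2f(-1.0f, -1.0f);
    glVertex2f( 1.0f, -1.0f);
    glVertex2f( 1.0f,  1.0f);
    glVertex2f(-1.0f,  1.0f);
    glEnd();
    glDisable(GL_BLEND);
    /* the live desktop stays visible through the shade, unlike a
     * frozen screenshot */
}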
Now OSNews really has reached a new low.
Letting anonymous trolls post stupid flamebait as articles really is far beyond anything acceptable for a site that anyone could take seriously.
It’s a shame really, but this site is dead.
I don’t call that guy a troll, but I totally agree about the anonymous part. If you write an article (even if it is only an editorial) at least have the courage to give your name/nick.
It’s also sad that OSNews now publishes articles from people hiding in the shadows.
Whether he uses a nom de plume or presents his passport and has it notarized has absolutely no influence on the validity of his ideas.
For all we know he could work at Novell or something.
>Whether he uses a nom de plume or presents his passport and has it notarized has absolutely no influence on the validity of his ideas.
Nothing could be farther from the truth.
>For all we know he could work at Novell or something.
Precisely so. And if he did, such information would be an important factor in evaluating the merits of his argument. Worse, the fact that this “editorial” was provided “anonymously” gives me every right (intellectually) — in fact, requires me — to dismiss out of hand any points made therein.
This “anonymous editorial” is the most bizarre thing I’ve seen in quite some time. “Anonymous editorial” comes close to being an oxymoron; at a minimum it is a non sequitur.
Peter Yellman
So you’re saying that those very same ideas, coming from someone else, would not have the same merit or be equally valid? Thinking that merely working for some corporate entity magically makes someone more intelligent is quite the opposite of the truth.
Some_Name I_Just_Made_Up
It’s funny, people always complain about the anonymous part, but I guess DRM/treacherous computing etc. doesn’t bug them.
Mustafa Aziz Ab-durahman Shishmankizi
I read this article thinking, how could this article be published like this? Don’t the people responsible for this site know anything about technology?
KDE for sure, and Gnome I’m guessing, support the Composite extension. This gives you a hardware-composited desktop. I use it myself with the nvidia drivers, and while it’s working it gives an amazing look and feel.
X.org 7 was just released and I’m looking forward to faster development in the area of hardware acceleration. XGL was just one idea of how this stuff could be done. There are others. I’m sure the best one will emerge on top and that is what we’ll all end up using. Video card drivers, as it stands, are low quality in the open source field, and not quite stable in the proprietary field. This is a huge hindrance to developing a hardware-accelerated desktop. The pieces of the puzzle are being filled in faster than you think.
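For the curious, the heart of what a compositing manager does with the Composite extension looks roughly like this (a minimal Xlib sketch; real compositors such as xcompmgr do far more):

#include <stdio.h>
#include <X11/Xlib.h>
#include <X11/extensions/Xcomposite.h>

int main(void)
{
    Display *dpy = XOpenDisplay(NULL);
    int event_base, error_base;

    if (!dpy || !XCompositeQueryExtension(dpy, &event_base, &error_base)) {
        fprintf(stderr, "Composite extension not available\n");
        return 1;
    }

    /* Redirect all top-level windows into off-screen storage; a
     * compositor can then fetch those buffers and paint the final
     * screen itself, with whatever effects it likes. */
    XCompositeRedirectSubwindows(dpy, DefaultRootWindow(dpy),
                                 CompositeRedirectAutomatic);
    XSync(dpy, False);
    XCloseDisplay(dpy);
    return 0;
}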
Why does it even matter who wrote the article? Just read it and enjoy it … or not.
He’s probably just really tired of getting flamed by everyone every time he writes an honest article.
Insert 2 cents worth: I totally agree that Linux needs this technology. The question isn’t why, but why not?
You probably don’t *need* that Samsung 80-inch plasma TV either …
point is, this person clearly has no idea what he is talking about. gnome/redhat are NOT working on X; cairo uses the available X server implementations – just like Trolltech’s Arthur in Qt, used in the upcoming KDE 4. so KDE will have a fully hardware-accelerated desktop in 2006.
and he also doesn’t understand why it is bad that Novell doesn’t work in the open on XGL. well, it is easy (and has been explained several times): it is just not efficient. they are building it themselves, so the community can not help test it, the developers that will be using it can not start trying and learning to use it, and the developers of the graphics drivers (like Nvidia and Ati) can’t help or integrate it into their drivers.
so – it is a waste of money, as this stupid way of developing just slows everything down. now everyone is waiting for them to release it, and IF they release it, it is still buggy and mostly untested (unless they spend a long time and lots of money to test it) and no-one knows about it.
stupid stupid stupid novell.
It’s not flamebait. At its best it’s a formulated opinion with a rhetorical question. Too bad it was anonymous, as that made me lower the score.
Most may remember that Red Hat is doing the exact same thing as Novell right now, only Luminocity is for GNOME.
Um, no? Luminocity has almost nothing in common with Xgl, beyond the use of OpenGL. The former is an experimental window manager providing eye-candy, the other is an X server. They’re not even exclusive, as far as I know – Luminocity runs fine on Xgl.
Luminocity is a plugin for the X server as a whole to interact directly with the HAL and OpenGL to provide 3D effects for Gnome. XGL is a plugin for X to do the exact same thing, only not necessarily localized to a particular window manager. XGL is NOT its own server; rather, it is a standard X server with GLX operating in an embedded fashion. You can see how Luminocity might be paralleled to make the comparison.
Luminocity is a plugin for the X server as a whole to interact directly with the HAL and OpenGL to provide 3D effects for Gnome. XGL is a plugin for X to do the exact same thing, only not necessarily localized to a particular window manager. XGL is NOT its own server; rather, it is a standard X server with GLX operating in an embedded fashion. You can see how Luminocity might be paralleled to make the comparison.
Luminocity is a toy window manager with a neat compositor. It applies only to Gnome. Xgl is the start of an OpenGL framework to replace X (it needs the old X server now… but only until someone picks up Mr. Smirl’s work on Xegl). One is a framework for OpenGL, the other is a toy to begin learning how to use OpenGL. They are apples and oranges. The only similarity is that they are eye-candy buzzwords that use OpenGL to get their cool effects!
And Thom gave this stupid uninformed diatribe a 10.
Well done Thom!
And Thom gave this stupid uninformed diatribe a 10.
Well done Thom!
I forgot to set the value to 7, pressed “rate” too soon. I’ll look into fixing that.
“Is it wrong for a business to make it so?”
No, and no one claimed it was.
On the contrary, everyone praised Novell for working on it.
“If Novell is developing XGL behind closed doors, and paying the developers to build it… Where’s the problem?”
If you had read the blog by Aaron Seigo you’d know where he thinks the problems are. Now, you of course don’t have to agree with his points, but simply ignoring them does make you look pretty stupid.
“The reality of Open Source project management is chaotic.”
No, it isn’t. Provide proof, or shut up.
“I find it shocking that certain member(s) of projects that depend on X.org would have a problem with this.”
Why, aren’t they allowed to have an opinion? Even if you don’t agree with them, what’s shocking about what Aaron Seigo said?
“I haven’t seen one speck of a KDE-mounted movement to develop what they crave.”
Then you haven’t looked close enough:
http://appeal.kde.org/wiki/Coolness
Oh and some KDE devs wanted to contribute to XGL, but can’t as it isn’t open.
“So where does KDE fall into this? Way behind, unfortunately.”
This whole issue has absolutely nothing to do with KDE vs Gnome. Your effort to start a Gnome/KDE flamewar simply shows that you are nothing but a troll.
“If both companies make OpenGL acceleration work for their products and exclude KDE, KDE is in serious danger of being overlooked when it comes to choosing a look and feel for your desktop.”
Luminocity is just an experimental window manager that is supposed to showcase what is possible, and XGL is an X server, so KDE will run on it. How about getting a clue before writing an article?
I can only repeat it, it’s a shame that such a diatribe gets posted on OSNews.
i agree. a shame it is. i’m no developer, just been reading some stuff about this, and i can clearly see how stupid this article is. surely thom and the other editors should’ve known…
Oh and some KDE devs wanted to contribute to XGL, but can’t as it isn’t open.
Where were they when Jon Smirl made it clear to the community that without support and help from others, he was going to drop development on XGL?
Adam
Ok. To break down your laundry list of complaints.
– You apparently read Seigo’s entry out of context. To put it lightly, he was complaining that people outside of Novell aren’t being allowed input into the development of XGL. So yes, he was saying Novell is wrong for doing this, and NO, Mr. Seigo did not point out any specific issues with the development process, other than his being shut out of involvement.
– This article wasn’t a rant on the problems of OSS development. Every week there is a new post on this very site that displays the strengths/weaknesses. I’ve never heard anyone argue that it was smooth and streamlined. I’ve been involved in dozens of projects over the past 5 years, and never has one been as smooth as a closed development process that is controlled by a management plan. It’s not saying the Open way is bad, it’s just saying that having a clear idea of what needs to get done makes things move faster.
– Mr. Seigo is allowed his opinions, as is the author of this very article. Nothing is shocking about Seigo’s comments, but he’s getting an all-expenses-paid trip to the end of the line with XGL from Novell and he’s complaining about it. It’s just a little nonsensical and kind of crazy to complain about something that (because of the closed development) he has no solid facts about.
– This isn’t a KDE vs. Gnome issue at all. The point was made that Gnome has one functional method of an OpenGL-accelerated interface running, with possibly a second to come. This leaves KDE in a bit of a pickle.
– XGL is NOT an independent X server. XGL is just a name for X with GLX running under the hood. GLX is a plugin, Luminocity is a plugin, but both aim to do the same exact thing. Luminocity isn’t experimental; it’s still being worked on. Read up a bit on that.
– “Why, aren’t they allowed to have an opinion?” Yes they are allowed to have an opinion. So is this guy/gal.
“You apparently read Seigo’s entry out of context.”
How so?
“To put it lightly, he was complaining that people outside of Novell aren’t being allowed input into the development of XGL.”
Where did I state otherwise?
And then he went on to say why such a closed process is a bad idea in his opinion.
However, my point was that the author of an article that slams Seigo for what he said should take into account what he said, not just exclaim “where’s the problem?”.
“This article wasn’t a rant on the problems of OSS development.”
No, it wasn’t about problems with OSS development at all, hence to simply claim that OSS project management is chaotic is an unfounded, unexplained, yet very broad claim. In other words, the author was trolling.
“Nothing is shocking about Seigo’s comments”
Yet the author of the article claimed they were, which was my point.
“he’s getting an all-expenses-paid trip to the end of the line with XGL from Novell and he’s complaining about it.”
Because he wants to take part in the development, and thinks it will actually help with the development of XGL if others could get involved too. Reasonable points, are they not?
“It’s just a little nonsensical and kind of crazy to complain about something that (because of the closed development) he has no solid facts about.”
How do you know what he knows about it?
“This isn’t a KDE vs. Gnome issue at all.”
Yup. Just as I said.
“The point was made that Gnome has one functional method of an OpenGL-accelerated interface running, with possibly a second to come. This leaves KDE in a bit of a pickle.”
And the point is wrong. First, because there is Qt4 with Arthur, and second, because contrary to what the author seems to think, Luminocity and XGL are two very different things.
http://www.gnome.org/~seth/blog/relations
“XGL is NOT an independent X server.”
Xegl is.
Again, you’re cutting and pasting quotes out of context, guy. All your answers are moot when compared to the quotes I replied to you with (in their entirety). However, I will have to re-state that XGL IS NOT AN INDEPENDENT X SERVER. XEGL may be, but we’re not talking about that now, are we? Novell isn’t working on XEGL, they are working on X + GLX = XGL. You seem to like to argue. Perhaps you’re a bad lawyer somewhere.
“Again, you’re cutting and pasting quotes out of context, guy.”
I haven’t. But simply stating it is easier than actually answering, isn’t it?
“XGL IS NOT AN INDEPENDENT X SERVER. XEGL may be, but we’re not talking about that now, are we?”
No, we are talking exactly about this. Sorry, but if you don’t have a clue, don’t post. Thanks.
“You seem to like to argue. Perhaps you’re a bad lawyer somewhere.”
Ah, personal insults. I really enjoy your style…
Being the only person in this discussion who has committed code to both the XGL and Xegl projects, I probably know what they do.
XGL is a transition tool while the rest of Xegl gets built. XGL is not intended as a permanent solution.
But given the politics surrounding X currently, Novell will probably ship XGL as a product even though it is intended as a transition tool.
Thanks for clearing this up.
– XGL is NOT an independent X server. XGL is just a name for X with GLX running under the hood. GLX is a plugin, Luminocity is a plugin, but both aim to do the same exact thing. Luminocity isn’t experimental; it’s still being worked on. Read up a bit on that.
Luminocity IS experimental. It’s basically a tech demo. The creator one day hopes that the cool effects of its compositor might end up in Metacity, but it is not a replacement for Metacity, nor will it be.
Xgl is an X server implementation that, rather than directly accessing chip specific hardware drivers, does its low-level drawing using OpenGL calls. That means Xgl is functionally equivalent to a traditional X server, it just uses a different rendering path. Put another way, Xgl is to X11 as Glitz is to Cairo: it provides the same APIs rendered in a much smarter way.
Luminocity, on the other hand, is a compositing manager / window manager fusion that composites using OpenGL. Compositing and window managing are all about what you do with client-rendered windows. Luminocity doesn’t know what’s inside windows, and it doesn’t care. Xgl, on the other hand, I would characterize as primarily being about how the contents of windows are drawn (in this case: quickly and with less CPU load, *grin*). Xgl can do some other non-inside-window things like drop shadows, but I’m going to argue later those are mostly expedient demos of cool technology and Xgl is probably not the place we want to be doing those things long term. From the perspective that Luminocity is mostly about rendering windows and Xgl is mostly about rendering window contents, they are theoretically complementary. At the moment, they can not be used in conjunction with one another (since they both want to directly drive the GL hardware), but their goals are at least compatible.
Neither Xgl nor Luminocity are complete on their own. Xgl provides an X server and requires a window manager (and a compositing manager?) (and an X server for doing GL calls into, but see below, that will hopefully cease to be an issue eventually). Luminocity provides a window manager and a compositing manager but requires an X server (currently using Xfake or Xephyr, though supposedly there’s some plan for modifying the core fd.o X server so Luminocity will work using only the host X server?). With some hand waving (in particular there’s no way to hand OpenGL textures residing in the video card between processes), perhaps we could get Xgl to render windows into textures on the video card, and then use Luminocity to figure out what do with those textures. All graphics computations are done by the card, and data flows only once to the card. Perfect! Other than those niggly make-or-break technical details 😉
http://www.gnome.org/~seth/blog/relations
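Stripped of the hand-waving, the compositor side of the “windows as textures” idea reduces to something like this (a plain-OpenGL sketch; getting the window pixels into the texture across processes is exactly the unsolved part mentioned above):

#include <GL/gl.h>

/* Sketch: composite one redirected window, already available as a GL
 * texture, onto the screen as a textured quad. */
void composite_window(GLuint tex, float x, float y, float w, float h)
{
    glEnable(GL_TEXTURE_2D);
    glBindTexture(GL_TEXTURE_2D, tex);
    glBegin(GL_QUADS);                /* one quad per window */
    glTexCoord2f(0, 0); glVertex2f(x,     y);
    glTexCoord2f(1, 0); glVertex2f(x + w, y);
    glTexCoord2f(1, 1); glVertex2f(x + w, y + h);
    glTexCoord2f(0, 1); glVertex2f(x,     y + h);
    glEnd();
    glDisable(GL_TEXTURE_2D);
    /* drop shadows, scaling, wobble etc. are just changes to this
     * geometry -- the window contents themselves are untouched */
}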
I find it shocking that a certain article writer doesn’t get it. It’s not about preferences of projects or hurt feelings, it’s about removing the most important factors in open source. Cooperation and peer review, the process which gives open source the edge.
The flood-of-helping-hands argument is just nonsense, since it’s a well-known fact that very few are actually doing this kind of work. And the x.org developers already know most of the capable ones, making that particular argument a red herring.
And while mentioning Red Hat, he makes it into some desktop preference thing, rather than seeing the stupidity of barring from the process the capable people Red Hat employs to do x.org development. And also barring the most important people, the ones using X as their infrastructure and most likely to have constructive feedback: the toolkit developers. Like Red Hat’s GTK developers, and Trolltech for that matter – the ones with the most to gain from having a quality infrastructure.
> It’s not about preferences of projects or hurt feelings, it’s about removing the most important factors in open source. Cooperation and peer review, the process which gives open source the edge.
Is OpenOffice developed by cooperation and peer review, or by a team of full-time engineers ‘sponsored’ by Sun?
I’m not saying it doesn’t happen, don’t get me wrong.
My point is that large projects are better handled with a strong hierarchy, and very few communities are able to attain it like, say, KDE.
Novell is going to do the bigger part of the work, and later open up for (I suspect few) volunteers and peer review – imho simply because it’s the way to go for big and complex systems.
A humble opinion.
openoffice is, like mozilla, a bad example of free software. it happens to be GPL or another free license, but that’s the only thing it has.
KDE and Gnome are examples of real free software: developed by a community of developers, using peer review, and responsive to questions and requests from the community (maybe KDE a bit more than gnome, as the latter has lots of commercial involvement, degrading the user influence & imho experience).
Yes, I’m glad Novell is contributing to open source at all. No one is under any obligation to open the whole development process.
Yes, I’m glad Novell is contributing to open source at all. No one is under any obligation to open the whole development process.
It’s not open source then, is it? Especially considering that it is existing code and an existing open source project hosted on Freedesktop.
I can well see why this article was posted anonymously.
Open source = code that is openly available. There is no requirement that the development process itself also be opened up! By all regards, Darwin is an open source project, but its development is closed.
There is no requirement that the development process itself also be opened up!
Then you’ve failed to understand why open source works as a concept. Bringing open source projects in-house is the shortest route to bankruptcy; it ensures that the wider community is extremely distrustful of you when you do need their help, and that less actually gets done.
They don’t have to do it, but it’s suicidal for all concerned if they don’t. Some of the people at Novell have always struck me as Microsoft-type people, except with open source software, and so it has proved.
Hardly. There are lots of open source programs developed very successfully at corporations. Darwin and WebKit, to name two; then there is OpenMCL, most of the L4 kernels, etc. Indeed, most academic open source code is developed internally before being opened up. LLVM is an excellent example of that.
I don’t see the issue here. If Novell wants to do this by themselves, why get mad at them for it? Lord knows it’s never going to get done if the community has anything to say about it…
point is, it is way less efficient.
first, you deny developers the chance to help you. sounds stupid, doesn’t it? it is.
second, you deny users the chance to help you test it. sounds as stupid as it is, yes.
third, you deny other companies the chance to help you (nvidia and ati could comment and test, and implement the new APIs into their drivers). yes, stupid.
so, this is plain stupid.
why did it work for other projects but not here?
1. many parties are involved (the desktop environments would LOVE to be involved, as would the driver developers and lots of users who love to test the latest and greatest)
2. it WAS open, they just closed it…
3. it is VERY important for the future of linux.
Geez, you’re missing the point altogether. The point of Open Source has nothing to do with the development process at all! The whole point of Open Source is having the code open for use, alteration, improvement… you name it. Your logic is flawed and, honestly, annoying.
It’s not open source then, is it?
On that logic neither is Qt or ghostscript. Whether or not something is open source is determined solely by the license it’s released under.
am I wrong, or shouldn’t you have to allow *affordable* (btw) access to the sources of derivative works?
they are deriving from it – let them work. when they’re done, they’ll open up…
Open sourcing is a tool, a development model, characterized by certain benefits and shortcomings, like any others. Whether anybody chooses to use it, it’s their business. If I want to start a project and not open the source, because I think that development will go quicker and better this way, it’s my call, as long as I respect all involved licenses.
Doing anything for the sake of that thing alone is a silly idea in general, and so’s open source.
Let’s wait until Novell completes XGL and see what happens then, shall we? Even if they ship it with SuSE in binary only, like Apple did, what’s the problem? SuSE will become a kick-ass desktop and Linux overall will benefit from this. The users will benefit from this.
But I have a feeling they’ll try to cash in on the community goodwill too, by open-sourcing the main code, and make some money too, by keeping the out-of-the-box desktop integration and customization proprietary.
Isn’t this the best economical model to date for open-source? Giving back and making money for yourself at the same time?
Most may remember that Red Hat is doing the exact same thing as Novell right now, only Luminocity is for GNOME
I sorta stopped reading there, as there’s really no way to compare XGL and Luminocity. They’re just different products. Additionally, there are differences in availability, which as far as I can tell is the real problem here.
Thom, did you really vote this stuff a 10, or did somebody steal your account??
Ladies and Gentlemen, we need a new editor.
It’s probably even worse.
He’s probably also responsible for posting this diatribe in the first place.
“After Windows Vista is released in 2007, both major desktop OSes will have graphics-accelerated interfaces, leaving X in the dust until someone develops similar technology.”
You’re very right.
Why even run a 3D card in your linux box when you can’t even use all of it? don’t even bring up screen savers.
I myself don’t care about the fancy wavy windows, I just want faster, crisper (AA), better-looking gfx on my desktop.
gee, isn’t that what they are working on RIGHT NOW.
vista is still a year (most likely 2 years) away and won’t be that impressive.
basically X is under new management with lots of new efforts and will be leaving windows in the dust…
and in the case of fonts it already has (that is my personal opinion so you can’t argue, but my desktop setup is pretty nice in the way of fonts compared to my windows work machine (ugly))
X will be leaving windows in the dust? I doubt that. X can never be as good as windows until you have proper, functional, open source drivers for modern video cards, which are pretty hard to come by. I hope for the day, but I don’t think the big two will ever do it.
is apples and oranges
The public XGL project blew up in August when the core X developers decided to pursue EXA. Whatever your views on EXA are, in the end EXA is just a way to extend 2D performance a little further.
I quit working on XGL at that time since it was obvious that most of the few public resources X has were going to chase EXA. Pursuing XGL without significant help is a three or four year project which I am unwilling to undertake.
While a lot of the X developers believe they work in a marketing vacuum the distributions understand that they are in a competitive market place. It is pretty obvious to me that not having an accelerated desktop when Windows and the Mac do is going to cause big problems for the Novell/Redhat sales people. The public project is dead so these distributions (and others, yes there are more secret projects out there) are pursuing XGL variations internally as a competitive edge.
The core X developers have chosen the EXA bed, now they have to live in it. The choice of EXA, whether intentional or not, had the side effect of killing the public XGL project. Now we have to live with the consequences.
I don’t want to insult or attack you in any way, but from following this whole development from the outside I got the impression that you stopping work on XGL was about the only thing that happened because of EXA.
Other than that, I saw post after post on the freedesktop mailing list pointing out that EXA is in no way a replacement for XGL, that it wasn’t meant as competition to XGL, and that XGL would still be the long-term solution.
It’s a matter of direction and focus, or rather, a lack thereof. Developer resources focused on EXA represent developer resources diverted from XGL.* As people said, free software developers are perfectly free to choose their own priorities and work on what they desire, and if EXA is the priority, then so be it. And by extension, Novell is perfectly free to work on what it considers a priority, and if that’s XGL, well, that’s that.
*) Needlessly, as some would argue. I cannot remember who said it, but someone pointed out that for little more effort than it would take to properly accelerate RENDER, you could add the same level of acceleration to MESA instead. Personally, I think RENDER as a concept is flawed, because it merely postpones the transition to OpenGL as the primary API (what happens when somebody wants to texture a Cairo shape with a procedural shader?), but then again, IANAXD, so I’ll keep my big mouth shut.
“It’s a matter of direction and focus, or rather, a lack thereof. Developer resources focused on EXA represent developer resources diverted from XGL.”
Well, considering that it was one guy who wrote EXA I never understood where the real problem was here.
I know it also needs to be implemented in the drivers, but from what I hear this is supposed to be pretty trivial, so again, I fail to see why this should be such a large issue.
“And by extension, Novell is perfectly free to work on what it considers a priority, and if that’s XGL, well, that’s that.”
I don’t get it. Nobody said that Novell didn’t have the right to work on XGL; on the contrary, everybody praised Novell for working on it.
Aaron Seigo simply disagreed with them working on it in a non-public environment, and gave some reasons why he thinks this is a bad idea.
Now it is really beyond me why people don’t argue against the reasons he gave, but instead build one strawman argument after another.
It’s not one guy that wrote EXA. One guy started it and all of the X device driver writers followed him.
Rayiner is accurate in saying that the driver work involved in fully finishing EXA was equivalent to the work needed to get the Mesa drivers in shape for XGL. The developers just chose the EXA route.
In the last few months there’s been a little progress on fixing the Mesa drivers to support XGL (DaveA’s work) but they are far from being finished and Dave is only working on one driver part time.
XGL doesn’t make progress because nobody is working on it, not because it is impossible to do. That’s an argument that was used to justify EXA, the X drivers are already done – let’s just extend them a little more.
I don’t want to criticize you, Mr. Smirl, because you know far more about this than I do (actually you are a hero of mine for your write-up on the subject), but after reading your “State of the Linux Desktop” on the matter and looking into it myself, I see a different problem for Linux’s desktop than you do.
Rayiner is accurate in saying that the driver work involved in fully finishing EXA was equivalent to the work needed to get the Mesa drivers in shape for XGL. The developers just chose the EXA route.
It seems to me that the real problem, after messing with many cards and systems, is that most of the display drivers for Linux suck. It seems we only have two drivers that can actually do the job that OSX and Vista do – the closed Nvidia drivers and the ATI 92xx’s. In your write-up on the State of the Linux Desktop you seemed to shrug off this problem (if I remember, you suggested that people without one should just go buy a cheap 9250 or do without), but it seems to me to be the central roadblock to a nice desktop!
Most people do not have an ATI 92xx card and most people won’t buy one just to use Linux. Also, many laptops do not have cards that can be replaced. Many people have Nvidia cards, but it seems that many open source developers would rather keep the Linux Desktop primitive if the other alternative is to build it on top of Nvidia’s drivers. All the other Linux drivers are crap (except maybe SiS?) – they were created for the Linux server, where just displaying ANYTHING is good enough.
So you say “that the driver work involved in fully finishing EXA was equivalent to the work needed to get the Mesa drivers in shape for XGL” but I don’t know if I agree. You admit that EXA is a band-aid – it does not matter if not every card supports it. Those people will just be left out of what is planned for EXA, which so far to me seems to only be eye candy. The old X server is still there for those without an Nvidia or 92xx card. But if the X server moved over to Xegl, the display drivers (all of them) would HAVE TO support Xegl before a major distro could use it as a default, or there would be no display at all, right? If I am wrong, please tell me… you are the one who would know!
To get the drivers ready (for non-92xx and Nvidia cards) it seems to me that an entire overhaul of all of them is needed. What will that take – three, four years? And that’s just if everyone who can helps. But how long is it taking to move the few drivers that aren’t crap over to EXA? Well, the ATI 92xx is already there and the closed Nvidia driver will be done soon now that Xorg 7 is out.
Vista and OSX will stay years ahead because they have decent drivers. Yet Desktop Linux (which itself is a side effect of the Linux server) and its crap open source drivers are years behind on that count.
Do you really think that overhauling every driver out there to support Xegl would really take less time than hacking the three or so drivers that are currently decent enough to accelerate the desktop to use EXA? It seems to me this is the problem EXA solves with its band-aid approach – it gives the Linux community some way to say “we have an accelerated desktop” without lying (even though it only applies to maybe 30% of computers) while the big job of remaking all the other drivers that are currently crap goes on over the next five years.
It seemed that in your write-up you do not focus on the driver issue… but it seems to be at the heart of the problem to me. Most Linux display drivers suck compared to what Windows and OSX have… would EXA or Xgl really change that?
Please correct me if I am wrong- you ARE the authority on this as far as I am concerned.
First, there may be more X 3D drivers than you realize. All of the cards supported by Mesa can be converted to Xegl with moderate amounts of work. The current chip families supported by Mesa (and also X since X uses Mesa) are: fb, ffb, gamma, i810, i830, i915, mach64, mga, r128, r200, r300, radeon, s3v, savage, sis, tdfx, trident, unichrome. This covers most of the common low end 3D hardware. Note that fb is in the list, Mesa (and Xegl) will work on dumb framebuffer and entirely implement OpenGL in software.
The second group is high end hardware with no open source drivers. Nvidia and ATI for example. Xegl treats OpenGL as a device driver. These two vendors would probably take their Windows OpenGL implementation, port them to Linux and then add the 14 EGL entry points.
So it is possible to get Xegl coverage for most of the 3D hardware out there. It’s just going to take some coding and evangelism work.
But the X driver developers have chosen to chase the 2D EXA band-aid. This will probably consume their efforts for a year or two. After that they may start looking at Xegl again. The cards getting EXA support are the same ones as are in the Mesa support list, so we aren’t picking up any new hardware support with EXA (I think there is an exception to this: ajax implemented EXA on number9 for the five people left in the world with number9 hardware).
At the end of all of this you just have to ask, what did we gain by chasing EXA for a couple of years if in the end we are going with Xegl anyway?
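For reference, the EGL entry points mentioned form a small window-system glue layer; a minimal bring-up looks roughly like this (a sketch only – the native display and window handles are platform-specific, and it is the vendor’s driver that would actually implement these calls):

#include <EGL/egl.h>

/* Hedged sketch of EGL initialization, the glue a vendor OpenGL
 * driver would expose for something like Xegl to sit on. */
int init_egl(EGLNativeDisplayType native_dpy, EGLNativeWindowType native_win)
{
    EGLint major, minor, nconfig;
    EGLConfig config;
    static const EGLint attribs[] = {
        EGL_RED_SIZE, 8, EGL_GREEN_SIZE, 8, EGL_BLUE_SIZE, 8,
        EGL_NONE
    };

    EGLDisplay dpy = eglGetDisplay(native_dpy);
    if (dpy == EGL_NO_DISPLAY || !eglInitialize(dpy, &major, &minor))
        return 0;

    eglChooseConfig(dpy, attribs, &config, 1, &nconfig);

    EGLSurface surf = eglCreateWindowSurface(dpy, config, native_win, NULL);
    EGLContext ctx  = eglCreateContext(dpy, config, EGL_NO_CONTEXT, NULL);

    eglMakeCurrent(dpy, surf, surf, ctx);  /* GL calls now hit this surface */
    /* ... render with OpenGL, then eglSwapBuffers(dpy, surf) per frame ... */
    return 1;
}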
To whoever said: “I can only repeat it, it’s a shame that such a diatribe gets posted on OSNews.”
Well said, on all points. Same goes for me: this is B.S., and it’s sad to see it here. The problem isn’t with commercial development, it’s with CLOSED development, which is a very different beast. If you can’t tell the difference, I don’t even know why you’re not using a closed OS like Windows, and forgetting about OSNews.
I think you need to realize that “commercial development” and “closed development” are the same thing in the real world. Those of us who work in the software development world know that companies don’t just throw money around and pay for the development of software they can’t somehow profit from. It would be nice, but that’s just not how it works. Name one company that pays full-time developers to only crank out Open Source software. Either way, if a commercial company DOES decide to release some of its code as Open Source, that means that people are free to change it AFTER it is released. If Aaron Seigo can’t wait that long, then maybe he should just start his own openly developed project to try and release something like XGL.
There is a very nice book about this, downloadable for free at
http://producingoss.com/
It’s written by one of the guys behind Subversion. An entertaining and insightful book.
/Meng
jonsmirl, I think the XGL project just got a pulse again and it’s ready to rise from the dead. All Novell really had to do was say: hey, we’re going to build a GL-based X server. From that point on, how frequently they sync a CVS account or which code they actually accepted from 3rd parties would have been irrelevant, because they would have encompassed enough of the spirit of open source development that the rest wouldn’t have mattered. Everyone knows that Novell has not done anything wrong, but the perception is that they’re taking what has been given to the community, building something that will make them some money, and couldn’t care less about the open source community that made it all possible.
It is a distinct possibility that they will not provide this GL server to the community at large, because it would give them a compelling selling advantage. Now everyone thinks they won’t do that because of the bad rep they would obtain, but they aren’t really trying to sell their desktop to the average joe who would be screaming bloody murder; they’re trying to sell to companies currently running windows-based desktops. As heard in every Godfather movie, “It’s just business.” Trolltech will come up with a GL-based X server for KDE – they promised to do so when the KDE 4 talk first started – but all Linux desktops could get this X server a lot sooner if Novell, Redhat, Trolltech, KDE and Gnome could all just get along. After all, they’ve all got two common opponents (MS and Apple) that would both love to see Linux become extinct.
Another problem is the “open source or die” crowd. This group is adamantly opposed to changes that make the X server OpenGL-dependent. Their opposition comes from two sources: first, OpenGL drivers for current Nvidia/ATI hardware are closed source. Their belief is that we cannot let the desktop become dependent on closed source drivers, so XGL must be stopped.
The second source of opposition centers around the third world and the $100 laptop. The fear is that if Europe/USA moves to the XGL desktop there will be no software for people in the third world who can not afford the latest 3D hardware.
My take on this is:
1) Building a new XGL server won’t make the current X server or apps disappear.
2) By the time a lot of software needs XGL the hardware for it will be cheap enough for the third world.
3) I don’t like the closed source 3D drivers but there is nothing we can do about it. Sticking our heads in the sand and ignoring hardware progress is not the right answer.
If the people that you are talking about actually have influence, then it’s just another nail in the coffin of the open source desktop ever going mainstream.
The politics revolving around open source are sickening and are why so many people are turned off by that shit.
Man. Ever work in an engineering company? The politics in those don’t exactly smell of roses either! They are simply (mercifully) somewhat less public. Which is, of course, why I decry the ascension of corporate blogs, but that’s a separate rant entirely…
Really, if I ignore the syntactic sugar and get down to the content of the article, I can dig it a little.
If the OSS community feels they can build it better than Novell, they can start working off the last public source release of XGL and get jumping.
would be a waste of resources. it already is (i posted this before:
point is, it is way less efficient.
first, you deny developers the chance to help you. sounds stupid, doesn’t it? it is.
second, you deny users the chance to help you test it. sounds as stupid as it is, yes.
third, you deny other companies the chance to help you (nvidia and ati could comment and test, and implement the new APIs into their drivers). yes, stupid.
so, this is plain stupid.
why did it work for other projects but not here?
1. many parties are involved (the desktop environments would LOVE to be involved, as would the driver developers and lots of users who love to test the latest and greatest)
2. it WAS open, they just closed it…
3. it is VERY important for the future of linux.)
but what if the community now starts to write the same again! how STUPID would THAT be?
It’s one of those Real Life (TM) things that you have to suck up and deal with, my friend.
Whether it’s stupid or not isn’t really the issue; they’re not letting you have the source right now, so you either gotta start a competing project in an attempt to crush Novell or just twiddle your fingers and hope they open up the project.
There are some grognards in the audience who don’t see the need for advanced graphics capabilities. Let me give you a small example of where graphically rich interfaces can come in handy. In our school’s aerospace department, we’ve got a visualization system (connected to a large video wall) that lets you analyze (simultaneously) 10,000 prospective designs, parameterized along 20-30 dimensions. You can take the data, slice and dice it any way you want, and visualize any relationship you want, in full 3D. It puts to shame the parametric analysis tools most engineers use on their desktop. It increases one’s ability to consider various tradeoffs in a design by literally an order of magnitude. It is very cool.
Why do I relate this anecdote? To show that visualization is an incredibly powerful way to take an enormous amount of data, and make it possible for a human being to process it in meaningful ways. Advanced graphics systems offer incredible potential for data organization and analysis, and can allow techniques that current 2D interfaces can only dream of allowing. Seamless integration of 2D and 3D allows such applications, perhaps at a smaller scale, to become available on your average users’ desktop. Joe Blow might not need to analyze 10,000 turbine designs, but he might want to visualize the long term performance of a couple of hundred stocks. Rich graphics interfaces allow such applications to be built, and like it or not, they are the way of the future.
i used OS X for the first time today in an apple store. i started some apps. there was a delay when only the mouse pointer moved, and i didn’t know what was going on. was it working or dead? nothing like a load/sysmon meter (like gkrellm) to tell me what’s going on.
in general use (browsing, music, file browsing) i found it slower – it was a small laptop of some sort.
Never mind what one thinks about what Novell is doing etc etc… I can’t believe something with as many factual errors as this got to be on the front page of OSNews!
I quit working on XGL at that time since it was obvious that most of the few public resources X has were going to chase EXA. Pursuing XGL without significant help
But now it looks like Novell has started working on it (but not in public). Don’t you want every capable developer to be able to join the work? Developers like you, who were not willing to undertake it without help.
Since this really has nothing to do with EXA and what the core X developers pursue, it has nothing to do with this discussion. It’s about blocking those like you who want to share the work or participate in the development.
Everyone keeps missing the point that Novell started out working in the open. Go look at the code, it is in CVS. The public community had their chance in August. It was only after the public project collapsed that Novell continued working in private.
Now XGL has gone private in several places. I’m content to wait until the various attempts ship and make their source public. My true interest was not to write code for XGL, it was to make the XGL project happen.
But Jon that makes no sense, whether Novell worked from an internal CVS or a public CVS tree would make little difference to their developers, I’ve gotten the impression that the developers are quite happy to develop in the open fashion and the pressure comes from higher up…
This has nothing to do with the public community – do you really think OpenOffice isn’t mostly Sun internal people working in an external repository?
– airlied
But Jon that makes no sense, whether Novell worked from an internal CVS or a public CVS tree would make little difference to their developers, I’ve gotten the impression that the developers are quite happy to develop in the open fashion and the pressure comes from higher up…
Collapse of the public project made it easy for the higher ups to get their way.
This has nothing to do with the public community – do you really think OpenOffice isn’t mostly Sun internal people working in an external repository?
Open Office is different; that’s a private project trying to go public, and no one is showing up to help.
Yes, all of this can be done on the main CPU and that line of thinking gave us WinModems!
And not doing it on the main CPU gives us proprietary-encumbered “FOSS” systems. Might as well move to windows then, if closed source software is as good as Free software.
Windows at least has the better-supported Nvidia and ATI drivers, and has already developed an accelerated architecture for Vista.
I agree on the importance of Xgl but I have to say that EXA is being underestimated.
Last week I upgraded to Xorg 7.0RC and I’ve been using EXA on my old radeon video card, with the compositing manager xcompmgr enabled, and I have to say that the desktop experience is the best I ever had.
The upgrade to Gnome 2.12 and cairo-based GTK slowed down my system, but double buffering and compositing truly enhanced everything, can’t wait for an integrated compositing manager.
Everyone keeps missing the point that Novell started out working in the open.
It’s not missing the point, the point is they stopped working in the open.
It was only after the public project collapsed that Novell continued working in private.
Are the Novell developers unable to do work in a public manner for some strange reason?
Would not the same developers have produced the same code, even if it was publicly available? (Probably not, since it only takes one valid patch from outside to raise the quality of the code above what they would do on their own.)
Of course I’m not a developer or programmer. I just started using Linux about three years ago after I realized what Microsoft was turning into. I guess I’m not with the “in” Linux crowd as far as knowing the ins and outs of free software.
The fact that Novell is spending time and money to develop software for Linux is a good thing, no? It looks to me, from the outside in, like this could only help Linux and everyone who uses it. Those who don’t like 3D, GLX or whatever could turn it off, no?
Before I was liberated by Linux from the evil empire, I spent about US $50 a year on WinCustomize products for Windows. On the whole, WinCustomize makes thousands from GUI products. I think it is true that developing more business software, such as QuickBooks-type programs, or more and faster gaming could only help Linux. But it does seem that this may all need to happen together. Whatever Novell does, I think it will only help Linux. And I am not apprehensive, because Novell has a good track record of giving back to the community.
Just in case someone is wondering, I guess I’m on the outside when it comes to my distro of choice, which is Xandros 3.0 Business.
My .02-penguin7009
It is definitely NOT bad that Novell contributes to the community. But they could do better and more if they would cooperate with the community. Why wouldn’t they?
The reasons not to don’t benefit anyone. I’m sure they chose to close the development mostly for emotional reasons; I guess they are just used to doing things themselves, because they used to create proprietary software. But in the FOSS world, it is much more clever and efficient to cooperate. They just have to learn that…
See my other comments about why it is STUPID not to open development if you want to give it away (GPL or alike) anyway.
People who understand this little should not be allowed to write articles.
“Most may remember that Red Hat is doing the exact same thing as Novell right now, only Luminocity is for GNOME.”
Luminocity is not in any way comparable to XGL. It is a tech demo that is never meant to leave the lab, and moreover it implements hardware-accelerated compositing (only a small part of what XGL will accelerate), and only at the level of the window manager.
“So far it seems like Novell is working on a ‘plugin’, if you will, to X.org that will enable hardware graphics acceleration on the display server itself.”
I will not. XGL is not a “plugin” – it is a rewrite, although not from scratch, I’m sure. You do not add XGL to X.org, you use either the Xegl server or the X.org server. This confusion of yours may stem from the Xglx server, which requires another X server to run on, but once again this is only a demo technology meant for testing XGL tech on a platform with more stable and numerous accelerated drivers (Xegl is the product you’re waiting for, but right now it only works on R200 based cards).
“Darwin is the bastard of Unix and BSD which was developed by Apple to be the heart of OS X.”
The only “Unix” in Darwin is FreeBSD 5 itself. Darwin could perhaps be called the “bastard” of Unix and Mach, which, no, is not an implementation of Unix and in fact has nothing to do with Unix.
“As it stands, all other display servers are years behind.”
Windows Vista and its Aero Glass are months away. X will indeed take at least a few years to bring something usable to the table.
“So is Apple wrong for doing this? Are they ‘evil’ for making a product which people want to use?”
Comparing the actions of Apple and Novell is hardly appropriate. Apple has its own small but dedicated fan base. Since the open source technology it appropriated is BSD-licensed, it is perfectly within its rights to do what it did, and no Mac fan is going to care that their work benefits the FreeBSD project not at all. Novell, on the other hand, has all but given up on its proprietary product after years of rejection by the market, and its fortunes now depend on the quality of work developed by open-source volunteers. Since the technology in question is MIT-licensed, it would also be within its rights to pull an Apple and release XGL as a proprietary product. However, this would show grave disrespect to the people who saved it from otherwise inevitable irrelevance.
That said, no one has suggested that this is Novell’s intention. Their more likely intention is to help the XGL project get back on its feet, while simultaneously establishing themselves as the core developers of the project, so that when it is released to the open source community they will be known as the company that made it possible. This behavior is slightly shady, but is nevertheless the reality of the open-source software industry. Red Hat is the market leader due to the name recognition they get from being the driving force behind Gnome, GCC and many other projects. With Mono and now XGL, Novell will be well positioned to compete with Red Hat and others, and another profitable heavyweight that is entirely committed to Linux will be very good for open source’s commercial credibility.
Of course it would be nicer if Novell let everyone else play too, but apparently they aren’t confident enough yet. I expect we’ll see the source in a few months, when the company knows its engineers have a nice head start on everyone else. No big deal.
The meaning of open source is that anybody can work on it; the way to work on it is determined by the licence, and from what I’ve read, the maintainers of XGL are all quite happy that their project has been picked up by Novell. Isn’t it up to the original writer to determine what’s allowed to be done with the code? Some people dislike the BSD licence; some don’t and even swear by it. I don’t know what licence XGL was under, but if it’s not the GPL and Novell isn’t obligated to give the source code back to the community, then so be it; it was the writer’s choice.
GNOME got started just because of a licence dispute, but look what a wonderful desktop environment KDE has become. Now it strikes me as odd that a KDE developer, working for a company whose main business is to produce a toolkit which is open source in the end but which is developed internally, has any problem with this. Since Novell bought SUSE, it has become free and YaST has gotten the GPL licence :s Seems like the company doesn’t keep everything closed, does it?
Does Novell have an advantage with this? Yes, they’ll be the first to have XGL integrated in an OS. But again, if they’re not obligated to share the code in the end, it’s the writer’s choice, not a choice the community has to make.
Oops, that was me posting it :s, forgot to log in.
I think there is a difference between this and what Trolltech is doing. Trolltech lets people in who have a stake in it, but not many do – mostly the KDE people. And as Trolltech consists of KDE devs, they can contribute and help wherever they want.
On the other hand, many people, groups and companies are or should be involved in XGL, but Novell doesn’t let them in. That’s a waste of resources, IMHO very stupid…
“hundreds of companies due this”
Kind of hard to take this article seriously when the author doesn’t seem to know the difference between ‘do’ and ‘due’
I can’t believe people are complaining about closed development when open development has already been tried and died. And you’re mad, not even because the companies won’t give you what you couldn’t make yourself, but because they won’t give it to you fast enough? Sheesh.
Between OpenOffice and XGL, I’m thinking there are some things open source doesn’t work well for. Is that what you’re afraid people will think?
When will these ignorant people stop falling for the game of Aaron Seigo’s dirty secret intentions?
It’s not him who wants to put his hands on XGL but his employer, Trolltech. Where was he when the original XGL author was asking for contributors? But now that Novell has put in an effort and it is starting to bear fruit, he hears the rumors and wants a piece of the cake, with all that “contribute to OS” BS. Novell isn’t doing anything wrong. And what about KDE’s closed-door projects like Oxygen? Are those valid only for KDE developers? I call you a hypocrite.
I stopped reading here:
Most may remember that Red Hat is doing the exact same thing as Novell right now, only Luminocity is for GNOME.
Luminocity is an experimental version of Metacity with a neat compositor built in. Xgl is the start of an OpenGL replacement for the X server (which will come with Xegl).
Basically it’s like comparing a windshield to a car frame.
Worst rant ever on OSnews.
Quoting the author:
Would you want the beautiful musings of water rings rippling across your desktop when you move your mouse? Or do you want KDE with “vanilla” 2D looks and simplistic graphics features.
—
Puhlease … Give me “vanilla” 2D look, but please turn off the graphic features…. water rings? on my desktop? N E V E R … just like N O M O R E T A X E S … oh well, the latter doesn’t hold water, but nor should my desktop
But personally I don’t think there is anything wrong with what Novell is doing. And I’m sure it will turn out to be some sort of open source, as long as Novell gets a certain head start to make up for the investment.
But nobody prevents anyone from accessing the somewhat old source code, so if people really want a 3D-bloated server, then go ahead. You’re six months behind, but it’s still there.
“The reality of Open Source project management is chaotic. — In fact, it is very cluttered with people who want to contribute, but just don’t meet the criteria.”
I’m not a developer of some huge project – I’m still trying to learn to code something properly – but even I know that every single project has maintainers. Maintainers decide which patches go in and which ones don’t. People who make bad patches will just be informed that their patches won’t go in. So I kind of miss the writer’s point here.
I myself am waiting for XGL like there’s no tomorrow. I upgraded to X.org RC4, enabled EXA for my card and played around with xcompmgr etc. All I found out was that xcompmgr didn’t slow the machine down as much as before, but it is still slow and buggy. Besides, XGL would use OpenGL for everything, thus speeding up any program which draws something on the screen, but EXA doesn’t. I did some benchmarking and the speed hadn’t changed at all. I don’t even know whether it’s supposed to speed up anything other than xcomposite.
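For anyone who wants to put numbers on this themselves, x11perf is the usual tool. A couple of tests from its standard set that roughly bracket what EXA is supposed to touch (which operations actually get accelerated varies by driver):

    x11perf -aa10text       # antialiased text, exercises RENDER
    x11perf -copywinwin500  # large window-to-window copies

Running these once with AccelMethod set to XAA and once with EXA, then comparing the reported rates, shows whether the switch changed anything on a given card.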
PS. I know it’s been said already, but even I knew a looong time ago that Luminocity is a WM and XGL is an X implementation on steroids. Neither is DE-centric. How can anyone write a whole article without first even checking what he’s writing about? And I just have to wonder how he came to the conclusion that everything is about GNOME? Doh.
-WereCat
You’re right, and being a maintainer of a very large and very popular project where everyone and their sister thinks they have a worthy bit of code to include…..sucks…really…hard. Most of that time spent being a maintenance person is rejecting code, corralling developers into place, and maybe, if you have a bit of time, doing a bit of work yourself. In a closed, business-like process, you have a common goal to work towards with a team of people that are given a job to do. It’s a whole different game at that point.
“You’re right, and being a maintainer of a very large and very popular project where everyone and their sister thinks they have a worthy bit of code to include…..sucks…really…hard. Most of that time spent being a maintenance person is rejecting code, corralling developers into place, and maybe, if you have a bit of time, doing a bit of work yourself. In a closed, business-like process, you have a common goal to work towards with a team of people that are given a job to do. It’s a whole different game at that point.”
You’ve got a point, for sure, but I think the issue you’re responding to is merely a symptom of the fact that most OSS project leaders/maintainers are poor managers. Opening the doors to outsiders and the monumental effort of filtering out the stuff that doesn’t suck can be avoided. The key is for the maintainer to set forth an agenda and divide the subsystems among a small group of core submaintainers.
In the case of XGL and EXA, the latter won out simply because of superior management. In the original mailing list post, the EXA maintainer outlined the steps necessary to make drivers EXA-compliant. Then the task of extending existing drivers could be distributed to the existing driver developers.
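To give a feel for why that porting job was tractable: an EXA driver essentially fills in a small table of Prepare/Op/Done hooks for fills, copies and RENDER composites, and the EXA core handles everything else in software. The sketch below is schematic rather than copied from exa.h; the hook names follow the EXA design, but the stand-in types and exact signatures are mine, not the real server headers:

    /* Schematic of the EXA driver interface (illustrative only). */
    typedef struct Pixmap Pixmap;  /* stand-in for the server's PixmapPtr */

    typedef struct {
        /* Solid fills */
        int  (*PrepareSolid)(Pixmap *dst, int alu, unsigned planemask, unsigned fg);
        void (*Solid)(Pixmap *dst, int x1, int y1, int x2, int y2);
        void (*DoneSolid)(Pixmap *dst);

        /* Screen-to-screen copies */
        int  (*PrepareCopy)(Pixmap *src, Pixmap *dst, int dx, int dy,
                            int alu, unsigned planemask);
        void (*Copy)(Pixmap *dst, int srcX, int srcY,
                     int dstX, int dstY, int width, int height);
        void (*DoneCopy)(Pixmap *dst);

        /* RENDER composite; a driver that can't do this leaves the hooks
         * NULL and EXA falls back to a software path. */
        int  (*CheckComposite)(int op, Pixmap *src, Pixmap *mask, Pixmap *dst);
        void (*Composite)(Pixmap *dst, int srcX, int srcY, int maskX, int maskY,
                          int dstX, int dstY, int width, int height);
        void (*DoneComposite)(Pixmap *dst);
    } ExaDriverHooks;

A handful of entry points like these, against the dozens of rarely implemented XAA primitives, is the management story in miniature: a bounded, well specified task that any existing driver maintainer could pick up.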
While the tasks required to implement Xegl are somewhat analogous (only using the MESA drivers and the EGL entry points), I don’t think this was effectively communicated to the driver maintainers. One main argument for EXA was that they only need to extend the existing drivers. This rationalization doesn’t even distinguish the two approaches, but it was used to argue in favor of EXA.
We can all appreciate that EXA is an extension of a dying architecture, the Prescott of the RENDER extension (if you will). And, as others have pointed out, the entire free desktop market is not ready for a 3D desktop’s hardware requirements. However, the core X.org developers should have articulated a clear focus on developing RENDER and MESA driver extensions, and on integrating the EXA and EGL frameworks, as a parallel, coordinated effort. If they had, the driver developers would have had a clear objective, and the developers of XGL wouldn’t have felt like their project had been left for dead.
So, OSS development isn’t so much different from proprietary development. The “common goal” and “given a job to do” statements often don’t ring true in established development firms either. The only real difference is that no development firm would be caught dead without a project manager for every development effort, whereas OSS development efforts often (but not always) rely on the lead developer as the manager as well. OSS project leaders must understand that, as their project grows, their priority is establishing an effective and sustainable development model and strategy, not writing KLOCs per week.
How do we know they are working on this? I remember the horrible feeling I had when XGL was cancelled. I hope this is true.
Although if Mesa stuff needs to be ported, wouldn’t they have to be doing some of it in public? I mean, if the driver stuff is one of the big issues (I have no idea how far along XGL itself was), then wouldn’t we know about the work to get the drivers working with their code?
What license is this XGL code under, and when it was previously released more openly, what did that license allow?
I think the code may be under the XFree86 license, but I can’t find where that is explicitly stated. What does this license allow?
Linux is really a XFree86/GNU/Linux system…
OpenGL is dead.
So using it is losing time.
While I agree with much of what Jon Smirl has said and written about the future of X, the fact remains that there are limiting factors in making 2D rendered within 3D a reality in the open source X world. You can’t just jump in because Microsoft and Apple have, and X.org cannot go missing for three or four years plus to develop it.
There are many open source graphics drivers within X, just about all of which would have to be rearchitected, and many of those cards and drivers would simply not be able to provide the capabilities that would actually make XGL remotely useful. The X developers had to consider that, and the fact that many of those graphics cards and drivers are still in use and that XGL would only be useful for drivers that provided full hardware acceleration which are only provided by closed source drivers. You would render all the existing work useless and limit the number of cards people can feasibly use. There are certain inescapable practicalities involved, and no one can bury their heads in the sand over it. When that situation improves, then it will be practical, but the vast majority of hardware out there today, and hardware people want Linux and Unix to run on, is simply not up to this.
However, it’s worth pointing out that Novell’s use of XGL is basically a marketing exercise. They want to have a desktop that does lots of nice 3D, hardware accelerated things to win people over via eye candy and say “we can do Vista!”. Who cares if, when you scratch away the surface, the rest of the desktop is crap? Maybe Novell absolutely needs hardware acceleration to make any of their stuff work, or they’re trying to win Novell’s employees over regarding Gnome :-), who knows?
It will be interesting to see them roll something based on this out across their company (if the NLD ever was fully rolled out), because the vast majority of their hardware is not going to be up to doing this – as is the case with most companies. I’m sure Mr. Friedman will be OK though :-). They’ll probably spend most of their time ironing out issues in their eye candy X server rather than actually getting anything real done.
Actually, it would be interesting if, for example, openSUSE’s SaX (or the installer) gave people with a 3D graphics card (and the right driver) the option to use “more eye candy”, just for those people (which can become A LOT of people).
As I see it, you can’t even run Vista’s graphics without a 3D card; it automatically removes the overhead and you get an ugly system back.
So they are in the same situation.
I know some anonymous people around here really don’t like being told that the NLD is an irrelevant pile of tosh, but I’m afraid that it is. If you have an issue with what’s below then hit the reply link and say so. Did you not read the link that says ‘Yes, I disagree with this user/opinion’?
While I agree with much of what Jon Smirl has said and written about the future of X, the fact remains that there are limiting factors in making 2D rendered within 3D a reality in the open source X world. You can’t just jump in because Microsoft and Apple have, and X.org cannot go missing for three or four years plus to develop it.
There are many open source graphics drivers within X, just about all of which would have to be rearchitected, and many of those cards and drivers would simply not be able to provide the capabilities that would actually make XGL remotely useful. The X developers had to consider that, and the fact that many of those graphics cards and drivers are still in use and that XGL would only be useful for drivers that provided full hardware acceleration which are only provided by closed source drivers. You would render all the existing work useless and limit the number of cards people can feasibly use. There are certain inescapable practicalities involved, and no one can bury their heads in the sand over it. When that situation improves, then it will be practical, but the vast majority of hardware out there today, and hardware people want Linux and Unix to run on, is simply not up to this.
However, it’s worth pointing out that Novell’s use of XGL is basically a marketing exercise. They want to have a desktop that does lots of nice 3D, hardware accelerated things to win people over via eye candy and say “we can do Vista!”. Who cares if, when you scratch away the surface, the rest of the desktop is crap? Maybe Novell absolutely needs hardware acceleration to make any of their stuff work, or they’re trying to win Novell’s employees over regarding Gnome :-), who knows?
It will be interesting to see them roll something based on this out across their company (if the NLD ever was fully rolled out), because the vast majority of their hardware is not going to be up to doing this – as is the case with most companies. I’m sure Mr. Friedman will be OK though :-). They’ll probably spend most of their time ironing out issues in their eye candy X server rather than actually getting anything real done.
There are many open source graphics drivers within X, just about all of which would have to be rearchitected, and many of those cards and drivers would simply not be able to provide the capabilities that would actually make XGL remotely useful. The X developers had to consider that, and the fact that many of those graphics cards and drivers are still in use and that XGL would only be useful for drivers that provided full hardware acceleration which are only provided by closed source drivers. You would render all the existing work useless and limit the number of cards people can feasibly use. There are certain inescapable practicalities involved, and no one can bury their heads in the sand over it. When that situation improves, then it will be practical, but the vast majority of hardware out there today, and hardware people want Linux and Unix to run on, is simply not up to this.
I hate to keep repeating myself, but here it is again.
Building XGL does not make the current X server disappear like a pumpkin at midnight. If you have extremely low end hardware don’t switch to the new server.
Based on this logic we shouldn’t have written the network stack until every PC in the world had a network card. After all, networking ruined all that investment in “sneaker nets”.
Building XGL does not make the current X server disappear like a pumpkin at midnight. If you have extremely low end hardware don’t switch to the new server.
OK… thanks for clearing that up. That makes me wonder: could a distro switch to Xgl – not Xegl – (along with the old X server) and make it so that older hardware just doesn’t use the Xgl part? Is that an advantage of Xgl as opposed to Xegl?
You are thinking at a bigger level, so sorry to drag it down to a distro level. I just know that most distros won’t switch to Xegl within five years (even if it magically got done) if it means they have to maintain two X servers (Xegl and the old one) or risk having every old machine out there not use their distro.
I mean… push comes to shove, more people care about using desktop Linux to bring old machines back from the dead (aka too slow for Vista when it’s out) than about providing the most modern desktop for high end hardware. Is Xgl the magical bridge?
There are multiple classes of servers:
1) Current XAA/EXA with OpenGL bolted on the side. The 2D and 3D device drivers reside in different files.
2) XGL running inside the current X server. It works, but it is dragging around a lot of needless overhead.
3) Xegl – XGL running directly on a standalone OpenGL stack. You can demo this from the Xegl page at fd.o. This is still an X server inside, but we are starting to get rid of the old baggage. RENDER is still at its core. Applications still live in 2D windows.
4) Sun’s Looking Glass. This is where we are headed, minus the Java. Everything is OpenGL based. Legacy X windows are rendered to textures using an optional library. True 3D applications can exist on the desktop.
Window managers are orthogonal. It is somewhat possible to mix and match them with the various servers.
1) Current WMs add pretty chrome to 2D windows. RENDER allows these windows to do transparency.
2) Luminocity and DaveR’s unreleased equivalent. These window managers write straight to the OpenGL API. This lets them twist and composite ordinary desktop windows at hardware-class speed and in ways that aren’t possible in the standard X WM model.
3) The Looking Glass window manager. This window manager uses the OpenGL depth buffer and allows mixing of 2D and 3D apps on the desktop.
Before everyone corrects me I simplified these so that I didn’t need to spend 10 pages explaining the details.
3) Xegl – XGL running directly on a standalone OpenGL stack. You can demo this from the Xegl page at fd.o. This is still an X server inside, but we are starting to get rid of the old baggage. RENDER is still at its core. Applications still live in 2D windows.
Thanks….I never knew that.
4) Sun’s Looking Glass. This is where we are headed, minus the Java. Everything is OpenGL based. Legacy X windows are rendered to textures using an optional library. True 3D applications can exist on the desktop.
Well….I know what I am playing with tonight then.
Thanks Mr. Smirl, you are my hero for your honest answers and your attempts to allow users like myself to understand what’s going on. My personal biggest complaint with the X server group is that they treat it like an exclusive club sometimes – you have done more than anyone to help those who are not “in the know” understand what’s going on!
I wish you luck in your future projects, and I thank you for spending your time hanging around here to explain what’s going on with a part of the free software world that has had conflicts with you in the past…
Can’t wait to try your HTML project or whatever it is you are working on, and I plan to buy an ATI 9250 next week so I can take your last project (Xegl) for a spin.
Have a nice day!
1) Current XAA/EXA with OpenGL bolted on the side. The 2D and 3D device drivers reside in different files.
2) XGL running inside the current X server. It works, but it is dragging around a lot of needless overhead.
3) Xegl – XGL running directly on a standalone OpenGL stack. You can demo this from the Xegl page at fd.o. This is still an X server inside, but we are starting to get rid of the old baggage. RENDER is still at its core. Applications still live in 2D windows.
4) Sun’s Looking Glass. This is where we are headed, minus the Java. Everything is OpenGL based. Legacy X windows are rendered to textures using an optional library. True 3D applications can exist on the desktop.
No resources for any of that and no drivers with the requisite level of hardware acceleration support to make true 3D applications happen.
We’ll get there on a piecemeal basis, as is happening now, but the above simply isn’t going to happen.
Building XGL does not make the current X server disappear like a pumpkin at midnight. If you have extremely low end hardware don’t switch to the new server.
IT DOESN’T WORK LIKE THAT. There is a lot of low end hardware out there, and there are only the resources to make one really good server; increasingly relying on closed source drivers, where an open source project can’t see what’s going on, is just not on the cards (no pun intended). That situation needs to be worked out.
I think this was explained to you. It’s the way forward in the long term but it is simply not doable now. Having a dual old and new server is just not going to happen for X. You said it yourself – limited resources.
IT DOESN’T WORK LIKE THAT. There is a lot of low end hardware out there, and there are only the resources to make one really good server; increasingly relying on closed source drivers, where an open source project can’t see what’s going on, is just not on the cards.
I think this was explained to you. It’s the way forward in the long term but it is simply not doable now. Having a dual old and new server is just not going to happen for X. You said it yourself – limited resources.
Duh, sure we have enough resources for two servers. One is already written and shipped. It has great support for all of that low end hardware. Now quit focusing on it and start working on the new server. It’s the high end hardware that is truly lacking support, not the low end stuff.
Why can’t everyone see that no amount of software is going to turn a decrepit EGA card into an Nvidia 7800? EGA cards are never going to run AeroGlass-class applications. They don’t run FarCry either, if you haven’t noticed. But the existence of EGA cards in the installed base is not a reason for developers to stop shipping FarCry. It’s obvious to me that the OpenGL mess on X is one of the reasons there are so few Linux games.
If you want to mod down comments then explain and reply as to why. You know, that’s why it asks you those questions. I suspect people really don’t like this though:
However, it’s worth pointing out that Novell’s use of XGL is basically a marketing exercise. They want to have a desktop that does lots of nice 3D, hardware accelerated things to win people over via eye candy and say “we can do Vista!”. Who cares if, when you scratch away the surface, the rest of the desktop is crap? Maybe Novell absolutely needs hardware acceleration to make any of their stuff work, or they’re trying to win Novell’s employees over regarding Gnome :-), who knows?
It will be interesting to see them roll something based on this out across their company (if the NLD ever was fully rolled out), because the vast majority of their hardware is not going to be up to doing this – as is the case with most companies. I’m sure Mr. Friedman will be OK though :-). They’ll probably spend most of their time ironing out issues in their eye candy X server rather than actually getting anything real done.
I mean, I know I’m attacking your baby and all there but if you want to tell us all why the above won’t happen or why it’s wrong, feel free to do so.
Most may remember that Red Hat is doing the exact same thing as Novell right now, only Luminocity is for GNOME. So where does KDE fall into this? Way behind unfortunately.
From what I’ve seen, Zack Rusin’s demonstrations were at exactly the same stage as Luminocity and XGL – a demo.
If both companies make OpenGL acceleration work for their products and exclude KDE, KDE is in serious trouble of being overlooked when it comes to choose a look and feel for your desktop.
What’s that got to do with the subject of the article?
Would you want the beautiful musings of water rings rippling across your desktop when you move your mouse? Or do you want KDE with “vanilla” 2D looks and simplistic graphics features.
Or would you rather see it crash, be unstable and have limited hardware support? Yawn. Gnome’s infrastructure for Cairo and Glitz is years away from providing the quality needed for day to day usage, as is the X server support.
however, I do realize the possible danger that it could be in.
Ooooh. KDE’s in danger! Get out the pitchforks!
So is Apple wrong for doing this? Are they “evil” for making a product which people want to use?
Apple did not take an existing open source project, hosted on Freedesktop no less, and take it away for themselves.
I think most people will find that Novell is well into the ethical clearing on this issue.
Nope, and I doubt whether the X people will ever accept their code via that process. Novell can maintain their own X server from now on and see how they like it.
If nobody else is going to build an accelerated X server on their own time, and Novell wants one bad enough, they have every right to do it themselves.
With existing code that people have put effort into, and given the fact that it is hosted on Freedesktop with no indication this has happened? Not the same thing.
You can’t complain because your friends built a fort, and you don’t have one.
If you want to do that then go off and create your own proprietary software then. Don’t pretend that you’re open source people.
Who on Earth allowed this to be posted?
– You are totally right. Demo level. But you must understand what that means, and how much work it takes to build something like this. Being able to even show off demo moves on something like this puts you months, if not years, ahead of people trying to do the same from the starting line.
– This article refers to Aaron Seigo (KDE developer) and his comments about Novell building XGL behind closed doors. That’s why it’s being brought up.
– Novell is completely free to take open source code and build something onto it that they feel they could use. They are also free to build that same technology and release it back into the wild if they want. They are well known for doing this, and nobody ever said they were going to keep XGL for themselves.
– Seriously guy, if Novell comes out with a version of X that can do what OS X does… I’d guess the majority of users choosing Linux for the desktop would want it over X.org’s EXA accelerated server. Novell seems to pop out some mighty fine projects that are ahead of the game in most cases (Hula, Beagle, openSUSE, etc.). They are bringing things to the desktop that other people aren’t bothering with, but everybody wants. I doubt X maintainers would refuse a nice slab of code that can do what XGL is (hopefully) reported to do.
– Your last bit about proprietary software… yeah, it seems that’s where it’s headed, at least in your mind. Novell has released code left and right to the public, including things they’ve sold as proprietary products (NetMail, for example). Again, they are free to take X, change it however they want, and spit it back out however they want.
You seem to be one of those people who promote open source, tout its wonderful ideals, and then get offended when it works the way it’s described to.
Is OpenOffice developed by cooperation and peer review, or by a team of full-time engineers ‘sponsored’ by Sun?
Both, actually; you can get OO either as daily snapshots or from the SCM. That nearly all the work is done by Sun does not change that fact.
Novell is going to do the bigger part of the work, and later open up for (I suspect few) volunteers and peer review.
The important part about open source and peer review is that it’s a process, not something you slap on afterwards. Even more important in this case as it’s part of the infrastructure.
IMHO simply because it’s the way to go for big and complex systems.
That’s nonsense; doing open development does not have to change the way you organize the project. It’s not like you have to give anonymous write access to your CVS, making it free for all to modify. The biggest and most complex open source projects manage it just fine, even if they achieve it through slightly different approaches. Look at how the development processes of the Linux kernel, Mozilla, KDE, Wine and OpenOffice are different, but in the end they are all open.
Jon, I followed the mailing lists during the time you point to as the failure of the public XGL project (August). I was, and am, quite sympathetic to the frustration you vented in those lists, and to the really stupid behavior of Daniel Stone regarding your CVS privileges.
But I do feel that you are doing two things with which you shoot yourself in the foot.
1) You make the case that it is either/or – either XGL or EXA – and I do not believe this is the case.
2) You argue from a marketing point of view about why X needs these new technologies and in so doing you inadvertently show disrespect to those who have built X.
I think I understand your impatience; on the list you were screaming against the wind, and your frustration was palpable. But these two points, IMO, really weakened your argument for moving to XGL.
Now let me clarify why I point to these two issues.
Firstly, the transition to EXA is not merely a band-aid. If the ultimate goal is an OpenGL API then EXA might have been a band-aid, but I did not see any consensus on the use of OpenGL as *the* API for X.
EXA means a move away from the ancient, inefficient use of XAA. The beauty of EXA was, and is, the relative ease with which new EXA drivers can be written. The fact is that within less than six months we already have a lot of work done towards providing complete EXA coverage for the entire range of supported hardware. If in one year’s time EXA can displace and deprecate XAA, that is no small feat; in fact it is simply amazing.
Moreover, EXA means high quality RENDER support. RENDER is not simply an X extension; it has been widely adopted by the majority of desktop application developers, and entire frameworks have been written to take advantage of it. Now, lots of people have problems with RENDER, Carsten Haitzler (aka Rasterman) most vocally. But in the absence of proper RENDER support in the drivers, we have been unable to differentiate whether the problem is with RENDER itself or with the lackluster, if not piss poor, implementation we have had to date, prior to EXA. Moreover, with good RENDER support RENDER itself will be improved and refined; who wants to refine something for which no good drivers exist?
But we should also remember that it was the move to modular X which opened the door for EXA. The change in the underlying build and packaging system made it infinitely more feasible to start to really muck around with the architecture of X. Your vocal protest against EXA was discouraging in precisely the way you ended up getting discouraged; the only difference is that EXA was here and now and considered doable by the driver developers. Rather than lambasting EXA, it would have been far more strategically wise to really emphasize EXA as a stepping stone, not as a stop gap, not as a band-aid.
Finally, nothing significant with XGL is going to occur until large parts of X are merged out of X and into proper kernel interfaces (i.e. most of X being userland, with hardware arbitration and detection/notification in the kernel). A big part of this is dependent upon agreements between Solaris, *BSD and Linux kernel developers. You spoke in the lists about using the framebuffer driver for XGL/Xegl, but until an architecture similar to what Linux already has is present on the other platforms, this simply isn’t going to happen. In Linuxland we have had a hell of a time getting our own kernel devs to work hand-in-hand with X/desktop app devs. We need not only this but an agreement with devs beyond Linuxland if X is going to really move beyond the rough consensus which has made X what it is for the past 20 years.
Glitz is slowly becoming the RENDER-to-OpenGL gateway. Until such time as open source OpenGL hardware accelerated graphics becomes the norm, Glitz is the only plausible way to provide OpenGL acceleration to the modern X desktop. With high quality EXA support, and by extension RENDER support, older cards which are less capable will be able to do a lot more, with a good software fallback for OpenGL. EXA radically reduces the number of things which are to be accelerated, in contrast to XAA, but of those things that are to be accelerated we have far higher coverage than with XAA (XAA had dozens? if not hundreds? of accelerable primitives, and 95%? were never implemented, whereas EXA only has a handful, but a handful which can be implemented by almost the entire range of supported hardware). If MESA could take advantage of optimized EXA/RENDER supported hardware, it would far outperform the miserable software-only OpenGL experience we have today, allowing for legacy compatibility while embracing new developments.
For the life of me I cannot see this as being anything but a complementary state of affairs: EXA promotes RENDER, Glitz marries RENDER with OpenGL, embracing the gradual transition to OpenGL acceleration while maintaining backwards compatibility. Don’t forget X was frozen in terms of development for a long, long time because no one had a vision for how to combine the past and the future; this vision seems to be here now. And this *is* progress.
Finally, although I am not a contributor to these projects, it is abundantly obvious that if one steps on the toes of those who are contributing, one is likely to get stonewalled. X will never change due to perceived issues of comparison to OS X and Vista. To forget this is to show disrespect for the work which X devs have already done. Of course X development does not occur in a vacuum, but one cannot argue that X *must* do this or that because Aqua does. As usual, the best argument is code; the second best is getting people to work together. What needs to happen now, now that EXA is here, is to get the Linux kernel devs involved in the discussion, to get them to start working out the issues to be resolved, and to open this discussion across the board to *BSD and Solaris devs.
Remember this: prior to EXA, each driver dev was struggling alone in their own little world, eking out a modicum of success in getting these mostly undocumented devices to perform adequately for the legacy of X applications. EXA enabled these same developers to rewrite device support for their pet devices, providing better support than that which was already there. Telling them “we don’t need your stinkin’ drivers ’cause we can have a universal framebuffer which includes the kitchen sink” isn’t going to win you any fans…
There is an old story that a camel is a horse designed by committee. Today EXA brings parts of OpenGL-like texturing to X. Next year there will be extensions so that X can access the pixel shading hardware. The year after that we will get extensions to access the lighting hardware, and so on.
There is a lot of value in moving to a well designed and standardized API today, thus avoiding all of these iterations.
Yes, this might be the case, but hopefully, in parallel, we will have first a half-finished and later on a polished X-on-GL implementation. I, and in my view many others, do not see these projects as mutually exclusive. True, implementing EXA probably did slow down the emergence of Xgl, but it hopefully enables a more continuous transition from a non-accelerated to a fully accelerated desktop on Linux.
And please don’t take my comments the wrong way; I am a huge fan of Jon Smirl’s developments, and I very much appreciate and agree with the view you have on graphics on Linux. So my thanks go to you *and* to all the other developers improving graphics on Linux.
Juhis ([email protected])
Eric Anholt was nice enough to politely point out that I don’t know quite what I’m talking about in regards to Jon Smirl and XGL. Sorry for adding to the confusion and jumping to the wrong conclusions.
Note to self: “It is better to remain silent and be thought a fool, than open one’s mouth and remove all doubt.” **
– Adam
** Samuel Johnson.
It’s really sad to see people tear into the writer of an article, question his motives, and in general caterwaul about what amounts to him not agreeing with their points of view. I personally think many open source projects, such as Firefox, are excellent, but I am not going to stoop to zealotry and begrudge a company, be it Apple, Novell, or even Microsoft, for wanting to create, market, and profit from proprietary software.
Where I get pissed off is when quality is sacrificed for greed, standards get broken, and companies use under-handed tactics to gain a virtual monopoly in a market, whatever that market may be.
Let’s be real, folks. While many open source projects are excellent and their owners play nice, OSS is not the cure-all for the real problems facing computer users.
I hope Novell makes a trillion dollars from their project. :-p
X is not just another open source project like Firefox or OpenOffice, to which there are numerous alternatives. X is the current glue that most Linux DEs depend on for displaying graphics. Much like the kernel, it’s one of those projects in which small changes can have massive effects on Linux as a whole. XGL has been identified and accepted as the future of 90% of all Linux desktops. If 90% of the Linux community could be impacted by a project that has been identified as a critical component of Linux’s future, wouldn’t it make sense to give them, and the companies trying to support Linux, a little bit of input on the implementation?
What Novell is doing is the equivalent of building a house for someone while not giving them any input on the construction. It’s the equivalent of telling someone “I know what’s best for you.” They may be right, but if they’re wrong and their version of XGL gets blacklisted or goes the proprietary route, then you’ll really see the community raise hell.
Critical side note:
What everyone is really scared of is the very realistic possibility that they just might not give it back to the community. After all, the MIT license gives them every right to do exactly that.
Now it strikes me as odd that a KDE developer, working for a company whose main business is to produce a toolkit which is open source in the end but which is developed internally, has any problem with this.
Trolltech provides nightly builds; KDE developers contribute patches which Trolltech can accept or reject, so they have full control over Qt but nevertheless allow outside input.
For some reason Novell doesn’t do this. It would be trivial for them to set up a public cvs. They still could act as gatekeepers for patches etc.
Does Novell have an advantage with this? Yes, they’ll be the first to have XGL integrated in an OS. But again, if they’re not obligated to share the code in the end, it’s the writer’s choice, not a choice the community has to make.
Yes, but for all the eye candy Xgl can provide (in addition to a smoother experience, which comes for free), it needs support from the WM and DE, so sooner or later Novell will have to let others look at it, which makes the current situation even more pointless.
The only reason I can see for Novell to behave this way (i.e. take an open codebase, develop it in secret just to open it up again sometime later on) is to get an experience edge for the in-house developers. Whether that’s worth it is questionable.
It’s like the situation with Apple and KHTML some time ago; Novell is doing nothing illegal, but it’s not really nice either. Apple changed and it helped both sides; perhaps Novell can too.
I’m not sure it’s very stupid; it’s been pointed out before: sometimes it’s better to build a good base and then open it up. It’s just the uncertainty of what Novell is going to do with XGL (open it, or keep it closed) that causes this commotion.
How many resources are wasted? There were only two people working on it in the open source community, now only one; it’s not like the community hasn’t had the chance.
http://aseigo.blogspot.com/2005/12/bit-more-on-xgl.html
jon: EXA and XGL have nothing to do with each other. the EXA author actually has and would like to continue to work on XGL, for that matter. please don’t confuse this issue with your own personal axe grinding. thank you.
jon: EXA and XGL have nothing to do with each other. the EXA author actually has and would like to continue to work on XGL, for that matter. please don’t confuse this issue with your own personal axe grinding. thank you.
There are ten drivers in the mesa public CVS that need EGL support added and nobody is working on them. Finishing them ought to keep you busy until Novell does another release.
BTW, why can’t Zack write his own messages?
The fact that all of the XGL development is done by Novell as a fork means it is still there, but we probably don’t know if anything has been done, as the freedesktop site hasn’t been updated very well. Contacting, say, Miguel on planet.gnome to speak to the company, or Nat Friedman, would help the problem. But probably, like tomorrow, we shall hear news.
Well, get cracking boys!
http://lists.freedesktop.org/archives/xorg/2005-December/011803.htm…
The link actually says nothing other than that Adam Jackson, an x.org developer, knows less about what Novell does than what aseigo has learned communicating with various people. And, more importantly, that he doesn’t get the issue, most likely because he has been too busy with other stuff rather than trying to work on XGL.
What Novell should have done, and the way most other OSS projects work, was to simply create a branch in the x.org CVS. Call it the incredible_cool_glx_novell branch or some such, but keep it open so other interested parties could have participated. It’s OSS 101.
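For the record, branching is a one-liner in CVS, which is what makes the closed tree hard to excuse. A minimal sketch; the repository root and module name here are illustrative, not the actual fd.o paths:

    # create a branch tag on the module, no local checkout needed
    cvs -d :pserver:[email protected]:/cvs/xorg rtag -b xgl-novell xserver

    # anyone interested can then follow the branch
    cvs -d :pserver:[email protected]:/cvs/xorg checkout -r xgl-novell xserver

Novell’s people could have committed to such a branch and still acted as its gatekeepers, exactly as they would in an internal tree.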
“If any given party stops contributing open code, well, tough for them, it’s not like their commits can be undone. hw/xgl exists, work from it.”
“I will continue to work on enabling X on GL in the open code, and I suspect I’m not alone there. Political debates about who’s playing nicely with whom don’t get development done any faster, so let’s not have them, k?”
Who gives a crap about Novell if they don’t want to play nice?
I haven’t heard anything but speculations. I’m starting to wonder if you even read what he wrote…
“The only people who can speak to Novell’s position on xgl development with any authority are Novell themselves. So please don’t speculate, it doesn’t do anyone any good.”
…
“There were significant technical barriers to getting xgl merged into the modular tree”
…
“There haven’t been any checkins to the xgl DDX in the old modular X server tree for a few months. This does not necessarily mean that people have stopped working on it. It doesn’t even necessarily mean that the developers have “taken it closed”.”
Also see the quotes in my previous comment if you’re too lazy to read the mail in detail yourself…
to read the mail in detail
I did, and it did not really say anything. But perhaps you should read what another x.org developer writes about the issue:
http://www.osnews.com/permalink.php?news_id=13040&comment_id=77163
“He’s currently working for Novell, and I’ve heard (though not directly from him) that he’s been instructed to keep changes private”
Speculations…
And even so, if that’s the case, “dump” Novell then. David Reveman is not the only one capable of doing the job. But it will take longer without him (or not, who knows).
Again, from Adam Jackson’s mail:
“I will continue to work on enabling X on GL in the open code, and I suspect I’m not alone there. Political debates about who’s playing nicely with whom don’t get development done any faster, so let’s not have them, k?”
And that’s just what this big mess is, a “political debate” filled with speculations and what not.
“That cuts both ways. Don’t start those debates, and don’t give cause for them to exist.”
And I guess I should take his advice more to heart…
Speculations…
The only thing that is speculation is why the changes are kept in private, and the why is not really the issue.
Again from Eric Anholt’s post:
“The issue here is that Novell is taking developers away from the open-source community for internal closed-source development, resulting in duplicated effort from a limited pool of talent or diverting new development efforts”
“The only thing that is speculation is why the changes are kept in private, and the why is not really the issue.”
I would say it’s one important issue. Alan mentioned several good reasons as to why there hasn’t been a code drop in some time (but the fact of the matter is that no one seems to know for sure; it’s just a lot of speculation). If they don’t keep the code for themselves “out of spite”, how, exactly, are they taking developers away from the open-source community?
But if Novell wants to go its closed source way and X.org puts its focus on Xgl, Novell’s efforts would become quite meaningless in due time. Would we get Xgl faster if Novell contributed? Sure. Alan also mentioned this.
Anyways, we seem to be going in circles. I’ll really take Alan’s advice to heart this time and not add more fuel to the fire. Bye!
Eric/Adam/et al.’s pursuit of EXA triggered the collapse of the public XGL/EGL project. Intentional or not, it doesn’t matter, because the project has collapsed. Novell opportunistically seized on this and went private with the code. You can’t blame Novell; the X community was sending a strong message that they didn’t want anything to do with XGL/EGL.
But now it looks like XGL is making progress and some X developers want a piece of it. Tough luck; you dug your EXA hole, now go live in it. Novell deserves full credit since they saw the value in the project and pursued it on their own time and money. Kudos to Novell.
Alan mentioned several good reasons as to why there hasn’t been a code drop in some time
The code drop thing is just a red herring; the development could just as easily have been done in a branch in the public x.org CVS tree, giving every one of the x.org developers the ability to incrementally merge the changes they find useful, rather than waiting for some major code drop.
And trying to ignore the issues, or pretending everything is right and dandy, is not helpful.
Look at the xgl CVS!
“Well there were a couple of snapshots later than CVS available outside of Novell, so I’ve done a crazy merge to try and get them into a workable CVS, I suspect I may have failed.. there is a pre-xgldrop-merge tag if I did.”
No, don’t look at the xgl CVS; that was just me committing some public trees that had escaped Novell back in July/August (CVS was in June…).
I’ve got my own plan and timetable for getting Xegl working in some reasonable fashion on my laptop before LCA, and I’ve decided: screw Novell, I’m taking any code that escaped, and I think some X.org people will join in on trying to get a decent non-Novell implementation. Novell can still come to the party, but it isn’t going to be the same party :-), and they’ll be doing the merging at the end of the day.
I haven’t read through all the comments yet, so forgive me if someone has already addressed this well, but the article and many of the comments that I’ve read contain a lot of irrelevant nonsense. Arguments about Luminocity’s development or EXA vs. XGL have nothing to do with this issue. Here is a general rundown of the problem.
XGL is a project to bring OpenGL capability to the X server. Whatever you may think of its importance, it is important to some people. XGL development was proceeding slowly (again, the reasons for this are not generally relevant). This is an open source project, with code readily available. There are a few core developers, but they can’t work on it as much as they’d like to. There are also some others who keep up with the progress of the project and contribute to it at least on occasion. What happens?
First, Novell hires the core developers of XGL and pays them to work on it. This is great, and nobody is saying otherwise. This will almost certainly speed up development.
Then, Novell stops publishing the cvs repositories of the code, or even snapshots of the code. Why? Who does this help? What does this accomplish?
With no CVS access, or even snapshots, the occasional contributors, or people trying to keep up with development as it relates to other projects, have nothing to go on. Now nobody else can possibly contribute, even if Novell provided a place for code submissions, which, of course, they do not. Also, any plans to make allowances for XGL in other projects are stalled.
What choices does this leave the other people involved with XGL, who are left on the outside and not even allowed to look in? Well, they could fork the code with even fewer resources available than before. This is clearly not desirable to any of them, especially if they would be duplicating work which will eventually be released as open source, as they still believe/hope it will be. On the other hand, they can just wait for Novell to release XGL, forget about any contributions or bugfixes to it (until it’s released, anyway), and deal with how it might affect other projects later.
The point is, this project’s development was transparent, and now it’s not only closed to outside contributions, it’s not even published anymore. Can anyone blame Aaron for not being happy about this?