“Novell is announcing its contribution of the Xgl graphics subsystem and the ‘Compiz’ compositing manager to the X.org project. These enhancements open up a whole world of hardware acceleration, fancy animation, separating hardware resolution from software resolution, and more. As a result, Linux desktops will become more usable, end-user productivity will increase, and Linux is firmly positioned at the forefront of client computing technology.” Videos and screenshots are included in the press release. On a related note, Dan Winship of Novell has explained on gnome-desktop-devel why Novell worked on all this behind closed doors; the explanation also covers the striking similarity between Novell’s mockups from December and Nat Friedman’s videos. The changes made to GNOME will all be released back.
Novell buys the KDE-oriented SUSE Linux, then unveils massive enhancements to GNOME…
While Novell bought the KDE-centric SUSE, they also bought the very GNOME-centric Ximian, and looking at who did this work, it was the Ximian side.
“Novell buys the KDE-oriented SUSE Linux, then unveils massive enhancements to GNOME…”
Who cares? The KDE guys have done plenty of beautiful things, and KDE is much more oriented toward features than simplicity. This will surely be partly integrated in the 4.0 release and then massively used in 4.1, and you can count on the Plasma team to use it, I’m sure. Vista Aero? Pff, it will look like crap next to my KDE desktop.
GNOME is now more or less the default DE in Novell and Suse related distros. It seems like Novell just wanted a solid OS on which to build their enterprise Linux offering, but they don’t care much about KDE. They said that KDE would still be the default in Suse but if you look at the Suse 10.1 betas GNOME is right at the top of the selection (the place that used to be occupied by KDE) during installation.
GNOME is more of a standard desktop within U.S. corporations because that is what Red Hat has always used and they have been the dominant distribution among enterprise customers in the U.S. Novell is aiming more at enterprise customers than they are at home customers. That’s where the money is. So, it shouldn’t be too much of a surprise to see them doing some serious development on GNOME.
As others have stated as well, Novell bought SUSE and Ximian. SUSE isn’t only valuable because it runs KDE. SUSE is valuable because it has a solid base. Adding work from the Ximian guys and running GNOME on that base is a good move by Novell IMO.
I don’t think that Novell doesn’t care about KDE. It’s just a matter of resources. Novell has some great GNOME resources through their acquisition of Ximian. They have a lot to contribute to GNOME, and it fits with some of their strategies. KDE has generally been more focused on features than GNOME anyway. What KDE lacks seems to be on the roadmap without the help of Novell. KDE development seems to be faster than GNOME development IMO. KDE is also in the process of changing their basic toolset. I don’t blame Novell for letting people more directly involved with KDE work through some of that process.
From my perspective the work that Novell has done has either been something that brought one of the two major desktop environments more on par with the other, or it has been something that could be adopted by both.
Nice to see compiz out in the open. I missed it when it happened, but the changes to X itself were released around the turn of the year. All we’ve been waiting on is the compositing manager and the changes made to GNOME to make use of the new stuff. As such, less was closed off, and for less time, than I thought, so kudos.
Even without the stuff released today, I was able to get some stuff going in Kubuntu:
http://img148.imageshack.us/my.php?image=xgl7uk.png
http://img95.imageshack.us/my.php?image=xgl22bc.png
It required some command line hoop jumping though. Hopefully the new stuff will make things a bit easier to use for more people/distros.
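For the curious, the hoop jumping at that time generally came down to starting a second X server running Xgl and pointing a session plus a compositing manager at it. A rough sketch, based on the Xgl how-tos circulating then rather than this poster’s exact commands (the -accel options varied by card and driver):

Xgl :1 -ac -accel glx:pbuffer -accel xv:pbuffer &

with the desktop session and the compositing manager (glxcompmgr at the time, compiz after this release) then started against DISPLAY=:1.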
I was able to get some stuff going in Kubuntu
I see from the shots that you are using Noatun; what happens when you enable the Madness visualization plugin?
I see from the shots that you are using Noatun; what happens when you enable the Madness visualization plugin?
heehee I had never used that plugin. Just like its description says, it moves all the windows around, and with glxcompmgr running they all wobble a little
“why Novell worked on all this behind closed doors”
Bullshit. The real reason is cost: the longer and bigger you make the implementation, the more costly it becomes. If you want to improve on a project on your own, that’s all fine, but when you have to bullshit about open source development in order to justify why you did it, I think anyone can look at the rest of GNU/Linux and know that half the things we are using today were started by now-busted dot-coms and finished by the community on its own development cycle, time and dime.
Also, there is a clear mandate at Novell to push development on GNOME and not stay as up to date on the KDE side; otherwise the new menu would fit both GNOME and KDE and XGL would be just as integrated there. Again, this was not done primarily because of budget.
The community usually has two answers to this kind of behavior: resist, fight and finally destroy it (it’s the software that dies because no one uses it), or embrace it and make it even better. I hope it’s going to be the latter.
LOL… I think you gave a fine example of showing us all why the development was done indoors. Contrary to some zealots’ belief, the community doesn’t know it all, and can sometimes do more harm to itself than good. If Novell wants to develop for nothing but GNOME and not KDE, if they want to develop stuff in-house… then contribute back later, more power to them. I doubt they sent GNOME or Xorg a bill for the work they did that will benefit everyone in the community. I for one wish more of this would be done in the future. Can you imagine how long it would have taken the “community” to develop an XGL to the level they have??? We would spend six months bickering over protocols and languages… I say KUDOS to Novell for what looks to be a job well done… And I look forward to seeing some of the changes they have made to THEIR version of GNOME make it into mainstream…
“I think you gave a fine example of showing us all why the development was done indoors.”
No, what I did was object to its nonsense. I think it’s clear that you disagree with me, but hey, I’ve got GNU/Linux, GNOME, XFCE, KDE, etc. projects to fall back on. Most of the time, when software development under open source is going nowhere, it’s because of:
– Funding
– Code contribution (lack of it, or really bad code)
– Lack of interest from the community in the project.
– A bad managing developer, or the lack of a single decision maker or leading group.
“Contrary to some zealots’ belief, the community doesn’t know it all”
The GNU/Linux community is over 250 million people; you will have to change your stupid name-calling one day. We have the extremely intelligent in the community too; the zealots are just the defense machine.
“and can sometimes do more harm to itself than good.”
No. As the code is open source and available to all, if you disagree you can prove your point with a fork, just as they did, except they’re trying to push it as the new and improved XGL and X.org.
“If Novell wants to develop for nothing but GNOME and not KDE, if they want to develop stuff in-house… then contribute back later, more power to them.”
I would rather they be definitive on this, so that the great KDE developers Novell has can go and seek work where they get the budget and support they need.
“I doubt they sent GNOME or Xorg a bill for the work they did that will benefit everyone in the community.”
You have a strange problem with reality:
http://www.dwheeler.com/sloc/ It’s Novell who’s working on the code and standing on the shoulders of other giants. If they were to start charging their clients, that would also be acceptable, but then SUSE went bankrupt using that method, and Novell owns SUSE now… They learned a long time ago that the GPL protects them from Apple and Microsoft.
“I for one wish more of this would be done in the future.”
Yes, I know, but you’re clueless about IT history.
“Can you imagine how long it would have taken the “community” to develop an XGL to the level they have???”
Sorry to burst your bubble: 95% of the work had already been done by the community. All they did was integration, improvement and, I think, some removal of code, and I also suspect that the budget they had was not so small. The only thing I need to say is: come back in a year, when the community has built the plug-ins and cool effects, and see where it could have been now with community involvement.
“We would spend six months bickering over protocols and languages…”
We don’t see the GNU/Linux companies bickering. The only groups we actually see bickering for six months (actually years) are those that contribute nothing and those that oppose GNU/Linux. The developers are quite amicable among themselves; all the rest pretty much do what they want and what they can.
“I say KUDOS to Novell for what looks to be a job well done…”
I have to second that based on the result; it’s just that you will never make me accept the message sent with it: “it’s better to develop everything in closed development with no one else’s input.” GNU/Linux is where it is because it’s exactly the opposite of what’s being suggested.
“And I look … make it into mainstream…”
It’s not their GNOME version; no one can own GNOME. As for mainstream, it’s all going to depend on the GNOME board, the GNOME developers and the license. Frankly, if they want it to be implemented by others, they should send out the source code on CD to other distributions and offer to help them integrate it. Not everyone has Novell’s budgets and developers.
Sorry, but your bullshit message that zealots have anything to do with the poor desktop showing is just that, another kind of bullshit. There are two companies with 3 billion in budget who can start their own desktop line tomorrow if they wish (I will keep repeating this until my death: Dell was really started with 50 million). One is Red Hat, who’s not interested; the other is Novell, who is not that interested in retail customers.
BTW, instead of voting with your mouth, vote with your wallet: when the new SuSE comes out, go out and buy it. Give it as a gift and offer paid copies to those in the right position.
BTW, instead of voting with your mouth, vote with your wallet: when the new SuSE comes out, go out and buy it. Give it as a gift and offer paid copies to those in the right position.
That’s my intention, really.
Great 😉
“I have to second that based on the result; it’s just that you will never make me accept the message sent with it: “it’s better to develop everything in closed development with no one else’s input.” GNU/Linux is where it is because it’s exactly the opposite of what’s being suggested.”
May I remind you that one of the most important free software projects today was developed pretty much like Xgl, at least initially? I’m talking about GCC and how Stallman wrote it, and there are many other examples.
This model may not be the best way to continue the development in the long term, which is probably why Xgl and Compiz are being merged into the freedesktop.org CVS. But thanks to the development model chosen by Novell we now have Xgl and Compiz as free software, and you’ll be able to enjoy it on your system without having to obtain proprietary software.
If you want to vote with your wallet you could vote against proprietary software instead of against a development model.
I ain’t arguing about the result and how they did it, but about the fact that they are advocating this method alone and are not fully disclosing the details.
“But thanks to the development model chosen by Novell we now have Xgl and Compiz as free software”
That’s where I disagree: 95% of the work had already been done by the community.
“and you’ll be able to enjoy it on your system without having to obtain proprietary software.”
“If you want to vote with your wallet”
If ? Ok …
“you could vote against proprietary software”
I already do, to a point. I am more a choice person than a radical: I want my proprietary and free software on the same level in everything; then free software clearly wins.
“instead of against a development model.”
I can do both. I ain’t arguing against the method, just that it’s a lie to suggest that the same thing would not have been achieved by the GNU/Linux community, and that this method is the best; the community actually did 95% of the work. I think I despise the agenda being pushed here more than the result and the method used.
I ain’t arguing about the result and how they did it, but about the fact that they are advocating this method alone and are not fully disclosing the details.
That doesn’t really seem to be the case; if you read Dan Winship’s mail carefully, you’ll understand that they chose this method because they thought it was best for a specific purpose: to spearhead innovation in a critical area.
Considering that a) after the initial push the code is given to the community to develop further as open source software, and b) the remaining 99% of software development at Novell follows the “traditional” method, I fail to understand what your concerns are.
rehdon
Since you did not read Dan Winship’s mail, and hence can’t have understood it, let me quote directly from it:
“If we had proposed the changes on the mailing lists, it would have started a huge discussion about what people hated about the design (“you can’t make the panel menu depend on beagle!!!”) and how it should be different. And then we could have either (a) completely ignored everyone and done it ourselves anyway, or (b) had a long conversation about the merits of the design and then not actually finished the code in time for NLD10.”
“An equivalent answer to the question is “because you can’t do design by committee”. Everything good in GNOME is good because one person or a small number of people working closely together made it good.”
* Evolution’s UI blocking on I/O SUCKS. Due to lack of design in the
early stages of development
* Evolution’s integration with gaim and tomboy RULES. Both of these
happened because specific people (ChipX86, orph) made them happen.
* Multimedia integration SUCKS. No one has ever sat down and tried
to fix the big picture. (Although I think the gstreamer team is in
the process of doing this now?)
* Apps not remembering their window size and position SUCKS. Again,
needs someone to take this problem and make it their own. I
remember xkahn was trying to fix this a few years ago, but never
finished.
* Bug-buddy SUCKS. Jacob’s original UI was simple and brilliant. But
as more and more people added more and more features without
looking at the big picture, it got unwieldy. (But now a small
team is putting the simplicity back in again.)
* Deskbar applet RULES (kikidonk), dashboard RULES (Nat), and beagle
RULES (trow and joe). None of these was done *exclusively* by
those people, but each of them reflects one person’s (or a few
people’s) vision, as opposed to the current state of bug buddy,
which just sort of happened.
“If you try to design something by committee, you either have to end up with the latter sort of messy does-everything UI, or you ignore and hence piss off a large chunk of the committee.”
I have, do, and always will put my money into any project I use. It’s funny how people will jump on the boards and scream blasphemy and talk nonsense, then when someone calls them on it and they get moderated down (so obviously others agree, hence the -5 moderations and the +5 moderation for my reply) they get political and proper. Face it you came on the boards with your chest poked out, then got moderated down, and now you are upset… The fact remains that, like one of the previous posters, with the direction they look to be going, I WILL be buying NLD, and I also own stock in Novell, and have purchased previous versions, and have purchased and supported other distros and small projects in the past… How many times have you clicked on those “Donations Here” buttons on the sites of all the open source projects you hold up on your shoulders now… It also never fails that you always get the line-for-line replies from the thread junkies who get their feelings hurt… Most of your replies were completely off base, but hey, it’s a public place; speak your mind, just be prepared to back up your claims…
“I have, do, and always will put my money into any project I use.”
No 😉, that’s impossible in GNU/Linux:
http://www.dwheeler.com/sloc/
There is just too much software to contribute to all the projects you use, and you don’t have the money to be really significant on your own (same for me, btw).
“It’s funny how people will jump on the boards and scream blasphemy and talk nonsense, then when someone calls them on it”
I have to agree: stop your bashing and nonsense.
“and they get moderated down … +5 moderation for my reply) ”
You might want to have a discussion with the OSNews staff on that one 😉
“Face it you came on the boards with your chest poked out”
No, I always post the same way: I write what I know is right, factual and real, and what I want to discuss.
“Then got moderated down, and now you are upset…”
You really don’t know me at all 😉
“The fact remains that, like one of the previous posters, with the direction they look to be going, I WILL be buying NLD, and I also own stock in Novell”
Interesting tidbits here 😉 Too bad you did not state this in your first comment.
“How many … you hold up on your shoulders now.”
My usual answer is “none of your business”; it’s the sum of your, my and everyone else’s contributions that makes GNU/Linux great. But just this once I will make an exception:
None. I used the other buttons: advocate + contribute code + contribute money + contribute ideas + defend projects + write documentation + translate. You can’t see or understand that one.
Someone I spoke to in 2001 valued my direct contribution to GNU/Linux at 50 million; he should know, he has a great distribution of his own. Personally, over 14 years, I have contributed $125,000 CAD ($9,000 CAD per year). That’s nothing in comparison to my companies, and it’s nothing compared to the companies of others I made do the same. Now, what’s yours, and what is your real name, coward?
“just be prepared to back up your claims… ”
I am Moulinneuf; I don’t make claims. I just happen to be a funny iceberg people laugh at, because that’s the way I like it. I may be only 1/1000000000th of the coder and good guy that people like Duval, Texstar, Linus, Young, Havlik, de Icaza etc. are, but I am just myself and I contribute as I can.
Contrary to some zealots’ belief, the community doesn’t know it all, and can sometimes do more harm to itself than good.
I don’t know why you got moderated to +5. It’s amazing how seemingly politically correct comments can get modded up. You’ve just blown the case for open source software and the open source process out of the water. Want to develop software in-house? Great. Make it proprietary. At least Microsoft isn’t fooling anyone over that.
I always thought some of those people were like Microsoft, except with open source software. I’ve been proved right. We can all sit back, relax and enjoy the perfect way to cause trouble and the perfectly wrong way to relate to the open source community in the coming weeks and months ;-).
I agree completely with the Novell statement. Designing something with everyone telling you what to do and what not to do is just silly. Look at just about any successful open source project: it started with a clear vision and implementation (and afterwards is maintained by someone who has the vision).
Imagine if Linus hadn’t spent all that time getting the first kernel out and had instead discussed it with the “community”; we wouldn’t have Linux.
Who made you King of “The Open Source Process?”
A lot of successful free software is largely developed by a small kernel of developers, who work on initial design and implementation concerns before accepting contributions from others. And a lot of free software that never goes anywhere was not-really-developed-at-all by a group of “contributors” that spent most of their time talking about software rather than writing it.
Maybe, just maybe there are a number of development processes people can use for an intended goal that also happen to involve open source licenses. Maybe those using different ones can do so without you yelling, “THERE GOES OPEN SOURCE DEVELOPMENT!” because you have some vendetta against them. That is unless you’re going to start informing us how open source development is being discredited by the swarm of useless sourceforge projects that haven’t been developed in six years.
Sedge, as usual you’re a one trick pony. Novell isn’t a KDE shop so now they are Microsoft.
“Oh the horrors, Novell decided to do some work in-house before releasing it as open source. We must resist and fight this atrocity.”
We can all sit back, relax and enjoy the perfect way to cause trouble and the perfectly wrong way to relate to the open source community in the coming weeks and months ;-).
Oooh, the “open source community” out for revenge against Novell. All rise up and fight the evil Novell.
When will you twits understand that there is no “the community”?
When will you twits understand that there is no “the community”?
Resorting to insults again, eh, Lumbergh? Well I guess now that they’ve made moderation points harder to get you’ve crawled out from under your bridge.
There is a Linux community, but it’s not the distinct, exclusive entity you’re trying to set up as your straw man.
The truth is that there is a Linux community, just like there is a KDE community, a Gnome community, a X.org community, a few BSD communities, a Ubuntu community, a Debian community, a Mandrake community, an OpenSolaris community, a vim community, an Emacs community, and so on. Communities within communities. I myself am a member of a few.
Each time someone expresses their interest for a piece of software enough to contribute to it (in whichever way they can – you don’t need to be a programmer), then they are part of that specific community.
Both KDE and Gnome will profit from this new hardware-accelerated server, just like GNU/Linux, *BSD and other *nix systems will profit from it. I find it quite amusing that some people would manage to turn this into YAFW.
Resorting to insults again, eh, Lumbergh? Well I guess now that they’ve made moderation points harder to get you’ve crawled out from under your bridge.
Hehe, why am I not surprised that you would care about such things.
There is a Linux community, but it’s not the distinct, exclusive entity you’re trying to set up as your straw man.
Oh, but that’s exactly what Sedge was trying to say (even though he doesn’t believe it). Only people like you believe it. Sedge is just A#1 KDE fangirl, and thus his pissing and moaning about everything Novell does.
The truth is that there is a Linux community, just like there is a KDE community, a Gnome community, a X.org community, a few BSD communities, a Ubuntu community, a Debian community, a Mandrake community, an OpenSolaris community, a vim community, an Emacs community, and so on. Communities within communities. I myself am a member of a few.
Each time someone expresses their interest for a piece of software enough to contribute to it (in whichever way they can – you don’t need to be a programmer), then they are part of that specific community.
Whatever makes you feel better about yourself. Does it make you feel warm and fuzzy because in your fantasy world you are part of “the community”?
Both KDE and Gnome will profit from this new hardware-accelerated server, just like GNU/Linux, *BSD and other *nix systems will profit from it. I find it quite amusing that some people would manage to turn this into YAFW.
Exactly, so why don’t you talk to your Novell-hating pal Sedge there about why he added to YAFW.
Oh, but that’s exactly what Sedge was trying to say (even though he doesn’t believe it). Only people like you believe it.
You simply don’t grok what I was talking about. If you share code with others then you need to let them know what you’re doing and how they can get involved. If not, then you have trouble and people feeling as if they are being undermined. If you have that situation then you have what is known as a fork, and you can see how upset Dan Winship got when that word was mentioned.
The open source community is not one whole group or entity, and it definitely isn’t people going off and doing their own thing, and complete parallel development, in a closed environment. The community is a lot more fluid than either of those extremes – you’re both wrong about what I was saying.
Working in a community where you use the code and work others have produced is all about working with people – something you seem to know very little about. I can imagine that that is why many programmers like to go off to a closed room somewhere and just code, which is what some people at Novell seem to be doing. My point is that with open source software, more than with any other software development method, choosing that route leads straight to hell.
Sedge is just A#1 KDE fangirl, and thus his pissing and moaning about everything Novell does.
You speak for yourself, as always :-). It doesn’t alter anything. If you have something to say related to what was actually said, by all means do so. However, there’s only so many times you can repeat that.
The reasons why I moan about Novell, or more specifically, some of the things they do, I have consistently laid out. You people then moan back because you simply don’t want to accept that some people’s marketing at Novell, and what they’re doing, is complete BS – and no, they’re not the centre of the universe. We have had the same marketing BS for YEARS, and it hasn’t done them, open source software and the cause of desktop Linux one iota of good. And then they do YAP (yet another presentation – good acronym!) and expect people to clap and cheer over things that are totally unrelated to the function of their enterprise product, or which I and others have been doing for months with little fanfare.
Exactly, so why don’t you talk to your Novell-hating pal Sedge there about why he added to YAFW.
Poor you.
You simply don’t grok what I was talking about. If you share code with others then you need to let them know what you’re doing and how they can get involved. If not, then you have trouble and people feeling as if they are being undermined. If you have that situation then you have what is known as a fork, and you can see how upset Dan Winship got when that word was mentioned.
I know exactly what you *mean*. What you meant was that there would be some big backlash against Novell because “the community” didn’t like this “in-house” work. Nice try, but your jihad against Novell/Ximian/Gnome is impotent.
Working in a community where you use the code and work others have produced is all about working with people
There’s that “community” thing again. If we just say “community” and “freedom” enough times it’ll lead to….uhmm, nowhere
can imagine that that is why many programmers like to go off to a closed room somewhere and just code, which is what some people at Novell seem to be doing. My point is that if you are developing with open source software, more than any other software development method, if you choose that route then it leads straight to hell.
Did you even read the rationale on the mailing list for the in-house work? Apparently not. In any case, you and the other anti-Novell, sniveling weenies are in no position to tell Novell employees, or anybody else for that matter, what they should or shouldn’t do. But in your demented little fantasy world, Novell’s actions “lead straight to hell”.
Sedge is just A#1 KDE fangirl, and thus his pissing and moaning about everything Novell does.
You speak for yourself, as always :-). It doesn’t alter anything. If you have something to say related to what was actually said, by all means do so. However, there’s only so many times you can repeat that.
Haha, nice try at ducking and running around the known fact that you’re a one trick pony with nothing except antipathy for anybody that doesn’t embrace KDE.
The reasons why I moan about Novell, or more specifically, some of the things they do, I have consistently laid out.
And why do you care? What is your stake in Novell’s enterprise desktop? Oh, wait, you don’t have one.
You people then moan back because you simply don’t want to accept that some people’s marketing at Novell, and what they’re doing, is complete BS – and no, they’re not the centre of the universe.
Haha, I could care less about Novell’s marketing. I ran Suse once, for about 2 days on a partition, and then went back to Kanotix – running KDE no less.
We have had the same marketing BS for YEARS, and it hasn’t done them, open source software and the cause of desktop Linux one iota of good. And then they do YAP (yet another presentation – good acronym!) and expect people to clap and cheer over things that are totally unrelated to the function of their enterprise product, or which I and others have been doing for months with little fanfare.
Why do you even give a damn? Why don’t you just apply for a position at Novell if you’re so worried about their marketing? We had the discussion about autopackage and ISVs not too long ago, and really, Novell isn’t even in a position to do a whole helluva lot about Linux’s endemic desktop problems.
Exactly, so why don’t you talk to your Novell-hating pal Sedge there about why he added to YAFW.
Poor you
Sedge, I know the truth hurts. Maybe the folks over at dot.kde.org will console you.
What you meant was that there would be some big backlash against Novell because “the community” didn’t like this “in-house” work. Nice try, but your jihad against Novell/Ximian/Gnome is impotent.
Yawn. Zzzzzzzzzzzzzzzzzzz…………
Whatever makes you feel better about yourself. Does it make you feel warm and fuzzy because in your fantasy world you are part of “the community”?
Communities exist, whether you can see it or not. There is a Linux community as long as people claim to be part of it, whether you like it or not. The fact that you did not offer counterarguments, but rather resorted to insults – as usual – only serves to reinforce my points.
What a sad, sad man you are.
LOL… I think you gave a fine example of showing us all why the development was done indoors. Contrary to some zealots’ belief, the community doesn’t know it all, and can sometimes do more harm to itself than good. If Novell wants to develop for nothing but GNOME and not KDE, if they want to develop stuff in-house… then contribute back later, more power to them. I doubt they sent GNOME or Xorg a bill for the work they did that will benefit everyone in the community. I for one wish more of this would be done in the future. Can you imagine how long it would have taken the “community” to develop an XGL to the level they have??? We would spend six months bickering over protocols and languages… I say KUDOS to Novell for what looks to be a job well done… And I look forward to seeing some of the changes they have made to THEIR version of GNOME make it into mainstream…
I completely agree.
About bickering: that’s assuming they get over the philosophical arguments about which open source licence to use, and which one has been blessed by Stallman himself.
Contrary to the myth of OSS advocates, who see OSS as the be-all and end-all, people are attracted to projects because of a sexy factor – and I’m sorry, the VAST majority of what needs to be done on the desktop is not very sexy, very mundane and very boring, hence the reason why we need the likes of Novell to work on the shit boring stuff – programmers may not like it, but at the same time, they know if they’re being paid for the work, they’ll begrudgingly do it because it’s paid employment.
Sun did the same thing, and Novell is doing the same thing as well – they realise that it isn’t all sunshine and lollipops for the OSS desktop, and sometimes a complete working version (or at least the groundwork) needs to be done before throwing it out onto the internet; at least then, coupled with a roadmap and something to focus on, the project hijackers of the OSS world don’t stand a chance.
About bickering: that’s assuming they get over the philosophical arguments about which open source licence to use, and which one has been blessed by Stallman himself
I don’t understand what you mean. The licences recommended (what you call blessed) by RMS are already known. The choice of licence has nothing to do with philosophy for most developers, either.
Contrary to the myth of OSS advocates, who see OSS as the be-all and end-all
I didn’t know that. Where did you find the evidence for what you say?
I thought OSS was there to strip the beliefs out of Free Software.
people are attracted to projects because of a sexy factor – and I’m sorry, the VAST majority of what needs to be done on the desktop is not very sexy, very mundane and very boring
That’s just not true. I sure don’t see it like that. I see that it is very interesting, but it means a lot of involvement, with no reward except the self-satisfaction of making everything work better.
I mean, look at Pango. Most people, even here, can’t even understand how amazingly it improved i18n and other things in GNOME (perhaps everybody takes it for granted), but the only comments I see about it are that it is sh*t and should be replaced. Great reward…
Or look at people bashing (KDE or Gnome) devs because their feature is still not implemented or their bug is still not fixed.
I regularly look at Federico’s blog about GNOME and performance, and I find it extremely interesting and sexy, but I can’t contribute because I would need several hours straight to get any meaningful results. Federico can do that because he’s paid for it, which gives him much more time to do these things.
hence the reason why we need the likes of Novell to work on the shit boring stuff – programmers may not like it, but at the same time, they know if they’re being paid for the work, they’ll begrudgingly do it because it’s paid employment
BS again. To develop things like that, you need LOTS of time, several PCs, processing power, graphics cards…
So it’s easier for Novell to provide that than for individuals in their basements.
You say developers do this begrudgingly, but I have a hard time believing that. I think time is the main resource lots of developers lack to do all this work; that’s why some have to abandon such projects.
they realise that it isn’t all sunshine and lollipops for the OSS desktop, and sometimes a complete working version needs to be done before throwing it out onto the internet; at least then, coupled with a roadmap and something to focus on, the project hijackers of the OSS world don’t stand a chance
Xgl is still development code. The benefits are there for me now, though, even if I won’t install Xgl for a while, because it’s now easier to shut down the trolls who endlessly repeat that Vista is better than everything else and Linux desktops are doomed (!!).
I did not understand why there was such hatred for Novell, but I do not understand the hatred for the OSS world either. I don’t even understand what or who the ‘project hijackers of the OSS world’ are.
I thought ‘project hijackers’ existed only in the proprietary (closed) world.
Lol, my sentiment exactly http://osnews.com/permalink.php?news_id=13551&comment_id=92467
Novell are being brutally honest about the downsides of the community, and the community would do well to actually listen instead of bickering more.
The SuSE purchase was to get the underlying Linux system, not the desktop, which could be replaced easily. Novell needed a solid Linux distro to base their NLD on. SuSE was one of the top distros at the time, so they bought it. Not really that weird.
Who got the idea to drop the traditional Applications/Places/System menu in favour of a dumbed-down XP/Vista start menu clone? It may look nice, but the old one is far more elegant and efficient.
The presentation videos are pretty cool; I wonder about stability though. The available composite extension and compositing managers can’t be used in the real world because of crashing, even if it only happens once a month. I hope compiz will resolve this issue.
While I agree Applications/Places/System had its merits, you have to admit the System > Administration, System > Preferences and Applications > System split is a bit confusing and a bit of a mess. The Places menu is incredibly useful, but hopefully this “new” menu incorporates that.
I’ve been using beagle and the deskbar applet as a replacement for the menu for a few weeks now, and this menu appears to take the best of both worlds – offering regularly/recently used applications as large icons (replacing those attached to a panel), providing a search interface within easy reach to get to “what you’re looking for”, and reducing the panel clutter of a “standard” Gnome desktop setup.
I wrote my previous post mainly because of the lack of an equivalent of Places in the new “Start” menu. I would reconsider my opinion if it could be nicely merged in.
Well, in GNOME 2.14, Applications -> System Tools was removed and merged into the other appropriate menus.
The presentation videos are pretty cool; I wonder about stability though. The available composite extension and compositing managers can’t be used in the real world because of crashing, even if it only happens once a month. I hope compiz will resolve this issue.
This was true, but Xorg 6.9/7.0 really improved everything. I’ve been using EXA + xcompmgr + GNOME 2.12 since the Xorg 7.0 release with just one minor glitch (the log-out window is invisible unless I kill xcompmgr first).
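For anyone wanting to reproduce that kind of setup, the xorg.conf pieces involved were roughly the following. This is a sketch only: the Extensions section is the standard way to force Composite on in Xorg 6.8 and later, while Option "AccelMethod" was honoured only by some drivers in the 6.9/7.0 timeframe, so check your driver’s documentation; the "Card0" identifier is just a placeholder for your existing Device section.

Section "Device"
    Identifier "Card0"
    # keep your existing Driver/BusID lines here
    Option "AccelMethod" "EXA"
EndSection

Section "Extensions"
    Option "Composite" "Enable"
EndSection

xcompmgr is then simply started from the session once the server is up.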
well, they copied exposé and the cube effect…
not that I care…I’m glad to see this coming to linux!
come off it… 3ddesktop had the ‘cube’ effect LONG before apple had it. anything anyone has supposedly stolen from apple, apple stole from someone else first.
What is in these improvements? Does Xegl work yet or are they planning to ship the Xglx kludge?
Kludge? Personally, I always thought Xegl was a dead end. Just drop the 2D already. Nobody cares.
3D is the future, and anything that needs extra 2D love and care can define an OpenGL extension to handle it.
And Xglx runs on air? It still requires some “2D” (modesetting), which can be either the awful Xorg XAA way of dealing with things, or a clean new GL-API-based display device interface which can handle multiple graphics cards, exotic/multi-display modes, as well as hotplugging.
And Xglx runs on air? It still requires some “2D” (modesetting), which can be either the awful Xorg XAA way of dealing with things, or a clean new GL-API-based display device interface which can handle multiple graphics cards, exotic/multi-display modes, as well as hotplugging.
Or choice three: EXA. It was made for a reason, and now it’s obvious why. The free Unix desktop will scale from old machines (which would use EXA/XAA and no XGL) to medium machines (that will use XGL for 3D stuff and EXA for 2D stuff) to high-end future machines (all 3D).
It might not be the most elegant way to do things, but since when is that the Xserver’s style?
I think it’s great that Novell has released this; Linux has needed this to put its visuals on par with OS X.
Now the question is, how does this get packaged, and which distro other than a “Novell” based one will have it show up first?
Compiling this from CVS is a bit much for me to want to do. Has any other distro made mention of this?
It’s been interesting to me to notice how many things Novell has done that other distros didn’t want to do have later been integrated into those other distros. It has been stated that no one has to use any of this, even though it’s free code. Novell doesn’t have a problem with a little brand differentiation. I wouldn’t be surprised, though, to see this added to the big distros at least within the next year.
I wonder if the KDE guys will use XGL and Compiz. They’re going to have an accelerated desktop as well, and XGL could speed up that process.
I wonder why they didn’t just go for the real solution and work on XEGL. XGL is kind of like the midpoint on the way to where the graphics subsystem eventually must go.
I wonder if the KDE guys will use XGL and Compiz.
Do you have any information on why a KDE user would be prevented from using XGL?
My understanding is that this is an X server, meaning any X11 application will use it when run on it.
Aaron Seigo said he was happy with the outcome and that he thought there is enough time to integrate Xgl into KDE 4.
rehdon
While Novell bought the KDE-centric SUSE, they also bought the very GNOME-centric Ximian, and looking at who did this work, it was the Ximian side.
Don’t gloat too much. KDE ppl did it (practical transparency / compositing manager integration) more than a year ago. All the settings are in the Control Panel.
Or will distros or projects just have a not-built-here mentality about it and then try to rewrite the code?
Apparently mail.gnome.org:80 is down; I really want to read why it was kept behind closed doors.
http://article.gmane.org/gmane.comp.gnome.desktop/27699
I don’t really see a problem with Novell developing this software indoors, as they have chosen to make it modular. Thus KDE, E17, XFCE etc. can develop and add their own plugins, and these effects or features can then be used by all WMs, which is very cool. This development didn’t hurt anyone, as Novell knew that should they make XGL incompatible with other DEs or distros, the community wouldn’t adopt it. What it did was benefit the entire community. I mean, this project was practically dead six months ago, and now it’s actually working! It might not be complete yet, but it’s not far off.
It seems that Linux will be able to keep up with Vista and OSX in the graphics department, something that seemed unlikely not long ago.
I say more power to Novell for putting this technology at our disposal!
I think the developer comments were spot on when it comes to design. Great decision on their part.
Plus I think it will also be enough to get me to put money on the next version of NLD.
Where is the anti-MS crowd who were cursing Microsoft by asking who needs hardware acceleration for graphics, etc.? Come on guys, be honest, come out here and say it’s crap. OSS guys should instead try to improve the basic desktop, right?
Really, a 3D cube? I don’t think my idea of using a computer is playing games. They’d better work on increasing productivity rather than crap like a 3D cube, which looks cool but is almost useless for finding a running app. An Exposé-like design is way better.
Compiz will allow all kinds of effects to be developed thanks to its plugin system. Some of these improve the usability of the environment; some are nothing more than bling for the people who like it. Compiz will have an Exposé plugin (which was actually shown in the demo, please watch it again), as well as the cube plugin. Exposé and the cube are not competing designs; they’re two features that will be available by default in Compiz.
The cube plugin has nothing to do with gaming. It’s about improving the experience of using multiple workspaces. Please read http://news.com.com/Novell+seeks+to+boost+Linux+graphics/2100-7344_…
They’d better work on increasing productivity rather than crap like a 3D cube, which looks cool but is almost useless for finding a running app. An Exposé-like design is way better.
Did you actually follow the link? Because video #3 shows off an eXpose clone, with a transparent window that plays the Harry Potter trailer…
You really suffer from a lack of vision…
the cube is simply a demo of capabilities, not an end-user tool…
and you are trolling/delusional, or more likely making up that first paragraph, so I won’t bother to refute it.
Maybe you should work on the creative aspect of your seriously lacking personality before criticizing others for creating something new (which you obviously haven’t even come close to doing).
Your nick is very fitting in this case; you might as well be called “TunnelVision”.
You seem like a troll, and your post is simply an attack on other trolls. I will, however, try to give a response to your question for those who might really be interested.
A 3D cube by itself isn’t something that would improve productivity, but virtual desktops are. A 3D cube is something that can make virtual desktops more tactile to those that like that sort of thing. So, in some ways more graphics power can be an improvement on an already powerful desktop.
Just because MS hasn’t released any form of virtual desktops doesn’t mean that there aren’t real uses for them or possible improvements in the ways that they are used that an MS junkie might not recognize or appreciate. There are actually people who feel hampered when using Windows for the lack of virtual desktops. Those people would much rather see virtual desktops in Windows before they would care much about transparent windows or sidebars.
More concepts and ideas will become possible as a result of better graphical capabilities, regardless of whether it’s Windows or Linux. A 3D cube may not be the best addition to virtual desktops in the end, or even to the desktop as a whole, but sometimes concepts need to be tested before they can be proven.
Unfortunately, I think the complaint made by many people about Vista is that the changes seem mainly cosmetic (a major gripe, if true, after years in development). I know that there is more to Vista than graphical capabilities, but with so many promised features removed, people do have some valid complaints.
Anyone know of any good guidance on compiling/installing this from CVS? I managed to get the source, but it doesn’t seem to have a makefile included or anything. I don’t think compiling each file by hand is a fun idea.
As far as I understand, NLD 10 is an enterprise desktop. Why does an enterprise desktop actually need fancy 3D effects? As a long-time Linux desktop user I am glad that Novell is contributing to XGL, but I don’t really see how this is useful for an enterprise desktop.
I am also a KDE user myself and I am not very happy that Novell bought one of the major KDE-based distros and is now degrading KDE to a second-class citizen. I think this is a mistake. KDE was really what made SUSE special, and now Novell is turning it into just another GNOME distro. There is no real reason anymore to choose NLD/SUSE over Red Hat/Fedora or Ubuntu.
And I also think it is quite unfortunate that Nat Friedman (and to a lesser extent Miguel de Icaza) are anti-KDE (at least that is my impression). I have seen lots of former SUSE (KDE) users in forums complaining about them. This splits the community, but for the success of Linux on the desktop, GNOME and KDE have to cooperate. People will not use Linux on the desktop because of some fancy graphical effects. In the end applications matter and it is a matter of fact that some of the best Linux applications are KDE/Qt applications like K3b, Scribus and Amarok. Of course one can try to recreate them using GNOME/GTK, but this is a total waste of resources which could be used much better in other areas. In my opinion it is absolutely necessary to integrate KDE/Qt apps into GNOME as well as possible (and of course GNOME/GTK apps into KDE), because a desktop with a mixed set of applications (say Firefox, OpenOffice.org, K3b, Amarok, Inkscape, GIMP, Krita) will be superior to a pure KDE or GNOME desktop. KDE applications in GNOME should use the GNOME theme and the GNOME file selector (for example), and the other way round. And in this respect it is really bad that many KDE users/developers actually have the impression that Nat Friedman and Miguel de Icaza want to harm KDE. I think both should really try to give KDE people the impression that this is not the case, because in the long run it will hurt Linux on the desktop if there is not more cooperation between GNOME and KDE.
As far as I understand, NLD 10 is an enterprise desktop. Why does an enterprise desktop actually need fancy 3D effects?
Well, it’s nice that it’s happening. However, why they want to put that kind of effort into this for an enterprise (I laugh every time I hear that word) desktop is anyone’s guess. Maybe they’re trying to convince Novell’s own employees to use GNOME, or even just desktop Linux, period? Who knows? That’s the only reason I can think of. Get people using desktop Linux by allowing them to plug their iPod in at work and manage their photo collection. Not that you should be doing that at work, of course, but I’ve been doing that (along with other Suse Linux and KDE users – maybe even Novell employees) with Digikam, Amarok and an iPod slave for months. Feel free to come and knock me out any time, Nat ;-).
I am also a KDE user myself and I am not very happy that Novell bought one of the major KDE-based distros and is now degrading KDE to a second-class citizen. I think this is a mistake.
Hmmm. You’re believing far too much of the bullshit. The most widely used Novell distribution is still Suse Linux and openSUSE, and the desktop everyone uses there is still KDE. The NLD has a userbase that pales in comparison to Suse Linux, Ubuntu and probably distributions like Linspire as well.
People will not use Linux on the desktop because of some fancy graphical effects. In the end applications matter…
Well spotted. From the looks of things that’s about all they have though, unfortunately :-).
As far as I understand, NLD 10 is an enterprise desktop. Why does an enterprise desktop actually need fancy 3D effects? As a long-time Linux desktop user I am glad that Novell is contributing to XGL, but I don’t really see how this is useful for an enterprise desktop.
The most important thing about this is responsiveness. I agree with you that an enterprise desktop doesn’t need wobbly windows, but the responsiveness is amazing. A major glitch in X desktops was their slowness, which sometimes frustrated people. With Xgl, this is history and the X desktop is faster than ever before. Using hardware for rendering the desktop should have been done looong before now.
Using hardware for rendering the desktop should have been done looong before now.
I agree.
It wasn’t possible because the software just wasn’t there (the hardware WAS, almost 5 years ago): Linux 3D drivers (very bad, Nvidia excepted, until recent years), the lack of the OpenGL extensions needed for speedups, XFree86 stagnation… it is good to finally have all that behind us.
Xgl is just about to go mainstream, with a great push from Novell, which endorsed it (a very wise choice IMO). They are doing it primarily because of Vista and to kill off Sun’s and Red Hat’s initiatives.
Luminocity is now certainly dead, while Looking Glass will remain a niche experimental GUI. Eventually GNOME and KDE will pick up some good stuff from LG and slowly advance to a hybrid 2D/3D desktop with the ability to behave (configurably) in both ways.
I just realized I don’t agree with most of the points you made in your post.
1) Enterprise also includes desktops, so improving the desktop environment is part of it. The fancy effects are not there so that desktop consumers will be impressed by the software; these effects actually help make the software easier to use. XGL is also faster than a normal X server, with or without the pretty plugins, so it makes the Novell products more interesting to their customers.
2) The products from Novell that will be focusing on GNOME are not the ones you care about (by “you” I mean anyone who actually knows what KDE and GNOME are). What you care about is SUSE Linux, which is probably even better for KDE users now than it was before. For one, openSUSE was created; SuSE wasn’t free before (YaST was proprietary software, relicensed under the GPL after Novell bought it).
3) You can’t say that Miguel and Nat are anti-KDE based on comments you read from people that claim to be KDE users (I’m not saying KDE developers on purpose). Maybe this feeling is because people have the (wrong) idea that SUSE and OpenSUSE will become GNOME-centric products, when in fact that’s not true. They equally support both environments.
Regarding the applications you mentioned, GNOME (and Novell) are just picking a different approach, not recreating KDE apps with GNOME libraries. For example, if you want to create a CD with files under GNOME (a “data” CD) you do it from the file manager. If you want to create a music CD (“audio” CD) you do it from your music library (Banshee in the NLD). This is a different approach and some people will prefer it.
Dear Seggy… Get the story straight first. That in no way blows the case for open source; what it does is discredit the claim that every piece of Linux software needs to be developed in the open or it will suck. Take comments in their entirety, not out of context. I am not attacking open source, but rather defending a company’s right to choose. Last I checked, nothing Novell did broke any of the strict rules on source that the community set forth, but there are a lot of people who want to discredit their effort because they couldn’t watch over them. Like the poster who mentioned GCC stated, it’s all for the good of the whole, and don’t even bring MS into the discussion, because that’s entirely irrelevant to the fact that people feel “betrayed” that Novell developed something in-house and they couldn’t put in their two cents’ worth during the process… I for one am ELATED that they did it in-house and have now contributed the code back to the community, because let us not forget Xgl was basically DEAD until they took it over in-house, and now we have an amazing new toy to play with that is really going to do a lot for the community… so again I’m calling bullshit… Unlike some, I think that, be it open or closed development, if the code is given back openly… GREAT. Being in a corporate environment, I know that three guys working full time with no one poking around can sometimes get a lot more done a lot faster than having to take every decision to a 25-person committee… Either way it’s all about CHOICE… that’s the true essence of open source: having the choice to run what you want to run, and how you want to run it… If you don’t like the code they released, fix it, change it, make it work for you… but don’t get pissed because they didn’t include you in the process even though they gave you the results… It’s like me being pissed that Playskool didn’t consult me on what shape the Play-Doh can should be, even though I can make it whatever shape I want once I have it lol…
*NOTE: last comment for this thread… Again, congrats to Novell for a job well done; I look forward to seeing what comes out of the labor…
I have to make a few observations. Disclaimer: I am a Novell employee. But I speak for myself.
1. Both open and closed development models have their own advantages. That’s one reason why both exist.
2. Using any open source software is OPTIONAL. If you don’t like it, how it was developed, or its hardware requirements, you don’t have to use it.
The important thing here is CHOICE. The community will benefit from source that is developed behind closed doors and then released. It’s important that organizations are not forced into contributing to OSS wholly by open development. As has been previously stated, the worst-case scenario for the contribution is that it is rejected. But from there, it is still potentially something to learn from.
Why is it that KDE users get upset when they see Novell using GNOME? So they have redesigned GNOME and shown XGL running on it; good for us. If you don’t like GNOME, what’s the big deal? When this gets released, KDE 4 cannot come soon enough.
Just to make it clear, Novell hasn’t redesigned GNOME. They simply wrote a new menu applet (GNOME already has two in a default installation, with the Applications/Places/Desktop being the default).
Yes, but the shots are clearly the same as the mockups, and GNOME doesn’t have a bar on the bottom like that. Funny how I get modded down when talking about KDE, but the KDE lot get modded up for their KDE/Novell madness.
Wasn’t most of the stuff in Xgl written by David?
That’s kind of the impression I got.
And David was the main coder at Novell for the project.
The official press release from Novell is here:
http://www.novell.com/linux/xglrelease/
All this stuff looks really smooth and whatnot.. but even in one of those videos on the novell site you can see that resizing a window is still dog slow. It’s kind of embarrassing to showcase all those neat effects when such a “simple” task as resizing a window looks as shitty as that.
So are they (whoever that is) working on this, or is it on some sort of to-do list or something? Any info on it would be appreciated…
All this stuff looks really smooth and whatnot, but even in one of the videos on the Novell site you can see that resizing a window is still dog slow.
It did seem slow, but this is in fact a lot more complicated than you make it sound. Moving objects around in 3D and compositing them is a lot simpler than changing their geometry and remapping their surfaces.
I imagine someone is working on it, but whatever happens, resizing is always going to take more juice than moving windows around and/or applying an alpha channel.
I know it’s technically probably a very difficult thing to do/fix. I didn’t mean to call it “simple” in that way, but for me, as a stupid end-user, it’s one thing that stands out as an issue.
Well, I hope that someone is working on it somehow. I just never see anyone actually mention anything about resizing windows, and I'm afraid that means it's not a high-priority issue.
There is no magical solution to make resizing fast, since the whole window content has to be recalculated (not just scaled) constantly. Depending on the complexity of the content this can be either fast or slow. Compositing doesn’t help at all with this and it will likely never be lightning fast. It can only remove the flickering, so it appears more “solid”.
I personally don’t see why this should be a big deal. I’m not constantly resizing windows, so I don’t mind if it looks slow as long as it’s reasonably responsive.
Actually, compositing made it worse. It doesn't need to be lightning fast, just fast enough that the window follows the cursor in a somewhat fluid way.
I, for one, resize my windows all the time and for me it’s just not reasonably responsive. I know this is a very subjective topic but I think right now (with compositing) it’s just unbearably slow (close to being broken).
Resizing on X is a bit more complicated than it seems. Reflow (the computation of the new window contents) is actually fairly low on the list of problems. Notice how applications that have slow resize on Linux (eg: Firefox), have very smooth resize on OS X or Windows, even though the reflow code is the same on both platforms.
The real problem is synchronization. The window manager and client aren't synchronized during resize, which hurts things enormously. In theory, there is a NETWM spec to achieve this synchronization, and I had a Qt/kwin patch that implemented the spec to good effect; at least on my 2 GHz P4 laptop it made resizing smoothness fairly comparable to Windows. My motivation to finish the patch is unfortunately very limited, given that I don't use KDE anymore.
GTK/metacity have an implementation of the spec as well (included since GNOME 2.8 or 2.10, I think), but its results are much less impressive than on KDE. I don’t know the GTK source code all that well, but my guess is that GTK’s rather aggressive buffering and combining of EXPOSE events actually hurts redraw latency in order to minimize overdraw. Qt’s redraw model, in contrast, is very simple and low latency, at the expense of having more overdraw. I’m inclined to argue that Qt’s model is better, given that the “feel” of responsiveness is entirely dominated by latency.
All that aside, it's become apparent to me that synchronization isn't super-useful without full use of compositing. No matter how fast the app handles content reflow, and no matter how well the application and window manager are synchronized, the user will still see an ugly (if fast) resize unless the process is double-buffered. The last time I looked into the issue, COMPOSITE wasn't fully able to support the window resize semantics necessary for double-buffering resizes (you need to be able to separate the allocation of the new window buffer from the deallocation of the old one, and during the resize itself have a handle to both copies), though this feature might have been added recently. Of course, implementing it efficiently is another can of worms: allocating and deallocating all those buffers during a resize demands a lot from the memory manager, and video memory management is a current trouble area for X.
Ultimately, getting smooth resizing is going to involve a combination of proper synchronization, double buffering, and good resize handling in the toolkit. With those factors in place, resize will become reflow-limited, which shouldn't be a problem for the vast majority of applications. Unfortunately, all these pieces could take a while to fall into place. I believe Reveman's work implements the synchronization spec, but I don't know if double buffering is handled very efficiently, and I'm pretty sure GTK+'s handling of resize events hasn't improved lately. A major hurdle has been graphics drivers, since the lack of good open-source drivers for modern cards has meant that it's been very hard for X developers to experiment with ways to get the graphics driver to cooperate with a double-buffered windowing system.
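For anyone curious what the synchronization spec mentioned above looks like in practice, here is a minimal, illustrative sketch of the client side of the EWMH _NET_WM_SYNC_REQUEST protocol, written against plain Xlib and the XSync extension. This is not Novell's code, nor the kwin or GTK implementation; error handling and real drawing are omitted, and the window setup is only a stand-in. The idea is simply that the client publishes a sync counter, and after repainting at each new size it sets the counter to the value the window manager asked for, so the WM can pace the resize.

```c
/* Rough build: cc sync_client.c -lX11 -lXext */
#include <stdio.h>
#include <X11/Xlib.h>
#include <X11/Xatom.h>
#include <X11/extensions/sync.h>

int main(void)
{
    Display *dpy = XOpenDisplay(NULL);
    if (!dpy) return 1;

    int ev_base, err_base, major, minor;
    if (!XSyncQueryExtension(dpy, &ev_base, &err_base) ||
        !XSyncInitialize(dpy, &major, &minor)) {
        fprintf(stderr, "XSync extension not available\n");
        return 1;
    }

    int scr = DefaultScreen(dpy);
    Window win = XCreateSimpleWindow(dpy, RootWindow(dpy, scr), 0, 0, 400, 300,
                                     0, BlackPixel(dpy, scr), WhitePixel(dpy, scr));
    XSelectInput(dpy, win, StructureNotifyMask | ExposureMask);

    /* Advertise the protocol and publish our counter on the window. */
    Atom wm_protocols = XInternAtom(dpy, "WM_PROTOCOLS", False);
    Atom wm_delete    = XInternAtom(dpy, "WM_DELETE_WINDOW", False);
    Atom sync_request = XInternAtom(dpy, "_NET_WM_SYNC_REQUEST", False);
    Atom sync_counter = XInternAtom(dpy, "_NET_WM_SYNC_REQUEST_COUNTER", False);
    Atom protocols[2] = { wm_delete, sync_request };
    XSetWMProtocols(dpy, win, protocols, 2);

    XSyncValue zero;
    XSyncIntToValue(&zero, 0);
    XSyncCounter counter = XSyncCreateCounter(dpy, zero);
    XChangeProperty(dpy, win, sync_counter, XA_CARDINAL, 32,
                    PropModeReplace, (unsigned char *)&counter, 1);

    XMapWindow(dpy, win);

    XSyncValue pending;      /* serial the WM asked us to echo back */
    int have_pending = 0;

    for (;;) {
        XEvent ev;
        XNextEvent(dpy, &ev);

        if (ev.type == ClientMessage && ev.xclient.message_type == wm_protocols) {
            if ((Atom)ev.xclient.data.l[0] == sync_request) {
                /* Sent by the WM just before each resize step; remember the
                 * serial so we can acknowledge once we've repainted. */
                XSyncIntsToValue(&pending,
                                 ev.xclient.data.l[2],   /* low 32 bits  */
                                 ev.xclient.data.l[3]);  /* high 32 bits */
                have_pending = 1;
            } else if ((Atom)ev.xclient.data.l[0] == wm_delete) {
                break;
            }
        } else if (ev.type == ConfigureNotify || ev.type == Expose) {
            /* ...repaint the window at its (possibly new) size here... */

            if (have_pending) {
                /* Tell the WM the frame for this size is done, so it can
                 * move on to the next resize step without guessing. */
                XSyncSetCounter(dpy, counter, pending);
                have_pending = 0;
            }
        }
    }

    XSyncDestroyCounter(dpy, counter);
    XCloseDisplay(dpy);
    return 0;
}
```

The window manager only sends _NET_WM_SYNC_REQUEST if the client advertises it in WM_PROTOCOLS and sets the counter property, which is why toolkits that never opt in get no benefit from the spec.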
Maybe you could send the kwin patch to a mailing list or attach it to a bug report? Someone might be willing to pick it up…
Dan Winship’s comments depict what is wrong with open source software development. Good software is almost invariably conceived and developed by a small number of people. Too many cooks spoil the broth.
Dan Winship’s comments depict what is wrong with open source software development. Good software is almost invariably conceived and developed by a small number of people. Too many cooks spoil the broth.
First, open source does not necessarily mean open development, though this is usually the case.
Next, what you say isn’t true for many projects. The Linux kernel is a good example, KDE is another good one. It can work either way.
Most open source projects are small, not because it makes for better code, but because the scope is more limited (which can still provide some great apps – K3b comes to mind).
< ..open source does not necessarily mean open development..>
Open source does refer to the development method. Many of the open source advocates in fact point to community involvement as the reason for the alleged superiority of open source software (e.g. cathedral and bazaar). Novell just said ‘screw the community’ and they came out with a good product in a short time. That ought to shut some people up.
As for the software you mentioned, I don’t necessarily think linux kernel or kde are any good, though I don’t know how many are involved in either. k3b is OK.
I don't really think Novell said "screw the community"; they will, after all, contribute this back to GNOME.
I think they said "let's temporarily screw the community development process so we can make our ship date".
Time will tell whether the two statements are in fact equivalent.
I meant they said “screw the community input”. It should give pause to anyone that thinks putting a hundred people together is all there is to quality software development. That covers most open source supporters!
Open source does refer to the development method.
It can, but not necessarily. The “rules” are a lot more flexible than you seem to believe.
As soon as the code is available to look at and modify, it doesn't really matter if previous development was done behind closed doors. The code has been "open sourced".
This is what happened with Xgl. Novell did some work behind closed doors, which may have annoyed some, but the code is now available for all, and we can expect that people outside of the project will contribute to it.
Novell just said ‘screw the community’
Novell did not do that at all. I think you’re misunderstanding (or misrepresenting) their intentions in order to support your own anti-open source views.
As for the software you mentioned, I don’t necessarily think linux kernel or kde are any good
That’s a matter of opinion, not fact. I happen to think that these are excellent pieces of software.
< It can, but not necessarily. The “rules” are a lot more flexible than you seem to believe. >
You have it wrong. Read Cathedral and Bazaar. Community participation is what distinguishes open source projects from other kinds, and is supposedly what gives it its alleged advantages. This is what the Open Source supporters have been telling the world for a long time!
Novell said ‘screw community participation’, came out with a good product in a short time, and are now saying community participation would have slowed them down!!
<I happen to think that these are excellent pieces of software.>
I’m just pointing out that I’m not buying your proof that large groups can produce good programs from the examples you gave.
I think the point is that while initial development of Xgl was not open source, Bazaar type development, it has now been opened up. Novell clearly isn’t completely against community participation because they are now welcoming it. They just thought that the initial start would be better off just being done by someone rather than having lots of discussion about how to do it the “right” way first. Something I happen to agree with. But in the long term Novell seems to be saying that the Bazaar is the right way to go in order to maintain and improve what was initially created.
Regarding large groups ability to produce good programs – I think there is no question that they CAN produce good programs. The only questions are whether they are more likely to do so than small groups and whether they are able to create the really (rare) outstanding pieces of software.
Exactly. The Xgl development Novell did is NOW open-sourced. So Novell didn’t say “screw you” to the community, but rather gave a gift to the community – despite what some anti-open source posters are saying…
< I think the point is that while initial development of Xgl was not open source, Bazaar type development, it has now been opened up. >
But this invalidates the oft-repeated claim that collaborative development is what gives rise to the superior quality of open source programs. Novell says it wastes time!
I’m having trouble seeing if you are being serious or just trying to troll.
In case you really haven’t heard what I’ve been saying: But this invalidates the oft-repeated claim that collaborative development is what gives rise to the superior quality of open source programs. Novell says it wastes time!
NO!!! Novell only said it wastes time at the beginning, when you are creating something from scratch. Novell DID NOT say that in the long run it would be better to stay closed source. In fact, they’ve done the opposite and are letting the community modify code and take over the job of maintaining it.
People generally say that collaborative development is good for two reasons. 1: Lots of ideas; people can figure out the best way to do something before someone makes a stupid design mistake. Novell says this just wastes time. 2: Lots of eyes looking at the code that's already been produced, finding errors, improving performance, adding features. Novell seems to agree with this, no doubt especially because they won't have to pay anything for it.
Novell says that there are downsides to collaborative development, but it is a gross distortion to twist that around and say that there is NO upside either.
Personally, I think different projects call for different types of development. In this case it seems that Novell was correct, they’ve managed to get the ball rolling when no one else could. Or at least faster. But that doesn’t mean that it is the only way to do something. I think the community can sometimes get projects going as well that any one individual might not be able to, and that corporations might not be interested in.
Hey, if a lot of people testing your program is good, proprietary companies already do that through beta testers!
If you are talking about subsequent modifications, they are all constrained by the initial design and implementation. You have a clear hierarchy of designers and tinkerers, in other words a cathedral! That is not exactly the collaborative development model that is supposed to give an edge to open source software. You don't think MS doesn't get suggestions for improvement from their customers?
You’re the troll. You haven’t even looked at the title of the post you are responding to!
OK, I brought out the T word too quickly…
Hey, if a lot of people testing your program is good, proprietary companies already do that through beta testers!
There is a big difference between beta testing and people looking over source code. Both types of development rely on beta testers quite a bit.
If you are talking about subsequent modifications, they are all constrained by the initial design and implementation. You have a clear hierarchy of designers and tinkerers, in other words a cathedral! That is not exactly the collaborative development model that is supposed to give an edge to open source software.
If something is designed well, then it is easy for others to add on to it (well, at least in some cases). They can even add completely new things; think about Eclipse and Firefox. They were originally created by small groups of people, but anyone can add a plugin or extension, and these can add quite a bit of functionality. More importantly, even though these programs were originally developed as a cathedral, they are now open for anyone to modify in any way they wish. The definition of a bazaar.
You don’t think MS doesn’t get suggestions for improvement from their customers?
You’re the troll. You haven’t even looked at the title of the post you are responding to!
Of course MS gets suggestions. What they don’t get is actual code.
I’m not a troll. I apologize if I incorrectly called you one, but I was getting a bit discouraged reading this thread.
< If something is designed well, then it is easy for others to add on to it (well, at least in some cases). They can even add completely new things; think about Eclipse and Firefox. They were originally created by small groups of people, but anyone can add a plugin or extension, and these can add quite a bit of functionality. More importantly, even though these programs were originally developed as a cathedral, they are now open for anyone to modify in any way they wish. The definition of a bazaar. >
You just described all the advantages of Free Software (even in the unquoted parts of your message), which is not what I am talking about.
I am talking about the collaborative model of open source development. What Novell did is not open source. They developed it cathedral style and released it when they were ready to ship it to customers. They even said they couldn't have met the release date if they hadn't done it cathedral style. Novell effectively said the open source method doesn't work.
Raymond says “with no beta to be released before its time” when he talks about the cathedral style. “A great babbling bazaar of differing agendas and approaches” is supposed to work its magic during initial creation. That’s why ‘release early and release often’.
You and that other poster have been telling me how Novell didn’t screw the community, while my meaning was that they ditched the open source method (“screw community input”).
Perhaps I do need to read about the Cathedral and the Bazaar. It is my understanding that you can switch from one to the other whenever you wish, but the way I read your post, are you saying that once you've chosen the cathedral method you're committed for good? That you can't then switch? I think maybe I am confusing Free Software with the bazaar. Does the bazaar only have to do with design, and nothing to do with maintenance, coding, etc.?
They developed it cathedral style, and released it when they are ready to ship it to customers.
It is still not in the “ready to release” state. Right now I would describe it as beta software. Ready to test and reasonably stable, but not ready to go into distros tomorrow. Novell even said there were still some rough edges to iron out.
< It is my understanding that you can switch from one to the other whenever you wish ..>
True, but that doesn't change the fact that Novell felt it necessary to resort to the cathedral method for the work so far, which I am arguing reflects poorly on the bazaar method. Sure, they qualified it as a special situation, but the qualification seems (to me) to apply to virtually all software design.
I concede your point about the beta status.
You can have the last word.
Next, what you say isn’t true for many projects. The Linux kernel is a good example
No it’s not. The actual vanilla kernel gets developed by only a small group of Torvalds-approved developers.
Now, if you combine all kernels (also distribution-specific ones) you’d have a point.
No it’s not. The actual vanilla kernel gets developed by only a small group of Torvalds-approved developers.
Well, I guess it depends on what you consider “small”. To me, small is under 10 programmers. Clearly, from looking at the changelogs published on the LKML, the Linux kernel involves far more than 10 developers…
Perhaps it’d be a good idea to frame this debate better, because otherwise it’ll go nowhere.
…from here.
I’m so happy to hear of xgl’s public release. Very excited!
I look forward to eventually testing out a live CD to see the difference in responsiveness compared to standard, non-accelerated X. I’m not so interested in the cube eye-candy as in having a responsive desktop without “downgrading” (no offence) from KDE or GNOME to a more lightweight window manager like WindowMaker or XFCE (both of which are fine projects).
So, is mesa-solo part of the future for the desktop anytime soon?
It stands to reason that they would leverage GNOME since GNOME is the DE for RedHat Enterprise as well.
With that being stated, the end user can always use both KDE and GNOME.
for taking a stalled project and turning it around so quickly.
… but I’ll be interested when I can use it in a KDE SUSE release!
Hooray, XGL is here, and it’s great that we now have it. He’s right in saying that taking it behind closed doors was the fastest way to make it happen; however, that step was fundamentally wrong.
When I say wrong, I mean they didn’t have to go behind closed doors to make it happen or meet their schedule. In truth, all they had to say was: hey, we’re going to go work on XGL for NLD 10. We’re going to set up a repository, and every so often we’re going to do a code commit, but we aren’t going to commit the source until we have something meaningful. A couple of commits later, and some good press, and they’d have been called heroes.
However, the truth is that they wanted to reconstruct XGL in the manner they felt was best, without consideration for dissenting opinions. What people seem to miss is that when you’re writing code, nothing forces you to include other people’s ideas. It’s not what you do, it’s how you do it. They could still have done it their way, but at least they would have heard some other ideas that they might have found useful. Open development (communication of varied ideas) is like good advice: you can consider it and include it in your decision making, or you can exclude it altogether; but when you limit your view of the world just to achieve a desired goal, two things usually happen:
1) You’ll quickly achieve your desired goal.
2) The goal you’ve so diligently pursued might not be a fit for the ever-changing world (in this case, the group of people whose goals really are to solve the fundamental problems of X.org, as opposed to arguing just to argue).
Novell wasn’t wrong for doing what they did, and the results are fantastic, but that doesn’t change the fact that they didn’t have to take their ball and run home until things could be done their way.
“Aaron Seigo said he was happy with the outcome and that he thought there is enough time to integrate Xgl into KDE 4.”
The question is: when will KDE 4 arrive?
I would like it to arrive in time for the big three’s October 2006 release, but I doubt we’ll be that lucky.