“The first major version release of the X Window System in more than a decade, X11R7.0, is the first release of the completely modularized and autotooled source code base for the X Window System. X11R6.9, its companion release, contains identical features, and uses the exact same source code as X11R7.0, but with the traditional imake build system.”
Great to see progress in this area! Well done, blokes!
1st!
It looks like 6.9 will be in Debian unstable soon.
http://www.livejournal.com/users/gravityboy/2005/12/19/
http://fedoraproject.org/wiki/Xorg. It’s already in the Fedora development tree, and an RC is available in the FC5 test 1 release.
To think we worried about the future of X when the XFree86 Consortium phased themselves out of the picture with their licensing change in January 2004.
X development couldn’t have been put in better hands. X.org is doing a stellar job.
Congrats to the X.org developers on a major milestone in the development of X.
-Gnobuddy
Ditto! Many thanks to all those involved, the FLOSS community really needed this kind of progress.
rehdon
I expect to see cool window shadows (composite) and a smooth, fast-response GUI on my ATI Radeon 9800 Pro. The latest ATI binary driver is not as good as promised (I can’t even log out without a hang after the GNOME session closes). Maybe this release will mean I don’t have to spend time in the rage3d.com Linux forums …
Don’t count on it. There’s still no good DRI acceleration for the R300 series cards, so you still have to rely on the proprietary drivers to get composite.
But do the proprietary drivers support composite?
It’s not composite they need to support. They need to support RENDER or EXA.
No, for any ATI card newer than a 9250 you are kinda screwed for composite for a while longer. No driver offers acceleration.
I am a bit disappointed.
I expected a single tar.bz2 file somewhere that I could unpack and run:
./configure --prefix=/usr/X11R6/ \
--enable-myfonts \
--enable-mygfxcard \
--skip-junk
Or something like that. But not a hundred dozen small tar.bz2 files.
I expected a onefile tar.bz2 file somewhere, that I can jump in and execute…
This is the whole idea behind the modularized build.
It’ll give you the option to build and upgrade one driver without the need to rebuild the full X.org package from scratch.
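As a sketch of what that workflow looks like under the modular tree (the package name and version below are placeholders, not a specific release):

```shell
# Hypothetical sketch: upgrade one driver without touching the rest of X.
# The tarball name is a placeholder; substitute a real release.
tar xjf xf86-video-ati-X.Y.Z.tar.bz2
cd xf86-video-ati-X.Y.Z
./configure --prefix=/usr     # finds the installed server SDK via pkg-config
make && make install          # rebuilds and installs only this driver
```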
> It’ll give you the option to build and upgrade one
> driver without the need to rebuild the full X.org
> package from scratch.
Well, you can have the same modularization when receiving one tar.bz2 file: you modularize things at configure time. If I tell the configure script to build ATI only and install only the things I want, then it only compiles and installs the things I want. There is no reason to spread the code over 100 tar files.
The necessity of 100 tar files only causes more overhead, imo. The overhead that soon dozens of people will join the #freedesktop IRC channel asking how to build the tarballs and in what order to build them. It adds the next layer of complexity when those people join the channel again asking why their XOrg shows the wrong fonts or why file xyz won’t compile due to missing headers.
You could argue that this is not related to XOrg and that people should be using a regular distribution. But I guarantee you that these questions (and others) will come up nonetheless.
As always, the clueless are always those that complain first.
Well, you can have the same modularization when receiving one tar.bz2 file: you modularize things at configure time.
No, you can’t. Sorry to have to remind you that to most people, upgrading just the server through a 2 MB file is not the same as upgrading the same server after downloading a single 200 MB tarball.
It’s sad you can’t understand that.
There is no reason to spread the code over 100 tar files
I explained one obvious reason above, which just shows how limited your experience of the matter is.
The necessity of 100 tar files only causes more overhead, imo
It does only for the maintainer. I manage my own Linux OS, and yes, it was what I consider a major transition, as I had to create scripts to automate the upgrade of packages for XOrg. Where before I had one package to manage, I now have 220+, and I still left out tens of packages that were useless to me. Guess what: despite the 2 hours it took me to create a shell script to automatically update my ALFS XML file, I’m still far better off than before. There were some bugs in the XOrg 7 RCs, and I fixed them in minutes, recompiling only the buggy packages, where it took me hours before!
The overhead that soon dozens of people will join the #freedesktop IRC channel asking how to build the tarballs and in what order to build them
You mean dozens of morons, then. A month ago I knew no more than anyone else about the build order, and I managed to acquire the information from the XOrg site in under an hour.
And my task was harder, as I did not want to use the script they provide.
So if you are a moron that is not capable of managing a simple task that is explained to you on the site, you’d better not come showing how stupid you are on IRC, or people will just tell you to RTFM and they would be right IMHO.
It adds the next layer of complexity when those people join the channel again asking why their XOrg shows the wrong fonts or why file xyz won’t compile due to missing headers
These people clearly should not compile packages.
But I guarantee you that these questions (and others) will come up nonetheless
We already know there are morons, clueless people and those who think they are experts when they are not, that’s not news.
> As always, the clueless are always those that complain first.
Your rhetorical skills and manner of communication surely speak for you
> No, you can’t. Sorry to have to remind you that to
> most people, upgrading just the server through a 2 MB
> file is not the same as upgrading the same server
> after downloading a single 200 MB tarball.
a) upgrading such a 2 MB file will lead to more problems with reporting bugs correctly. This will result in mixed XOrg builds in the end: one user has older xlib files but newer ATI drivers, or different versions of some library, etc. Helping these people with their bug reports will be a pain in the ass.
b) XOrg has never been bigger than a 50 MB tar.bz2 at most
> It’s sad you can’t understand that.
I understand well enough, and I’ve even thought ahead to possible future scenarios.
> I explained one obvious reason above, which just
> shows how limited your experience of the matter is.
Thank you very much, looks like god gave you some extra brainz but fewer friends
> It does only for the maintainer. I manage my own
> Linux OS, and yes it was what I consider a major
> transition, as I had to create scripts to automate
> the upgrade of packages for XOrg.
I thought the trend for Linux was to simplify things rather than complicate them even more.
> Where before I had one package to manage, I now have
> 220+, and I still left out tens of packages that were
> useless to me. Guess what: despite the 2 hours it
> took me to create a shell script to automatically
> update my ALFS XML file, I’m still far better off
> than before. There were some bugs in the XOrg 7 RCs,
> and I fixed them in minutes, recompiling only the
> buggy packages, where it took me hours before!
Ever considered buying a new computer? On my system it takes less than 45 mins to compile XOrg using ‘make World’, and I don’t need to write fancy build scripts.
> You mean dozens of morons, then.
Ahh, you are one of those people who call others “morons” or “idiots”. Well then, only morons and idiots are going to use XOrg.
> A month ago I knew no more than anyone else about
> the build order, and I managed to acquire the
> information from the XOrg site in under an hour.
OK, so we have two hours writing a shell script and another hour reading the XOrg site. That’s three hours of wasted time, when entering ‘make World’ and having XOrg finish compiling in less than 45 mins was all you needed. That makes 2 hrs and 15 mins of wasted time.
> So if you are a moron that is not capable of managing
> a simple task that is explained to you on the site,
> you’d better not come showing how stupid you are on
> IRC, or people will just tell you to RTFM and they
> would be right IMHO.
You are some sort of special person and your rhetorical skills are indeed communicative.
> These people clearly should not compile packages.
But these people will do it anyway, and they will come into the channels asking about their problems regardless of whether they should or shouldn’t. That’s my point, actually.
> We already know there are morons, clueless people and
> those who think they are experts when they are not,
> that’s not news.
Who is ‘we’? Are there more creatures like you walking around on the open source front?
a) upgrading such a 2 MB file will lead to more problems with reporting bugs correctly
No, it won’t. I can now say “package X has problems” instead of “XOrg version Y”, which is much better for bug reports.
That seems like a big improvement to me.
This will result in mixed XOrg builds in the end: one user has older xlib files but newer ATI drivers, or different versions of some library, etc. Helping these people with their bug reports will be a pain in the ass.
I don’t see why; it’s no different from the situation in the Linux OS. It won’t cause any problem.
You will detect incompatible versions at compile time. Again, it won’t be a problem.
Autotools are very good in this matter, as they let you pass information about the package being built.
That’s how it works at home.
Another good thing is that there are now clear groups: proto, libraries, tools, …
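For what it’s worth, the mechanism the modular packages use to pass that information around is pkg-config; a small sketch (these are real X.org module names, but the output depends on what is installed on your system):

```shell
# Each modular package installs a .pc file; dependents query it at
# configure time instead of relying on imake's hard-coded paths.
pkg-config --modversion xproto        # version of the installed proto headers
pkg-config --cflags --libs xext       # flags needed to build against libXext
```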
b) XOrg has never been bigger than a 50 MB tar.bz2 at most
OK.
Thank you very much, looks like god gave you some extra brainz but fewer friends
I talked about experience, not brains. I still think you don’t need to be a genius to use these things.
Because it’s simpler, I can make more friends now.
I thought the trend for Linux was to simplify things rather than complicate them even more
It’s simpler, actually. I had to do this work because it’s a big transition, that’s all. I could have kept using the 6.9 release and the same system as before.
It’s not more complicated; I just had to create more files ONCE, plus a helper so that I don’t have to change all those version numbers manually in my XML file when a new release is out.
Ever considered buying a new computer? On my system it takes less than 45 mins to compile XOrg using ‘make World’, and I don’t need to write fancy build scripts.
I considered it, but I will wait for some advances in technology and lower prices first. The computer I’m using now is a dual AMD 2200+; I don’t think it’s out of date, and it keeps up pretty well, running 3 simultaneous desktops 24/7.
But this is not the issue. The issue is that before, to test one configuration change (for performance, stability, …), I had to recompile the entire package each time. Now, most of the time, I have only one package to recompile, which can be done in less than a minute (or even 10 seconds) in most cases.
Ahh, you are one of those people who call others “morons” or “idiots”. Well then, only morons and idiots are going to use XOrg
No, you changed the subject. I called “morons” or “idiots” the people who complain that they can’t compile XOrg when the problem is that they don’t have the knowledge to do it and don’t want to learn. I’m NOT talking about users; ordinary users won’t compile XOrg.
I don’t see how you equate people complaining about the order of compiling the XOrg 7.0 packages with all users. It’s just not the same, and I did not acquire this knowledge because I’m a genius (I’m not a genius).
OK, so we have two hours writing a shell script and another hour reading the XOrg site. That’s three hours of wasted time, when entering ‘make World’ and having XOrg finish compiling in less than 45 mins was all you needed. That makes 2 hrs and 15 mins of wasted time.
You call that wasted time; to me it’s not. Now I no longer have useless fonts installed, I no longer have lots of useless libraries, drivers and tools, I no longer have to patch a huge configuration file, and I can finally use upstream packages (xterm, for example), …
Given the time it took the people working on modularizing X, what you say amounts to saying they wasted their time too. I’m sure they think otherwise.
But these people will do it anyway, and they will come into the channels asking about their problems regardless of whether they should or shouldn’t. That’s my point, actually.
That’s why I think they are morons. That seems like pretty stupid behaviour. Some people have time to waste and obviously love to make others waste theirs.
Who is ‘we’? Are there more creatures like you walking around on the open source front?
People who know how to read and search for information instead of bothering others for fun?
I bet there are a lot.
You should know better. Helping these people on IRC is something you do when you are new to FOSS; most people (me included) quickly get tired of doing it, though.
Please stop acting like a retard, then come back and we continue having a normal conversation.
Seems pretty simple to me. He’d rather have the *option* of downloading all the source at once *or* downloading only that which he needs.
Given the choice between _either_ the whole source in one tarball _or_ all of them separate, most would prefer downloading the whole thing in one go.
Of course, no one wants to click a hundred links as the only means to get a single source tree, and no one wants to download all sorts of unnecessary stuff to get a 4k patch. Luckily, we _don’t_ live in an either-or world!
> Seems pretty simple to me. He’d rather have the
> *option* of downloading all the source at once *or*
> downloading only that which he needs.
OK, Sherlock, which ones do I need? Or do you know off the top of your head which ones you would need? Maybe you are missing something: not today, but one day, when you happen to require it.
Then you go out, get it, and install it. But maybe something else you compiled or installed earlier required it too, skipped it because it was not there, and did not compile in that functionality. You end up with a half-assed system.
Okay, so to be specific, one *module*. An independent part of the source tree.
xterm is a poor example here, since it’s had a configure script since 1997. Xorg has contributed nothing to that effort.
Moreover, they couldn’t have spent much time modularizing, since most of the configure.ac files are cut/paste. (Yes, there are a lot of them, but the work involved is trivial.) A quick check accounting for this shows me that the total effort for all of Xorg’s modularization is about as much work as xterm’s configure script alone.
Finally, imake was adequate for building applications. The point of the modularization was to allow some developers to create unstable versions of libraries (changing interfaces, omitting documentation, etc).
you need to cool down, buddy.
don’t take it too personally.
it’s just a discussion
6.9 is the monolithic branch.
http://xorg.freedesktop.org/releases/X11R6.9.0/src-single/index.htm…
This is the whole idea behind the modularized build.
A modularized build does not equal splitting up the tarball; how the download is handled has nothing to do with the build.
It’ll give you the option to build and upgrade one driver without the need to rebuild the full X.org package from scratch.
That should be a feature of the build system, and it does not in any way demand that the package be split up. You know, in the same way you can rebuild a Linux kernel module without recompiling the whole kernel.
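The kernel analogy can be made concrete; this is the standard kbuild invocation for rebuilding a single out-of-tree module, shown only to illustrate the point:

```shell
# Rebuild one module against the installed kernel's build tree,
# without recompiling the kernel itself.
make -C /lib/modules/$(uname -r)/build M=$PWD modules
```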
It would be much better to help the guys do it than to complain about the current status of the modular system. Are you passive or active?
It really bothers me that this was the highlight of the article:
“X11R7.0, is the first release of the completely modularized and autotooled source code base for the X Window System. ”
“X11R6.9, its companion release, contains identical features, and uses the exact same source code as X11R7.0, but with the traditional imake build system.”
Has it ever occurred to anyone that autotools isn’t necessarily better than imake? autotools is also very old (and arguably quite broken, in some ways even compared to imake), and there are much better build systems, such as scons, available.
I know everyone out there thinks that development will “skyrocket” if just more people are involved, and I do not agree. There are a few key developers who are doing the majority of the work, and the rest might be non-regular contributors.
Overall, the only reason to me why moving to autotools would be advantageous is that those non-regular contributors would already know how to use the tool. (In other words, it is not based on the merits of autotools, but rather on its ubiquity.)
I’ve done my share of X11R6 builds over the years, and I have to say that I always found Imake intimidating and very difficult. In the perfect Imake world, all distributions would come with properly configured and installed Imakefiles and everything would just work. But since X11 is the only major project left using it, configuration for a particular system always seems to be a nightmare re-learning curve for me.
Imake is cleaner, but automake does a better job of “just working” on most systems; as long as you’re not the poor fool who has to write and maintain the cryptic and undocumented configure.am file *shudder*.
I hate both systems, but at least automake is familiar.
> as long as you’re not the poor fool who has
> to write and maintain the cryptic and
> undocumented configure.am file *shudder*.
I’ve only recently found myself having to learn Autofoo, and while it DID look cryptic at first, I found that this was more an issue of me not understanding it at all.
Also, the file is called “configure.ac”, not configure.am (and formerly “configure.in”).
The “.am” files are the Makefile.am files located in each of the project’s subdirs.
configure.ac is also more a part of autoconf than automake.
And actually, in my albeit limited experience, I found it is quite well documented too. The Autoconf documentation at the following link is most useful:
http://www.gnu.org/software/autoconf/manual/autoconf-2.57/autoconf….
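For anyone who has never looked at Autofoo input, a minimal configure.ac/Makefile.am pair might look like this (a sketch for a hypothetical “hello” project, not anything from the X.org tree):

```
## configure.ac -- processed by autoconf
AC_INIT([hello], [0.1])
AM_INIT_AUTOMAKE([foreign])
AC_PROG_CC
AC_CONFIG_FILES([Makefile])
AC_OUTPUT

## Makefile.am -- the per-directory ".am" file processed by automake
bin_PROGRAMS = hello
hello_SOURCES = hello.c
```

Running autoreconf over these generates the familiar ./configure script and Makefile.in.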
And then you just started trying to use this auto-crap on, for example, anything other than JBLD (Joe’s Favorite Blown Linux Distro), and realized that several different packages require very different releases of it installed. That you never know for sure how to control what goes in and what does not. That regenerating those loads of shell garbage takes ages, that it’s nearly impossible to add anything to the source tree, and so on… And after all, how long did it take to come up with this congenial modularisation idea?
After all, the main problems with X11 are far more blunt than tweaking a perfectly working build system.
And then you just started trying to use this auto-crap and realized that several different packages require very different releases of it installed.
In my experience, you need the latest autoconf/automake (currently 2.59/1.9) for most packages, and autoconf 2.13/automake 1.4 for old crap. Unfortunately, “old crap” includes Mozilla, XEmacs, and OpenLDAP, and frankly I’ve given up any hope of these projects moving to newer autotools.
Like someone else said, make sure you have the latest autofoo installed.
At any rate, Autofoo are BUILD tools, for DEVELOPERS. They’re not intended for end users, so if you can’t handle them then it’s really a sign that you should go get a tarball with a configure script or wait for your distro’s repository to include packages. The last thing OSS developers need is greedy, impatient end users complaining to them that they can’t build the software. If developers coddled every hapless end user through the process (for every you-name-the-damn-distro) they’d never get any code done.
Yeah, Jam would be preferable to the ugly autotools cr*p. But I am happy they split the X-beast into multiple packages; having to recompile all the example programs and fonts just to get new drivers was a major pain.
Jacob
> autotools is also very old
What’s old about autotools? The autofoo suite is still actively maintained and being improved.
They do not support RENDER acceleration the way the nVidia driver does, so you can’t get accelerated composite.
I don’t know if they plan to support EXA, but even if they do, don’t expect proprietary drivers with EXA support before summer 2006 or so.
Proclaiming the use of autoblah as progress is just a bad joke. Let me guess: for starters, the build process will now take much, much longer… And then apparently it doesn’t occur to anybody that the autotools are a very, very over-engineered solution to a quite trivial problem.
If you don’t like autotools, then fork Xorg and please shut up.
AAH! No, you shut the hell up! Goddamn why do so many people on osnews and other sites feel like others are not allowed to have an opinion other than “wow great job guys, you are so right in every way!!!”.
YOU, shut the hell up. YOU are the only one who has added absolutely zero interesting information to this thread.
The commenters had interesting points. I had no idea that autotools was old and not ideal. I had no idea that it was being selected for its ubiquity and not because of its superiority over other build systems.
“The commenters had interesting points. I had no idea that autotools was old and not ideal. I had no idea that it was being selected for its ubiquity and not because of its superiority over other build systems.”
That’s actually an opinion, not a fact. autotools is used for most of the software running on a GNU/Linux system because it *does* work, proven and reliable. Learning it takes some RTFM, but that’s not something everyone needs to worry about, because the work has been done by other people already.
X.org was split to make developing X.org easier. We can sit here and read people who never read a line of code from it complain all day about the change and how it makes it longer or harder to build it, but there’s absolutely nothing to gain from that.
Proclaiming the use of autoblah as progress is just a bad joke
It’s not when it allows XOrg to be modularized, which imake did not allow.
It’s an improvement, despite the bad feeling you desperately try to convey like the good troll you are.
Let me guess: for starters, the build process will now take much, much longer
Actually, no, but you would have to have a clue to know otherwise.
Your comment just shows you did not even read one README, and that you don’t know how autotools works.
You don’t know even half of the features of autotools, but you are quick to flame it.
FYI, autotools has a cache that can be shared among several packages, and that’s what we use when building XOrg.
If you had observed your typical imake XOrg build, you would have seen a lot of time spent:
– removing files
– configuring packages
And still, some hard-coded paths remain in XOrg, even in the modularized builds. Autotools at least allowed us to find them and make them configurable.
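The shared cache being referred to is autoconf’s --cache-file option; a hedged sketch of how a modular build can reuse it across packages (the module list is illustrative, not the full tree):

```shell
# Share one configure cache across modules so that compiler and header
# checks run once rather than once per package.
CACHE=$PWD/xorg.cache
for module in xproto libX11 xorg-server; do   # illustrative subset
  (cd "$module" && ./configure --prefix=/usr --cache-file="$CACHE" \
     && make && make install)
done
```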
And then apparently it doesn’t occur to anybody that the autotools are a very, very over-engineered solution to a quite trivial problem
Saying that the problem autotools solves is trivial just shows you don’t know what you are talking about.
A portable, cross-platform auto-configuration tool is nothing trivial as soon as you even scratch the surface of the matter. The problems are explained in the first parts of the docs (and there are a lot of them), but I assume you never read even one section of them.
Proclaiming the use of autoblah as progress is just a bad joke.
Autotools itself is a bad joke. How much time was wasted porting Xorg to that shit.
x.org 7.0 is huge news. Now individual developers can work on separate parts of X independently of each other.
Instead of building the *entire* x.org tree every time, they can just build the module or component that they are working on. I predict that x.org development WILL pick up significantly within the next year or two, as this breaks down a major barrier stopping some devs from working on X. Rebuilding x.org entirely takes a long time.
True, and specialised small groups can form as well – say, a small group strictly dedicated to making the implementation of the protocol more efficient, another focused solely on improving drivers – all of which can release at different dates, kinda like how GNOME is done with respect to the different modules being updated and made available, even outside the release dates.
I expect Xorg 6.9.0 to be in SUSE 10.1 Beta1, since Alpha4 has X.Org 6.9 rc3.
A true blessing, since SUSE 10.0 doesn’t fully support my Nvidia GeForce 6200.
Are you nuts? Compiling X takes half a year on even modern computers, if I was a hardcore gentoo user, then maybe, but man, you don’t just compile X for fun.
It takes a few hours. OpenOffice is much worse. Now that’s a package I prefer not to recompile.
> It takes a few hours. OpenOffice is much worse.
the worst i’ve ever seen is sun’s jdk… i gave up after three days (at 100% cpu most of the time) on my 1.6GHz turion laptop (1280MB ram)…
the worst i’ve ever seen is sun’s jdk… i gave up after three days (at 100% cpu most of the time) on my 1.6GHz turion laptop (1280MB ram)…
WOOT!?
Did you have famd running with normal priority (or at least higher priority than sun’s jdk) or did compilation get all CPU-time?
I’m using blackdown and it takes a while, but nothing like openoffice which usually takes around 8-10 hours.
Three days… oh my… unbelievable… especially on such a system. I’ve got a 1.5 GHz Sempron with 1 GB ram… I guess I won’t try sun’s JDK then
> Did you have famd running with normal priority (or at least higher priority than sun’s jdk) or did compilation get all CPU-time?
compilation got nearly all CPU-time.
Oh boy… oh boy…
Three days :-O
Did you switch to another JDK then? Or did you just use a precompiled version? (A reasonable choice after 3 days of compilation without being finished)
i just used a linux binary version (on freebsd)…
Are you nuts? Compiling X takes half a year on even modern computers, if I was a hardcore gentoo user, then maybe, but man, you don’t just compile X for fun.
No, are YOU nuts? I happen to be a ‘hardcore gentoo user’ and quite frankly I don’t see where you get off saying that it takes half a year to compile X on a modern computer. I’ve got a 1 GHz PIII laptop with 192 megs of RAM (4 years old), and it compiles Xorg 6.8.2 in 2 and 1/2 hours. My P4 desktop compiled 7.0RC1 in about 1/2 an hour. You want something that takes a long while? Try compiling KDE sometime. Man, that still takes about 10 hours on my desktop.
Gnome is a big one too.
X.org is pretty small actually. Not that much worse than glibc, binutils, gettext and gcc.
Half an hour on my P4 (epox pgm2i) at university, enough time to buy a cup of coffee at the nearby Gregory’s fast-food place and some A4 paper to print the USB 1.1 spec
amd2400+ w/ 512 MB of ram
# genlop --info xorg-x11
* x11-base/xorg-x11
Total builds: 12
Global build time: 14 hours, 56 minutes and 16 seconds.
Average merge time: 1 hour, 14 minutes and 41 seconds.
# genlop --info openoffice
* app-office/openoffice
Total builds: 2
Global build time: 1 day, 21 hours, 10 minutes and 7 seconds.
Average merge time: 22 hours, 35 minutes and 3 seconds.
Edited 2005-12-22 18:11
> Global build time: 14 hours, 56 minutes and 16 seconds.
Is that the build time for XOrg on an AMD 2400?
That sounds totally unrealistic to me, since it takes no longer than 45 mins on my system with an AMD 2600.
The global build time is the total build time for all 12 times I’ve built it; the average build time was just over an hour. Reread the output of genlop in my post for confirmation.
Ha. You’re crazy. I am a hardcore gentoo user, and the last time I built X (version 6.8.2) on my 700 MHz, 256 MB laptop it took me less than 2 hours.
Congratulations to the x.org foundation. It’s the beginning of a new era in the X Window world.
Now that Xorg is modularized, get ready to see more frequent, sexy developments in X in the coming months. Development teams can now release updates, fixes, and feature additions without having to wait for the entire monolithic build to sync/release.
We all benefit. Kudos to the xorg peeps!
I’ve built 6.9, and now the XV port from the V4L extension doesn’t work anymore in XawTV, Xmame segfaults when switching to XV mode, and, as in 6.8.2, the screen suffers periodic small pauses when watching a video or scrolling in a window.
> Autotools itself is a bad joke. How much time was
> wasted porting Xorg to that shit.
A lot more time and consideration than was taken for your idiot post.
A lot more time and consideration than was taken for your idiot post.
So you admit they wasted a bunch of time porting to the autotools shit, while X is still years and years behind Vista and OSX. Nice.
>the autotools shit
What do you know about autotools to call it shit?
It is obvious that you don’t know what you are talking about if you compare X with Vista and OSX, which are both OSes. Do you know that X is not an OS??? As long as Bill tries to remove the kernel from Vista and OSX is BSD-based, I will show a bit more respect to the developers who provide you with free software…
Try to read about X and autotools and post something constructive. Explain to us why you call it what you do…
Tell us what you are missing from X (try to understand X as something between the kernel and the desktop environment) and what the priorities should be! Stop trolling. It is not nice, what you are doing.
What do you know about autotools to call it shit?
It is obvious that you don’t know what you are talking about if you compare X with Vista and OSX, which are both OSes. Do you know that X is not an OS???
I know enough about autotools to avoid it at all costs. Nobody likes autotools. It’s the most convoluted shit in the world.
As long as Bill tries to remove the kernel from Vista and OSX is BSD-based, I will show a bit more respect to the developers who provide you with free software…
I’m not going to show respect to a bunch of filthy punks who think software is a religion. I respect them like I respect scientologists or any other cult.
“So you admit they wasted a bunch of time porting to the autotools shit, while X is still years and years behind Vista and OSX. Nice.”
One could just as easily argue that they “invested” a bunch of time porting to the autotools “shit” in order to alleviate the condition of X being “years and years behind Vista and OSX [graphics frameworks].”
I recently spent a few months wedging new technologies into a large, unwieldy, left-for-dead build system. It was a real pain, but mostly because none of it was documented, little of it was understood, and the people who maintain it don’t really want to talk about it. Once I figured it out, I still couldn’t get things working exactly the way I wanted because the system just wouldn’t allow it.
For comparison’s sake I tried the same process for autotools packages. Within a few hours I had a simple script that would prepare any autotools package to build using this new procedure, and the implementation could be made a lot nicer.
The point is, it really doesn’t matter what build system they choose, just that they have one, and that they document the local implementation clearly. Imake is not really a build system, just a souped-up Make with a nonstandard Makefile format. It doesn’t matter how well you document that; it still doesn’t scale gracefully with project size. There are many full-fledged build systems out there, autotools being perhaps the most common. I skimmed the guide to the new modular tree a few months ago, and from what I could tell, they’re really trying to reach out to new developers.
Are there better build systems out there? Arguably, yes. Do more FOSS developers know autotools than any of the others? Arguably, yes. Are there any FOSS developers who would have started working on Xorg if they had chosen a different build system (or stuck with Imake), but will refuse on principle because they went with autotools? I really hope not…
Dear OS News and everyone reading this:
I would just like to say that I’m fed up with this site (and many others). It appears to have been taken over by a bunch of computer geeks fighting with each other over pointless arguments, polluting the internet with their “comments”. I myself am a computer scientist and, frankly speaking, very ashamed to be one. As soon as I finish this, I will delete this site from my bookmarks and won’t read it ever again. Please take this very seriously, for there are many others who feel just like me.
Ultimately the community must police itself. Don’t just stand outside waiting for it to happen! How about you register and use your votes and posts to moderate the discussion?
I’m afraid the current rules don’t allow that kind of modding.
If this were a wiki… then maybe something could be done.
Seriously, most of what has been said in this discussion has been less than mature, informative, and insightful.
I mean, what kind of an argument is “x is shit”… “no it isn’t”…
-1 uninformative shit :p
Sorry, forgot to log in. 83.227.230.x is me.
Is there a build order, and is it recommended to install to /usr instead of the traditional /usr/X11R6?
(Fedora uses /usr)
Nice to see that; a nice job for Xmas for my Slackware:
“R7.0.Slackbuild”
Sorry for asking; it’s there:
http://wiki.x.org/wiki/ModularDevelopersGuide
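For anyone wondering what “build order” means in practice: the modular tree has to be built roughly dependency-first, protocol headers before libraries, libraries before the server, then drivers, apps and fonts. The sketch below is a dry run with abbreviated, illustrative group names; the commented line shows where the real per-module build would go. Pick the PREFIX your distribution uses (Fedora uses /usr, the traditional location is /usr/X11R6).

```shell
#!/bin/sh
# Rough dependency-order sketch for the modular tree (group names are
# abbreviated/illustrative; see the developers guide for the real list).
PREFIX=${PREFIX:-/usr/X11R6}   # Fedora uses /usr instead

print_build_order() {
    for group in proto lib xserver driver app font; do
        echo "building group: $group into $PREFIX"
        # real build per module would be something like:
        # for m in "$group"/*; do
        #     (cd "$m" && ./configure --prefix="$PREFIX" && make install)
        # done
    done
}

print_build_order
```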
I think you miss the point here. It is not important that the build system has been changed from imake to autotools. The important thing is that the source base has been changed from monolithic to modular. The build system can probably be changed later if there are strong enough reasons to do so.
For those who complain about compiling it, just wait for your favorite distribution to supply it, and shut up.
In many ways the new modularized approach is better than the monolithic one.
1. Developers can concentrate on the module they want to work on, without having to recompile the whole thing.
2. End users can have a smaller install footprint with just the components, libraries and drivers they need.
The only ones who will feel a little pain during the transition are the packagers, but in the end it’s better anyway, since upstream fixes (i.e. security patches) can be applied as needed to specific components without recompiling the whole mammoth.
Right now I’m using xorg 7 testing packages from Arch Linux, and the only two problems were changing some gdm.conf paths and a script the packager left out. Now it’s working perfectly.
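The security-patch point above can be made concrete: with the modular tree, a fix to one library means patching and reinstalling only that module, never the whole tree. This is a dry-run sketch; the module name, version, and patch file are all hypothetical.

```shell
#!/bin/sh
# Dry-run sketch: applying an upstream fix to a single module of the
# modular tree. Module name, version and patch file are hypothetical;
# this prints the steps instead of running them.

patch_one_module() {
    mod=$1
    fix=$2
    echo "tar xjf $mod.tar.bz2"
    echo "cd $mod && patch -p1 < $fix"
    echo "./configure --prefix=/usr && make && make install"
}

patch_one_module libXfont-1.0.1 security-fix.patch
```

Under the old monolithic tree the equivalent would have been patching and rebuilding the entire source base.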
Old computer here (TBird-1333 MHz, NVidia GF 5500), RenderAccel enabled, NVidia closed source drivers.
phoebus@zephyrus ~ $ evas_xrender_x11_test
####################################################
# Performance Test. Your system scores…
####################################################
# FRAME COUNT: 316 frames
# TIME: 20.000 seconds
# AVERAGE FPS: 15.800 fps
####################################################
# Your system Evas Benchmark:
#
# EVAS BENCH: 0.263
#
####################################################
phoebus@zephyrus ~ $ evas_software_x11_test
####################################################
# Performance Test. Your system scores…
####################################################
# FRAME COUNT: 2410 frames
# TIME: 20.002 seconds
# AVERAGE FPS: 120.490 fps
####################################################
# Your system Evas Benchmark:
#
# EVAS BENCH: 2.008
#
####################################################
phoebus@zephyrus ~ $ evas_gl_x11_test
####################################################
# Performance Test. Your system scores…
####################################################
# FRAME COUNT: 30785 frames
# TIME: 20.001 seconds
# AVERAGE FPS: 1539.206 fps
####################################################
# Your system Evas Benchmark:
#
# EVAS BENCH: 25.653
#
####################################################
Thanks to all the modularising effort, the first set of numbers (XRender) should come closer to the second (software), and, may God hear me, closer to the third (GL). Thank you for speeding up the pace of X development.
I’ve noticed that when installing Kubuntu at work on an ATI Radeon 8500, everything worked without any configuration: OpenGL hardware acceleration and all.