“Five months after release of X11R7.0, the modularized and autotooled release of the MIT Licensed X Window System source code, the X.Org Foundation has issued its first modular roll-up release. X11R7.1 supports Linux, Solaris, and BSD systems. It includes important new server and driver features for embedded systems, 64 bit platforms, enhanced operating system support, and accelerated indirect GLX support. It most importantly demonstrates to developers and industry immediate benefits of modularization.”
http://ftp.x.org/pub/X11R7.1/doc/RELNOTES2.html#4
New GLX extension: GLX_EXT_texture_from_pixmap
This extension allows GLX clients to efficiently bind pixmaps – such as those provided by the Composite extension for redirected drawing – to OpenGL textures. This enables the compositing manager to bring the full force of OpenGL to bear on the problem of rendering the desktop.
:) Fantastic! Looking forward to trying this bad boy.
Yep, that’s the one I’ve also been waiting for.
Great work! R7.0 was already good, and this one is even better!
sigh… I was hoping they’d still be releasing the fat tarballs in parallel, à la 6.9/7.0. While I can appreciate the technical reasons behind this for the X maintainers, and even perhaps for end users, it looks to be a nightmare from a package maintainer’s perspective.
With the old way, I had one tarball (or seven) to download, one file (host.def) to modify to taste, and then I compiled it all in one go and released it as a collection in one go. Now I have about 280 packages to download, configure, compile, and release… not exactly fun.
You could simply use a script to build all packages.
sure, I imagine one could do a
for i in `ls *`;
do cd $i;
./configure --prefix=/usr/local;
make;
make install;
cd ..;
done
after untarring them all (that’s just a guess, I haven’t messed around with the 7.x series yet. They do use autoconf now, right, not imake?). But what if different parts take different configure options? What about parts depending on others, build-order dependencies and such? While the old way was a pain if you got one part wrong or wanted to tweak it, since it was such a large compile, it was still easier to manage than the current model.
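For what it’s worth, a dependency-aware wrapper doesn’t have to be much longer than the loop above. Here’s a minimal sketch, assuming each category’s tarballs have been untarred into subdirectories named after the release categories (proto/, lib/, and so on); the category names and their order are my guess at a workable dependency order, not an official list:

```shell
#!/bin/sh
# Hypothetical sketch: build untarred modules category by category, in a
# rough dependency order (headers before libraries before everything else).
# The category names and their order are assumptions, not an official list.
PREFIX=${PREFIX:-/usr/local}

build_module() {
    # Configure, build, and install a single module directory.
    ( cd "$1" && ./configure --prefix="$PREFIX" && make && make install )
}

build_all() {
    for group in proto util lib xserver driver app font; do
        for dir in "$group"/*/; do
            [ -d "$dir" ] || continue
            build_module "$dir" || { echo "build failed in $dir" >&2; return 1; }
        done
    done
}

build_all
```

Per-module configure options could then live in a case statement inside build_module.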
I’m not saying it’s the end of the X world for me, but it does mean a lot more work the next time I get around to packaging a new X (we’re still using 6.8.2, and I don’t see us going up for a while, so no worries yet. But still…)
A bit unrelated to X.org, but it made me wonder:
> for i in `ls *`;
Why the heck not simply use “for i in *”?
Btw. for zsh one could also do “for i in *(/)” where the *(/) would expand to only directories.
> Btw. for zsh one could also do “for i in *(/)” where
> the *(/) would expand to only directories.
Any /bin/sh should support “for i in */”.
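A quick way to convince yourself that the glob forms are safer than parsing `ls` output (this toy demo uses made-up names): `*/` matches only directories, and names containing spaces come through intact.

```shell
#!/bin/sh
# Toy demo: the POSIX glob */ matches only directories, and names with
# spaces survive intact (unlike word-split `ls` output).
mkdir -p demo/dir1 "demo/dir with space"
touch demo/plainfile

count=0
for i in demo/*/; do
    # $i is one directory per iteration, trailing slash included.
    echo "dir: $i"
    count=$((count + 1))
done
# plainfile is skipped: only the two directories match the glob.
```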
meh, it’s just how I’m used to it :)
Isn’t the idea of the modularized xorg that you don’t have to release everything at once? Perhaps now you might have a little more work to do, but once releases start happening separately for each module, wouldn’t you only need to update what has in fact been updated? In the release notes, they state that the release points (7.0, 7.1, etc.) are merely rolled-up states of all the modules at certain designated times.
All the in-between releases aren’t considered to be stable. 7.1 as a whole is released as stable.
You’ll see 7.1.x releases of various modules for bugfixes now, and also 7.1.9xx as unstable development versions, if they keep doing things the way they currently do.
Well, that’s a bit simplified, but you get the idea.
now I have about 280 packages to download, configure, compile, and release… not exactly fun.
Are you seriously saying that they don’t have an automated way to compile it all at once that takes care of dependencies? That would be ridiculous.
KDE has tons of modules and apps that you can compile separately, but that doesn’t mean I have to do it that way. Just run configure and make against the master makefile and everything gets sorted out for you.
sadly, from the looks of it, that seems to be how it is. (I would love to be made wrong on this one…)
to get a feel for how complex this is, look at this:
http://lists.x.org/archives/xorg-modular/2005-November/000801.html
(btw since you mentioned it, I think the way the KDE devs setup their source tars is a _very_ good example of how things should be done in projects of that scale.)
I’m not a dev, so take this with a grain of salt, but they do provide a build script; you can simply download the tarballs and run the script. They even recommend against building by hand because of the dependencies involved.
I will admit I may be missing something here, though.
http://wiki.x.org/wiki/ModularDevelopersGuide#head-ef8aae011f5ba67c…
Man, stop spreading lies, anybody can build all modules with a simple script, it is really easy:
http://wiki.x.org/wiki/ModularDevelopersGuide
Modular X11 is a good thing.
That’s nice. Now can you use that to build an x.org rpm set to distribute to multiple clients? Or, as in our case, use a homebrewed installation set that builds your binaries with your source dir in AFS, your object dir on the local drive, and the destdir again in a separate AFS space, to then get exported out to multiple clients?
the “don’t try to build this by hand” is not the answer that package maintainers for a distro want to hear.
I took a look at the script, it looks pretty good for single machine installs, and with some tweaking could be useful to me. just don’t try to say this is somehow easier than the old method that didn’t require such jumping through hoops…
now can you use that to build an x.org rpm set to distribute to multiple clients?
Of course. That’s not the package manager I chose, but yes, I could do that without any problem.
or, as in our case, use a homebrewed installation set that builds your binaries with your source dir in AFS, your object dir on the local drive, and the destdir again in a separate AFS space, to then get exported out to multiple clients?
Without a problem. I wonder what your point is, as it is easier to do that now that everything has been autotooled.
I wouldn’t even have tried that before. Well, there was lndir, but I had so many strange problems with the binaries if I interrupted the compile and restarted it that I always erased the lndir directory as soon as the compilation borked, and restarted everything.
I can even cross-compile now, while before, cross-compiling did not work for all components (the last time I tried was with XFree86).
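The split-directory setup described above maps naturally onto an autotools out-of-tree (“VPATH”) build, which the modular packages can do now that they use autoconf. A minimal sketch; the function name and the example paths are placeholders, not anything from the X.org tree:

```shell
#!/bin/sh
# Sketch of an out-of-tree ("VPATH") autotools build: the source tree can
# stay read-only (e.g. in AFS), objects go to a local directory, and the
# install is staged into a third location via DESTDIR.
vpath_build() {
    src=$1 obj=$2 dest=$3
    # $src must be an absolute path, since configure runs from inside $obj.
    mkdir -p "$obj"
    ( cd "$obj" && "$src"/configure --prefix=/usr \
        && make && make DESTDIR="$dest" install )
}

# Example with placeholder paths:
#   vpath_build /afs/example.org/src/libX11 /var/tmp/obj/libX11 \
#       /afs/example.org/dist/libX11
```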
the “don’t try to build this by hand” is not the answer that package maintainers for a distro want to hear
Why?
You shouldn’t use their build script as a package maintainer. You lose all the control then.
I took a look at the script, it looks pretty good for single machine installs, and with some tweaking could be useful to me. just don’t try to say this is somehow easier than the old method that didn’t require such jumping through hoops…
I didn’t use this script. It offered no improvement to me over the old method.
“sadly, from the looks of it, that seems to be how it is. (I would love to be made wrong on this one…) to get a feel for how complex this is, look at this:”
XFree86 is still in existence, no? And is still monolithic?
I don’t understand.
If you don’t have an automated way to make releases, then you should not make releases, as human error can always happen (even in the original XFree86), and lots of packages depend on X and on where its elements are.
If you have an automated way to make releases, you just have more work to do once or twice to integrate the packages (the nightmare you talk about), and then you’re set.
And it is easier afterward, too. You’re not compelled to keep some outdated packages, like Mesa, which was always several versions behind and which you could not really replace without risking crashes.
Or the fact that you downloaded tons of useless tools, or outdated versions of packages that, fortunately, you could replace (like expat, fontconfig, libfreetype, …).
Frankly, it’s way better now. I even got rid of /usr/X11 recently (everything is in /usr), and it works. I can use the latest Mesa, necessary for AIGLX (though I don’t use it), the latest freetype library, and the latest, far faster and more reliable fontconfig development version (2.3.95).
I use the repository http://xorg.freedesktop.org/releases/individual/ with a script I made which updates the version numbers in my package system and tells me what I’m missing and what no longer exists. I think that’s a good packager resource.
I don’t want to sound patronizing, that’s not my intent, but unless you’re doing this for a distribution that’s being exported to multiple clients (in my case the college I work at) you won’t be able to understand that this does _not_ make our lives easier. Unfortunately I’ve found it much too common, for instance, for upstream providers not to get it in their heads that people aren’t always going to be installing their stuff locally, as a single user on their own machine would. Some devs don’t even write DESTDIR support into their Makefiles, or decide they’ll be really cool and write up their own unique build and install system, say in python or something, which will be useless to someone like me…
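For readers who haven’t met the DESTDIR convention mentioned above: the final prefix is baked into the build, but the install step is redirected into a staging tree that a packaging tool can then archive. A toy illustration; the Makefile, script, and paths here are made up for the example:

```shell
#!/bin/sh
# Toy illustration of the DESTDIR convention: files install under
# $(DESTDIR)$(PREFIX), so a packager can stage them without touching /.
mkdir -p toy
printf '#!/bin/sh\necho hello\n' > toy/hello.sh

# A minimal Makefile honouring DESTDIR (recipe lines need tabs, which
# printf emits here as \t).
printf 'PREFIX = /usr/local\ninstall:\n\tmkdir -p $(DESTDIR)$(PREFIX)/bin\n\tcp hello.sh $(DESTDIR)$(PREFIX)/bin/hello\n' > toy/Makefile

# Stage into ./stage: the file lands at stage/usr/local/bin/hello,
# while the baked-in prefix remains /usr/local.
( cd toy && make DESTDIR="$PWD/../stage" install )
```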
I’m not saying the x.org folk are bad, and again I can understand why they needed to change the old model, which was still using imake for goodness’ sake. But telling us to track 280 releases instead of one is not doing us a favour. (That said, this might not be the case. What I’m getting the feeling of is that there will be milestone releases saying 7.x (and all its contents) is stable, use it, type of deal. I just prefer the older method, where there was one download and one file to configure to tell it what I do and don’t want, and how I want it.)
Oh, about getting rid of /usr/X11: you might want to reconsider (or do what we did and make a symlink to /usr/local, which is where our built stuff goes for the most part). Some packages are pretty stupid about this (vmware comes to mind) and won’t be happy unless it’s there. Anyhow, being able to specify your prefix for X isn’t exactly something new. Take a look at the old host.def file; there are a lot of things that you can tweak in there. (#define ProjectRoot is what you’re looking for.)
but unless you’re doing this for a distribution that’s being exported to multiple clients (in my case the college I work at) you won’t be able to understand that this does _not_ make our lives easier
Your life being easier depends on your package manager, not on the number of packages to push.
So no, I don’t understand. Or rather, yes, I know there’s a big up-front package-creation cost for the maintainer. I can’t deny it; I’ve been through it.
unfortunately I’ve found it much too common, for instance, for upstream providers not to get it in their heads that people aren’t always going to be installing their stuff locally, as a single user on their own machine would
So modularized X is good for you, as it solves this. So it’s better now.
Some devs don’t even write DESTDIR support into their Makefiles, or decide they’ll be really cool and write up their own unique build and install system, say in python or something, which will be useless to someone like me…
XOrg was full of this kind of thing. Fortunately, the modularisation allowed people to spot all of it.
So it’s better now.
but telling us to track 280 releases instead of one is not doing us a favour
You make it sound like it was doing them a favour. It took them months to do that and to autotool everything.
It was necessary IMHO, and I don’t think they did it because they are masochists.
You mean tracking 280 packages, I guess. There are far fewer than 280 releases in this 7.1 release, for example.
And if I can track releases with one script, alone, I think any distro packager can do it too, with QA.
Some packages don’t even need big QA, like fonts, or old rarely used X apps.
But don’t worry, I don’t think the monolithic release is abandoned; IIRC a 6.9.1 is on the way.
oh about getting rid of /usr/X11, you might want to reconsider (or do what we did and make a symlink to /usr/local which is where our built stuff goes for the most part)
I initially did that, plus a bunch of other symlinks, for 7.0. Everything had been solved before the 7.1 release though (which I still haven’t installed, but I think I need no more than 5 packages to update).
some packages are pretty stupid about this (vmware comes to mind) and won’t be happy unless it’s there
Then these packages have to be fixed. IIRC I patched xfce-session for this for example.
My desktops are now fully functional without any symlink anymore.
anyhow, being able to specify your prefix for X isn’t exactly something new. take a look at the old host.def file, there’s a lot of things that you can tweak in there. (#define ProjectRoot is what you’re looking for)
Of course, but doing it and then having it work is something new.
Given all the hard-coded paths removed from the XOrg code thanks to the modularisation, I doubt it would have worked (without the /usr/X11 symlink at least). Even the use of other font paths (i.e. /usr/share/fonts) was implemented through symlinks in the monolithic package. No thanks: remove those symlinks, and suddenly lots of packages that did not actually use the new paths stopped working.
For example, something as essential as xterm, which is outside the XOrg releases, had to be fixed.
Really, IMHO, the modularisation had a LOT of benefits already.
I’ve been waiting soooo long for this one… Cannot wait to try out AIGLX on my machine.
Hopefully NVIDIA releases drivers supporting GLX_EXT_texture_from_pixmap soon. They just released 9xxx beta drivers for Windows.
I don’t know why you think that a WINDOWS BETA release might mean a new LINUX release is coming soon.
I think it might take some time (even longer than the release of 8756).
And everything about the new release:
http://www.nvnews.net/vbulletin/showpost.php?p=895243&postcount=5
Very precise information from NVIDIA:
http://www.nvnews.net/vbulletin/showpost.php?p=895264&postcount=6
Unfortunately there is a problem with the ABI change.
The 8xxx driver doesn’t work[1] with Xorg 7.1, so some customers who want to use a STABLE X server are out of luck:
http://www.nvnews.net/vbulletin/showpost.php?p=893819&postcount=2
And why there is an ABI change:
http://lists.freedesktop.org/archives/xorg/2006-May/015384.html
http://lists.freedesktop.org/archives/xorg/2006-May/015396.html
1. http://www.nvnews.net/vbulletin/showpost.php?p=895534&postcount=13
I don’t guarantee that this hack will work (and without RenderAccel, nvidia is no option for me).
ps. I really hope the 9xxx version will be released soon, but I’m sceptical about this, and I don’t believe NVIDIA any more.
ps2. (sorry for my bad English)
Unfortunately there is a problem with the ABI change
You forgot to give a complete sentence; here it is: “Unfortunately there is a problem with the ABI change and closed source drivers from IHVs”.
The 8xxx driver doesn’t work[1] with Xorg 7.1, so some customers who want to use a STABLE X server are out of luck
Not entirely true …
It works, but not all the advanced features.
1. http://www.nvnews.net/vbulletin/showpost.php?p=895534&postcount…
I don’t guarantee that this hack will work (and without RenderAccel, nvidia is no option for me).
That’s exactly what I have done (disabling RenderAccel) and it worked. But I’m still using the beta X server.
If you don’t do that, Gnome apps actually work most of the time, but you can’t see any characters in KDE apps.
I didn’t notice a slowdown in Gnome or KDE (I don’t use big composite features in either of them), but my XFCE is extremely slow now (it wasn’t recompiled when I switched X from /usr/X11 to /usr, though, nor for the new X server; I’m waiting for the next stable release).
ps. I really hope the 9xxx version will be released soon, but I’m sceptical about this, and I don’t believe NVIDIA any more
Same here. 8762 seems to have nothing interesting for me.
… will this modularisation mean lower memory usage?
But yeah, cool, it’s shipping with nice cool stuff.
I guess it must suck sometimes to see that a feature is available for the newcomer Unix-like OS Linux when you are using a Unix.
Just IMO
“I guess it must suck sometimes to see that a feature is available for the newcomer Unix-like OS Linux when you are using a Unix.”
You are kidding, right? If you aren’t, just understand that people as ignorant as yourself should not be making comments. Take it as you will.
ha
Whatever your issue is
But as X has been around longer than Linux, and Unix existed before Linux, I don’t see the utter ignorance on my part.
I simply commented that a project which seems to have started for Unix is getting lots of Linux-specific “bits” or first servings, which is also true of projects like KDE or GNOME.
Of course it’s logical – it was just an observation.
But although I think Linux is a really cool “thing” which has come a long way, people seem to associate OSS with Linux, and graphical systems also with Linux, forgetting that there is an OSS and Unix (and compatible) world outside of Linux.
And then of course there is also the real world.
Forgetting that – isn’t that ignorance?
————
Does fully modularised X do anything for increased stability and lower memory usage?
I simply commented that a project which seems to have started for Unix is getting lots of Linux-specific “bits” or first servings, which is also true of projects like KDE or GNOME.
The project was started specifically for Unix running on PCs. So when Linux became popular, as XFree86 was the most used X server on Linux, XFree86 became very popular too. That means Linux users were the main users of XFree86 (the BSDs were used rather as servers, which IMHO is why the state of Gnome, KDE, XFree86 and other desktops on these systems is behind what is possible on Linux). That’s why there’s a lot of Linux-specific “bits” in it.
It was plainly visible in the monolithic XFree86 config file (host.def). You saw several Unix variants’ configurations there, but the most advanced ones were for Linux; some options were available for Linux only.
It must be easier, too, to implement ideas on Linux, as the best engineers working on XFree86 were from HP or Sun.
And even though I think K. Packard’s complaint (an HP engineer?) was the start of the migration to XOrg, it was all possible because the license problem affected Linux users, and they were the main XFree86 users.
But although I think Linux is a really cool “thing” which has come a long way, people seem to associate OSS with Linux, and graphical systems also with Linux, forgetting that there is an OSS and Unix (and compatible) world outside of Linux
Which is understandable, as once all the Linux distros switched to XOrg, I think 90+% of the users were gone from XFree86.
So people associate graphical systems with XFree/XOrg, which they associate with Linux. Can’t blame them.
Does fully modularised X do anything for increased stability and lower memory usage?
Yes!
Perhaps you should check your attitude at the door. Compared to Unix proper, Linux is the newcomer, and they have every right to post a comment.
You hit the nail on the head for me as a FreeBSD user. :)
R7 isn’t yet implemented in the ports collection, and already I can’t wait to have KDE running with OpenGL acceleration on my laptop. (No, I’m going to stick with FreeBSD, as I like it more than GNU/Linux. :-P)
It’s no worse than GNOME or KDE and most people won’t have to build the packages from tarballs. If you really like doing that kind of thing, use Gentoo, where it’s pretty easy to get up and running with modular X.
The folks who are providing you those nice distros would care :)
So… you’re telling me those aren’t Santa’s elves? Package creation is now much less magical than I thought it would be…
(Sorry, I just couldn’t resist the joke…)