🙂 AWESOME!
I am one happy Linux user on the 27th.
X.Org is making great progress; a big thumbs up to all involved.
On a different note, am I right in reading that they are moving some display-specific code into the kernel? Will this provide the performance advantages seen with Windows et al.?
So, I am guessing GTK+ and Qt will have to provide graphical application developers with mechanisms to use these new features.
However, I am more interested in how window manager authors exploit the features in question. While I welcome all this new eye candy, and hope to God it is not abused, the project I am most fascinated by on freedesktop.org is XCB/XCL.
It attempts to be a cleaner, leaner and meaner successor to Xlib, the current C binding to the X Window System protocol. Its focus is to streamline Xlib by eliminating its inefficiencies. In addition, it aims to be more modular and therefore easily extensible.
I was hoping X.Org would ship with XCB/XCL. XCL, by the way, is just an Xlib compatibility layer on top of XCB. But I understand why that won't be happening anytime soon. The link below will direct you to more fun reading regarding XCB/XCL.
http://freedesktop.org/Software/xcb
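To make the round-trip point concrete: in XCB, requests and replies are decoupled, so a client can batch several requests and collect the replies later, instead of blocking on each one the way Xlib does. A minimal sketch, using the modern XCB function names (the early XCB releases discussed here spelled them differently):

```c
#include <stdio.h>
#include <stdlib.h>
#include <xcb/xcb.h>

int main(void) {
    /* Connect to the display named by $DISPLAY. */
    xcb_connection_t *conn = xcb_connect(NULL, NULL);
    if (xcb_connection_has_error(conn))
        return 1;

    /* Fire off two atom requests without waiting for either reply. */
    xcb_intern_atom_cookie_t c1 =
        xcb_intern_atom(conn, 0, 12, "WM_PROTOCOLS");
    xcb_intern_atom_cookie_t c2 =
        xcb_intern_atom(conn, 0, 16, "WM_DELETE_WINDOW");

    /* Collect the replies afterwards; the round trips overlap on the
       wire instead of being serialized as they would be with Xlib. */
    xcb_intern_atom_reply_t *r1 = xcb_intern_atom_reply(conn, c1, NULL);
    xcb_intern_atom_reply_t *r2 = xcb_intern_atom_reply(conn, c2, NULL);
    if (r1 && r2)
        printf("atoms: %u %u\n", r1->atom, r2->atom);

    free(r1);
    free(r2);
    xcb_disconnect(conn);
    return 0;
}
```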
On a different note, am I right in reading that they are moving some display-specific code into the kernel? Will this provide the performance advantages seen with Windows et al.?
This is, as far as I know, what DRI (the Direct Rendering Infrastructure) is all about. I guess it will probably be quite similar to NT's GDI, which was outside the kernel before NT version 4.
http://dri.sourceforge.net/cgi-bin/moin.cgi/
I'm not sure why this provides a performance benefit, except maybe that user-mode driver code is not permitted to access the hardware directly, so it must incur the overhead of going through a device file and the context switches of syscalls. I guess if the kernel allowed controlled access to the hardware (mapping device memory into a user process), this would not be required. I'm personally against putting even more code in the kernel, but that's their choice, I guess.
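For what it's worth, the kernel already allows exactly that kind of controlled access for the framebuffer: device memory can be mapped into a user process, after which drawing is ordinary memory writes with no per-pixel syscall. A minimal sketch against the standard Linux fbdev interface, assuming a packed 32bpp mode on /dev/fb0:

```c
#include <fcntl.h>
#include <stdint.h>
#include <sys/ioctl.h>
#include <sys/mman.h>
#include <unistd.h>
#include <linux/fb.h>

int main(void) {
    int fd = open("/dev/fb0", O_RDWR);
    if (fd < 0)
        return 1;

    /* Ask the kernel for the framebuffer geometry. */
    struct fb_var_screeninfo vinfo;
    if (ioctl(fd, FBIOGET_VSCREENINFO, &vinfo) < 0)
        return 1;

    /* Map the device memory into our address space: one syscall,
       then every pixel write is an ordinary store, no kernel trip.
       (A robust program would use the fixed-info line length for
       the stride; packed pixels are assumed here.) */
    size_t len = (size_t)vinfo.yres * vinfo.xres * (vinfo.bits_per_pixel / 8);
    uint32_t *fb = mmap(NULL, len, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
    if (fb == MAP_FAILED)
        return 1;

    /* Paint the first scanline white (assuming 32bpp). */
    for (unsigned x = 0; x < vinfo.xres; x++)
        fb[x] = 0x00ffffff;

    munmap(fb, len);
    close(fd);
    return 0;
}
```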
2.4) We need to make X faster, more robust, and just plain better, so I think we should remove or modularize network transparency?
We cannot remove network transparency. Period. There is absolutely no question about it whatsoever. X11 is a network protocol. Perhaps the reason everyone thinks it's slow is that everyone thinks X11 on a local machine goes through TCP/IP. It doesn't. It uses Unix domain sockets, which are very fast.
2.5) So why is X so slow on my machine if not for network transparency?
Yes, XFree86 /can/ be slow, especially on uniprocessor machines, but network transparency is NOT at fault. More common culprits appear to be toolkits, video drivers, and font rendering/Render. Render really needs to be DMA driven. Right now it pulls bits from the framebuffer using the CPU, which with PCI is abysmally slow.
http://www.xouvert.org/faq.html
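You can verify the local-transport claim yourself: for display :0 the server listens on the Unix domain socket /tmp/.X11-unix/X0, and connecting to it never touches the TCP/IP stack. A minimal sketch:

```c
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <sys/un.h>
#include <unistd.h>

int main(void) {
    /* Local X connections use an AF_UNIX socket, not TCP/IP. */
    int fd = socket(AF_UNIX, SOCK_STREAM, 0);
    if (fd < 0)
        return 1;

    struct sockaddr_un addr;
    memset(&addr, 0, sizeof addr);
    addr.sun_family = AF_UNIX;
    strncpy(addr.sun_path, "/tmp/.X11-unix/X0", sizeof addr.sun_path - 1);

    if (connect(fd, (struct sockaddr *)&addr, sizeof addr) == 0)
        printf("connected to the X server over a Unix socket\n");

    close(fd);
    return 0;
}
```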
They aren't talking about moving drawing code into the kernel. They are talking about rationalizing the drivers that currently touch the graphics card. Right now you've got the X server directly touching the graphics card (from userspace), the FB driver, the regular VGA driver, and the DRM. Ideally, you'd have one kernel-level driver that handles the jobs that need to be at the kernel level (mode-setting, synchronization, DMA transfer, etc.) so that X, FB, and the rest can share it. All actual drawing code should and will remain in userspace, because there is absolutely no point in moving it to kernel space. Modern graphics cards operate by having the host CPU use DMA to transfer a command buffer to the GPU. It's faster to fill out the command buffer entirely in userspace and then call the kernel once to set up the DMA than to constantly switch back and forth between userspace and the kernel while filling out the buffer.
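To make the batching argument concrete, here is a sketch of the pattern. The opcodes, the cmd_buffer layout, and the submit ioctl are all made up for illustration; real DRM drivers each define their own, but the shape is the same: many cheap userspace stores, one syscall per batch.

```c
#include <stdint.h>
#include <sys/ioctl.h>

/* Hypothetical command-buffer layout; real drivers define their own. */
struct cmd_buffer {
    uint32_t words[4096];
    uint32_t count;
};

/* Illustrative opcodes, not from any real GPU. */
#define OP_SET_COLOR 0x01
#define OP_FILL_RECT 0x02

static void emit(struct cmd_buffer *cb, uint32_t word) {
    cb->words[cb->count++] = word;   /* plain store: no kernel involved */
}

/* Queue many drawing commands entirely in userspace... */
void draw_rects(struct cmd_buffer *cb, int drm_fd, int n) {
    for (int i = 0; i < n; i++) {
        emit(cb, OP_SET_COLOR); emit(cb, 0x00ff0000);
        emit(cb, OP_FILL_RECT); emit(cb, i * 10); emit(cb, 0);
        emit(cb, 8); emit(cb, 8);
    }
    /* ...then cross into the kernel once, to start the DMA transfer.
       The ioctl below is a stand-in for a driver's real submit call;
       the point is one syscall per batch, not one per command.

       ioctl(drm_fd, DRM_IOCTL_SUBMIT, cb);  (hypothetical) */
}
```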
As for performance relative to the Windows GDI, there is little improvement to be had. The Athenyx benchmarks showed that the GDI was only 7-8% faster than X, which isn't even noticeable.
As for performance relative to the Windows GDI, there is little improvement to be had
—-
It seems a lot of X.Org developers believe that the current DRI improvements and the reduction in round trips can improve performance considerably. GDI is not really the best we can hope to achieve, so I am pretty sure we will see improvements on the performance front too. More importantly, though, fd.o is a much needed collaboration zone which currently needs more KDE developers participating in it. I am not sure whether D-BUS and GStreamer will be integrated; it looks like there is much confusion over that.
Rocking good read, kudos.
Of course, there are lots of performance improvements to be had, but they are mostly not related to 2D performance (unless you move rendering to the GPU, of course). Thus, the original poster's claims about catching up to the GDI (which is just the 2D API) aren't really relevant. The real performance improvements will come from better application behavior with regard to things like round trips and synchronization. That's a matter not so much of raw speed as of intelligent behavior.
The Matrox G550 and later cards have long had serious issues running in DVI mode (as is typical for LCD monitors) with XFree86 or the current X.Org release. That is, they usually don't work.
Anyone know if this new release deals with that? I’ve got a nice G550 that I can’t use with my nice shiny new LCD.
Thus, the original poster's claims about catching up to the GDI (which is just the 2D API) aren't really relevant. The real performance improvements will come from better application behavior with regard to things like round trips and synchronization. That's a matter not so much of raw speed as of intelligent behavior.
—
That's right. I wasn't talking about the "who is better" fight, just pointing out that more intelligent behaviour is likely to follow in the next modular release, six months after the current one.
X on GL and Cairo/Glitz speed things up dramatically, in some cases 100:1 over the 2D code. X on GL will be in the next X release; Glitz is in this one. Apps like Mozilla and GTK+ are being ported to Cairo/Glitz currently.
Glitz is much faster for two reasons: it uses the 3D hardware to draw 2D, and it uses DRI, which is direct rendered, so there are no process switches to the X server.
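The "3D hardware draws 2D" part is less exotic than it sounds: a 2D fill is just a flat-shaded quad under an orthographic projection, and alpha blending comes essentially for free. A minimal sketch in the fixed-function OpenGL of the day (context/window setup omitted; a 640x480 viewport is assumed):

```c
#include <GL/gl.h>

/* Draw a translucent 2D rectangle using the 3D pipeline. */
void fill_rect(float x, float y, float w, float h)
{
    /* Map GL coordinates 1:1 onto window pixels. */
    glMatrixMode(GL_PROJECTION);
    glLoadIdentity();
    glOrtho(0, 640, 480, 0, -1, 1);

    /* Alpha blending: this is where composited translucency
       comes essentially for free on 3D hardware. */
    glEnable(GL_BLEND);
    glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);

    glColor4f(0.2f, 0.4f, 0.8f, 0.5f);
    glBegin(GL_QUADS);
    glVertex2f(x, y);
    glVertex2f(x + w, y);
    glVertex2f(x + w, y + h);
    glVertex2f(x, y + h);
    glEnd();
}
```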
Here’s the plan for fixing the low level mess of too many drivers trying to control a single piece of hardware.
http://lkml.org/lkml/2004/8/2/111
The next X release should be on par with, or better than, the Longhorn and Mac equivalents.
Wow. I had no idea that X on GL was on such an accelerated schedule. Is there a clear idea of where the driver API is heading? Specifically, is there any idea of how to have multiple apps using Glitz without overloading the GL library with too many contexts?
Jon, I was wondering what the situation is between cairo/glitz and xfixes and xdamage, as per a discussion that a forum member had with rasterman:
http://forums.gentoo.org/viewtopic.php?p=1459060&highlight=xorg+ras…
Are all these pieces that freedesktop is building going to work together in the end?
KeithP's next thing to work on after the current X release is X on GL; at least that's what he's told me. Of course, his managers could change his mind. Ian and I and others are working to bring the entire fbdev/DRI/X/etc. driver mess under control. X on GL is not that far from reality if more people would contribute to working on it. You can start work on XGL today by using the existing OpenGL implementations and running the development X server inside a normal one. There are plenty of DRM projects too.
No one is working on the VGA control device (http://lkml.org/lkml/2004/8/3/223). That's a small, self-contained project.
I hate to say it, but it’s doubtful that you’ll see much improvement with your Matrox card. I switched to nVidia last month after my difficulties with my G550 and X/DRI got too frustrating. I reported bugs (to XF86, to the DRI project, to X.org, etc.) but they didn’t get anywhere because it appears that *no one* active in the DRI project right now is working on the Matrox drivers, and no one at Matrox is doing anything to improve Linux support. I know your problems aren’t necessarily with DRI, but I really get the impression that nothing much is going on with Matrox right now.
Sorry if I missed this in the article, but…
When can we expect an OpenGL-accelerated X.Org?
I hope freedesktop.org really puts out a great standard set and everything comes out great… and that developers start to USE IT!
It would change the Linux experience immensely for the positive if it did… Choice is great, but interoperability is awesome.
And a 3D-accelerated display, true transparency, SVG, etc… are fscking incredible.
John Smirl: I’ve got a question. What’s going to be the “preferred” rendering method for apps under the new X server? Render directly via GL or indirectly via XRender (which presumably will be accelerated via GL)?
Sorry if I'm asking in the wrong place, but as a Sun hardware user, I was wondering if there will be some improvement for the famous "Expert 3D-Lite" card under X.Org & GNU/Linux.
It's a pain in the back to actually switch video cards in the box just to get into Gentoo-sparc.
If someone out there knows what I'm talking about, it would be great to hear some opinions on this topic.
Regards,
the loco64
The preferred way to render is using cairo.
Cairo can render either via OpenGL or via XRender. It can’t render by itself. So my question still stands.
I built the new server on Slackware from CVS earlier today. Most likely, I’ll go back to the stock Slackware server until Patrick V. adds the official new release to current, assuming he does.
Initial impressions:
1) It seems a tad slower. That's no surprise, since this is not release code. And, of course, that is a subjective opinion.
2) Compositing, triggered by a separate program after the server is running, looks pretty darn good. Granted, I'm using metacity with Gnome 2.6 and the effect seems limited to drop shadows.
3) Transparency, also enabled by a separate executable, works. I can take it or leave it, but lots of folks drool over this stuff.
4) It seems to work just fine with Nvidia. I built the driver using Nvidia's current package, enabled RenderAccel, and away it went.
5) I moved /usr/X11R6 out of the way and then linked it to the location of the new build. If you do that, remember that some other apps, like xscreensaver, might be lurking in there and won't be found until you fix things.
6) Unsurprisingly, the build doesn't know about the way Slackware handles window managers. I had to create .xinitrc in my home directory; otherwise startx resulted in a lovely twm display.
In other words, this is good stuff.
So, this guy maintains the floppy disk driver?
j/k
That’s what I thought
It seems that the old, ugly X11 is getting better, although these things are already present in DirectFB.
I believe the answer to the preferred rendering scheme is Cairo. That's because the same apps can end up pointing at both GL and X render targets. On your local machine the preferred render target should be GL unless you have old 2D hardware; with old 2D hardware X will probably beat software Mesa.
Remote testing has not been done yet. Right now remote GL is always indirect (software Mesa). With X on GL that will switch to direct (DRI). No one has measured whether GLX or the X protocol is faster for remote Cairo.
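The attraction for applications is that the drawing code doesn't care which target it lands on; only the surface constructor changes. A minimal sketch against the cairo 1.x API (later than the snapshot discussed here, so names may differ slightly):

```c
#include <cairo.h>

int main(void) {
    /* An image surface here; swap in a glitz or xlib surface
       constructor and the drawing calls below stay the same. */
    cairo_surface_t *surf =
        cairo_image_surface_create(CAIRO_FORMAT_ARGB32, 256, 256);
    cairo_t *cr = cairo_create(surf);

    /* Backend-agnostic drawing: an antialiased translucent circle. */
    cairo_set_source_rgba(cr, 0.2, 0.4, 0.8, 0.6);
    cairo_arc(cr, 128, 128, 100, 0, 2 * 3.14159265);
    cairo_fill(cr);

    cairo_surface_write_to_png(surf, "out.png");
    cairo_destroy(cr);
    cairo_surface_destroy(surf);
    return 0;
}
```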
As to DirectFB…
You should not be comparing X on GL to DirectFB. Instead you should compare mesa-solo to DirectFB. mesa-solo is the standalone version of Mesa/GL that runs without the need for X. X on GL will run on top of mesa-solo. The DirectFB stack does the equivalent: it runs a rootless X on top of DirectFB.
I believe mesa-solo is the superior solution since it is designed around the 3D hardware (note that 3D hardware also runs 2D without problems). The trend in graphics hardware is definitely to make 3D faster at the expense of the old 2D modes; DirectFB is implemented using the 2D hardware mode.
mesa-solo is also superior since it is a well documented, standardized API with books and courses available for it. We should also get compatible OpenGL stacks from Nvidia/ATI shortly after we get X on GL running. That will give us fully accelerated X on the latest 3D hardware.
Cairo can run on almost anything since it renders to an in-memory buffer; in fact, I'm using it to draw on a DirectFB surface.
I only have a problem when using alpha < 1.0: the DirectFB surface gets weird colored "stripes".
Anyway, DirectFB supports OpenGL too, but I've never tried it, nor do I know much about it (yet).
DirectFB uses mesa-solo for its OpenGL implementation.
The X equivalent to DirectFB is XAA.
3) Transparency, also enabled by a separate executable, works. I can take it or leave it, but lots of folks drool over this stuff.
This is because the current implementations of transparency are actually not transparency. They are hacks (imlib, imlib2, and similar in GNOME/KDE) which take a snapshot of the background and show that. If you move the window, it takes a snapshot again at the new position. It is perceived as transparency by some, but it actually ain't, and it ain't good for performance either.
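For reference, the "snapshot" trick described above looks roughly like this in Xlib: grab the pixels under the window from the root and paint them as the window's background. A hedged sketch (error handling and tinting omitted):

```c
#include <X11/Xlib.h>

/* Fake transparency: copy whatever is on the root window under
   (x, y, w, h) and use it as our window's background. Nothing
   actually shows through; move the window and you must re-grab. */
void fake_transparency(Display *dpy, Window win,
                       int x, int y, unsigned w, unsigned h)
{
    Window root = DefaultRootWindow(dpy);
    GC gc = DefaultGC(dpy, DefaultScreen(dpy));

    /* Pull the background pixels across the wire with the CPU. */
    XImage *snap = XGetImage(dpy, root, x, y, w, h, AllPlanes, ZPixmap);
    if (!snap)
        return;

    /* Paint the stale snapshot into our window; the "transparency"
       is frozen in time until the next grab. */
    XPutImage(dpy, win, gc, snap, 0, 0, 0, 0, w, h);
    XDestroyImage(snap);
    XFlush(dpy);
}
```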
Thanks for your post btw, very appreciated.
Hmm, I think the guy ran the CVS version of xorg with composite, and thus got *true* translucency. Not the hacks you talk about.
Like in Keith's screenshots:
http://freedesktop.org/~keithp/screenshots/
I’ve actually seen screenshots of translucent windows above a movie being played, and the movie was displayed through the translucent window… neat!
Yep, like I said, I built it from CVS. It was RC2. Transparency was enabled by transset. I took DPI's comment to refer to how transparency is done now, not to the pending release.
BTW, I built it again on Fedora Core 2 last night (up to RC3 this time). Fedora complained at boot about not finding libSM.so.6, but the file exists where it is supposed to exist. I haven't tracked that down. The server built and runs with no significant problems. Occasional artifacts remain on screen for some seconds when I minimize a window (e.g., pieces of a Gnome terminal border). Subjectively, it still does not respond with alacrity.
An untweaked Fedora boots into X; this was very slow, though, with much disk activity. I didn’t see that on Slackware (using the 2.4.26 kernel).
If it picks up speed before release (removing debug code?), I’d switch to it. Effective use of compositing and transparency awaits the folks at KDE, Gnome and elsewhere.
Now, if I could only get Cairo to compile and stop whining about Glitz…
My comment referred to current, popular transparency implementations. I've already seen KDrive/X.Org in action at FOSDEM (Feb 2004), and I've also seen DirectFB doing it there, both live.
It'll take some time till the current software using the "hacks" (E, KDE, GNOME, and many more) switches, if they're even going to do that at all.