“Over time, many people have complained about the X Window System; the X Window System, or Xorg in its currently most popular implementation, is the layer between applications and the graphics adapter. It has some fantastic features (like the ability to run applications over the network) and some shortcomings.
One thing is sure: it has evolved immensely over the last year or so, especially as far as 3D and hardware acceleration are concerned.”
But it still sucks: http://www.osnews.com/story/21482/Compare_and_Contrast_Ubuntu_9_04_…
(I’m an Ubuntu user and I’m not happy that my hackintosh has faster graphics than any Linux distro I’ve installed here)
My understanding is that graphics performance is mostly related to the driver implementation and not X itself. NVidia benchmarks I’ve seen don’t really look all that bad, so I think that bears that out. Also, people always want to throw X out entirely, but I’d love to see some profiling indicating whether the small difference that remains when the drivers are equivalent comes from bottlenecks in the X architecture or in the Xorg implementation.
There is also a lack of uptake by widget kits of new underlying technology. Take libxcb: why don’t we see widget kits using that instead of libX11, which would address many of the issues relating to latency and the lack of ‘teh snappy’? It’s the old story of Xorg programmers putting a lot of awesome technology out there, but those further up the stack not taking advantage of these new features.
LibX11 uses libxcb internally, at least on FreeBSD. From /usr/ports/x11/libX11/Makefile:
BUILD_DEPENDS+= ${LOCALBASE}/libdata/pkgconfig/xcb.pc:${PORTSDIR}/x11/libxcb […]
CONFIGURE_ARGS+= --datadir=${PREFIX}/lib \
--with-xcb=yes
Reece
The most interesting use of XCB, compared to Xlib, is for window managers. The only thing toolkits will gain from using XCB is startup, which can be faster if XCB is used correctly (and a lot faster if the client is launched over an ssh pipe). You can gain a bit more if the main loops that the toolkit and the window manager use are ‘compatible’ or the same, so that round trips are (almost) all removed.
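To illustrate the round-trip point, here is a minimal sketch (mine, not from the thread) of XCB’s cookie/reply pattern: all the requests go out first and the replies are collected afterwards, so the whole batch costs roughly one round trip instead of one per request. The atom names and the build line (cc xcb_batch.c -lxcb) are illustrative assumptions.

#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <xcb/xcb.h>

int main(void)
{
    /* Connect using $DISPLAY. */
    xcb_connection_t *c = xcb_connect(NULL, NULL);
    if (xcb_connection_has_error(c))
        return 1;

    const char *names[] = { "WM_PROTOCOLS", "WM_DELETE_WINDOW", "_NET_WM_NAME" };
    xcb_intern_atom_cookie_t cookies[3];

    /* Phase 1: fire off all requests without waiting for any reply. */
    for (int i = 0; i < 3; i++)
        cookies[i] = xcb_intern_atom(c, 0, (uint16_t)strlen(names[i]), names[i]);

    /* Phase 2: collect the replies; we block (at most) once for the
     * whole batch, not once per request. */
    for (int i = 0; i < 3; i++) {
        xcb_intern_atom_reply_t *r = xcb_intern_atom_reply(c, cookies[i], NULL);
        if (r) {
            printf("%s -> atom %u\n", names[i], (unsigned)r->atom);
            free(r);
        }
    }

    xcb_disconnect(c);
    return 0;
}

(Xlib’s XInternAtom, by contrast, blocks for its reply on every call, so the same three lookups over a slow link pay three full round trips.)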
If I remember correctly, a driver developer for SiS compared the Linux and Vista drivers and found the Linux desktop to be faster in his benchmarks. But then again, somehow Vista doesn’t seem like such a high bar to overcome.
Typical. One thing’s for sure, though. X has taken more undeserved blame for more things that have absolutely *nothing* to do with it than any other specification in the history of computing.
It does not matter what X gives developers. Give them MIT-SHM and people still blame the X protocol for any slowness. Give them DRI, and they still blame X. Give them DRI2. Ditto. When are the driver writers going to start getting all the blame that they have really deserved for all these many years?
Those who do not understand X are condemned to blame it wrongly, and to completely miss pinpointing the *real* culprits.
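For readers who have not run into it, here is a rough sketch of the MIT-SHM path mentioned above: the image pixels live in a SysV shared-memory segment that the server maps too, so XShmPutImage sends only a small request over the socket instead of streaming every pixel. The window size, pixel fill, and build line (cc shm_put.c -lX11 -lXext) are illustrative assumptions, and error handling is omitted.

#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <sys/ipc.h>
#include <sys/shm.h>
#include <X11/Xlib.h>
#include <X11/Xutil.h>
#include <X11/extensions/XShm.h>

int main(void)
{
    Display *dpy = XOpenDisplay(NULL);
    if (!dpy || !XShmQueryExtension(dpy)) {
        fprintf(stderr, "no display or no MIT-SHM\n");
        return 1;
    }

    int scr = DefaultScreen(dpy);
    Window win = XCreateSimpleWindow(dpy, RootWindow(dpy, scr), 0, 0, 256, 256,
                                     0, 0, BlackPixel(dpy, scr));
    XMapWindow(dpy, win);

    /* Create an XImage whose pixel buffer is a shared-memory segment. */
    XShmSegmentInfo shminfo;
    XImage *img = XShmCreateImage(dpy, DefaultVisual(dpy, scr),
                                  DefaultDepth(dpy, scr), ZPixmap, NULL,
                                  &shminfo, 256, 256);
    shminfo.shmid = shmget(IPC_PRIVATE, img->bytes_per_line * img->height,
                           IPC_CREAT | 0600);
    shminfo.shmaddr = img->data = shmat(shminfo.shmid, NULL, 0);
    shminfo.readOnly = False;
    XShmAttach(dpy, &shminfo);            /* the server maps the same segment */

    memset(img->data, 0x80, (size_t)img->bytes_per_line * img->height); /* grey fill */

    /* Only a short request crosses the socket; the pixel data does not. */
    XShmPutImage(dpy, win, DefaultGC(dpy, scr), img,
                 0, 0, 0, 0, 256, 256, False);
    XSync(dpy, False);
    sleep(2);                             /* keep the window up briefly */

    XShmDetach(dpy, &shminfo);
    img->data = NULL;                     /* buffer belongs to the SHM segment, not malloc */
    XDestroyImage(img);
    shmdt(shminfo.shmaddr);
    shmctl(shminfo.shmid, IPC_RMID, NULL);
    XCloseDisplay(dpy);
    return 0;
}

This is, of course, only a local-transport fast path; over the network MIT-SHM does not apply.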
They need a more accurate and representative set of benchmarks that pinpoint exactly where performance is gained and where it is lost. I am reminded of an article that once shed a bit of light on one of the most commonly used “X benchmarks”: glxgears.
http://www.osnews.com/story/21133/Why_glxgears_Is_Slower_with_Kerne…
well, it’s not like all the blame is without reason, from the counterintuitive coordinate system one must program to, to the outdated structure
yes, avoiding the replication of code for primitive and text drawing is nice, but sharing code via separate server processes is so ’70s – they’ve invented libraries and plugins in the meantime… the same goes for the mechanism vs policy separation between the X server and the window manager (not that the system is still usable anyway if the compositing WM crashes), with the communication and round trips that it entails (now running X-kernel-app(wm)-kernel-X and back on each transaction, but it would become X-kernel-dbus-kernel-app(wm)-dbus-kernel-X if we take DBus-based IPC into consideration)
because it does not matter what hacks are added to the graphics stack as long as its overall design, and the assumptions on which it is based, are not revised and, eventually, discarded if they no longer apply (which is quite likely decades after the original inception… we no longer live in a world of unix mainframes and terminals, of window-based GUIs used for sober and rudimentary interfaces or even for multiple text editors and (remote) shells at once, and of separate processes as the only plausible means of separating “policy and mechanism” – not for the vast majority of desktop users, at least)
we’ve all come to take the network stack as an obvious prerequisite – but actually, the most successful desktop systems didn’t even implement the network stack when they first came out, and it was added later…
coincidence? i’m inclined to believe it’s not, and that those OSes were better optimized in their function as desktops precisely because of this, and because of not depending on the network stack for GUI operation
in the desktop world of today, it would make sense to revive this design approach, and design the GUI as a mere front end for the local machine, as if the network weren’t even available
this would not prevent clients and servers for the desired network remoting protocols (there’s more than X11 with all its extensions: VNC, RDP, …) from being added later, and even implemented on top of the locally optimized graphics stack
one may think this means losing network transparency – but network transparency is already lost when an application or toolkit checks whether it’s running locally or remotely and uses separate code paths for either case; and it’s a feature that the vast majority of users does not need, yet it implies a compromised design (suboptimal for the local code path, again) that penalizes everyone
it’s not by chance that quite some effort is currently made to make applications less and less X11 dependent…
if the system were windows, and the graphics driver developers were not the same as the system developers, and the system were unilaterally performing poorly on the graphics side, the point would apply and the blame would be on (e.g.) ATI/AMD developers
but we’re on linux, where if the graphics stack is stable, the driver-related kernel interfaces are volatile, and if the kernel interfaces begin to stabilize, the whole graphics stack is overhauled (with the introduction of whole new subsystems AND interfaces as a result, which in turn need to stabilize and take a very long time to do so)
now, i’m inclined to believe that this whole development model, and yes, the somewhat outdated architecture of the X gui stack, are to blame
one may understand X, and do so to a great extent, and yet blame it – understanding something does not imply loving it, and blaming something does not by itself denote cluelessness
i.e. the decades of poorly managed X server projects? xorg has a hard time getting stuff out the door, and they are miles ahead of what things were like in the XFree86 days.
I believe that you are making a distinction between X and its implementations. And if so, I agree. However, while Xorg is still not progressing quite as rapidly as one might prefer to see, it is, as you note, lightyears ahead of the XFree86 state of affairs. If we had seen Xorg’s rate of development throughout the XFree86 days, there would likely be few complaints about its status today.
My vantage point, with respect to X, is a bit different, perhaps, than that of most people. Some people may be getting weary of my mentioning that I do a lot of work with Xterminals, XDMCP, and NX. But hey, it *is* what I do. 🙂 And that experience makes it clear to me that even running through the standard X protocols, serialized over the wire at 100mbit, X does very well for normal business related tasks. My biggest complaint about it is one that no one ever talks about, and of which many are probably not even aware. And that is the sensitivity of X to latency. People think it requires a lot of bandwidth, but it doesn’t. I could run the 60 users on my largest server on 10mbit, I think, with acceptable performance. But even a few tens of milliseconds of latency absolutely kills performance for standard X. Fortunately there is NX, which sparkles in that capacity. (As in: Running rdp encapsulated in NX over a WAN *noticeably* speeds up rdp.) And I believe NX is slated to eventually become an official part of Xorg, although admittedly I have lost track of the status of that effort.
So… my business responsibilities hinge on a stable X with good performance, both on the LAN and over the WANs, using a variety of video chipsets. (I’ve never received a single performance complaint regarding screen update speed from any of my users over the last several years that we have depended upon it.[1]) And I’ll confess that I much enjoy my time playing OpenArena at 1680×1050, at almost the maximum quality settings, using my onboard Intel G43/X4500 chipset. And, yes, I’ve spent plenty of time in Doom3 and Quake4 with my old NVidia 6800GT, also at high quality settings. And from that perspective, I find myself scratching my head and wondering what people are complaining about when they moan about X.
Most of the major changes going on in Xorg look interesting to me. (GEM, etc.) But much of it seems rather esoteric from my relatively nuts and bolts, pragmatic, and maybe even boring to some, perspective. With the exception of the hotplug stuff, which I have little need for, but which I can see would be very important to some.
[1] In the absence of actual WAN service quality issues, of course.
Comparing Apple graphics capability and X is totally irrelevant.
The OS X display is in the kernel, and thus more comparable to Windows XP (and about as flexible).
X is a display/input server (I kind of remember that the X implementation on OS X sucks), so people should understand that it comes at some price, but also with unparalleled flexibility (remote display per application, XDMCP for remote login on lightweight clients, …). OK, that flexibility is only needed by 5% of computer users.
OS X display isn’t in the kernel. It uses a client-server model like X (with WindowServer being the equivalent to the X server + WM + compositing manager). The drivers are in the kernel, though.
the only GUI-related things that live in the kernel on Linux, OS X, and Windows are graphics drivers.
For Linux, at least, that would be “a small part of the graphics drivers”. It used to be nothing, which was a notably suboptimal state of affairs. Then basic framebuffer was added. Then the kernel got involved in some part of DRI that I’ve never quite understood. And now it is getting involved in mode setting and memory management. And I think that is very likely where it will stop. Only the things that are most appropriately handled in-kernel are in-kernel. The rest is never going to make it into the kernel, which is as it should be.
That’s pretty much the way it is for every OS, and why bad video drivers can bring down operating systems so easily.
Some people have some pretty crazy ideas about what lives in kernels. I was mostly just trying to address that.
Running X over the network can be *very* useful, but one needs a local network to get decent performance. Optimizing X for networks is something I would do if I had the technical knowledge and the time available.
Just use NX.
That was the whole point. When you work in a large organization (academic or otherwise) where you have UNIX boxes running the gamut from Linux to Solaris to HP-UX, nothing beats X Window. Trouble is, most people use it on their desktop (not by choice, but because there are no real alternatives), where server and client run on the same computer and the benefits of such an architecture are lost.
A fast DSL line can get you an acceptable performance level. It’s not zippy by any means (and forget about eye candy) but it’s usable. Still, there are better alternatives like VNC, which support features like Low Bandwidth X, compression, etc.
Reece
from wikipedia:
In computing, LBX, or Low Bandwidth X, was a protocol to use the X Window System over network links with low bandwidth and high latency. It was introduced in X11R6.3 (“Broadway”) in 1996, but never achieved wide use. It was disabled by default as of X.Org Server 7.1, and was removed for version 7.2.
And the other options are also less than great:
NX is not installed by default,
VNC still needs an X server running on the remote machine
With X over SSH, I can use remote X clients from, for example, an Ubuntu server without a GUI, and show them on my Windows desktop. X over SSH is great, but slow. My experience with DSL is that it is still too slow, so X over SSH needs work.
No, VNC is the remote server. So much so that on UNIX it can’t even hijack the standard desktop (Display :0) by default. You set it up with a small configuration file where you decide which window manager to run (the simpler, the better) and a few basic X clients — and maybe even set it up to run in 8- or 16-bit color mode to generate less network traffic and obtain better performance. Then again, the goal is getting the job done rather than eye candy.
Reece
You might want to do some research into x11vnc. It allows you to connect to a real display using the vnc protocol.
Installing NX is a breeze. I prefer it for remote X-ing. It not being installed by default is a weak argument, IMHO.
apt-get install freenx?
yum install freenx?
NX is absolutely fantastic for high latency and/or low bandwidth connections.
VNC is OK over a WAN. It is not sensitive to latency like the standard X protocols. But it is noticeably slower than NX, and the video quality is not nearly as good unless you have a pretty fast connection. Both work equally well over a LAN. But there, the simplicity of using regular X is preferable. Unless the clients are Windows clients, in which case VNC or NX can be more straightforward. (Saves a Cygwin installation, or purchase of a commercial server.)
It’s a very useful abstraction. X11 is a very clever design. It makes crazy things like Compiz and Xnest (and more) easier. The client and server being on the same machine is a small overhead. Many apps use TCP for IPC.
But X is not one of those apps. It uses Unix domain sockets and shared memory for IPC, on the local machine at least.
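A small sketch (mine, not from the thread) of how to check that claim for the display you are on: ask Xlib for the connection’s file descriptor and look at the peer address family. On a local “:0” display it will be a Unix domain socket (typically under /tmp/.X11-unix/), and TCP only when the display is genuinely remote. The build line cc xtransport.c -lX11 is an assumption.

#include <stdio.h>
#include <sys/socket.h>
#include <X11/Xlib.h>

int main(void)
{
    Display *dpy = XOpenDisplay(NULL);   /* uses $DISPLAY */
    if (!dpy) {
        fprintf(stderr, "cannot open display\n");
        return 1;
    }

    /* XConnectionNumber() is the file descriptor of the X connection. */
    int fd = XConnectionNumber(dpy);

    struct sockaddr_storage ss;
    socklen_t len = sizeof ss;
    if (getpeername(fd, (struct sockaddr *)&ss, &len) == 0) {
        if (ss.ss_family == AF_UNIX)
            puts("local display: Unix domain socket (e.g. /tmp/.X11-unix/X0)");
        else
            puts("remote display: TCP socket");
    }

    XCloseDisplay(dpy);
    return 0;
}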
Cool, so the client/server division is even more of a non-issue for performance. It’s such a powerful design that I would be shocked if it gets replaced any time soon.
Wasn’t there a variant of X that was done by the OpenBSD team?
You mean Xenocara?
No.
For whatever reason, Linux on my computers at home has NEVER matched Windows in graphical performance or even perceived performance (i.e. how fast it feels while using an app, navigating between applications, etc.). It always feels like walking through a vat of treacle 4 feet deep. Windows flies in comparison. It could be my hardware or my inability to configure the system properly. I’ve researched this for years and never found a solution.
X works great on my T43. But that’s because they’ve done good work on the drivers. Also, not using Ubuntu seems to help a lot.
I had the same experience with the ‘traditional’ environments, until I tried a non-*ubuntu distribution.
For example, I found Xfce on Arch comparable to bare Win32 snappiness-wise.
On the other hand, even on Ubuntu, the fact alone that one doesn’t have to hit pixels to move&resize windows is enough for me to not miss the Win UI.
Exploring tiled window managers now, and wondering how I lived without them… they can also save more time and effort than rendering speed differences do.
I think a big part of it is that Linux is optimized for throughput, not low latency, which is great for databases and app servers but not so great for desktop environments.
The same here. I’ve been using Linux since 1999. I’ve tried so many Linux distributions that I can’t even remember them all (including the lightweight ones like Slackware and Arch Linux). I’m also a NetBSD and FreeBSD user. The GUI performance simply can’t even come close to Windows GUI performance. Everything GUI-related just has huuuge overhead compared to MS Windows – drawing, redrawing, scrolling, resizing, keyboard input, etc. All cross-platform apps like Firefox or Opera are also much faster on Windows.
There is absolutely nothing that can be done about it. And I’ve tried everything, including various different graphics drivers (closed-source, open-source, accelerated, unaccelerated); replacing an NVIDIA card with an ATI card doesn’t make any difference either…
Plus, some things are simply not graphics-driver related. When writing text in the simplest GTK+ notepad-like text editor (leafpad, mousepad – no formatting, no colours, etc.) takes 3-5 times more CPU cycles than writing text in, for example, Visual Studio on MS Windows (which does syntax highlighting, code analysis, etc.) or even OpenOffice.org on Linux (which has formatting and millions of other features), I think something is seriously wrong in the whole X-based GUI stack, and especially in modern toolkits like GTK+ and Qt. Both are horribly inefficient and CPU-hungry.
The only thing I’ve seen on Linux in the last 5 years or so that can compete with Windows GUI performance is lesstif. nedit (text editor) running on lesstif (not Open Motif, though) is incredibly fast in everything, including scrolling, entering text, redrawing etc. But once you start using the modern toolkits that use all the modern acceleration extensions and features provided by modern X-based graphics systems… It is just gigaslow. And it’s only getting worse and worse every year.
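If anyone wants to put rough numbers on the “CPU cycles while typing” comparison above, here is a sketch (mine, with an assumed 60-second typing window and file name cputime.c) that samples an editor’s accumulated CPU time from /proc/<pid>/stat before and after a typing session; running it against different editors gives a crude but comparable figure. It assumes the process name contains no spaces, since /proc is parsed naively.

#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>

/* Return user+system CPU seconds consumed so far by the given pid,
 * read from fields 14 (utime) and 15 (stime) of /proc/<pid>/stat. */
static double cpu_seconds(pid_t pid)
{
    char path[64];
    snprintf(path, sizeof path, "/proc/%d/stat", (int)pid);
    FILE *f = fopen(path, "r");
    if (!f)
        return -1.0;

    unsigned long utime = 0, stime = 0;
    fscanf(f, "%*d %*s %*c %*d %*d %*d %*d %*d %*u %*u %*u %*u %*u %lu %lu",
           &utime, &stime);
    fclose(f);
    return (double)(utime + stime) / sysconf(_SC_CLK_TCK);
}

int main(int argc, char **argv)
{
    if (argc != 2) {
        fprintf(stderr, "usage: %s <editor-pid>\n", argv[0]);
        return 1;
    }
    pid_t pid = (pid_t)atoi(argv[1]);

    double before = cpu_seconds(pid);
    puts("type in the editor for 60 seconds...");
    sleep(60);
    double after = cpu_seconds(pid);

    printf("CPU time consumed while typing: %.2f s\n", after - before);
    return 0;
}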
And it’s only getting worse and worse every year.
That’s why I prefer the good old command line =)
It is very hard to break the CLI from a performance point of view. Of course, they could render every character as an SVG text file, but I don’t believe they are crazy enough =)
Only when you’re running it in text mode. I’m using CLI programs in GUI terminal emulators and, unfortunately, terminal emulators in X are getting slower and slower too, which means that even writing plain text in the simplest terminal emulator in X takes 3 times more CPU cycles than editing huge files in very sophisticated GUI editors on MS Windows.
I’d like to reproduce a slow desktop and see whether I can tune it to make it snappier. What hardware and distribution do you use?
Others please reply too, if you find your distro sluggish on your hardware.
A lot has been said about this, and it is true.
http://www.art.net/~hopkins/Don/unix-haters/x-windows/disaster.html
http://linux.omnipotent.net/article.php?article_id=10127
I never understood why Unix standardized on X, but not on any specific kernel, C library, window manager, or desktop environment.