FSM has an article about improvements coming our way in X.org. “There’s more coming our way than ‘mere’ graphical goodness: Xorg developers are about to unleash upon us more performance and ease of use than X ever knew before. Not only that, the work being done now will allow older hardware to perform better and new hardware to be supported faster.”
I like the direction they are going here, and am very excited. I hope “about to unleash upon us” means “soon.” Keep up the good work guys.
It sounds like they are implementing shaders with LLVM just like the older Macs and the shader emulation in Mesa. This is tested technology that works.
I only wish more OSs would adopt this technique.
It’s good to hear such news, but it comes very late. In a time when Microsoft and iBanana rule, it’s a bit late to be bringing these things. Input still needs more work.
From the unix haters handbook : http://www.art.net/~hopkins/Don/unix-haters/x-windows/disaster.html
I do not have a lot of X experience, but I always got the feeling X was far too low-level and too network-oriented. Almost nobody uses remote display as the primary method of running programs. Anyway, it seems to work quite well nowadays, although some redesigning might not be a bad idea.
Wow! xorg.conf is gone! Thank god! The horrors I had with that file. Maybe there is hope after all.
We had ~40 elementary school labs running networked X systems (thin-client setup).
We’ve since moved over to a diskless client setup (graphics run locally), but we still use networked X to run the occasional software that won’t run on the 800 MHz Via CPUs/OpenChrome GPU.
We also provide NX access to our ~15,000 students, to allow them to login to their Linux accounts from home. This uses the network X features behind the scenes.
We also use the network X features to manage our VMWare servers (ssh to server, run vmware, console appears on your local screen).
Just because Joe Bozo may not use the networking features on his single home Unix computer doesn’t mean that enterprise, small businesses, school districts, and such aren’t using it.
I couldn’t agree more. I’m primarily an application developer. My own system is a 32-bit system. I often ssh to our 64-bit development server and run our X11-based IDE there, which then displays on my local screen. I then do development work as normal. It’s as if my own system were a 64-bit server. Awesome!
So yes, I use remote features in X11 often!
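For anyone who hasn’t tried it, the workflow described above is plain X11 forwarding over SSH. The hostname and program name below are placeholders, not anything from the posts here:

```shell
# Forward X11 from a remote machine to the local display.
# "devserver" and "my-ide" are placeholders for your own host and program.
ssh -X user@devserver    # use -Y (trusted forwarding) if -X is too restrictive
my-ide &                 # runs on the remote CPU, window opens on your screen
```

The same mechanism underlies the thin-client, NX, and VMware-console setups mentioned earlier; only the transport and session management differ.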
I agree that that is a useful feature. But still, the fact that it’s the non-replaceable core drags down performance for people who don’t need it.
I really believe that there should be two versions of X:
1. a networked one, like the classic
2. one as close to the H/W as possible, for the best experience for single-monitor desktop users like me.
Though I would vote more for having nvidia drivers work with XRandR and having a Xinerama-style wide workspace.
You’re making the assumption that X uses networking on local clients, which is wrong. Connecting to a local X server uses a local unix domain socket, and much of the bulk data transfer between the client and the server is done using shared memory. Those are pretty much the fastest IPC mechanisms available on a Unix system.
The X protocol is a little chatty, and often requires rather more round trips than you might otherwise need. This is improved with newer X extensions and libraries (XCB, for example, handles this better than Xlib).
None of this is a fundamental design problem.
By the way, Windows works the same way. The display server process (which resides in the kernel in Windows XP, and as a separate process in Vista) communicates with applications using local IPC mechanisms and shared memory. Any impression the API gives that you have direct hardware access is an illusion, and always has been on Windows NT systems. It may have been true on 16-bit Windows and Windows 9x, but it has never been true on Windows NT based systems.
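The two points above can be made concrete with a toy benchmark. This is not real X code; it is an echo server over a Unix domain socket (the transport a local X client actually uses), showing why round trips, not raw bandwidth, dominate a chatty protocol. The “Xlib-style” loop waits for each reply before sending the next request; the “XCB-style” loop queues all requests first and drains the replies afterwards:

```python
# Toy model of local X transport: echo over a Unix domain socket.
# All names here are illustrative, nothing is taken from Xlib/XCB itself.
import os
import socket
import tempfile
import threading
import time

def echo(srv):
    """Accept one client and echo everything back, like a server replying."""
    conn, _ = srv.accept()
    while True:
        data = conn.recv(4096)
        if not data:
            break
        conn.sendall(data)

path = os.path.join(tempfile.mkdtemp(), "X0")   # akin to /tmp/.X11-unix/X0
srv = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
srv.bind(path)
srv.listen(1)
threading.Thread(target=echo, args=(srv,), daemon=True).start()

cli = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
cli.connect(path)

N = 1000
t0 = time.perf_counter()
for _ in range(N):                 # Xlib-style: request, wait, repeat
    cli.sendall(b"req!")
    cli.recv(4)
sync = time.perf_counter() - t0

t0 = time.perf_counter()
cli.sendall(b"req!" * N)           # XCB-style: queue everything first...
got = 0
while got < 4 * N:                 # ...then collect the replies in bulk
    got += len(cli.recv(4096))
batched = time.perf_counter() - t0

print(f"one-at-a-time: {sync*1e3:.2f} ms  batched: {batched*1e3:.2f} ms")
```

Even over the fastest IPC available, the one-at-a-time loop pays a scheduling round trip per request, which is exactly the overhead XCB’s cookie-based replies are designed to hide.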
One of the main problems is that the X network protocol itself is very low-level in its drawing routines; a lot of clients with complex UIs can bog down a network significantly. Sure, there is help from compressors and protocol mappers, but none of those is really standard.
The funny thing is that X is more advanced than other remote protocols in its possibilities, but it shows its age significantly in real-world usage!
And I agree that 99% of all users do not use the remote capabilities, yet complexity has increased tenfold because of the 1% who need that functionality. I often ask myself whether an approach like RDP’s, which just provides hooks where remote functionality can be plugged in, isn’t the better approach!
It is RIDICULOUSLY useful on home servers; you can access GUI configuration tools running on a headless server. If forwarded X can save 100 people from having to learn how to manually edit their smb.conf, then it’s worth it! 😀
Just as long as there’s a good way to manually pass settings to Xorg. For the past year or so, X has been able to come up without an xorg.conf anyway.
As long as the autodetection works properly, that’s a fine situation. BUT in many settings, usually those involving hardware that’s not up to date, it can be necessary to make settings through xorg.conf in order to get a working system.
My home system is such a case. Autodetection leads to completely stupid screen width and height (it’s a 21″ Eizo CRT). In older XFree86, everything worked fine, but X.org required some additional settings. The xorg.conf file was the only reason I got a running X again.
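For the record, the kind of manual override needed for a CRT like that is a Monitor section that pins the sync ranges and modes. The values below are illustrative, the sort of thing you would copy from the monitor’s manual, not the actual figures for that Eizo:

```
Section "Monitor"
    Identifier   "CRT Monitor"
    HorizSync    30-96      # kHz range from the monitor's manual (example values)
    VertRefresh  50-160     # Hz range from the manual (example values)
EndSection

Section "Screen"
    Identifier   "Default Screen"
    Monitor      "CRT Monitor"
    DefaultDepth 24
    SubSection "Display"
        Modes    "1600x1200" "1280x1024"
    EndSubSection
EndSection
```

With the sync ranges stated explicitly, the server no longer has to guess from (possibly absent) EDID data.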
With the rise of HAL and DBUS, xorg.conf has furthermore lost the control over things like keyboard language settings (X-wide, not just regarding KDE or Gnome).
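For those who haven’t run into it yet: under the HAL scheme, the keyboard layout moves out of xorg.conf into an fdi policy file, typically something like /etc/hal/fdi/policy/10-keymap.fdi. The layout and variant values here are just examples:

```xml
<?xml version="1.0" encoding="UTF-8"?>
<deviceinfo version="0.2">
  <device>
    <match key="info.capabilities" contains="input.keys">
      <!-- example layout/variant; substitute your own -->
      <merge key="input.xkb.layout" type="string">de</merge>
      <merge key="input.xkb.variant" type="string">nodeadkeys</merge>
    </match>
  </device>
</deviceinfo>
```

This applies X-wide, before any KDE or Gnome session settings kick in.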
The great benefit of not having to use xorg.conf shows on live-system CDs and bootable USB drives. It’s no longer necessary to run detection routines across the graphics hardware in order to compose an xorg.conf file.
X-Windows is the Iran-Contra of graphical user interfaces: a tragedy of political compromises, entangled alliances, marketing hype, and just plain greed. X-Windows is to memory as Ronald Reagan was to money.
Good Grief! Iran-Contra and Reagan jokes? This book has dated poorly.
Will it be necessary to have Xorg installed by default in the future? What about the graphics and text modes supported by Linux? I’m asking because I like the idea of having a thin generic 2D graphics layer in the kernel (not the drivers/video crap, and not only mode-changing support), but it doesn’t seem to have much of a chance as long as there is no such common interface supported by hardware vendors. Or do you think VESA is the standard?
I really like the fact that you don’t have to have a xorg.conf anymore. I run Arch Linux without it and everything works just fine, including my Wacom tablet.