Now, that was some interesting reading on the XFree86 forum mailing list. We get individuals, and companies like Sun, SciTechSoft, Red Hat etc., ‘fighting’ over issues ranging from what XFree86 really needs, to replacing fontconfig with Sun’s stsf, to XFree86 co-founder David Wexelblat saying that XFree86 is obsolete today and needs to be replaced with a direct-rendered model (while retaining backwards compatibility), to Keith Packard explaining why a new organization to handle X is needed, and more. Our Take: One thing is clear after reading all these messages: a lot of people are not happy with what’s happening with the development of XFree86. It is obvious that more discussion is needed to decide what is going to be implemented and what is not, and from the emails there, it seems that no real, common direction had been discussed between the interested parties until yesterday. No real communication seemed to exist!
Let’s hope that this open forum list will show what people want and need, and will ‘open up’ the XFree86 organization in a way that allows more CVS commits, as the project seems somewhat stagnant and doesn’t move as fast as it should, as some Red Hat employees also noted (for example, direct changing of resolution was introduced just a few months ago with the RandR extension, while Windows 95 could do that in 1995).
The XFree86 project has always looked a bit conservative to me, when more development and openness are needed. There is no need for a “new XFree”, but there is a need for more development and ‘fixing’ of the existing codebase.
OSS in certain ways just represents a few guys’ pet projects 8-))))
However this debate turns out, I hope XFree86 eventually is refactored into a manageable codebase with drivers, rasterizing code, and communications code that can all be used in other projects if needed.
He’s either brave or a dick.
I couldn’t care less what his agenda is, so long as XFree86 sees some of the improvements and streamlining it has been in dire need of for more than a year now.
Well, it’s about time people are making a move to get this ancient POS out of here. IMO it’s one of *nix’s biggest downfalls as far as a GUI is concerned.
Disclaimer: not that I can complain, as it’s free, but still, it’s hard to understand everyone accepting it for so long.
Here is a quote by David Wexelblat, one of the leaders at XFree86:
“I’ve been working in the Windows world for years now, and client-server display systems are utterly irrelevant to the majority of real-world computer users. X needs to be replaced by a direct-rendered model, on which a backwards-compatible X server can be reasonably trivially implemented. But that’s my opinion.”
The client/server display system is a defining feature of X that,
on the contrary, makes it far more sophisticated
than anything else out there.
The “real world” probably can’t even fathom the existence of such
a feature and what it can do for them. IMHO, it would be a step
backwards if they were to replace X with a direct-rendered model.
I think this has gotta be a serious own goal by XFree.
Apart from the major contributions by Keith Packard himself, notice who is credited in his response,
especially:
Jim Gettys (GNOME Foundation)
Chris Blizzard (Linux Mozilla)
Owen Taylor (GTK)
Plus, Alan Cox seems to be giving support.
Also, the general response is heavily pro-Keith.
X definitely needs to keep the client/server system. I use it every day. It’s an excellent feature to have, and I would not upgrade to a new version of X without it.
/Line72
Since it looks like the only people that like XFree86 now, do so because of the client/server model, I propose that XFree86 development stop. These people can use the current software – they love it so much and it seems to suit them. But for the 99% of people that don’t need this over-engineered Fat Albert, a branch of XFree86 or a totally different piece of software can be developed that optimally addresses our needs.
I say let X rot. What’s with all these elevens and sixes? People should start using fresco.
http://www.fresco.org
“(for example, direct changing of resolution was introduced just a few months ago with RandR extension, while Windows 95 could do that in 1995).”
Not without a reboot. 😉
Only NT/2K worked properly with changing colour depth and res without a reboot.
Up to then, Windows asked for a reboot (albeit one was not strictly required).
No, it didn’t require a reboot; it changed the resolution on the fly. There were a lot of drivers that did not take advantage of the new API, but the well-done ones (e.g. for Diamond graphics cards) could change the resolution on the fly on Win95 by using the (back then) new API.
Yeah. I say let X rot too! Fresco baby! No drivers, no support but hey, you can spin your dialog boxes around 360 degrees. 😉
Actually, a lot of people are complaining about Linux desktop performance. But it would be interesting to see some performance comparisons between different platforms, to pinpoint the problems.
I know some people have written papers/articles that make sense, and we need more of these. This will bring better performance, and hopefully the developers will not be too hurt to continue their important work.
It was one of the PowerToys at first, before being integrated into Windows (I think with OSR2 or 2.5).
It did not require a reboot, it changed the resolution on the fly.
“for example, direct changing of resolution was introduced just a few months ago with RandR extension, while Windows 95 could do that in 1995”
And mac OS could do that in 1990. (wild guess, but in the region)
Has anyone used the Accelerated-X packages? If so, how do they compare to the newer versions of XFree86?
The vast majority of X installations don’t need a client-server model.
Nobody’s proposing making remote desktops impossible; the proposal is to refactor the display system with the assumption that direct rendering is the standard and remote rendering is the exception, rather than the reverse. The performance hit on remote operation is likely to be negligible, but the performance boost on direct operation is likely to be substantial.
I understand that people cherish X’s remote desktop handling and it is pretty cool–but those people need to consider the possibility that X can be “reimagined” in a way that allows them that functionality without demanding that all displays, even on the same machine, go through that mechanism.
…is the only way to go if you want to replace X. Something you could drop in and it works with existing apps.
WattsM made great points.
WattsM, exactly. Very nice post, thanks.
It is a real performance hit as well when you move a window and the window manager has to talk to the server every moment: “I am here”, “I am here now”, “Now, I am here”…
>is the only way to go if you want to replace X
Yes, this is what David said in his email: a layer that would keep compatibility.
Here’s a system that takes a direct rendered approach and layers a network protocol on top of it. With my limited experience, it works pretty well for most applications, though I’ve never used something like Paint or Photoshop over it.
However, while many don’t care about a remote protocol, it’s pretty evident by the popularity of systems like Windows Terminal Server et al, that there are quite a few that really are interested in the functionality.
Don’t you think it looks much smoother than Xft2/Fontconfig? It doesn’t look slightly out of focus like Xft2/Fontconfig does, but very solid and smooth.
X is slow not because of the client/server model, but because the communication between client and server is slow.
Use shared memory instead of sockets and you will be better off.
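As a rough illustration of this claim (not a real X benchmark — the payload size and round count are arbitrary), here is a sketch comparing the cost of pushing request batches through a local socket pair versus writing them straight into a shared-memory segment, which is roughly what the MIT-SHM extension does for image data:

```python
import socket
import time
from multiprocessing import shared_memory

PAYLOAD = b"\x00" * 16384   # one 16 KB batch of "drawing requests"
ROUNDS = 200

# Path 1: push each batch through a local socket pair (X's usual transport).
a, b = socket.socketpair()
t0 = time.perf_counter()
for _ in range(ROUNDS):
    a.sendall(PAYLOAD)
    received = b.recv(len(PAYLOAD), socket.MSG_WAITALL)
socket_secs = time.perf_counter() - t0
a.close()
b.close()

# Path 2: write each batch directly into a shared-memory segment;
# a "server" attached to the same segment would read the bytes in place.
shm = shared_memory.SharedMemory(create=True, size=len(PAYLOAD))
t0 = time.perf_counter()
for _ in range(ROUNDS):
    shm.buf[:len(PAYLOAD)] = PAYLOAD
shm_secs = time.perf_counter() - t0
shm.close()
shm.unlink()

print(f"socket: {socket_secs:.4f}s  shared memory: {shm_secs:.4f}s")
```

Note that this only measures transport cost, which is exactly the point of contention: the socket path copies every byte through the kernel twice, while the shared-memory path copies once with no syscall per batch.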
Monolithic GUIs are a thing of the seventies; sorry WattsM, but you’re just wrong. And Eugenia’s simplistic description is very amusing, but has little to do with the problem.
The vast majority doesn’t need network transparency – true.
But client-server model is something different.
It would be the same as saying KDE/GNOME should have direct hardware access.
Windows hasn’t done that since the Win 3.1 days.
You keep staying in your small Linux world; just lift your head and look around. X is running on more than just Linux. Because of that, you cannot put it in kernel space – it’s OS independent. You would sacrifice portability for the benefit of speed.
Are we all Linux monkeys here? Don’t blame X for Linux’s failure to win the desktop. Linux is a failure on its own.
Going back to the question of fork – forking of XFree doesn’t mean that the whole X framework will be replaced.
He would start a different project then. A fork means that some parts will be coded differently, but the architecture would probably stay the same. Please read what he is complaining about – not the architecture, but the openness of the XFree consortium.
>Don’t you think, it looks much more smoother than Xft2/Fontconfig?
Not sure. To really see how good a font rendering engine is, you have to take a good quality font and check it out how it renders on VERY small font sizes. Because problems are not easily visible on big font sizes…
This is a good example, but it’s actually irrelevant.
There is no good command line interface in Windows; if you need to do any administration on the box, you’ve gotta have a GUI.
Not in the UNIX world.
Again, Windows remote desktop is doing just high-level RPCs; it’s fine in a world of the same software from the same vendor. And it doesn’t scale really well – 4-5 users and it’s hosed.
X is platform independent, and it’s about the same problem as why Java is slow. As soon as you drop down to Visual Basic, you get the speed back. Please, I’m not a proponent of Windows and Visual Basic; throw your flames some place else.
about openess of XFree consortium
It isn’t really open at all, and seems resistant to new ideas, like many articles have posted…
Any competitor would be nice, cause XFree is the letdown of many *nixes that try to hit the desktop… (Excluding Apple of course, which slaps any other desktop OS out there around with Quartz).
Xft2 and FontConfig are totally different. Xft2 handles type rendering while FontConfig handles font installation. Most importantly, neither of them does rendering. FreeType2 does all the rendering. I really would be surprised if this stsf project had its own font renderer (there are only a few in popular use at all), and if that font renderer were better than FreeType2. The reason these fonts look so small is that they’re huge. They appear to be 20-24 point.
@Vlad: Shared memory X servers have been tried. They don’t help. Communications over a socket are very fast on Linux, especially given the batch-nature of X commands.
After looking at the SciTech site a little, I must say that it looks interesting. Anyone have any experience with it? How does it compare with GGI (does anyone use GGI?) ? I like the idea of having a consistent abstraction hardware layer. It seems like there is a lot of stuff in X that driver writers shouldn’t have to worry about. Any idea what the performance penalty of all this layering is? Also, what is the performance penalty for not being able to expose hardware-specific optimizations.
With a consistent hardware abstraction layer, you could write a GTK+ layer (a QT layer already exists) directly on top of the consistent hardware layer as well as on top of XFree86. On an application server (running apps for clients), GTK+ and QT (and GNUStep…) applications could use X (or even Fresco), while applications running on the client could skip the X layer.
An added benefit of this is that smaller operating systems (QNX, OpenBeOS for instance) could get access to good device drivers by just rewriting the OS shell.
Okay, I’m all for replacing X with a direct rendering app, but we have to get into gear and develop it FAST – we can’t let this slow the OSS movement down too much.
This topic is getting waaaaayy too much attention. The X developers are already talking about XFree 5 here:
http://xfree86.org/pipermail/forum/2003-March/000004.html
If you really want to know what is coming up for XFree, you should spend some time reading the comments. Really great stuff, like window translucency, hot-pluggable devices – just read it.
> Again, Windows remote desktop is doing just high level
> RPCs, it’s fine in the world of the same software from the
> same vendor.
I assume you don’t know that RDP is a documented protocol, and that RDP clients exist for any platform, written by Microsoft, Citrix, or independent developers.
> And it doesn’t scale really well – 4-5 users and it’s
> hosed.
This has nothing to do with RDP vs X11. It’s just that 64-bit Windows server systems aren’t publicly available yet, and 32-bit addressing isn’t that much when talking about a terminal server.
> “for example, direct changing of resolution was introduced just a few months ago with RandR extension, while Windows 95 could do that in 1995”
> And mac OS could do that in 1990. (wild guess, but in the region)
And AmigaOS could do that in 1984.
Regarding SciTech SNAP performance… many people claim that an abstraction layer will kill performance – in practice we have not yet seen this to be true. SciTech SNAP drivers are typically as fast as, and in some cases faster than, their OEM counterparts.
I would just like to say I can’t freaking believe I’m still reading rants about being able to change resolutions on the fly. wtf!?!?! You guys spend all day changing resolutions? I set mine the way I want it once and it stays that way. If for whatever reason (I can’t think of one) I needed to change the resolution… it takes what, 10 to 15 seconds give or take, to restart X. I don’t think Windows users cared much either when changing on the fly was implemented. It was only one less reboot out of a hundred.
What kind of moron would use a moderately compressed JPEG file to give us a screenshot of font quality? Ever heard of PNG, guys?
The reason these fonts look so small is that they’re huge.
Is that so?
>you guys spend all day changing resolutions?
Web developers do it ALL the time, in order to test their pages on smaller resolutions. Games need it all the time too.
Is it possible to do what Apple has done? Write your own display manager (such as Fresco) and then run XFree rootless?
Yes, but it is not as nice as running natively. It is really clunky and ugly running X11 apps on OSX.
If a new project were needed, what would have to be done is to build the new thing and add a compatibility layer for XFree.
>>Nobody’s proposing making remote desktops impossible; the proposal is to refactor the display system with the assumption that direct rendering is the standard and remote rendering is the exception rather than the reverse. The performance hit on remote operation is likely to be negligible, but the performance boost on direct operation is likely to be substantial. <<
Have you read, for example, Havoc Pennington’s mail saying that with Gtk the rendering speed of X11 is negligible, and most of the time is spent on font rendering and image scaling in the client?
Applications written for X use an X API, right? So why isn’t it possible to make an X implementation that does direct rendering, just as the current implementation uses a client/server model? Maybe it’s even possible to support both (direct rendering and the client/server model) in a single X implementation: if X is used on the local desktop, direct rendering is used; otherwise, the client/server approach is used.
I’ve seen Windows’ remote desktop in action, and it works really nicely, but locally it’s directly rendered. So technically this is possible.
Would an X implementation that supports both direct rendering and the client/server model be reasonable?
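A toy sketch of that dual-mode idea, with entirely hypothetical class and function names (a real X implementation obviously involves far more than this): a single entry point could pick a direct or a wire-protocol backend depending on whether the display name points at the local machine, the same way X interprets DISPLAY:

```python
import os

class DirectBackend:
    """Hypothetical backend: would draw straight into a local framebuffer."""
    def draw_rect(self, x, y, w, h):
        return f"direct: rect {w}x{h} at ({x},{y})"

class NetworkBackend:
    """Hypothetical backend: would serialize the request for a remote server."""
    def draw_rect(self, x, y, w, h):
        return f"wire: RECT {x} {y} {w} {h}"

def open_display(display=None):
    # Mirror X semantics: ":0"-style displays are local, "host:0" is remote.
    display = display or os.environ.get("DISPLAY", ":0")
    host = display.split(":", 1)[0]
    return DirectBackend() if host in ("", "localhost", "unix") else NetworkBackend()

d = open_display(":0")
print(d.draw_rect(0, 0, 640, 480))    # local display -> direct rendering path
r = open_display("remotehost:0")
print(r.draw_rect(0, 0, 640, 480))    # remote display -> client/server wire path
```

The application code only sees the `draw_rect` API in both cases, which is the commenter’s point: the transport can be an implementation detail behind the same client library.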
The client/server display system is a defining feature of X that,
on the contrary, makes it far more sophisticated
X defenitely needs to keep the client/server system. I use it everday. It’s an excellent feature to have and I would not upgrade to a new version of X without it.
Client/server is a defining feature of X, no doubt about that; but that’s what is irritating 95% of desktop users.
Now, it’s good if you are running a server or if you are remotely managing data. But neither applies to 95% of the people using X; I think everybody might agree on this point.
Being a reasonable person, one could think:
“OK, a networked graphical server is really cutting edge for an Operating System (Unix-like) designed to work (well, not originally, I know) as a server; but today (we are in 2003, for goodness’ sake), thanks to Linux, it is being redesigned to work as a development Operating System. Most hackers code in a graphical environment, where they can save (Ctrl + S), copy and paste, and see many more lines of code at once than in text mode.
These people (hackers and other users) almost never use X to manage data remotely using a dumb terminal.
They get irritated with the daily slowness during sessions of many hours, and feel like they are forced to tolerate it and wait for the graphical environment at every graphically demanding operation (like resizing windows, or moving and rendering larger files, etc.).”
At the same time (being reasonable), “We (the 5 people against the 95) might think the added value of the client/server possibility is such an amazing feature, technically; but they (the other 95 people) aren’t using it and don’t need it, and it is irritating them very much. Maybe we are wrong.”
But let’s not forget that this OSS software and its developers simply don’t value outside opinions.
(No offense here.)
I would also like to say that many user complaints aren’t due to X performance, but to GUI environments like GNOME and KDE (which are getting better at this point) and to bugs in particular applications.
If you start X in ‘safe mode’ it starts fast, but if you open an app it often gets slow because of X issues and X philosophy, the main one being the presence of the network layer. I admit that I’m not an expert on this subject, but I have used it so many times for so long now that I, and others, have developed a sixth sense for the lack of X responsiveness when compared to Windows using exactly the same modelling packages [and those aren’t made with Qt for sure, and it’s not the VGA driver either; this is a GL driver and it (the driver) just works well on Linux].
Am I jumping to a wrong conclusion?
I throw down the gauntlet. I’m finally sick of this discussion and I want some evidence, either way. For the next few weeks, I’m going to rig up some benchmarks on the exact cost of client/server, and the performance difference between X and GDI. I expect that everyone else who’s made an “X needs to go because it is slow because it is client/server” argument will do the same. If you get any results, email me at [email protected].
Rayiner Hashem said “, I’m going to rig up some benchmarks about the exact cost of client server, and the performance difference between X and GDI”
The differences between GDI and X are huge: GDI is for ONE platform -> nearly no asm.
GDI is in kernel land -> not possible for a cross-platform system like X.
The client/server thing,
etc… I don’t know X, but it doesn’t seem like client/server is the problem…
How is running X11 apps on OSX ugly and clunky, aside from most X11 apps being ugly and clunky? Would you prefer more dock integration, or some sort of appearance mangling?
I run sawfish under Apple’s X11 implementation, and it lets me access the virtual desktop layout of xterms I use when I work pretty smoothly (I can’t stand CodeTek’s virtual desktop app), plus one or two X11 apps that I feel are superior functionally to the OSX native counterparts.
quartz-wm has a nifty proxy mode that lets copy & paste between X11 and OSX work with other window managers, and the only annoyance I can think of is sawfish popping up windows partially obscured by the menu bar.
>Would you prefer more dock integration, or some sort of appearance mangling?
Exactly.
Someone mentioned that X has only been able to change resolutions on the fly in recent months, but I know that to be untrue. I remember years ago learning about ctrl-alt-{+ or -} to increase or decrease resolutions at any point while running X11. You just have to have appropriate modes defined to switch to within the same depth. I recall using this feature back in 95 or 96. X11 might be on the bloated side, but functionally it’s really not that bad IMO, and the ability to run graphical apps over network is actually incredibly useful.
Of course, I would love to see improvements as well.
With most relatively modern video cards these days, the X11 configuration tools do a good job of auto-detecting and enabling video card hardware as well, so I don’t really see that as such a big problem.
People who want the very best or some non-standard feature should be willing to rtfm to get them working.
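For reference on the “appropriate modes” mentioned above: they are just extra entries on the Modes line of the active Display subsection in XF86Config, and ctrl-alt-{+ or -} cycles through them. An illustrative fragment (the identifiers and resolutions are examples, not from any particular system):

```
Section "Screen"
    Identifier   "Screen0"
    Device       "Card0"
    Monitor      "Monitor0"
    DefaultDepth 24
    SubSection "Display"
        Depth    24
        Modes    "1280x1024" "1024x768" "800x600" "640x480"
    EndSubSection
EndSection
```

Only modes listed here (and valid for the monitor's sync ranges) are available for switching, which is why a stock configuration with a single Modes entry appears unable to switch at all.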
Many (including myself) have jumped to what seem to be faulty conclusions regarding X:
1) Its slowness is largely due to its client-server architecture.
2) Direct rendering will make life better.
As I read the XFree forum, it seems even the best hackers can’t agree on this issue. However, their differences seem to be on a philosophical level, not a practical level (* see links below).
At the end of the day, direct-rendering and client server aren’t the key issues. I think the threat of a fork is. It already appears to be a “Good Thing”. Thanks to the forum, it is clear the community is unhappy with the glacial pace of development, slow implementation of updated drivers, lack of focus on the needs of the majority, etc . XFree86 needed a shakeup, and that is what Keith delivered.
My hope is that this shakeup will lead to more frequent XFree86 releases, a faster XFree86, new features for the desktop user, and possibly greater XFree86 integration with the kernel and Gnome/KDE. Oh, and easier configuration would be nice, too.
My fear is that XFree86 will become marginalized by reverting to its secretive ways, preserving its slow release cycle (by not opening its arms to the vast community of excellent hackers who want to help) and focusing on arcane features that “most of us” don’t need.
That’s my $0.02.
c0mpil3r
*
Havoc Pennington:
http://www.xfree86.org/pipermail/forum/2003-March/000160.html
Alan Cox:
http://www.xfree86.org/pipermail/forum/2003-March/000176.html
David W:
http://www.xfree86.org/pipermail/forum/2003-March/000183.html
History time. The GUI didn’t exist out of thesis projects until Smalltalk in the ’70s, which led to the Xerox Alto, basically a concept machine–the first real-world GUI was the Xerox Star in 1981. The Apple Lisa shipped in 1983, the Macintosh in 1984, Windows 1.0 in 1985, and the first X Window System came out of MIT in 1984. In other words, X is pretty much a contemporary of the first widespread “monolithic” GUIs.
And actually, the GUIs that I’m aware of that came after X don’t use a client-server model. If we’re going to be pedantic about things, X would appear to be a special-case anomaly, originally developed to address some specific requirements of MIT as a descendant of Project Athena. With a few high-profile but often short-lived exceptions like NeWS, later GUI designers have almost universally shunned this design approach.
If X11 hadn’t been open source–and thus free Unixen had been forced to roll their own technology, or perhaps adopt something else like MBR–there’s a very good chance this topic wouldn’t be coming up now. Unixen aren’t using X11 as their GUI back end because it’s indisputably superior technology–they’re using it, just like most of the world uses Microsoft Windows, because it’s there.
X actually works quite well, and it’s WAY better than the “old days” (the 3.x that came with Red Hat 5.2 was a major pain; X 4 is much better than 3.x). Things have improved a lot, and I’m left with one major complaint that I can notice. This is a usability thing that still bugs me. Have your desktop open and click and hold the mouse button on the desktop (not a window). For me (I run WindowMaker, for what it’s worth) this causes everything else to stop until I let go of the button. XMMS stops playing after a few seconds (buffer running out), and animated things on my screen (wmbubblemon and other animated dockapps) stop. Everything stops so that my mouse click can be handled. This bugs me, and I wish it were fixed.
That said, that’s my biggest problem, and that says a lot about how good X is. It’s speedy enough for me (no complaints), the network transparency is great (I never use it, though; I just like the fact that it’s there), etc. I want to see where this argument goes, but all in all X is very good and I don’t want it to die.
> but I know that to be untrue. I remember years ago learning about ctrl-alt-{+ or -} to increase or decrease resolutions at any point while running X11
No, you are confusing this with a REAL resolution switch. X could not change the real resolution on the fly; when you used that key combination you were just seeing a virtual portion of your desktop at that size, but no REAL resolution switching was taking place.
So it sounds like a single tasking dinosaur on top of a multi-tasking kernel – even win3.1 would beat that 8-))))
I highly doubt client/server is the bottleneck here either. But I’d like to have a big stack of numbers that I can throw at people to prove a point.
>you guys spend all day changing resolutions?
Web developers do it ALL the time, in order to test their pages on smaller resolutions.
OMG, I would like to hear from them “ALL”. Being a web developer myself, I’m smart enough to simply resize the browser’s window to test page layout on smaller screens. Another trick is to simply create a frameset with different width/height sizes to “emulate” screen size, and call your pages/site thru it. With some DOM functions, u can even change the iframe size on the fly, and u get a mini display-testing application.
Changing screen resolution on the fly is certainly nice, but most of the time, it’s useless.
I’d like to see benchmarks that “X is slow”, as some of you say.
IMHO, it’s the architecture of the desktop environments (gnome/kde)
that’s causing the slow down. The proof is simple…
For anybody who codes out there, write a basic window using
gtk2, qt3, and Xlib, and you’ll see the difference in responsiveness,
load time, etc… Also, check out the memory usage of the windows
(both virtual memory and real memory).
Another example is to run just a window manager
such as icewm or windowmaker. They are truly fast.
The modern toolkits such as gtk2/qt3 generalize a lot of code, and this
is necessary to have a robust toolkit for general GUI building. The
sacrifice for any generalized toolkit is of course speed.
In any case, let’s see some hard numbers about the speed of X.
>Being a web developer myself, I’m smart enough to simply resize the browser’s window to test page layout on smaller screens.
So, you are smart enough to resize a browser with your mouse EXACTLY at 640 or 800 pixels?
You are not even ‘smart enough’ to retain the appropriate header on the message you reply to.
The basis for all this politics is that they have no idea where they are headed… what needs to be done.
The first step should be to listen to the users.
I really couldn’t care less about the name.
What I need is a stable graphical interface, easy to configure.
No XF86Config hacking, etc.
Can I just make some points:
1) (All?) XFree86 developers are volunteers; without them I wouldn’t be writing this on my X Window System now. Support the people developing the code, whether development is slow or fast.
2) The X Window System is used in high-end systems; look at SGI graphics workstations and high-end CAD systems – most are running X.
3) X is getting faster; XFree v4 is much better than XFree v3, for example.
4) The client/server model is a good one. If you look at some other OSes, a GUI problem can take out the underlying operating system, because of the tightly coupled architecture.
5) The network feature of X is underused in the industry. For thin clients it is great: no disk, no OS, just plug in and go. Configuration can be done on one high-end server.
6) Many X extensions exist and more are being developed; this has always been part of the X modular extensions architecture.
And now Windows seems to need a client/server architecture:
MS Terminal Server, VNC, and pcAnywhere are doing their job far less well than X.
>>>>>
“(for example, direct changing of resolution was introduced just a few months ago with RandR extension, while Windows 95 could do that in 1995).”
Not without a reboot. 😉
Only NT/2K worked properly with changing colour depth and res without a reboot.
Up to then, Windows wanted a reboot (albeit not required).
X is horrible at multiplexing requests with a lot of I/O (lots of drawing operations, lots of pixmaps – even if the SHM extension is used).
One application can essentially hang the entire server, as under heavy I/O the entire server just blocks.
The fallacy here is that drawing is a server-side operation rather than a client-side one. This problem doesn’t occur with display servers like Quartz.
I’m not sure what everyone’s idea of “direct rendering” is, but I’ll say this: you’ll always need code external to the client applications to coordinate the GUI environment.
X places way too much on the server side (with the goal of network transparency, I suppose). Things like decorating windows should be handled client-side.
I’m not sure why everyone is calling the client/server relationship antiquated… it’s employed by virtually every GUI environment.
X’s big problem is the amount of data which passes through sockets as opposed to shared memory (even with the SHM extension), and the number of context switches which occur because of this (probably X’s greatest downfall).
I think it’s of paramount importance to keep the display server’s I/O load low, or performance will be lost on the most powerful systems due to poor I/O multiplexing mechanisms available on most Unices and the large amount of context switching required for each write/read call between client and server. Since clients and servers are both in userspace there should be no need for a context switch for every drawing operation (or multiple context switches per drawing operation) since clients can simply draw to a shared memory buffer.
Yes, this leads to more memory being consumed. Yes, this is what Quartz employs, and Quartz seems to have garnered a reputation for being slow. I say this is most certainly not the case… Quartz is one of the fastest and most powerful display servers in existence. The only way to prove that the OS X GUI’s slowness isn’t the fault of Quartz would be to completely replicate the Aqua look on top of XFree86 on Darwin. Until someone does this, I don’t think it’s really fair to claim that Quartz is responsible.
A solution such as this could be made Xlib compatible quite easily. It could also be X11 compatible for network applications by using a separate process to perform the drawing.
I think the primary concern most people have with X is the incredible complexity of the X server. A shared memory server would be extremely simple by comparison, and would offload much of the work onto the client libraries. Because each client would be a stand-alone process, the entire system wouldn’t lag due to pending drawing requests.
> > Being myself a web developer, I smart enough to simply resize the browser’s window to test page layout on smaller screens.
> So, you are smart enough to resize a browser with your mouse EXACTLY at 640 or 800 pixels?
Very smart… You can do that with a bookmarklet on your personal toolbar like:
javascript:void(outerWidth=640);void(outerHeight=480)
Guess what, it works on Win32 too…W00t! For more of these “bookmarklets”, see http://www.bookmarklets.com/tools/categor.html
Changing resolution (not viewport) on the fly is IMHO a silly triviality. I’m glad that, since it seems to be a big deal to some, we have it on X too now.
Being a web developer myself, I’m smart enough to simply resize the browser’s window to test page layout on smaller screens.
So, you are smart enough to resize a browser with your mouse EXACTLY at 640 or 800 pixels?
Get out of your narrow-minded behavior; a bunch of window managers (like fluxbox or windowmaker, to name a few) show you the height and width on the fly as u resize a window. And if the window manager u use or the OS u use can’t handle this, the frameset trick i told u about simply bypasses this lack (and of course, consider the frame size as the viewable browser area, not the whole browser window size).
You are not even ‘smart enough’ to retain the appropriate header on the message you reply to.
I replied to you, hence I set a different header that points directly to you. But I don’t think this has to be explained; you just seem to react like a kid because you’ve been upset, or maybe that’s just how you always are…
> > Being a web developer myself, I’m smart enough to simply
> > resize the browser’s window to test page layout on
> > smaller screens.
>
> So, you are smart enough to resize a browser with your
> mouse EXACTLY at 640 or 800 pixels?
My window manager (windowmaker) tells me the size of the window as I’m resizing it. I’m sure many others do too. I guess the “desktop environments” don’t.
I agree that it is generally not that important to have on-the-fly resolution changing: this is something I set up on a new installation and then never change. But it would be nice to have. At the very least it would make installation of X nicer.
I would really like to see the benchmarks showing X being slow. As a user I don’t find X slow at all (I’m using the binary NVIDIA drivers, btw). Try running something like FVWM and the speed will hit you: it’s not X that is slow, it’s toolkits like KDE or GNOME. The only speed complaint I have about the Linux desktop is the time KDE takes to start and to load programs; once they are running, no problem.

X has to continue to provide client/server for those who use it. MS Terminal Server and Citrix thin clients are becoming extremely popular here in NZ, and getting rid of the client/server model would be a massive step backwards. Building on direct rendering like MS did has problems: have you ever tried to run any kind of multimedia application, or play any kind of video, in a Terminal Server session? It’s not pretty. X’s client/server model handles all that fine.

What I would like to see from X is not bothering with 10 years of backwards compatibility, but throwing out some of the old code and bloat and building newer, faster routines while still keeping the core client/server architecture. Now that we have Xft and fontconfig for client-side fonts, why not move everything to that and get rid of server-side font rendering forever? The client/server model is a must for me because I have thin-client X terminals running off my machine every day.
I’ve used them and I love them. They really are more stable and in general much faster; “much” depends on what you compare. Lots of the newer XFree drivers are more than sufficiently fast for most folks, but my new laptop uses XiG and the performance perk-up was noticeable, especially when viewing a movie while doing other graphical things at once. It’s smoother. However, it’s not free $$$…
I’m reading many messages here claiming that few people use the client-server features of X. What are you basing this on? Maybe most people who dabble with Linux at home don’t take advantage of that capability, but that’s hardly representative of the Unix world as a whole. I’ve worked at several companies where the ability to remotely run applications over X was used daily by hundreds of workers. Not by the admins and IT staff but by the “regular” employees (developers, engineers, etc.). To lose that functionality would pose a significant problem to the way many, many people work.
Personally, I don’t really notice that X is any slower than Windows or OSX. I do notice that the high profile desktop environments are quite sluggish (GNOME, KDE, CDE) but if you use a lightweight windowing environment (icewm, xfce, blackbox) they are extremely responsive running on the exact same installation of X. I think a lot of the “problem” is improperly attributed to X.
It seems to me that all those people complaining about the client-server model of X don’t quite understand it. I can’t say I know it by heart, but:
In X client-server model, it is the server who does the rendering AND the display, so it gets displayed on the server.
iow, the video card, cpu, memory and display are all resources provided by a server.
The client just tells the server what to display, where, when and how.
Heck! even the mouse and keyboard are resources provided by the server. (never forget, though, that your app is run on the client).
I could be wrong about my assumptions on what users understand of the X client-server model, though.
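The division of labor described above can be sketched with a toy protocol. Nothing below is real X wire protocol; the command names and framing are invented purely to show “the client tells the server what to draw, and the server does the drawing”:

```python
import socket
import threading

# Toy sketch: the "server" owns the framebuffer and does all rendering;
# the "client" merely sends small requests saying what to draw and where.
server_sock, client_sock = socket.socketpair()
framebuffer = {}  # (x, y) -> pixel value; lives on the server side only

def server():
    f = server_sock.makefile("r")
    for line in f:
        cmd, x, y, val = line.split()
        if cmd == "PUT":    # a render request: the server touches the pixels
            framebuffer[(int(x), int(y))] = int(val)
        elif cmd == "QUIT":
            break

t = threading.Thread(target=server)
t.start()

# Client side: describe the drawing; never touch the framebuffer directly.
client_sock.sendall(b"PUT 10 20 255\nQUIT 0 0 0\n")
client_sock.close()
t.join()

print(framebuffer)  # the pixel ended up on the server, where the display is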
And actually, the GUIs that I’m aware of that came after X don’t use a client-server model. If we’re going to be pedantic about things, X would appear to be a special-case anomaly, originally developed to address some specific requirements of MIT as a descendant of Project Athena. With a few high-profile but often short-lived exceptions like NeWS, later GUI designers have almost universally shunned this design approach.
Also don’t forget NeXTSTEP. NeXT wrapped up Display Postscript into a model that was networked as well.
I think it’s notable that they did have this capability built in to their system. I believe they leveraged the Mach architecture to pull this off.
Then, you have these SunRay clients, and I have no idea what they use. They could simply be smart X clients; however, they also seem to need a dedicated server that fronts them to the actual application servers, so they may use some other protocol to talk to their servers, with the SunRay server talking X or Citrix as appropriate. Ultra-thin clients with really fat servers.
Of course, most places don’t need the desktop portability provided by something like the SunRays. But here we use Terminal Services, X and PC Anywhere all the time to talk to other machines. PCA is pure evil, but it works.
I would really like to see the benchmarks of X being slow.
I would put a link to some directfb vs xfree86 benchmarks, unfortunately the directfb.org site seems to be down at the moment, perhaps lots of people took sudden interest on dfb and the site went down 😉
I am typing this from a 1920×1440 screen on an AthlonXP 1.4 GHz with a FAST 32 MB GeForce2-400 card, latest XFree 4.3 (normal “nv” driver that is of the same quality as most XFree drivers) with gcc 3.2 on 24-bit color/60 Hz under Gnome 2.2.
It is UNUSABLE. Windows are SLOW to move, text appears 1 second after I type things.
The same machine runs fast at that resolution on BeOS (with the “Propose Mode” utility). The same model of card on a SLOWER Mac (G4 450 MHz) runs fast as well under OSX.
Now, please explain to me why XFree can’t be usable above 1600×1200 with the normal nv driver. I can’t use the nvidia drivers on this Linux installation: first, they completely crash any Linux I tried, and second, there are no nvidia drivers for the version of Linux I run (sorry, can’t say which linux that is, as it is an NDA’ed final release).
I know people will start saying “ah, it ain’t the nvidia drivers, so it doesn’t matter”, but the point is that the nv driver is of the same quality and capabilities as 90% of the XFree drivers. So the point is that either the driver OR XFree is just unusable above 1600×1200. I find this poor, as this is what this major linux distro comes with.
I can run BZFlag on my Linux box and play it with accelerated OpenGL graphics under Win2k on the winbox.
That’s accelerated 3d over a LAN with two different OSs!
The X server on the Winbox is called WinaXe.
This is a little off topic I know, but I was surprised to find how powerful the networking with xfree is, and how little bandwidth OpenGL required. It took so much work to get it to this stage, it would be a shame to see it shatter into many forks that promise so much, but leave us all waiting for the 1.0
How well does Quartz work then when you have a classroom of computers, and a teacher can see exactly on her computer what a student is doing?
Does it appear to the teacher sluggish or the same as it would locally? How does Apple do this?
Also, after reading that KernelTrap article where Linus and others were talking about how to improve the interaction with X, I came to understand that X alone is fast. I therefore also assume that GNOME and KDE are fast at moving windows around. So if you have an SMP machine you shouldn’t see too much of the slowdown.
I also agree with other people’s arguments about how the design of XFree needs to be changed:
– Keep the same core capabilities, but add some new ones.
– Remove some of the legacy stuff that isn’t used in modern systems.
– Solve the graphics card driver issues.
Even X11 sucks. A programmer must know what visual/color mode the current server supports; it is not transparent. It’s not like MS Windows, which has a simple RGB model where GDI truly manages the color rendering and may automatically dither or not.
I run at 1600×1200 and yes, performance sucks. Windows XP flies with regard to rendering – it just seems instantaneous. Above 1024×768, for some reason, XFree86 sucks donkey balls with regard to rendering performance. I’d wager that most of the “X doesn’t suck” crowd run at or below 1024×768, while most of the “X sucks” crowd run at higher resolutions and colour depths. I’ve heard that XFree86’s internal scheduler has problems and that’s what is causing the performance problem. This hypothesis seems plausible. One thing to note is that Red Hat’s kernel seems to be nicely tuned to XFree86. I’ve used Gentoo 1.4 and Slackware 9.0, and Red Hat 8.0 was faster in 2D performance. The RH kernel has the O(1) scheduler back-ported to it as well as low-latency patches – perhaps this has something to do with it.
A lot of discussion about changing resolutions on the fly misses out one (imho, most important) use for it – for multimedia, e.g. games or fullscreen video playback.
As far as client/server is concerned, it has been already pointed out that TCP/IP is not used, but local sockets, which are very fast and pretty much equivalent to shared memory.
The biggest problem is probably in the multimedia and similar uses, which are by definition against the spirit of what X was designed for.
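On that local-transport point: X clients on the same machine typically connect over a Unix domain socket (conventionally `/tmp/.X11-unix/X0`). The sketch below uses a made-up path and a trivial echo “server” just to show the mechanism; it is not X itself:

```python
import os
import socket
import threading

SOCK_PATH = "/tmp/demo-display-0"  # made up; real X uses /tmp/.X11-unix/X0
if os.path.exists(SOCK_PATH):
    os.unlink(SOCK_PATH)

srv = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
srv.bind(SOCK_PATH)
srv.listen(1)

def serve_once():
    conn, _ = srv.accept()
    conn.sendall(conn.recv(1024))  # bounce the "request" straight back
    conn.close()

t = threading.Thread(target=serve_once)
t.start()

cli = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
cli.connect(SOCK_PATH)
cli.sendall(b"draw-request")
reply = cli.recv(1024)

cli.close()
t.join()
srv.close()
os.unlink(SOCK_PATH)
print(reply)
```

Because the data never leaves the kernel’s local IPC path, a round trip like this is close in cost to a memory copy, which is why “client/server” does not automatically mean “network-slow” for local clients.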
It has to do with a lot of things.
XFree86 — there are fixes, code adjustments and tweaking that can be done. Anyone who says otherwise is on crack. There is a kind of jerky feel to every XFree86 setup that I have never been able to shake, and that I do not feel with Windows. I am not going to list all the possible X issues, but they are part of the problem.
Window Managers — XFree86 developers love to blame bloated badly written window managers for all the trouble anyone ever had with X. It is not true but they are part of the issue as well.
Widget toolkit developers — whether it is Qt, GTK+, XUL or whatever the hell it is that OpenOffice uses.
Desktop environment developers — you can have X tweaked to the max and Window Maker (insert your favorite window manager here) sings, but the minute you try out the latest KDE or Gnome your whole system slows to a crawl. Guess what: that is not X.
Personally, I would love to see the XFree86 guys, Gnome, KDE , QT, GTK, Mozilla and OpenOffice developers all get together and get their shite together on the speed issue. Someone should develop a plan to trap all these guys in a room and lock them in. Tell them make it faster or we won’t let them out!
That would be nice.
> I run at 1600×1200 and yes, performance sucks.
Hmmm, I run at 2560×1024 (dual head) and it runs fine 🙂
X doesn’t suck, it needs some updates… who knows, maybe a rethinking to adopt “new” functions. Server/Client is not a problem, maybe the current design has some issues, I don’t know, but it’s not directly because of the Client/Server paradigm.
As for resolution switching, like it was just mentioned, it’s useful for games, multimedia, tv, and in my case, to resize X when doing presentations with my laptop and the projector can’t handle the resolution. Having to restart X is not a nice option.
Another thing: what happened to xmove, the “screen”-equivalent program for X? That’s what I want… minimize an application here, maximize it on another computer.
X might have a chance at scaling up by hooking the draw functions into the GPU instruction set. This approach might also keep the CPU from doing so much I/O work.
You obviously don’t use X to play games or multimedia, otherwise you would know that there are other ways to get those resolutions when you are the only app rendering. The problem with no resolution changes is specifically for window managers and windowed applications. In a desktop you could not change the res from 800×600 to 1024×768 unless you used the virtual screen, which stuffed things up. On the other hand, games can switch to 320×200 or 640×480 in X fine.
X is not the problem. The problem is the development speed of the Xfree team and at the current rate we will probably see XFree 5.0 in 2005. That is what needs to be changed.
I would say that while I place my money on Fresco, there is no open source project anywhere remotely able to replace X11, and XFree86 specifically. A few things I would like in an X11 replacement:
– Addresses all the problems of X11 as it is, not just one problem while leaving the others there. There’s no use otherwise, because after a few years we would be seeing posts like “Need for Project XXX replacement…”. While maybe 20 years ago having something like Quartz was stupid and could never have been accomplished, I much prefer it to the old raster interfaces. No, I don’t mean having it PostScript-based… 🙂
– Binary compatibility with X11 is not necessary. Most of the important apps on X11 are used by people who probably won’t be interested in dumping X11. What I’d much prefer is somewhat compatible GTK+, Qt, etc. layers on the new replacement. Source compatible, if you like; the whole point is for developers to port their software to the new one easily.
– Full built-in OpenGL support. In addition, maybe something somewhat compatible with/similar to Direct3D, to facilitate porting 3D apps that use it to Unix. I much prefer full OpenGL support to having NVidia or ATI provide it via drivers – they don’t have to do that for DirectX on Windows or OpenGL on Mac OS X, do they?
– Multithreaded, especially the UI widgets: BeOS proved this can drastically improve responsiveness even when the hardware is too slow. You may say that hardware is getting faster and faster nowadays, but what happens if some new killer app chokes what we have today? Suffer?
– A single source of UI widgets. In other words, APIs would base their apps’ UIs on these widgets instead of their own. This would drastically improve consistency. Meanwhile, a standard HIG should be written so that different desktops wouldn’t cause a big mess of inconsistency.
You run at 2560×1024…and it runs just fine. How does it run compared to Windows? I’d wager that Windows runs so much smoother that the difference is like night and day. To me, that’s not fine. I guess I’m picky, but I’ve seen XFree86 run on just about every piece of hardware and it has left a lot to be desired in the speed department. And that’s the problem. A lot of the “it runs fine” people don’t compare it to Windows because they’ve never recently used Windows for an extended period of time with modern hardware (save, a Windows based on a 9x kernel that sucked). If all you’ve known is a Ford, it’s hard to appreciate what a Mercedes offers. Quite honestly, for my needs, the only thing Linux is lacking is good, fast, 2D rendering. It does everything else Windows does for me, better – which is the only reason I’m using it and putting up with jerky window movement, visible screen draws and all the other goodies X brings with it.
direct rendering IS needed. You are not thinking clearly.. client to server architecture can be added as a backwards-compatible option.. and i’d bet it would even improve.
where the h3ll is ADJUSTABLE and FORCEABLE anisotropic filtering for open-sourced DRI drivers?!? 3d looks like crap without it!
Also, I support the POV of toolkit consideration for client design enhancement within XF86 – then we’d have smooth window movements (read: backing store does not work for toolkits on X, i.e. Qt). Load mc in any terminal on XFree86 and move it over (for example) Mozilla or Konqueror as fast as you can… just look at that god-awful garbage. Now do the same thing under Windows or Mac OS X.
Yeah, no wonder AOL is sticking with windows…
First of all, I do not know the first thing about XFree86’s design, structure & implementation. I am writing now to give my opinion upon reading the archive of the mailing list above.
It is obvious (and they’ve admitted it readily) that the XFree86 core developers are afraid of losing control. In the end, that’s what this is all about. I know the feeling of losing one’s “baby”, whether by a hostile take-over or some other reason.
For the most part, what matters more than WHO controls the project (whether it’s the current developers for life or the community) is that whoever DOES control it will do a Damn, Good Job of it. If a dictator leads its country to greatness, no one complains. If a small group of representatives make decisions for a community that result in rewards, no one complains.
In the same respect, most people who are affected by the XFree86 project (whether as users or as developers) won’t give too much of a hoot if the leaders of the project are doing a fine job.
This brings us to the problem of dealing with leaders who are ineffective in the opinion of the majority of the people affected. Hopefully, in the end, that spirit which puts the project and community above all else will prevail. Hopefully, if the existing core developers really are too slow or too inadequate, they won’t be too proud to give up their “baby” and hand it over to others they believe are more suited to leading it. OTOH, if they decide to keep as tight a rein as they have in the past (if not tighter), then they had better do a Damn Better Job than anyone else volunteering to replace them would.
As the community of users and developers grows, it becomes harder and harder (and eventually impossible) to please everyone. What we need now are brilliant people who can think laterally as well as vertically with regard to the various issues they face, but more importantly, who can think beyond their own personal biases.
“NVidia or ATI provide it via drivers – they don’t have to do that for DirectX on Windows or OpenGL on Mac OS X, do they? ”
Actually, yes they do. There is no such thing as full OpenGL support in the window server. The way OpenGL works these days is via ICDs (installable client drivers). They are almost entirely written by the graphics card maker, from one end to the other.
It starts at the header file. On my system /usr/include/GL/gl.h is written by NVIDIA, so I can access NVIDIA-specific extensions. When you use any OpenGL function, you call into the GL shared library, which is written by the vendor. The GL library then talks to the kernel driver which, you guessed it, is written by the vendor. The only support required from the windowing system is some glue functionality for binding OpenGL contexts to windows. This would be GLX or WGL or whatever OS X calls it. Major parts of GLX (and I presume the same is true for WGL) are also provided by a vendor-specific module.
The design is this way to allow vendors to fully optimize OpenGL for their hardware. An OpenGL ICD isn’t analogous to a network card driver. It’s like the network card driver + the entire network subsystem + the part of the C library that exposes network-related functions! This architecture exists on all OSs, including Windows and Linux. In fact, the majority of the Windows and Linux codebase for the NVIDIA drivers is identical, because so much of it is platform independent.
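The layering described in the last two comments can be mimicked in miniature. All class and method roles below are invented stand-ins for illustration, not real APIs: the point is that the windowing system supplies only glue, while the vendor module supplies the actual GL entry points.

```python
# Miniature stand-in for ICD layering. All names here are invented.
class VendorICD:
    """Stands in for the vendor's libGL + kernel driver."""
    def glClear(self):
        return "cleared by vendor code"

class WindowSystemGlue:
    """Stands in for GLX/WGL: its only job is binding a context to a window."""
    def __init__(self, icd):
        self.icd = icd
        self.bound_window = None

    def make_current(self, window):
        self.bound_window = window  # the glue's whole contribution
        return self.icd             # GL calls then go straight to the vendor

glx = WindowSystemGlue(VendorICD())
gl = glx.make_current(window="win-1")
result = gl.glClear()
print(result)
```

Once `make_current` has done the binding, every drawing call bypasses the window system entirely, which is exactly why vendors can optimize the whole path.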
I run Windows XP, Me and two versions of Linux on my machine. Each runs at 1280×1024. They all have good performance. My only beef w/X is that my optical mouse tracks too fast unless I type xset m 1/2.8 to slow it down. X doesn’t suck. It’s just that XFree86 needs some improvements and a faster dev cycle.
Those who’ve not used Sun’s or SGI’s X don’t know what they’re talking about when they say that the X Window System is a bad framework.
IMHO it’d be a lot more productive to try and write a new, from-scratch X implementation than to try and dump X. Netscape had to scrap all of its code, but it didn’t try to make Mozilla use a different standard.
man…
you people never heard of Ctrl-Alt-+ and Ctrl-Alt-- ?
X could change resolution on the fly for aeons…
Speed and security: I hate X11 on both counts.
If X11 had been fast even a year ago, Linux would be No. 1 now.
So I suspect all the X11 developers shook hands with M********…
i.e. “MSX11”, I mean, just like WINTEL…
do you understand?
Why people keep confusing changing resolution and root window resizing is beyond me. I thought Eugenia was more knowledgeable than that.
And why people think this is a “must have” feature is just amazing. How many times a day do you change your desktop size? I don’t think I have ever done it even once.
i.e. “MSX11”, I mean, just like WINTEL…
do you understand?
No. I really don’t.
People who think it’s almost useless forgot about games. Games NEED it.
Dude, games have it under X. It’s just the windowed apps that have a problem.
> I’d wager that most of the “X doesn’t suck” crowd, run at or below 1024×768 while most of the “X sucks” crowd run at higher resolutions and colour depths.
I run X in 1600×1200 32 bpp on FreeBSD and I say it doesn’t suck. As for the geezer with the geforce, use NVidia’s drivers instead of the non-accelerated “nv” driver included with X, they [nvidia’s drivers] are quite fast and very crispy.
Sure, X ain’t perfect, but it makes do, and it does a somewhat decent job of it. If you have a viable (read: with lots of apps and good driver support) alternative, I’d love to hear it.
I use a resolution of 1400×1050. Performance does not suck.
@Bascule
> X places way too much on the server side (with the goal of
> network transparency, I suppose) Things like decorating
> windows should be handled client side.
That’s not the real problem. The real problem is that there’s only _one_ communication channel over which all the client/server communications happen, and hence one application communicating with X can block all the others.
One solution would be to make X multithreaded, with a 1:1 correspondence between client applications and X threads, where each thread would behave like the whole of X behaves now. Of course they’d still have to compete for various resources on the server side, but this contention would be much faster since more fine-grained locking would be possible, in effect emulating a direct rendering mechanism.
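The 1:1 threading idea above can be sketched in miniature (pure illustration; nothing here resembles actual X internals): per-client work proceeds in parallel, and only the final touch on the shared resource is serialized.

```python
import threading

# Sketch of "one server thread per client": parsing and rendering run
# unlocked in each client's thread; only the shared framebuffer is locked.
framebuffer = []                     # the shared, contended resource
framebuffer_lock = threading.Lock()  # fine-grained: held only briefly

def client_thread(client_id, requests):
    for req in requests:
        rendered = f"client{client_id}:{req}"  # per-client work, no lock
        with framebuffer_lock:                 # serialize only the commit
            framebuffer.append(rendered)

threads = [
    threading.Thread(target=client_thread, args=(i, ["draw", "fill"]))
    for i in range(3)
]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(len(framebuffer))  # all six requests landed, none lost to races
```

The design choice being illustrated: a slow client only stalls its own thread, not the whole server, because the lock protects a brief commit rather than the entire request.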
>>People who think it’s almost useless forgot about games. Games NEED it.<<
Games can change the resolution, no problem. The problem in X11 is that the framebuffer size does not change, so you get a virtual framebuffer of a different size. But that’s not a problem for fullscreen applications like games.
In general, there seems to be no reason to change anything in X11 for games. 2D games work fine (with SDL, for example), and 3D is competitive with Windows, as the nvidia drivers prove. Loki Games did all the work that was needed for native games on Linux. The only things that are missing are more games…
Heck! Direct rendering doesn’t interfere with the client-server model!
It IS the server that renders and displays!
Damn it, how does one get that into your head?
I run X at 1280×1024, and it runs faster than Windows, except for DVD playback with MPlayer (with Xine it’s better than on Windows, except with animated movies), and all that with a PII at 266 MHz with 128 MB of SIMMs, a 19″ monitor and a PCI TNT card, when all my RPMs are i386. (I’m dying to try Gentoo; can anyone tell me if I would see a performance increase there?)
Now, the resolution thing: it has always been there! The point that it kept the display size constant is totally irrelevant. I’m sure it seemed useful at the time, and when people decided that resolution + display-size change was needed, it was implemented and works great.
Now, as someone has already mentioned, one problem with X is development speed. Since it’s getting increasingly more attention, I hope this won’t be an issue for long.
As for the kernel giving a hand to X, that sounds fair to me. The kernel will give a hand to the “X server”, since it’s the server that does all the rendering and display.
Anyway, the kernel doesn’t need to know who it is giving a hand to.
I see that scrolling through this webpage with Konqueror is way slower than scrolling the same webpage with Internet Explorer. But why is that?
For me, it is Anti-Aliasing. Konqueror does do anti-aliasing, making it slow. KMail doesn’t here and scrolling is very fast there. Also, just enable anti-aliasing in Windows and change all font sizes to 14, and see what happens to the speed. Or just enable it, open a Word-document, zoom in a bit and scroll.
Why do people think X should go? Do you trolls have any hard evidence that the design of X is a major drawback? Ask most X developers and toolkit hackers and you would learn that this is not the case. What is slow in X compared to comparable architectures comes down largely to two things.
1. Toolkits are suboptimal. They could do a much, much better job in many areas, e.g. redrawing only what is actually needed, and using better algorithms and video card support for e.g. scrolling.
2. Drivers suck. Making video card drivers that take full advantage of the hardware is not an easy task, and incomplete/nonexistent open specs don’t make it easier. The transition to DRI might also help.
So instead of throwing away 15 years of work and some millions of lines of code (multiply that quite a few times if one includes toolkits, window managers, raw X apps and so on), one could improve toolkits, make better drivers, and add some extensions/improve existing algorithms in XFree.
Oh, besides, X is network transparent. But when run locally, this goes over a Unix domain socket, which is not much more than a memcpy(). SysV shared memory between the client and X server is also used when run locally.
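Point 1 above, redrawing only what is actually needed, is essentially damage tracking: accumulate dirty rectangles and repaint only their union instead of the whole window. A minimal sketch, with invented numbers:

```python
# Minimal damage-tracking sketch. All rectangle coordinates are invented.
def union(r1, r2):
    """Smallest rectangle (x1, y1, x2, y2) covering both damage reports."""
    return (min(r1[0], r2[0]), min(r1[1], r2[1]),
            max(r1[2], r2[2]), max(r1[3], r2[3]))

# Two widgets report damage; the toolkit merges them into one dirty region.
dirty = union((10, 10, 20, 20), (15, 15, 30, 25))
repainted = (dirty[2] - dirty[0]) * (dirty[3] - dirty[1])
full_redraw = 100 * 100  # a 100x100-cell window repainted wholesale

print(dirty, repainted, full_redraw)  # far fewer cells than a full redraw
```

Even this crude bounding-box merge repaints a small fraction of the window; real toolkits refine it further with region lists rather than a single rectangle.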
I have three heads on my main Linux box. Two run at 1600×1200 and the other runs at 1920×1200 (Sony GDM-900 widescreen jobby). Performance is great using the nvidia binary drivers – I’ve got a GF4 card using twinview for two of the displays (one 1600×1200 display and the 1920×1200 display) and a GF2MX card for the third. Xinerama glues the two together.
The only “problem” is experienced when dragging a window at high speed from the twinview displays to the display on the GF2MX – I get some tearing when crossing the boundary between the two. Other than that it’s great.
The open source nv driver is known to be of very low quality – largely because NVidia haven’t been very forthcoming with the required documentation. The binary driver is much, much faster.