David Dawes is perhaps the most active XFree86 developer, and he is also the lead founder of the project. He works for Tungsten Graphics, the main company working on the XFree86, DRI and Mesa codebases today. We are happy to host an interview with David, discussing the present and future of the XFree86 project. Update: Still confused about what a VSYNCed desktop looks like? Read here.
1. The Weather Channel recently funded the development of ATi Radeon 8500 support. How did this happen, and what’s in it for the Weather Channel besides the obvious publicity? Do you think more sponsorships will emerge that further help open source development?
David Dawes: I can’t comment on any details regarding the Radeon 8500 work other than what’s already been announced. This and other things happening make me optimistic generally about sponsorship for Open Source development.
2. What exactly is Tungsten Graphics’ mission? What services do you provide for a fee, and what services do you provide for the XFree project?
David Dawes: Tungsten Graphics provides consulting services primarily in areas related to X and OpenGL, which can be anything, including (but not limited to) driver development, infrastructure development, and application-specific work.
Whatever Open Source work Tungsten Graphics does always gets contributed back to XFree86. Tungsten Graphics has also sponsored my work, in particular getting XFree86 4.2.0 released, and my ability to continue to support, develop and maintain the XFree86 code base. This support has always been a strong factor for me in choosing which company I work for: the ability to keep XFree86 free and independent, and to work on XFree86 with no encumbrances.
3. Linux is now almost a mainstream operating system; however, it is currently going through its post-hype era. How does this new era treat the XFree86 project? Do graphics companies continue to support XFree86 with drivers or fully documented specs as they did 1-2 years ago?
David Dawes: The level of interest in XFree86 is still strong. We have fewer people paid to work full-time on XFree86 than at the height of the boom, as support from hardware and graphics companies seems to go through cycles. Still, major graphics companies seem to be interested in having their hardware well-supported by XFree86 and Linux, but more of them are now doing that work in-house. Those that either don’t or can’t do it themselves either provide the specs to consulting companies (like Tungsten Graphics) for the purpose of writing drivers, or specifically provide them to individual XFree86 developers.
4. Are there plans for Kyro/Kyro-II 2D drivers on XFree86?
David Dawes: As far as I know, PowerVR has binary-only drivers available from their web site. I’m not aware of any Kyro plans for open source drivers.
5. MacOSX and the (leaked) BeOS 6-Dano version have this great feature where, when you move a window on your desktop, everything stays smooth and even readable, kinda like using VSYNC to update the screen as many times per second as the current refresh rate. Is this feature planned for XFree?
David Dawes: I don’t know of anyone who is working on that feature at the moment.
6. Why isn’t there an automatic failsafe mechanism in XFree86 that tries to load the VESA 2.0 driver when a graphics adapter is not supported? This way a lot of newbies would find their way around Unix instead of feeling “locked” into the command line without… being able to use vi or Emacs to edit the XF86Config file.
David Dawes: This is one of the things I’m currently working on in my spare time: to make configuration automatic. My goal is to make the XF86Config file optional and to provide a facility for the X server to choose the best driver for the given hardware, with appropriate fallbacks if the hardware isn’t explicitly supported. This is something I think XFree86 needs, because, as you mention, it can be very difficult for people new to this environment to get it up and running.
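(Purely as an illustration of the probe-and-fallback idea David describes, not actual XFree86 code; the driver names and probe functions below are hypothetical.)

/* Hypothetical sketch of automatic driver selection with a VESA
 * fallback. The probe functions are made up for illustration. */
#include <stdio.h>

typedef struct {
    const char *name;
    int (*probe)(void);   /* returns 1 if this driver supports the hardware */
} Driver;

static int probe_native(void) { return 0; }  /* pretend no native driver matches */
static int probe_vesa(void)   { return 1; }  /* VESA 2.0 works on most cards */

int main(void) {
    /* Ordered from most specific to most generic. */
    Driver drivers[] = {
        { "native", probe_native },
        { "vesa",   probe_vesa },
    };
    for (int i = 0; i < 2; i++) {
        if (drivers[i].probe()) {
            printf("Using driver: %s\n", drivers[i].name);
            return 0;
        }
    }
    fprintf(stderr, "No usable driver found\n");
    return 1;
}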
7. Installing, uninstalling, or even… rendering fonts with XFree86 today is not a strength. Installing TTF fonts, and most importantly making them recognized by all applications, is definitely not as easy as in Windows, while the rendering quality is also sub-par. The FreeType guys say that this is an XFree86 problem, because XFree86 was never designed to support TTFs. What is your opinion on the subject, and what can be done about it? Also, is there any chance that the XFree86 project would be allowed to freely distribute the Microsoft/Apple web fonts (like Verdana, Tahoma etc.), which today are the standard fonts used on the web?
David Dawes: Regarding the “standard” web fonts, their licensing prevents XFree86 from redistributing them. XFree86 does have a professionally designed font family called “Luxi”, which was donated by Bigelow & Holmes. There’s a good chance that we’ll have additional high quality fonts before too long.
8. Apple recently introduced Quartz Extreme for MacOSX, bringing 3D functionality to the 2D desktop. It is also known that Microsoft is developing something similar for the Longhorn version of Windows. Most notably, the biggest speed increase with such techniques would be in the transparency field. How far are we from a 3D composition rendering technique for a 2D desktop on XFree?
David Dawes: There has been some work on a new rendering model for XFree86 that provides some more advanced composition techniques (including transparency); this is currently implemented in software. For XFree86 5.0 we’ll be investigating this as part of our review of rendering models, and seeing whether a hardware implementation would be more appropriate.
9. X is a protocol created in the ’80s, so it certainly carries a lot of legacy. These legacy issues have often been at the center of long discussions claiming that they keep XFree86 behind the times. Is this true? Would you agree to removing old legacy code and trying to “modernize” the XFree86 architecture and code, possibly at the expense of backwards compatibility?
David Dawes: The age of the X protocol has its pros and cons. The biggest pros are that it’s widely used and has a high level of interoperability. X was designed to be extensible. This makes it possible to add new features while retaining compatibility with legacy applications, so the age of the protocol doesn’t force XFree86 to be behind the times. It is a legitimate question as to how long we should keep the legacy code around. I foresee a time when we provide backwards compatibility via some type of compatibility module that’s loaded only for legacy applications, while we move forward making best use of new graphics hardware and technology.
10. What are the main features that need to be added or fixed in XFree86? What does the future hold for XFree86, feature-wise?
David Dawes: Well, as I said previously, I think configuration/auto-configuration is a major area of XFree86 which needs improvement. There’s also work in progress now in the area of fonts and rendering models. I’m looking towards XFree86 5.0, which will be the next significant step in XFree86. We’re only just starting to think seriously about it. We’ll start by re-evaluating what we would like from a graphics/windowing system, and not limit ourselves to the ones that currently exist. With XFree86 4.0 our main focus was on the device-dependent component of the X server (DDX), and to do that we needed to provide a more modular infrastructure. The features that came out of that process showed how much it was needed, and it has given us a solid DDX base from which to expand into other areas. For 5.0 I expect that we’ll move more into the device-independent (DIX) and protocol areas as well as making some adjustments to the DDX area based on our experiences with 4.x.
cool. he ignores the ttf font question about why it’s so poor on XFree86 and tells us what we know (regarding the ms web font distribution).
still, a nice tiny insight into what’s up and coming in xfree86. shame it wasn’t elaborated more on what the changes from v4 to v5 will be etc.
> cool. he ignores the ttf font question about why it’s so poor on XFree86 and tells us what we know (regarding the ms web font distribution).
Yes, I emailed him two minutes before you posted this, to ask him if he could send us his opinion on the issue…
Minor correction – Tahoma is not part of the Microsoft web core font set.
5. MacOSX and the (leaked) BeOS 6-Dano version have this great feature where, when you move a window on your desktop, everything stays smooth and even readable, kinda like using VSYNC to update the screen as many times per second as the current refresh rate. Is this feature planned for XFree?
what the hell? is that a real question?
are you asking if it will support double buffering?
What is wrong with that question? The VSYNC thingie is not only about double buffering; there are more things involved. So I had to explain exactly what I meant, both to David and, most of all, to readers, most of whom are not aware of this feature.
If you want to troll, go elsewhere, not here.
My XFree desktop is perfectly readable while dragging an opaque window.
Ah, now I understand why you have a problem with that question:
Because none of you have seen a VSYNCed 2D desktop in order to compare it to the “traditional” ones.
NOT all MacOSX installations support the feature. When there is not enough bandwidth (because the gfx card is crappy and can’t pull off this feature in 32-bit color, because it is a G3, or because the resolution is too high), MacOSX falls back to the “traditional” refresh.
As for the BeOS 6/Dano version, only a handful of BeOS users have seen this leaked version of BeOS that supports that feature. If you have seen this feature and you would like to comment on it, please do.
Of course, the smoothness of VSYNC is not very visible on LCDs, but it makes a whole lot of difference on CRTs!
Based on these facts, I understand why you do not understand this question or what a smooth VSYNC’ed desktop looks like.
I know David understood me very well though.
i actually thought it was a good question. i am always impressed with the quality of quartz. i hope some of the characteristics of quartz find their way into future versions of X; it would only make linux and the other BSD flavours as featureful as the already available mac os x and future releases of windows.
another thing too: i’m not entirely sure of the hardware/technology behind it, but even on a crt monitor with a refresh of 60Hz i cannot notice any clipping or blurriness of text while a window is being moved. that to me is impressive when compared to what is being used on X right now and the quick draw technology that windows uses.
BTW, the reason I had to explain extensively what I meant in question #5 is that there is no name for this technology. There is no universal name for it, for example “doubleX supa drupa Blah”, so I had to give David a full description of what I really meant.
I know that David understood what I meant, but thing is, if you haven’t seen it with your own eyes (on a CRT monitor), you probably won’t be able to visualize the smoothness and coolness of such a feature.
If you want Quartz then you’d better start from scratch, or just use OS X. X11 is decidedly NOT going in that direction. X11 developers are still looking at technical means to bypass X11 design faults made at the very beginning. In my opinion they ought to virtualize as much of X11 as possible in a simpler graphics library which developers could port to today; then, when it comes time to get rid of X11 once and for all, that library could be ported to the new display server.
The only good news is that fewer apps are using X11 directly, instead using graphics libraries which aren’t dependent on X11. Unfortunately they still don’t behave the same, so the major old problem will remain for decades. Personally I find X11 more bothersome than simply using the terminal; it’s far too chaotic and human-unfriendly.
I just asked my husband (ex-Be Inc. engineer, used to work on that stuff) to share his insights about it and he told me that:
“You can’t understand what it’s like until you’ve seen it in action. I tried to explain the same issue 10 years ago when I was doing AtariST demos, and people wouldn’t get it.”
As for Strobe’s comment, no. I believe that this feature can be added to XFree. The feature is not Quartz-specific. BeOS had it too on the version that never shipped. It just needs someone to sit down and code it. 😉
I have several problems with X.
I think it has been an astounding success (something that was designed in the ’80s and still works really well?), and will be with us for a couple more years even.
However, I think even they are seeing they should have moved things a lot more server-side. Your hardware is at the server side (the 3D card), and the user is at the server side (GUI policy/prefs). Does it really make sense that the app creates a pixel-based representation of the GUI at the _client_ side?
I think the way forward is something like Fresco, aka Berlin (www2.fresco.org), which probably won’t appeal to most people here because it’s really a research project, slow, and years (?) off from being usable. Nevertheless, it holds a lot of potential, and tries to do the Right Thing.
As an example of coolness: it’s completely device-independent (no more pixels!), something even MacOSX doesn’t do properly. And yes, everything (windows, buttons, text, …) can be hw-accelerated using OpenGL; it can even output to PostScript or, if you’re the conservative type, pixels.
As for problems I have with the XFree86 implementation (not the X design):
I really hope:
– they clean up their input layer. See Linux’s input API, or the GGI project’s GII library.
– they chop the whole thing up in little pieces: I’d like to be able to run DRI fullscreen without the 2D windowing management or X network protocol. I imagine there’s other people that want to dump other pieces etc. Modularity is good.
– do they really need to run the whole thing as root? jeez.
– the font handling gets sorted out. Actually, FreeType is quite good; it’s just that there’s a need for a lot of other infrastructure. I heard rumblings of a “unitype” service from the FreeType people to standardize font locations etc. on Unix systems (everything that doesn’t have to do with the actual rendering of fonts). I’m sure it’ll get sorted eventually.
I cannot agree with you there about moving things to the server; it is not the solution. They should strive to remove as much as possible from the X server.
What do you really need as a developer of a desktop environment? You need input and surfaces to draw on, nothing else.
First they should split off the network transparency part. I bet 90% of all X users are connected to a locally running X server at all times, and those who are not are probably running their connection over a LAN, so they could probably afford a less efficient protocol.
If they ditch the network part, all communication with the X server could run over the locally most efficient means of communication. As it is today, everything runs on BSD sockets, which are less than ideal when connecting to local servers on most operating systems.
Make the extreme paranoia optional. Every client should be able to get as low-level access to the hardware as possible. If a client behaves badly, I for one could take that extra 10s to kill off the client manually and then have the X server reset the desktop state. It’s not a big deal if a client paints out of its bounds, because the data I was working with is still intact.
The biggest error in the X protocol is that it tries to abstract away the actual hardware at too early a stage, while not providing any tools to compensate for this HUGE loss in client-side flexibility. This has up until now been solved by ADDING to the protocol instead of exposing more of the hardware: XRender, DRI, GLX, MIT-SHM, double buffer, XShape, XVideo. The fact that X11 needs all these extensions to even function remotely like a modern display system shows how much of a failed architecture it is.
BTW–does anyone have any information about the expected release date of 4.3? Thanks!
I don’t know when it is coming out, but I know that XFT 2 will be included, which includes native AA support and stuff. I think Mozilla will support the new library out of the box.
However, the new XFT library will be incompatible with both KDE 3 and Gnome 2, so these projects will need to add support for the new font library in their future versions, in order to get native AA support.
There are already some patches around for XFT 2 support in Qt and KDE. I don’t know if they’re included in the 3.1 release, but they do exist. A lot of the changes for XFT are actually part of Qt though, so you need to patch both to get everything working nicely.
Rich.
Well, you’re advocating the low-level approach, which is fine – if you want to manipulate pixels.
But we are moving towards a higher and higher level of abstraction. Taking advantage of hardware that accelerates these higher levels is going to be increasingly difficult.
If all you have is pixels, and you have an imaginary video card that can accelerate, let’s say, font rendering, how are you going to take advantage of that? You’d have to create an extension, and make sure all libraries that use fonts use that extension.
Repeat that for every advance in graphics hardware. Painful.
I think I haven’t explained myself very well, actually. The example I gave with Fresco is not the best analogy because its scope is slightly different.
With Fresco it’s actually the entire GUI that moves server-side. The underlying “graphics library” can indeed be very dumb.
http://www.directfb.org/screenshots/dfbsee.png
😉
Nah, XFree does a good job, I just think it does too much… Currently I’m in a very strong “less is more” mood. Gtk on DirectFB is buggy and useless but still damn impressive (fast). I wish I understood a little bit more about DirectFB. They constantly have interesting news (like the rootless X server) but I never really get what those things are, what they do and what they don’t. For example, I just don’t get what a “rootless” X server means.
All I know is that it supports alpha blending:
http://www.directfb.org/screenshots/XDirectFB-Rootless-Shaped.png
Sorry anonymous, but things are both better and worse than you think. The requirements for a desktop environment are considerably more than just input and drawing, the main one being coordination, provided in X via the definition of a root window and properties support. We also need support for overlays (such as TV), efficient event dispatch (as provided by X11’s Window structure), and this just scratches the surface.
Fortunately the XFree86 implementation is less brain-dead than you suggest and can use shared memory to speed things up if the client and server are on the same machine. This still leaves a cost in terms of the serialisation of all the data into a neutral format (X11) but it doesn’t involve a trip through the OS network stack.
Personally I would like to see quite a lot of changes to X11:
– Stored server side vector shapes (XRender looks to be solving this one)
– Removal of cruft from XLib
– Window transparency in hardware
– Improvements to the XVideo API that make it work properly (e.g. you can’t find out the strength of the signal from a TV tuner at the moment, so you can’t use it to scan for channels).
– The possibility of some way to upload code to the server (possibly non-Turing-complete) in order to reduce the number of round trips for things like mouse-over effects.
– And as the original article said, I want easier configuration.
Rich.
Everyone agrees on XLib being crufty. However, I’ve heard there’s an alternative to XLib called XCB.
http://www.linuxshowcase.org/massey.html
It’s much more lightweight (27kb). I haven’t really looked into it yet. Of course it’s not useful for all the existing libraries and apps that use XLib, but I can imagine there are some people who are really interested in it (the embedded space).
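For the curious, a minimal client using the modern XCB API looks something like this (illustrative only; the API has evolved since the paper linked above). It just connects and prints the size of the first screen:

/* Minimal XCB client: connect to the X server and print the
 * dimensions of the first screen. Build with: cc xcbdemo.c -lxcb */
#include <stdio.h>
#include <xcb/xcb.h>

int main(void) {
    xcb_connection_t *conn = xcb_connect(NULL, NULL);
    if (xcb_connection_has_error(conn)) {
        fprintf(stderr, "cannot connect to X server\n");
        return 1;
    }
    const xcb_setup_t *setup = xcb_get_setup(conn);
    xcb_screen_t *screen = xcb_setup_roots_iterator(setup).data;
    printf("screen: %ux%u pixels\n",
           screen->width_in_pixels, screen->height_in_pixels);
    xcb_disconnect(conn);
    return 0;
}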
First they should split off the network transparency part. I bet 90% of all X users are connected to a locally running X server at all times, and those who are not are probably running their connection over a LAN, so they could probably afford a less efficient protocol.
While the protocol definitely has issues, what you’re talking about isn’t protocol-dependent, but a failure of the underlying IPC mechanisms.
If they ditch the network part, all communication with the X server could run over the locally most efficient means of communication. As it is today, everything runs on BSD sockets, which are less than ideal when connecting to local servers on most operating systems.
And what IPC mechanism do you propose to use? Most systems have an extremely limited selection of IPC mechanisms, which are usually just pipes and sockets. Often times pipes will be implemented as sockets, thus sockets become the only IPC mechanism available.
The only other choice is SysV message queues, which are connectionless and therefore difficult to use for the purposes of a display server. Multiplexing multiple connections basically comes down to using one queue, which may lead to some issues such as any application connected to the server being able to “spoof” messages as coming from a different application. The API is also quite atrocious. The only advantage over sockets is typically increased speed.
The only real solution, in my mind, is shared memory plus sockets, which is basically what X does through the SHM extension. Where X fails in this respect is in not placing the IPC code below the drawing library; it sits beside the drawing library as an extension. X also suffers from too many widget sets, leading to inconsistent interface appearance and user interaction.
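A rough sketch of the shared-memory-plus-socket pattern being described, using POSIX shm and a Unix domain socket for simplicity (the segment name, socket path and message format are made up; this is not how MIT-SHM is actually implemented):

/* Sketch: POSIX shared memory carries the bulk pixel data, while a
 * Unix domain socket carries only a small "data is ready" message. */
#include <fcntl.h>
#include <string.h>
#include <sys/mman.h>
#include <sys/socket.h>
#include <sys/un.h>
#include <unistd.h>

#define FB_SIZE (640 * 480 * 4)

int main(void) {
    /* Create and map the shared "framebuffer". */
    int shm_fd = shm_open("/demo_fb", O_CREAT | O_RDWR, 0600);
    ftruncate(shm_fd, FB_SIZE);
    unsigned char *fb = mmap(NULL, FB_SIZE, PROT_READ | PROT_WRITE,
                             MAP_SHARED, shm_fd, 0);
    memset(fb, 0xff, FB_SIZE);          /* "draw" into shared memory */

    /* Small control message over a Unix domain socket: no pixel
     * data crosses the socket, only a notification. */
    int sock = socket(AF_UNIX, SOCK_STREAM, 0);
    struct sockaddr_un addr = { .sun_family = AF_UNIX };
    strcpy(addr.sun_path, "/tmp/demo_display.sock");
    if (connect(sock, (struct sockaddr *)&addr, sizeof(addr)) == 0)
        write(sock, "damage 0 0 640 480\n", 19);

    munmap(fb, FB_SIZE);
    close(shm_fd);
    shm_unlink("/demo_fb");
    return 0;
}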
I’m all for scrapping X entirely and starting over with a new Quartz compatible display server built on a modified CoreFoundation using sockets/SysV shm in place of a CFMachPort in Run Loop Services. If people want backwards compatibility with X, it should be something that sits on top of the display server like rootless X does in OS X.
Of course this effort would require many, many skilled programmers who are willing to devote a great deal of time over the next few years to such a project. And this is, of course, only for writing the display server. Someone would still have to either rewrite GNUstep to make Quartz calls instead of X/DPS ones or start from scratch rebuilding the NeXT APIs. In other words, due to the sheer developer effort with negligible benefits, this isn’t going to happen. Either switch to OS X or be stuck with X.
Berlin is a nice toy, but I don’t ever see it entering production use as an X replacement. It will always be slow due to a number of inherent design decisions.
“””The only other choice is SysV message queues, which are connectionless and therefore difficult to use for the purposes of a display server.”””
Have you tried Local Domain sockets? Or perhaps implementing a stateful protocol over the connectionless IPC?
“””Multiplexing multiple connections basically comes down to using one queue, which may lead to some issues such as any application connected to the server being able to “spoof” messages as coming from a different application.”””
Eh? Did you try IPC_STATing the message queue and obtaining the pid of the last message (msg_lrpid)?
“””The API is also quite atrocious. The only advantage over sockets is typically increased speed”””
The API offers quite a bit more flexibility than sockets as well, e.g. message priorities/types, persistence, etc. Also, the atrocious API can easily (and should) be wrapped up into the equivalent of Xlib, so the client would have no knowledge of the underlying IPC mechanism.
Many apologies: that should be msg_lspid
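For reference, a small sketch of the msgctl()/IPC_STAT call under discussion; whether msg_lspid (the pid of the last sender) is really enough to prevent spoofing is exactly the point being debated:

/* Sketch: inspect a SysV message queue and read the pid of the
 * last process that sent to it (msg_lspid). */
#include <stdio.h>
#include <sys/ipc.h>
#include <sys/msg.h>

int main(void) {
    /* Create (or open) a queue for demonstration purposes. */
    int qid = msgget(ftok("/tmp", 'q'), IPC_CREAT | 0600);
    if (qid == -1) { perror("msgget"); return 1; }

    struct msqid_ds info;
    if (msgctl(qid, IPC_STAT, &info) == -1) { perror("msgctl"); return 1; }

    printf("last sender pid: %ld, messages queued: %lu\n",
           (long)info.msg_lspid, (unsigned long)info.msg_qnum);

    msgctl(qid, IPC_RMID, NULL);   /* clean up the queue */
    return 0;
}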
I think the way forward is something like Fresco, aka Berlin (www2.fresco.org), which probably won’t appeal to most people here because it’s really a research project, slow, and years (?) off from being usable. Nevertheless, it holds a lot of potential, and tries to do the Right Thing.
I’m really close to the developers and currently working to get it running on Windows. However, it is not slow moving; that is a misconception, because:
– They don’t release too often. 0.3 is coming out, and the current CVS version blows X away feature-wise (though it lacks a lot of things, like drivers and applications).
– They have a weird history. Berlin itself was quite old: it was written in assembler in the beginning, and then in ’98, because of Stefan, they forked on Fresco, an X toolkit, and started a new chapter. Berlin ’98 (the assembler window system) and Fresco ’98 (the X toolkit) are very different from Fresco now. So, logically, I would say the current evolved form of Fresco is only 4 years old, and it made much more progress than XFree86 did in its first four years, you gotta admit that.
– Fresco is writing something new. Well, actually, it’s a combination of different old ideas, but the combination is new. By using CORBA and being vector-based, it is made for the future.
– do they really need to run the whole thing as root? jeez.
IIRC, Xfree runs rootless on Mac OS X.
The only good news is that fewer apps are using X11 directly, instead using graphics libraries which aren’t dependent on X11. Unfortunately they still don’t behave the same, so the major old problem will remain for decades. Personally I find X11 more bothersome than simply using the terminal; it’s far too chaotic and human-unfriendly.
I don’t know. Right now, a large portion of Linux/UNIX/X11 apps are written in GTK+ and Motif, which are very dependent on X. And you are blaming X for human-unfriendliness that is really the fault of the front end (like KDE) or the distributor. But anyway, I agree with you that we need a brand new design.
However, the new XFT library will be incompatible with both KDE 3 and Gnome 2, so these projects will need to add support for the new font library in their future versions, in order to get native AA support.
They already have AA support. In fact, if XFT2 had existed back then, there wouldn’t be Pango and QFont.
Nah, XFree does a good job, I just think it does too much… Currently I’m in a very strong “less is more” mood. Gtk on DirectFB is buggy and useless but still damn impressive (fast). I wish I understood a little bit more about DirectFB. They constantly have interesting news (like the rootless X server) but I never really get what those things are, what they do and what they don’t. For example, I just don’t get what a “rootless” X server means.
I’m planning to try GTK+ on DirectFB. I used DirectFB once as a Fresco console, but Fresco just broke support for all consoles except GGI with the SDL support. However, IIRC, DirectFB is Linux-only… not a terribly good idea for those not using Linux, which makes X an ideal choice for them.
Berlin is a nice toy, but I don’t ever see it entering production use as an X replacement. It will always be slow due to a number of inherent design decisions.
Like what? I don’t know of any bad design decisions. There are some minor ones, like the abstraction layer between the console and Fresco, which they are busy fixing (more of a bad coding job than a bad design decision). And of course the bidi support in Warsaw was actually forked from Pango, which is kind of a bad fork, because all they did was remove Glib and insert some C++ code to make it C++. As for CORBA, I think it is one of the most ingenious decisions ever made. Why? One must first realize that Fresco is a vector-based window server, and therefore cannot be compared with pixel-based window servers like X11.
Ok first up. The IPC.
I know you can use shared memory for data sharing in XFree, but you still need sockets or some other equally “selectable” IPC mechanism. The reason? Most applications, INCLUDING Gtk+, use a select()/poll() call to wait on the socket descriptor from the X server. SVR4 IPC mechanisms don’t allow for asynchronous operation and will force anything but the most trivial client to become threaded. I am not knocking the fact that they are using IPC, but HOW they are using it. Client/server models, whether stateful or stateless, are inefficient due to communication overhead and unknown client service order.
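A sketch of the pattern being described: waiting on the X connection’s file descriptor with select(), the way toolkits drive their main loops. ConnectionNumber() is the real Xlib macro for obtaining that descriptor; the rest is a bare-bones loop:

/* Sketch: multiplex the X server socket with select().
 * Build with: cc xloop.c -lX11 */
#include <X11/Xlib.h>
#include <stdio.h>
#include <sys/select.h>

int main(void) {
    Display *dpy = XOpenDisplay(NULL);
    if (!dpy) { fprintf(stderr, "cannot open display\n"); return 1; }

    int xfd = ConnectionNumber(dpy);   /* fd of the server socket */
    fd_set fds;

    for (;;) {
        FD_ZERO(&fds);
        FD_SET(xfd, &fds);             /* add timers, pipes, etc. here */
        if (select(xfd + 1, &fds, NULL, NULL, NULL) < 0)
            break;
        /* Drain everything queued; note a real loop would also drain
         * before blocking, since Xlib buffers events internally. */
        while (XPending(dpy)) {
            XEvent ev;
            XNextEvent(dpy, &ev);
            /* ... dispatch ev ... */
        }
    }
    XCloseDisplay(dpy);
    return 0;
}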
Display systems that use a widget toolkit on the server side have two major advantages:
– Look’n’feel: every application will have the same look’n’feel regardless of when or where it was developed.
– Fast redraw times for static content.
These are both nice things, but they won’t save on server round-trip times when I’m not using static content or when some widget isn’t provided by the display server. Look at your browser, for instance: where would you put the network part of the browser? In the client. Where would you put the DOM tree? In the client. Where would you put the display routines? In the client. What do you have left to put in the server-side widget library? Nothing. The most commonly used application on Earth (except for maybe Emacs and MS Minesweeper =)) doesn’t fit that paradigm, and I bet many other applications won’t either.
“Of course, the smoothness of VSYNC is not very visible on LCDs, but it makes a whole lot of difference on CRTs!”
It was always used on Amigas and is one of the reasons the GUI looked smooth even on what by modern standards is a slow computer. The Amiga graphics chips on the motherboard generate an interrupt on vertical sync.
If all you have is pixels, and you have an imaginary video card that can accelerate, let’s say, font rendering, how are you going to take advantage of that? You’d have to create an extension, and make sure all libraries that use fonts use that extension.
Repeat that for every advance in graphics hardware. Painful.
Really? http://enlightenment.org/pages/evas.html
It would be nice if you had the ability to edit your own posts… for now maybe you could correct my link above? (remove the ‘;’ after the link I guess, although I don’t understand why it affects the link?)
http://enlightenment.org/pages/evas.html
It seems all links and HTML tags are replaced anyway, so you can’t create your own links, just write them out.
My biggest gripe with X is that if the low level graphics driver crashes (as happens occasionally with nvidia drivers running openGL apps), then the whole X server can hang, necessitating a reboot of the entire system. If I log in remotely, the X process is still there, but it doesn’t respond to mouse clicks or the keyboard.
So, what I’d really like to see is better crash protection in X, meaning that if it could somehow detect that the driver was hanging it could shut down cleanly or reinitialise the driver.
Eugenia, it would be nice if you could post a short video, like 5 seconds or so, to show us Win/Lin/BSD users THAT feature.
Thanks.
when I used to write demos back in the dos days, everyone worked on the vsync, that was the only way to get that smooooooth just shaved in the morning feel.
vsync rocks. its niiiiice
Have you tried Local Domain sockets? Or perhaps implementing a stateful protocol over the connectionless IPC?
What, you think I was benchmarking against TCP sockets? Of course I was benchmarking against Unix domain sockets, and I saw message queue performance to be 10%-50% better than Unix domain sockets, depending on the platform.
As to implementing any sort of protocol to remedy the connectionless nature of SysV message queues, I’d say this is a waste of time.
Eh? Did you try IPC_STATing the message queue and obtaining the pid of the last message (msg_lrpid)?
*COUGH*HACK*COUGH* I’d say a better solution would be to use a more thought out message passing mechanism such as Mach messaging.
The API offers quite a bit more flexibility than sockets as well, e.g. message priorities/types, persistence, etc. Also, the atrocious API can easily (and should) be wrapped up into the equivalent of Xlib, so the client would have no knowledge of the underlying IPC mechanism.
The problem arises precisely when you attempt to abstract SysV message queues. The queues expect the data buffer to be preceded by a message type identifier. That means a great deal of nasty hacks to provide an abstract interface for zero-copy message construction.
The only solution I could come up with is to provide an allocator which increments the pointer by sizeof(int) before returning it; then, when the time comes to write the message, decrement the passed pointer by sizeof(int). Of course this has the inherent flaw that any buffer passed to it that wasn’t allocated with the special incrementing allocator will cause… shall we say, erratic results? It’s just nasty…
A buffer should be a buffer. You shouldn’t force programmers to cram things into the beginning of an arbitrary sized buffer.
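To make the complaint concrete, this is the type-prefixed layout that msgsnd() forces on you (the struct and helper here are just for illustration):

/* Sketch: the type-prefixed buffer layout SysV message queues
 * require. The payload must live directly after a long mtype field,
 * which is what makes zero-copy abstraction over arbitrary buffers
 * so ugly. */
#include <string.h>
#include <sys/ipc.h>
#include <sys/msg.h>

struct demo_msg {
    long mtype;          /* mandatory: message type, must be > 0 */
    char mtext[256];     /* payload immediately follows the type */
};

int send_demo(int qid, const char *payload) {
    struct demo_msg m;
    m.mtype = 1;                            /* arbitrary type tag */
    strncpy(m.mtext, payload, sizeof(m.mtext) - 1);
    m.mtext[sizeof(m.mtext) - 1] = '\0';    /* note the copy: not zero-copy */
    return msgsnd(qid, &m, strlen(m.mtext) + 1, 0);
}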
IIRC, Xfree runs rootless on Mac OS X.
Forgive me if someone’s already mentioned this, but I believe the fellow this person was responding to was talking about XFree running suid. This person appears to be confusing that to running without a root window.
I know you can use shared memory for data sharing in XFree, but you still need sockets or some other equally “selectable” IPC mechanism. The reason? Most applications, INCLUDING Gtk+, use a select()/poll() call to wait on the socket descriptor from the X server.
I believe the term you’re looking for is “synchronous multiplexing.” Check my forums for my rant about this.
My ideal solution: Using kqueues to multiplex Mach message queues. This may be possible on Darwin soon.
If properly implemented, kqueues provide near constant time synchronous multiplexing.
I was never advocating the use of ONLY shared memory. Obviously you need some means of message passing to coordinate the use of shared memory.
In fact, if you read my post I was advocating (for non-Mach platforms) the use of sockets in conjunction with shared memory.
Most applications, INCLUDING Gtk+, use a select()/poll() call to wait on the socket descriptor from the X server.
I should read before responding. I was assuming you were talking about multiplexing connections on the server end.
NeXT/Apple handled event muxing through a complex run loop event processing system. In OS X this is implemented as part of CoreFoundation in Run Loop Services, which provide an easy broadcast model interface for registering and unregistering events which you wish to be notified about. This includes, but is not limited to, Mach events, sockets, and timers.
I should also add that X currently utilizes sockets and shared memory, but the way this was accomplished was less than ideal.
Whatever X becomes, whatever things/APIs are added or removed, please:
Keep the ability to run remote applications with a local display. That’s just too useful!
Yves.
rajan_r: I wasn’t saying Berlin/fresco’s development cycle was slow. It is, but considering the scope of the project I think they’re doing very well. I just meant it was currently unoptimized.
As for running X rootless, I did mean running X with superuser privileges, as Bascule pointed out.
ealm: well, i think with the evas example you just proved my point. To take advantage of the hardware acceleration you need to rewrite your app using evas. In other words: the apps you currently use (GTK apps, KDE apps, …) won’t suddenly work better because you’ve got hw-acceleration and evas.
In any case, I had a chance to see Rasterman’s presentation at LinuxTag 2001; it was _very_ impressive speed-wise, even his software rendering. I’m curious to see how far it has come since then. It was still very device-dependent, very pixel-based.
Using several (at least 2) keyboards and monitors would be great!
> do they really need to run the whole thing as root? jeez.
Yes. They do. XFree86 accesses graphics hardware directly, and it needs root privileges in order to do so. Normal users are prohibited from directly accessing hardware, so that they may not compromise the stability of the system.
Hence an Xwrapper script is provided which drops all privileges after having done the necessary work as root.
Please read up on the facts before posting such comments.
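The drop-privileges idiom being referred to looks roughly like this (a generic sketch, not the actual Xwrapper code):

/* Sketch: a setuid-root program does its privileged setup, then
 * permanently drops back to the invoking user before continuing. */
#include <stdio.h>
#include <unistd.h>

int main(void) {
    /* ... privileged work here: map video memory, set up ports ... */

    /* Drop to the real (invoking) user, group first, permanently. */
    if (setgid(getgid()) != 0 || setuid(getuid()) != 0) {
        perror("failed to drop privileges");
        return 1;
    }
    if (setuid(0) == 0) {            /* paranoia: this must NOT succeed */
        fprintf(stderr, "privileges were not fully dropped\n");
        return 1;
    }
    printf("running unprivileged as uid %d\n", (int)getuid());
    /* ... the rest of the server runs without root ... */
    return 0;
}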
ealm: well, i think with the evas example you just proved my point. To take advantage of the hardware acceleration you need to rewrite your app using evas. In other words: the apps you currently use (GTK apps, KDE apps, …) won’t suddenly work better because you’ve got hw-acceleration and evas.
No, but they won’t stop working either just because you’ve got evas running. IMO the weakness with projects like DirectFB, Berlin etc. is that they ditch XF completely, and you have to have XF running rootless anyway to be able to use your old apps. This feels kinda pointless, and it is always a big transition for developers, who all need to learn the new way of coding GUIs.
OTOH with Evas you can still do it the old way, but you also have a new way of drawing pictures, primitives, text etc and you can put them in translucent layers. Evas coding also is very easy to learn. Take a look at this excellent (although not very up-to-date) guide to Evas:
http://enlightenment.org/pages/htmlized/evas/
Cheers
Personally, I thought it was a good question.
“is that a real question?”
Hmm, the more important question would be: is your question (the one above) a real question?
“You can’t understand what it’s like until you’ve seen it in action. I tried to explain the same issue 10 years ago when I was doing AtariST demos, and people wouldn’t get it.”
I agree with JBQ, people won’t understand it until they see it. Hopefully OpenBeOS will have something like that when they near a major release.
There is no word about keyboard issues. Nobody cares that at the moment xkb configuration really sucks.
All I can say about X is: if you’re not inventing for the future, you’re already behind.
The Amiga didn’t just have a VSYNC interrupt – what Amigans called “double buffering”, PC people called “page flipping”, while what PC people call “double buffering” Amiga people called “massive waste of precious graphics memory bandwidth”.
Rather than copying memory into each buffer, the amiga just changed the display hardware’s pointer into graphics memory every frame, and to animate, you just need to copy into each frame the deltas from frame-2 instead of frame-1…
Only some gfx cards on the PC can do this. A minimal prerequisite for decent, smooth, animation is VSYNC support. Page-flipping stops you wasting massive amounts of bus bandwidth.
That’s one reason amiga could do smooth, guaranteed 50/60 fps animation in 1986 – it avoided superfluous memcpys wherever possible. It helped that it had a display-synchronised graphics coprocessor and hardware blitter of course…
As an ex-demo coder from many years ago too, I thought I’d clarify the pros and cons of VSYNC.
As you may already know, CRTs redraw themselves 60-100+ times per second. A “VSYNC write” is made immediately after the CRT gun hits the bottom of the screen, just before the next screen update. Writing to the screen at this very moment reduces the chance that the CRT gun will be in the middle of updating the screen when you issue a screen write.
This method of updating does come at a cost, in that you’re creating a bottleneck by forcing screen updates to wait until the gun hits the bottom of the screen. In the old days, this was a way for a hacker to slow things down and guarantee that an animation would proceed at roughly the same rate on different machines.
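For the curious, the classic DOS-era way demo coders synchronized to the retrace was to poll VGA input status register 1 (port 0x3DA), whose bit 3 is set while the vertical retrace is in progress. A sketch, assuming an x86 environment where port I/O is permitted:

/* Sketch: busy-wait for the start of the vertical retrace by polling
 * VGA input status register 1 (port 0x3DA). Bit 3 (0x08) is set
 * during vertical retrace. Requires I/O port access: real-mode DOS,
 * or root plus a prior ioperm(0x3DA, 1, 1) call on Linux/x86. */
#include <sys/io.h>   /* inb(), ioperm() on Linux/x86 */

#define VGA_STATUS 0x3DA
#define VRETRACE   0x08

void wait_vsync(void) {
    /* If we're already inside a retrace, wait for it to end,
     * so we catch the *start* of the next one. */
    while (inb(VGA_STATUS) & VRETRACE)
        ;
    /* Now wait for the next retrace to begin. */
    while (!(inb(VGA_STATUS) & VRETRACE))
        ;
    /* Safe window: blit or flip here, before the beam restarts. */
}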
On my P4 1.8GHz, 512MB 800MHz RDRAM machine with an nVidia GeForce III Ti200, here is how I’d describe screen updates when I’m moving a window around wildly:
Screen updates, when they’re made, happen instantly. There appears, however, to be a threshold of movement before the screen is updated. So, if I swing my window fast to the right, it only updates dirty rectangles after every 10 pixels of movement or so, and from time to time I’ll get garbage in newly exposed areas until the dirty rectangles are cleaned up.
In addition, in KDE, I notice that dirty rectangles are cleaned up on a per-widget basis. In other words, the menu bar will clean itself up separately from the icon toolbar, or the title, etc. This creates severe performance problems, because instead of 1 large memory move to the screen you’re dealing with as many as 12 or more smaller moves. Even though these 12 smaller moves touch fewer total pixels, an accelerated graphics card will perform better with 1 big screen update.
Just some thoughts.
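A sketch of the fix implied above: coalescing the per-widget dirty rectangles into one bounding box so the card gets a single large update (the Rect type is hypothetical, not KDE/Qt code):

/* Sketch: coalesce N dirty rectangles into one bounding box so a
 * single blit can replace many small ones. Whether this wins depends
 * on how much clean area the union drags in. */
#include <stdio.h>

typedef struct { int x0, y0, x1, y1; } Rect;

Rect coalesce(const Rect *r, int n) {
    Rect u = r[0];
    for (int i = 1; i < n; i++) {
        if (r[i].x0 < u.x0) u.x0 = r[i].x0;
        if (r[i].y0 < u.y0) u.y0 = r[i].y0;
        if (r[i].x1 > u.x1) u.x1 = r[i].x1;
        if (r[i].y1 > u.y1) u.y1 = r[i].y1;
    }
    return u;
}

int main(void) {
    /* e.g. menu bar, toolbar and title repainted separately */
    Rect dirty[] = { {0,0,640,20}, {0,20,640,52}, {0,52,200,70} };
    Rect u = coalesce(dirty, 3);
    printf("one update: (%d,%d)-(%d,%d)\n", u.x0, u.y0, u.x1, u.y1);
    return 0;
}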
All this talk about vsync, and I still don’t understand exactly what it is.
From what I read here, it seems to mean that the screen is only redrawn during the vertical synchronization of the monitor (i.e. while the electron beam is going back from the bottom to the top of the screen). If that is correct, isn’t that the same as double buffering + page flipping in DirectX or OpenGL?
No, it’s not double buffering. Double buffering uses two identical framebuffers, does all its drawing operations in the invisible one, and makes it visible when done, which is not related to the position of the electron beam.
VSync-synchronized blitting performs its operation in just one framebuffer, but waits until the beam is out of the region that is being modified.
> All this talk about vsync, and I still don’t understand exactly what it is. And: Eugenia, post a video about it.
I can’t post a video. This feature CANNOT be seen in a video with less than 75-85 frames per second!! And most videos only have between 10-29 frames per second! This is why we say you can’t understand it until you see it with your own eyes!
BUT.
There is a way to understand it, though, with a little experiment.
Take this browser window (not maximized) and drag it around the screen. Don’t do it too fast, but not too slow either; just at the speed you always move windows around to re-arrange them on your desktop. You will see that while you are dragging the window, you can’t really read what is written on the page, because it all becomes garbled.
Now, take a book and open it to a spread with two full pages of text. Keep that book open, at the same distance from your eyes as your monitor, and drag it around in front of your eyes as you did with the browser window!
You will see that while reading the book has become harder now that you are moving it, its movement and the readability of its characters are much smoother than the browser window’s!
This is the difference between a VSYNC’ed desktop and a non-VSYNC’ed one. It makes the desktop smoother overall!! It is not, of course, about trying to read moving windows; this was an experiment to understand the smoothness!
I hope this test made it easier to understand the feature.
Hey Eugenia, I was just going to state that it was extremely easy to read text off the browser even when I went into a state of mad dragging… Ok, that’s all…
Yeah, right.
You probably did the dragging extremely slow.
Why don’t we ask the QNX people nicely if we can borrow Photon and port the sucker to Linux? In all honesty, I think X11 is dying in the sense of competition and technology, and will continue to do so unless ALL of the Linux/UNIX community decides to drop everything and work on X11 entirely. I have no particular allegiance to QNX or its developers; it was simply a one-time experience, but I noted that Photon was a feasible replacement for X11 on Linux. Anyway… just a thought…
Nevermind, I didn’t think you meant THAT fast…:)
What exactly is that, anyway? The desktop is always drawn at the refresh rate, just like anything displayed on the screen. If you have an 85 Hz refresh rate, the screen gets redrawn 85 times per second regardless of whether or not it actually changes. Do you mean that the GUI collects and schedules drawing operations to prevent them from being executed mid-refresh?
If it could be done, it would prevent tearing, but then again you have another problem: redrawing speed. If the application can’t redraw the window fast enough, dragging will not be smooth. Most applications don’t really optimize that because unlike games they don’t spend much time redrawing.
The Amiga didn’t just have a VSYNC interrupt – what Amigans called “double buffering”, PC people called “page flipping”, while what PC people call “double buffering” Amiga people called “massive waste of precious graphics memory bandwidth”.
Rather than copying memory into each buffer, the amiga just changed the display hardware’s pointer into graphics memory every frame, and to animate, you just need to copy into each frame the deltas from frame-2 instead of frame-1…
Only some gfx cards on the PC can do this. A minimal prerequisite for decent, smooth, animation is VSYNC support. Page-flipping stops you wasting massive amounts of bus bandwidth.
Pure BS!
The only real difference between page flipping and double buffering is that page flipping swaps the pointers to the buffers, whereas double buffering requires a blit from one buffer to the other.
You cannot say that massive amounts of bus bandwidth is saved because the bus isn’t used – both buffers reside on the graphics card. Neither can you say that page flipping requires less memory than double buffering because both require at least two buffers.
PC people have used page flipping since the good ol’ DOS days, and Windows supported it from DirectX 2. Whereas DOS required special drivers for each game, the DirectX API exposes a common interface to this feature.
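To make the distinction concrete, here is a sketch of the two presentation strategies over abstract framebuffers (no real graphics API involved):

/* Sketch: blit-style double buffering vs. pointer-swap page
 * flipping, over two abstract framebuffers. */
#include <string.h>

#define FB_BYTES (640 * 480 * 4)

static unsigned char buf_a[FB_BYTES], buf_b[FB_BYTES];
static unsigned char *visible = buf_a;   /* what the CRT scans out */
static unsigned char *back    = buf_b;   /* where we draw */

/* Double buffering: draw into the back buffer, then copy the whole
 * thing into the visible one. Costs a full-screen memory move. */
void present_blit(void) {
    memcpy(visible, back, FB_BYTES);
}

/* Page flipping: just retarget the scanout pointer. No pixels are
 * copied, but the hardware must support changing the base address
 * (and you typically wait for vsync before the swap takes effect). */
void present_flip(void) {
    unsigned char *tmp = visible;
    visible = back;
    back = tmp;
}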
I have an nVidia GeForce Ti200 in this machine, with nVidia’s X drivers and proprietary AGP code (via Option nvAgp 1 in XF86Config), and my text isn’t garbled at all. A misconfigured modern video card or an older card would probably exhibit gooey text during a window move.
Er…I believe Eugenia means that the window refreshes with enough tearing that the eye sees the text as garbled, even though the text is not actually garbled.
Moving a window around the screen and having it look nice is cool but are there any REAL benefits?
I found the text of the book garbled…. (kidding).
OT: Eugenia, you claim to have seen Dano before. Does it look like the leaked screenshots of it?
rajan_r: I wasn’t saying Berlin/fresco’s development cycle was slow. It is, but considering the scope of the project I think they’re doing very well. I just meant it was currently unoptimized.
Unoptimized in what sense?
Well, if you mean speed: even in its unoptimized state, it runs pretty damn fast in comparison with optimized XFree86 :-).
I think the major problem with Fresco is that it doesn’t have enough developers working on it. The developers really have ideas, they really know how to make things work, but as we all know, code doesn’t write itself.
>> do they really need to run the whole thing as root? jeez.
>
> Yes. They do. XFree86 accesses graphics hardware directly,
> and it needs root privileges in order to do so. Normal users
> are prohibited from directly accessing hardware, so that
> they may not compromise the stability of the system.
>
> Please read up on the facts before posting such comments.
I read up on the facts… Did you?
I said: “do they really need to run the _whole thing_ as root?”
Emphasis on “whole thing”. I believe it should be possible to contain all the real hardware communication in a small module, which is, among other things, more easily debuggable/certifiable.
The X server does more than just access hardware. It’s a server: it does region management, handles extensions, fonts, etc.
See for instance the GGI/KGI project vs the old svgalib, or DRI/DRM vs utah-glx.
Please read up on the facts before posting such comments.