Today we are happy to publish a very interesting Q&A with major freedesktop.org members: founder Havoc Pennington (also of Debian, Gnome and Red Hat fame), Waldo Bastian (of SuSE & KDE fame), Keith Packard and Jim Gettys (of X/XFree86/fontconfig/w3c fame), and David Zeuthen, a new member who’s taking over the ambitious HAL project. In the article we discuss general freedesktop.org goals, status and issues, the role of KDE/Qt on the road to interoperability with Gnome/GTK+, HAL (with new screenshots), and the new X Server aiming to replace XFree86, and we even have an exclusive preliminary screenshot of a version of Mac OS X’s Exposé window-management feature for this new X Server! This is one article not to be missed if you are into the Unix/Linux desktop!
Rayiner Hashem: In your presentation at Nove Hrady, you point out that drag and drop still doesn’t work as it should, mainly because of poor implementations. Are there plans for a drag-and-drop library to ease the implementation of XDND functionality?
Havoc Pennington: The issue isn’t poor implementation in the libraries; it’s simpler than that. When you add drag and drop to an application, you have a list of types that you support dragging or dropping, such as “text/plain”. Applications simply don’t agree on what these types are.
So we need a registry of types documenting the type name and the format of the data transferred under that name. That’s it.
The starting point is to go through GNOME, KDE, Mozilla, OpenOffice.org, etc. source code and document what types are already used.
The other issue requires even less explanation: application authors don’t support DND in enough places.
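(Editor’s note: the type names Havoc refers to are the strings a toolkit registers for each drag source or drop target. Purely as an illustration, this is roughly how a GTK+ 2 application declares the types it is willing to accept for a drop; the helper function, the widget and the particular type strings are arbitrary examples, not part of any freedesktop.org registry.)

#include <gtk/gtk.h>

/* The "types" are plain strings; nothing currently forces two
 * applications to agree on which strings they use. */
static const GtkTargetEntry drop_targets[] = {
    { "text/uri-list", 0, 0 },   /* lists of file URIs */
    { "text/plain",    0, 1 },   /* plain text */
};

/* Hypothetical helper: mark a widget as accepting the targets above. */
static void make_droppable (GtkWidget *widget)
{
    gtk_drag_dest_set (widget, GTK_DEST_DEFAULT_ALL,
                       drop_targets, G_N_ELEMENTS (drop_targets),
                       GDK_ACTION_COPY);
}

A Qt or Mozilla application declares an equivalent list through its own API, and nothing today guarantees that the lists use the same strings for the same data; that gap is exactly what a shared registry would close.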
Rayiner Hashem: Most of the examples listed in your Nove Hrady presentation were desktop level. Yet, you mentioned GTK+ 3 and Qt 4 as well. Do you think more
interoperation at the toolkit level is necessary? What form would this interoperation take?
Havoc Pennington: I don’t really think of freedesktop.org as an interoperability effort anymore. Rather, it’s an effort to build a base desktop platform that desktops can build on.
Many of the things on freedesktop.org would be implemented in or used by the toolkit: icon themes, XEMBED, X server, Cairo, startup notification, and so forth.
Rayiner Hashem: From your experience with GTK+, what do you think are some of the properties that make it hard to write fast applications for X11? What would
you like to see the server do to make it easier to write fast client applications?
Havoc Pennington: Talking only about speed (fast vs. slow) won’t lead to understanding of the problem. Graphics will look bad if you have flicker, tearing, OR
slowness. Most users will perceive all of those problems as “slowness.”
Eliminating the round trip to clients to handle expose events will probably be a huge improvement in terms of both flicker and speed. The proposed Composite extension also allows double buffering the entire screen, which should let us fix virtually all flicker and tearing
issues.
Some clients right now do things that are just stupid; for example, allocating huge pixmaps (image buffers) and keeping them around; improved profiling tools should help track down and fix these mistakes.
Eugenia Loli-Queru: Is freedesktop.org working towards a package management standard? While RPM and DEBs are well known, what is your opinion on autopackage.org?
Havoc Pennington: I don’t really understand the motivation for autopackage. At their core, RPM and DEB are tarball-like archives with some metadata. You can
always just ignore the metadata, or add additional/different metadata.
For example, file dependencies; if you don’t want your RPM package to have file dependencies, you don’t have to include any.
I would tend to focus more on the question of which metadata we should have, and how it should be used by the installer UI.
autopackage tries to solve the problem that distributions use different packaging systems by creating an additional packaging system and using it in addition to the native one. However, you could just as easily pick one of the current systems (RPM, etc.) and use it on any distribution. RPM works fine on Solaris for example. I don’t see how autopackage uniquely enables portability.
In short, to me the issues with software installation are not related to the on-disk format for archiving the binaries and metadata. I think autopackage could achieve much more by building stuff _around_ RPM and/or DEB rather than reinventing the archive wheel.
I haven’t looked at autopackage in detail though, and I could be totally wrong.
Eugenia Loli-Queru: How do you feel about freedesktop.org becoming an “umbrella” project for all projects that require communication (e.g. if X requires a kernel extension, freedesktop.org makes sure that the X group is heard by the kernel group and manages the implementation)?
Havoc Pennington: Ultimately freedesktop.org can’t make sure of anything; it’s not an enforcement agency. What it can do is provide a forum that’s well-known
where people can go and find the right developers to talk to about a particular issue.
Implementation will really always come from the work and leadership of the developers who put in hours to make things happen.
Eugenia Loli-Queru: How do you grade the support of commercial entities towards freedesktop.org? Are IBM, Novell, Red Hat and other big Linux players helping out the cause?
Havoc Pennington: Individual developers from all those companies are involved, but there’s no framework for corporations to get involved as corporations.
I’m happy overall that the right people are involved. But of course I’d always like to see more developers added to the Linux desktop effort.
Eugenia Loli-Queru: Do freedesktop.org’s plans cover only interoperation between DEs, or is innovation part of the plan as well? For example, would freedesktop.org welcome Seth Nickell’s Storage or ‘System Services’ projects, which are pretty “out of the ordinary” kinds of projects?
Havoc Pennington: I’d like to see more work originate at freedesktop.org, and certainly we’d be willing to host Seth’s work. Ultimately though any new
innovation has to be adopted by the desktops such as GNOME and KDE, and the distributions, to become a de facto reality. freedesktop.org may be the forum where those groups meet to agree on what to do, but freedesktop.org doesn’t have a “mind of its own” so much.
Eugenia Loli-Queru: In your opinion, what is the hardest step to take on the road to full interoperability between DEs? How far are we from the realization of this step?
Havoc Pennington: I think the “URI namespace” or “virtual file system” issue is the ugliest problem right now. It bleeds into other things, such as MIME
associations and WinFS-like functionality. It’s technically very challenging to resolve this issue, and the impact of leaving it unresolved is fairly high. Some links on that are here, here and here.
Eugenia Loli-Queru: On Mac OS X, users who require extra accessibility can listen to text from pretty much any application via text-to-speech, as it is supported at the toolkit level. Are there any plans for creating a unified way in which all applications (Qt or GTK+) would be able to offer this functionality from a common library? What would be the best way to go about it? What accessibility projects would you like to see produced at freedesktop.org?
Havoc Pennington: This is already supported with ATK and the rest of the GNOME accessibility implementation; you can text-to-speech any text displayed via GTK+ today. I believe there’s a plan for how to integrate Qt and Java into the same framework, but I’m not sure what the latest details are. This is looking like an interoperability success already, as everyone does appear to be using the same framework.
Eugenia Loli-Queru: I haven’t found an obvious way to get Abiword, Gaim, Epiphany (granted, with the Mozilla toolkit, but still one of the apps that begs for such an accessibility feature) or Gedit to read any text… How is this done then? Is it a compilation/link option? If so, the problem is not really solved if it is not transparent to the user and does not happen automatically after compilation.
Havoc Pennington: I haven’t ever tried it out, but I’ve seen the a11y guys demo it. The toolkit functionality is definitely there, in any case. I think you go to Preferences -> Assistive Technology Support and check the screenreader box, but I don’t know if it works out of the box on any distributions yet. It’s a new feature. (editor’s note: “screenreader” on Fedora is greyed out, while on the latest Slackware it can be selected, but it is later deemed “unavailable”, so it doesn’t work yet out of the box for most distros).
Rayiner Hashem: Computer graphics systems have changed a lot since X11 was designed. In particular, more functionality has moved from the CPU to the graphics
processor, and the performance characteristics of the system have changed drastically because of that. How do you think the current protocol has coped with these changes?
Keith Packard: X has been targeted at systems with high performance graphics processors for
a long time. SGI was one of the first members of the MIT X consortium and
shipped X11 on machines of that era (1988). Those machines looked a lot
like today’s PCs — fast processors, faster graphics chips and a relatively
slow interconnect. The streaming nature of the X protocol provides for easy
optimizations that decouple graphics engine execution from protocol decoding.
And, as a window system, X has done remarkably well; the open source nature
of the project permitted some friendly competition during early X11
development that improved the performance of basic windowing operations
(moving, resizing, creating, etc) so that they were more limited by the
graphics processor and less by the CPU. As performance has shifted towards
faster graphics processors, this has allowed the overall system performance
to scale along with those.
Where X has not done nearly as well is in following the lead of application
developers. When just getting pixels on the screen was a major endeavor, X
offered a reasonable match for application expectations. But, with machine
performance now permitting serious eye-candy, the window system has not
expanded to link application requirements with graphics card capabilities.
This has left X looking dated and shabby as applications either restrict
themselves to the capabilities of the core protocol or work around
these limitations by performing more and more rendering with the CPU in the
application’s address space.
Extending the core protocol with new rendering systems (like OpenGL and
Render) allows applications to connect to the vast performance offered by
the graphics card. The trick now will be to make them both pervasive
(especially OpenGL) and hardware accelerated (or at least optimize the
software implementation).
Rayiner Hashem: Jim Gettys mentioned in one of your presentations that a major change from W to X was a switch from structured to immediate mode graphics.
However, the recent push towards vector graphics seems to indicate a return of structured graphics systems. DisplayPDF and XAML, in particular, seem particularly well-suited to a structured API. Do you see the X protocol evolving (either directly or through extensions) to better support structured graphics?
Keith Packard: So far, immediate mode graphics seem to provide the performance and
capabilities necessary for modern graphics. We’ve already been through a
structured-vs-immediate graphics war in X when PHIGS lost out to OpenGL.
That taught us all some important lessons and we’ll have to see some
compelling evidence to counter those painful scars. Immediate graphics
are always going to be needed by applications which don’t fit the structured
model well, so the key is to make sure those are fast enough to avoid the
need to introduce a huge new pile of mechanism just for a few applications
which might run marginally faster.
Rayiner Hashem: What impact does the compositing abilities of the new X server have on memory usage? Are there any plans to implement a compression mechanism for idle window buffers to reduce the requirements?
Keith Packard: Oh, it’s pretty harsh. Every top level window has its complete contents
stored within the server while mapped, plus there are additional temporary
buffers needed to double-buffer screen updates.
If memory does become an issue, there are several possible directions to
explore:
+ Limit saved window contents to those within the screen boundary;
this will avoid huge memory usage for unusually large windows.
+ Discard idle window buffers, reallocating them when needed and
causing some display artifacts. Note that ‘idle’ doesn’t just mean
‘not being drawn to’, as overlying translucent effects require
saved window contents to repaint them, so the number of truly idle
windows in the system may be too small to justify any effort here.
+ Turn off redirection when memory is tight. One of the benefits
of building all of this mechanism on top of a window system which
does provide for direct-to-screen window display is that we can
automatically revert to that mode where necessary and keep running,
albeit with limited eye-candy.
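(Editor’s note: “redirection” is the Composite extension’s term for asking the server to draw a window into off-screen storage instead of directly onto the screen, so that a compositing manager can paint it back with whatever effects it likes. A minimal sketch of turning redirection on and off, using the libXcomposite calls as they later stabilized; the exact names may differ from the experimental server discussed here.)

#include <X11/Xlib.h>
#include <X11/extensions/Xcomposite.h>

/* Redirect every child of the root window into off-screen pixmaps.
 * CompositeRedirectManual means a compositing manager is responsible
 * for painting the contents back to the screen. */
void redirect_all (Display *dpy)
{
    XCompositeRedirectSubwindows (dpy, DefaultRootWindow (dpy),
                                  CompositeRedirectManual);
}

/* Revert to ordinary direct-to-screen display, e.g. when memory is tight. */
void unredirect_all (Display *dpy)
{
    XCompositeUnredirectSubwindows (dpy, DefaultRootWindow (dpy),
                                    CompositeRedirectManual);
}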
One thing I have noticed is a sudden interest in video cards with *lots* of
memory. GL uses video memory mostly for simple things like textures for
which it is feasible to use AGP memory. However, Composite is busy drawing
to those off-screen areas, and it really won’t work well to try and move
those objects into AGP space. My current laptop used to have plenty of
video memory (4meg), but now I’m constantly thrashing things in and out of
that space trying to keep the display updated.
Preliminary Exposé-like functionality on the new X Server (screenshot)
Rayiner Hashem: What impact does the design of the new server have on performance? The new X server is different from Apple’s implementation because the server still does all the drawing, while in Apple’s system, the clients draw directly to the window buffers. Do you see this becoming a bottleneck, especially with complex vector graphics like those provided by Cairo? Could this actually be a performance advantage, allowing the X server to take advantage of hardware acceleration in places Apple’s implementation can not?
Keith Packard: I don’t think there’s that much fundamental difference between X and the OS
X window system. I’m pretty sure OS X rendering is hardware accelerated
using a mechanism similar to the DRI. Without that, it would be really
slow. Having the clients hit the hardware directly or having the X server
do it for them doesn’t change the fundamental performance properties of the
system.
Where there is a difference is that X now uses an external compositing agent
to bring the various elements of the screen together for presentation. This
should provide for some very interesting possibilities in the future, but
does involve another context switch for each screen update. This will
introduce some additional latency, but the kernel folks keep making context
switches faster, so the hope is that it’ll be fast enough. It’s really
important to keep in mind that this architecture is purely experimental in
many ways; it’s a very simple system that offers tremendous potential. If
we can make it work, we’ll be a long ways ahead of existing and planned
systems in other environments.
Because screen updates are periodic and not driven directly by graphics
operations, the overhead of compositing the screen is essentially fixed.
Performance of the system perceived by applications should be largely
unchanged by the introduction of the compositing agent. Latency between
application action and the eventual presentation on the screen is the key,
and making sure that all of the graphics operations necessary for that are
as fast as possible seems like the best way to keep the system responsive.
Eugenia Loli-Queru: How does your implementation compare to that of Longhorn’s new display system (based on available information so far)?
Keith Packard: As far as I can tell, Longhorn steals its architecture from OS X: DRI-like
rendering by applications (which Windows has had for quite some time) and
built-in window compositing rules to construct the final image.
Rayiner Hashem: What impact will the new server have on toolkits? Will they have to change to better take advantage of the performance characteristics of the new
design? In particular, should things like double-buffering be removed?
Keith Packard: There shouldn’t be any changes required within toolkits, but the hope is
that enabling synchronous screen updates will encourage toolkit and window
manager developers to come up with some mechanism to cooperate so that the
current opaque resize mess can be eliminated.
Double buffering is a harder problem. While it’s true that window contents
are buffered off-screen, those contents can be called upon at any time to
reconstruct areas of the screen affected by window manipulation or
overlaying translucency. This means that applications can’t be assured that
their window contents won’t be displayed at any time. So, with the current
naïve implementation, double buffering is still needed to avoid transient
display of partially constructed window contents. Perhaps some mechanism
for synchronizing updates across overlaying windows can avoid some of this
extraneous data movement in the future.
Rayiner Hashem: How are hardware implementations of Render and Cairo progressing? Render, in particular, has been available for a very long time, yet most hardware has poor to no support for it. According to the benchmarks done by Carsten Haitzler (Raster) even NVIDIA’s implementation is many times slower in the general case than a tuned software implementation. Do you think that existing APIs like OpenGL could form a foundation for making fast Render and Cairo implementations available more quickly?
Keith Packard: Cairo is just a graphics API and relies on an underlying graphics engine to
perform the rendering operations. Back-ends for Render and GL have been
written along with the built-in software fall-back. Right now, the GL
back-end is many times faster than the Render one on existing X servers
because of the lack of Render acceleration.
Getting better Render acceleration into drivers has been slowed by the lack
of application demand for that functionality. With the introduction of
cairo as a complete 2D graphics library based on Render, the hope is that
application developers will start demanding better performance which should
drive X server developers to get things mapped directly to the hardware for
cases where GL isn’t available or appropriate.
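(Editor’s note: at the Xlib level the operation being discussed is a single Render composite request. A rough sketch, assuming the source, mask and destination Pictures have already been created elsewhere with XRenderCreatePicture; it is shown only to make concrete the kind of call drivers would need to accelerate.)

#include <X11/Xlib.h>
#include <X11/extensions/Xrender.h>

/* Blend 'src' over 'dst' through an optional alpha 'mask' using the
 * Porter-Duff "over" operator -- the workhorse of Render-based eye candy. */
void blend_over (Display *dpy, Picture src, Picture mask, Picture dst,
                 int dst_x, int dst_y, unsigned int width, unsigned int height)
{
    XRenderComposite (dpy, PictOpOver, src, mask, dst,
                      0, 0,          /* source origin */
                      0, 0,          /* mask origin */
                      dst_x, dst_y,  /* destination origin */
                      width, height);
}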
Similarly, while a Composite-based environment could be implemented strictly
with core graphics, it becomes much more interesting when image composition
can be used as a part of the screen presentation. This is already driving
development of minimal Render acceleration within the X server project at
freedesktop.org; I expect we’ll see the first servers with acceleration
matching what the sample compositing manager uses available from CVS in
the next couple of weeks.
A faster software implementation of Render would also be good to see. The
current code was written to complete the Render specification without a huge
focus on performance. Doing that is mostly a matter of sitting down and
figuring out which cases need acceleration and typing the appropriate code
into the X server. However, Render was really designed for hardware
acceleration; acceleration which should be able to outpace any software
implementation by a wide margin.
In addition, there has been a bit of talk on the [email protected]
mailing list about how to restructure the GL environment to make the X
server rely upon GL acceleration capabilities rather than having its own
acceleration code. For environments with efficient GL implementations,
X-specific acceleration code is redundant. That discussion is very nebulous
at this point, but it’s certainly a promising direction for development.
Rayiner Hashem: Computer graphics systems have changed a lot since X11 was designed. In particular, more functionality has moved from the CPU to the graphics
processor, and the performance characteristics of the system have changed drastically because of that. How do you think the current protocol has coped with these changes?
Jim Gettys: This is not true. The first X implementation had a $20,000 external display plugged into a Unibus on a VAX with outboard processor and
bit-blit engine. Within 3 years, we went to completely dumb frame buffers.
Over X’s life time, the cycle of reincarnation has turned several times, round and round the wheel turns. The tradeoffs of hardware vs. software go back and forth.
As far as X’s graphics goes, X mouldered for most of the decade of the ’90s, and X11’s graphics was arguably broken on day 1. The specification adopted forced both ugly and slow wide lines; we had run into the “lumpy line” problem that John Hobby had solved, but unfortunately we were not aware of his work in time and X was never fixed. AA and image compositing were just gleams in people’s eyes when we designed X11. Arguably, X11’s graphics has always been lame.
It is only Keith Packard’s work recently that has begun to bring it to where it needs to be.
Rob Pike and Russ Cox’s work on Plan 9 showed that adopting a Porter-Duff model of image compositing was now feasible. Having machines 100-1000x faster than what we had in 1986 helps a lot :-).
Overall, the current protocol has done well, as demonstrated by Gnome and KDE’s development over 10 years after X11’s design, though it is past time to replace the core graphics in X, which is what Render does.
Rayiner Hashem: You mentioned in one of your presentations that a major change from W to X was a switch from structured to immediate mode graphics. However, the recent push towards vector graphics seems to indicate a return of structured graphics systems. Display PDF and XAML, in particular, seem particularly well-suited to a structured API. Do you see the X protocol evolving (either directly or through extensions) to better support structured graphics?
Jim Gettys: That doesn’t mean that the window system should adopt structured graphics.
Generally, having the window system do structured graphics requires a duplication of data structures in the X server, using lots of memory and costing performance. The
organization of the display lists would almost always be incorrect for any serious application. No matter what you do, you need to let the application do what *it* wants, and it generally has a better idea how to represent its data than the window system can possibly have.
Rayiner Hashem: What impact does the compositing abilities of the new X server have on memory usage? Are there any plans to implement a compression mechanism for idle window buffers to reduce the requirements?
Jim Gettys: The jury is out: one idea we’ve toyed with is to encourage most applications to use 16-bit-deep windows as much as possible. This might often save memory over the current situation, where windows are typically the depth of the screen (32 bits). The equation is complex, and the arguments are not all for or against either the existing or the new approach.
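(Editor’s note: to put rough numbers on this, a single 1280x1024 window kept off-screen at 32 bits per pixel needs 1280 x 1024 x 4 bytes, about 5 MB, while the same window at 16 bits per pixel needs about 2.5 MB.)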
Anyone who wants to do a compression scheme of idle window buffers is very welcome to do so. Most windows compress *extremely* well. Some recent work on the migration of window contents to and from the display memory should make this much easier, if someone wants to implement this and see how well it works.
Rayiner Hashem: What impact does the design of the new server have on performance? The new X server is different from Apple’s implementation because the server still does all the drawing, while in Apple’s system, the clients draw directly to the window buffers. Do you see this becoming a bottleneck, especially with complex vector graphics like those provided by Cairo?
Jim Gettys: No, we don’t see this as a bottleneck.
One of the *really* nice things about the approach that has been taken is that your eye candy’s (drop shadows, etc.) cost is bounded by the update rate of the screen, which never needs to be higher than the frame rate (and is typically further reduced by only having to update the parts of the screen that have been modified). Other approaches often have the cost going up in proportion to the graphics updating, rather than the bounded behavior of this design, and take a constant fraction of your graphics performance.
Rayiner Hashem: Could this actually be a performance advantage, allowing the X server to take advantage of hardware acceleration in places Apple’s implementation can not?
Jim Gettys: Without knowing Apple’s implementation details it is impossible to tell.
Eugenia Loli-Queru: How does your implementation compare to that of Longhorn’s new display system (based on available information so far)?
Jim Gettys: Too soon to tell. The X implementation is very new, and it is hard enough to keep up with what we’re doing, much less keep up with the smoke and mirrors of Microsoft marketing ;-). Particularly sweet is that Keith says the new facilities save code in the X server, rather than making it larger. That is always a good sign :-).
Rayiner Hashem: What impact will the new server have on toolkits?
Jim Gettys: None, unless they want to take advantage of similar compositing facilities internally.
Rayiner Hashem: Will they have to change to better take advantage of the performance characteristics of the new design? In particular, should things like double-buffering be removed?
Jim Gettys: If we provide some way for toolkits to mark stable points in their display, it may be less necessary for applications to ask explicitly for double buffering. We’re still exploring this area.
But the current Qt, GTK+, and Mozilla toolkits need some serious tuning independent of the X server implementation. See our USENIX paper found here. Some of the worst problems have been fixed since this work was done last spring, but there is much more to do.
Rayiner Hashem: How are hardware implementations of Render and Cairo progressing? Render, in particular, has been available for a very long time, yet most hardware has poor to no support for it. According to the benchmarks done by Carsten Haitzler (Raster) even NVIDIA’s implementation is many times slower in the general case than a tuned software implementation.
Jim Gettys: Without understanding exactly what Raster thinks he’s measured, it is hard
to tell.
We need better driver support (more along the lines of DRI drivers) to allow the graphics hardware to draw into pixmaps in the X server to take advantage of their compositing hardware.
Some recent work allows for much easier migration of pixmaps to and from the frame buffer where the graphics accelerators can operate.
An early implementation Keith did showed a factor of 30 for hardware assist for image compositing, but it isn’t clear if the current software implementation is as optimal as it could be, so that number should be taken with a grain of salt. But fundamentally, the graphics engines have a lot more bandwidth and wires into VRAM than the CPU does into main memory.
Rayiner Hashem: Do you think that existing APIs like OpenGL could form a foundation for making fast Render and Cairo implementations available more quickly?
Jim Gettys: Understand that today’s X applications draw fundamentally differently than your parent’s X applications; we’ve found that a much simpler and narrower driver interface is sufficient for 2D graphics: 3D remains hard. The wide XFree86 driver interface is optimizing many graphics requests no longer used
by current GTK, Qt or Mozilla applications. For example, core text is now almost entirely unused: I now use only a single application that still uses the old core text primitives; everything else is AA text displayed by Render.
So to answer your question directly, yes we think that this approach will form a foundation for making fast Render and Cairo implementations.
The fully software based implementations we have now are fast enough for most applications, and will be with us for quite a while due to X’s use on embedded
platforms such as handhelds that lack hardware assist for compositing.
But we expect high performance implementations using graphics accelerators will be running over the next 6 months. The proof will be in the
pudding, now in the oven. Stay tuned :-).
David Zeuthen: First of all it might be good to give an overview of the direction HAL (“Hardware Abstraction Layer”) is going post the 0.1 release since a few key things have changed.
One major change is that HAL will not (initially at least, if ever) go into device configuration such as mounting a disk or loading a kernel driver.
Features like this really belong in separate subsystems. Having said that, HAL will certainly be useful when writing such things. For instance a volume manager, as proposed by Carlos Perelló Marín on the xdg-list, should (excluding the optical drive parts) be straightforward to write insofar that such a program will just listen for D-BUS events from the HAL daemon when storage devices are added/removed, and mount/unmount them.
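(Editor’s note: a rough sketch of the listening side of such a volume manager, written in C against the libdbus API in its later, stable form rather than the snapshot available at the time. The “org.freedesktop.Hal.Manager” interface and the “DeviceAdded”/“DeviceRemoved” signal names are assumptions made for illustration, and the actual mount/unmount logic is left out.)

#include <stdio.h>
#include <dbus/dbus.h>

/* Assumed HAL interface name -- for illustration only. */
#define HAL_MANAGER_IFACE "org.freedesktop.Hal.Manager"

int main (void)
{
    DBusError err;
    DBusConnection *conn;
    DBusMessage *msg;

    dbus_error_init (&err);
    conn = dbus_bus_get (DBUS_BUS_SYSTEM, &err);
    if (conn == NULL) {
        fprintf (stderr, "cannot connect to the system bus: %s\n", err.message);
        return 1;
    }

    /* Ask the bus to deliver the HAL manager's signals to us. */
    dbus_bus_add_match (conn,
        "type='signal',interface='" HAL_MANAGER_IFACE "'", &err);

    /* Block for traffic, then drain the queue of incoming messages. */
    while (dbus_connection_read_write (conn, -1)) {
        while ((msg = dbus_connection_pop_message (conn)) != NULL) {
            if (dbus_message_is_signal (msg, HAL_MANAGER_IFACE, "DeviceAdded"))
                printf ("device added: a volume manager would mount it here\n");
            else if (dbus_message_is_signal (msg, HAL_MANAGER_IFACE, "DeviceRemoved"))
                printf ("device removed: unmount it here\n");
            dbus_message_unref (msg);
        }
    }
    return 0;
}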
Finally, the need for Free Device Information files (.fdi files) won’t be that big initially, since most of the smart busses (USB, PCI) provide device class information that we can map to HAL device capabilities. However, some devices (like my Canon Digital IXUS v camera) just report their class / interface as proprietary, so .fdi files are still needed for those.
There are a lot of other reasons for supplying .fdi files though. First of all, some capabilities of a device that DEs are interested in are hard or impossible to guess. For example, people should be able to use a digital camera or mp3 player as a storage device, as many people already do. Second, having .fdi files gives the opportunity to fine-tune the
names of devices and maybe even localize them into many languages. Third, we can advertise certain known bugs or deficiencies in the device for the libraries/servers using the device.
Rayiner Hashem: HAL seems to overlap in a lot of ways existing mechanisms like hotplug and kudzu. Will HAL interoperate with these projects or replace them entirely?
David Zeuthen: HAL might replace kudzu one day, when we get more into device configuration. In the meantime both mechanisms can peacefully coexist.
For linux-hotplug, and udev for that matter, I’d say the goal is definitely to interoperate, for a number of reasons: first of all, linux-hotplug is already widely deployed and it works pretty well; second, it may not be in a vendor’s best interest to deploy HAL on an embedded device (though HAL will be lean and only depend on D-BUS)
because of resource issues. Finally, it’s too early for HAL to go into device configuration, as noted above.
Rayiner Hashem: HAL is separate from the underlying kernel mechanisms that handle the actual device management. Is there a chance, then, that information could get out of sync, with HAL having one hardware list and the kernel having another? If so, are there any mechanisms in place that would prevent this from happening, or allow the user to fix things manually?
David Zeuthen: There is always the possibility of this happening, but with the current design I’d say that the chances are slim. Upon invocation of the HAL
daemon all busses are probed (via a kernel interface) and devices are removed/added as appropriate using the linux-hotplug facilities.
There will be a set of tools shipped with HAL; one of them will wipe the entire device list and reprobe the devices. I do hope this will never be needed though 🙂
Eugenia Loli-Queru: Gnome/KDE are multiplatform DEs, but HAL for now is pretty tied to Linux. If HAL is to be part of Gnome/KDE, how easy/difficult would it be to port it to the BSDs or other Unices?
David Zeuthen: With the new architecture most of the HAL parts are OS agnostic; specifically the only Linux-specific parts are less than 2000 lines of C code for handling USB and PCI devices using the kernel 2.6 sysfs interface. It will probably
grow to 3-4k LOC when block devices are supported.
The insulation from the OS is important, not only for supporting FreeBSD, Solaris and other UNIX and UNIX-like systems, but more importantly because it allows the OSes that said DEs run on to make drastic changes without affecting the DEs. So, maybe we won’t get FreeBSD support for the next release of HAL, but anyone is able to add it when
they feel like it.
I’d like to add a few things on the road map for HAL. The next release (due in a few weeks, give or take) will be quite simple insofar as it basically just gives a list of devices. It will also require Linux kernel 2.6, which may be a problem for some people (but they are free to write the Linux 2.4 parts; I already got USB support for 2.4).
Part of the release will also feature a GUI “Device Manager” to show the devices. Work-in-progress screenshots are here.
Post 0.2 (or 0.3 when it’s stable) I think it will be time to look into integrating HAL into existing device libraries, such that programmers can basically just throw a HAL object at the library and get it to do the stuff; this will of course require buy-in from such projects, as it adds D-BUS and, maybe, HAL as a dependency. Work on a volume manager will also be possible post 0.2.
It may be pretentious, but in time I’d also like to see existing display and audio servers use HAL. For instance, an X server could get the list of graphics cards (and monitors) from HAL and store settings in properties under its own namespace (FDOXserver.width etc.). This way it will be a lot easier to write configuration tools, especially since
D-BUS sports Python bindings, instead of having to edit an arcane XFree86Config file.
There are a lot of crazy, and not so crazy, ideas we can start to explore when the basics are working: Security (only daddy can use daddy’s camera), Per-user settings (could store the name of a camera for display in GNOME/KDE), Network Transparency (plug a USB device into your X-terminal and use it on the computing server you connect to).
The Fedora developers are also looking into creating a hardware web site (see here), so the device manager could find .fdi files this way (of course, this must be done in a distro/OS-independent way).
Rayiner Hashem: How is KDE’s involvement in the freedesktop.org project going?
Waldo Bastian: It could always be better, but I think there is a healthy interest in what freedesktop.org is doing, and with time that interest seems to be growing.
Rayiner Hashem: While it seems that there has been significant support for some things (the NETWM spec) there also seems to be a lot of friction in other places. This is particularly evident for things like the accessibility framework or glib that have a long GNOME history.
Waldo Bastian: I don’t see the friction, actually. KDE is not thrilled to use glib, but nobody at freedesktop.org is pushing glib. It has been considered for some things at some point and the conclusion was that that wouldn’t be a good idea. The accessibility framework is a whole different story. KDE is working closely with Bill Haneman to get GNOME-compatible accessibility support into KDE 4. Things are still moving a bit slowly from our side, in part because we need to wait on Qt4 to get some of the needed support, but the future looks very good on that front. TrollTech has made accessibility support a critical feature for the Qt4 release, so we are very happy with their commitment to this. We will hopefully be able to show some demos in the near future.
Rayiner Hashem: What are the prospects for D-BUS on KDE? D-BUS overlaps a great deal with DCOP, but there seems to be a lot of resistance to the idea of replacing DCOP with D-BUS. If D-BUS is not going to replace DCOP, are there any technical reasons you feel DCOP is better?
Waldo Bastian: D-BUS is pretty much inspired by DCOP, and being able to replace DCOP with D-BUS is one of the design goals of D-BUS. Of course we need to look carefully at how to integrate D-BUS in KDE; it will be a rather big change, so it’s not something we are going to do in the KDE 3.x series. That said, with KDE 3.2
heading for release early next year, we will start talking more and more about KDE 4, and KDE 4 will be a good point to switch to D-BUS. Even though KDE 4 is a major release, it will still be important to keep compatibility with DCOP as much as possible, so that’s something that will need a lot of attention.
Rayiner Hashem: What do you think of Havoc Pennington’s idea to subsume more things into freedesktop.org, like unified MIME associations and a VFS framework? What impact do you think KDE technologies like KIO will have on the design of the resulting framework?
Waldo Bastian: I think those ideas are spot on. The unified MIME associations didn’t make it in time for KDE 3.2, but I hope to get that implemented in the next KDE release. Sharing a VFS framework will be somewhat more difficult. Since the functionality that KIO offers is quite complex, it may not really be feasible
to fold it all into a common layer. What would be feasible is to take a basic subset of functionality common to both VFS and KIO and standardize an interface for that. The goal would then be to give applications the possibility to fall back to the other technology, with some degradation of service, in case a specific scheme (e.g. http, ftp, ldap) is not available via the native framework. That would also be useful for third-party applications that do not want to link against VFS or KIO.
Rayiner Hashem: A lot of the issues with the performance of X11 GUIs have been tracked down to applications that don’t properly use X. We’ve heard a lot about
what applications should do to increase the performance of the system (handling expose events better, etc). From the KDE side, what do you think the X server should do to make it easier to write fast applications?
Waldo Bastian: “Fast applications” is always a bit of a difficult term. Everyone wants fast applications, but it’s not always clear what that means in technical terms.
Delays or lag in rendering are often perceived as “slow”, and a more aggressive approach to buffering in the server can help a lot in that area.
I myself noticed that server-side font handling tends to cause slow-downs in the startup of KDE applications. Xft should have brought improvements there, although I haven’t looked into that recently.
Other KDE developers may have better examples.
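(Editor’s note: the client-side path Waldo alludes to is Xft2, which rasterizes glyphs with freetype inside the client and pushes them to the server through Render, instead of relying on server-side fonts. A minimal sketch; the font pattern string, the coordinates and the helper name are arbitrary, and the XftDraw and XftColor are assumed to have been set up elsewhere.)

#include <string.h>
#include <X11/Xft/Xft.h>

/* Draw one line of anti-aliased text with Xft instead of core X text. */
void draw_greeting (Display *dpy, int screen, XftDraw *draw, XftColor *color)
{
    /* The font is resolved by fontconfig from a pattern string. */
    XftFont *font = XftFontOpenName (dpy, screen, "Sans-12");
    const char *text = "hello, Render";

    XftDrawStringUtf8 (draw, color, font, 20, 40,
                       (const FcChar8 *) text, strlen (text));
    XftFontClose (dpy, font);
}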
Eugenia Loli-Queru: If Qt changes are required to accommodate changes needed for interoperation with GTK+ or Java or other toolkits, is TrollTech keen on
complying? If KDE developers do the work required, is TrollTech keen on applying these patches to their default X11 tree?
Waldo Bastian: TrollTech is overall quite responsive to patches, whatever their nature, but in some cases it takes a bit longer than we would like to get them into a Qt release. That said, we have the same problem in KDE, where we sometimes have patches sitting in our bug database that take quite a long time before they get applied. (Sorry BR62425!)
“So we need a registry of types documenting the type name and the format of the data transferred under that name. That’s it.”
Oh yeah, it has been in Windows since around Windows 1.0 or 2.0, I don’t know, but it has been at least 15 years.
This kind of article keeps me coming back to OSNews!
Anyway, as everyone I’m excited by the progress spawned by fd.o, especially in the regions of HAL and the new X Server.
KDE is perhaps a little more separated from fd.o than GNOME, but I can assure readers that KDE is just as committed to producing an interoperable desktop as the other project members.
Now all we need to hope for is better vendor support for graphics hardware for the new (and old of course) X Server. From what I have read recently that’s one thing the OSS movement lacks.
Yeah, that’s been in Windows since 2.0 (not 1.0). What difference does that make?
“Rayiner Hashem: What impact does the design of the new server have on performance? The new X server is different from Apple’s implementation because the server still does all the drawing, while in Apple’s system, the clients draw directly to the window buffers. Do you see this becoming a bottleneck, especially with complex vector graphics like those provided by Cairo? Could this actually be a performance advantage, allowing the X server to take advantage of hardware acceleration in places Apple’s implementation can not?”
“Keith Packard: I don’t think there’s that much fundamental difference between X and the OS X window system. I’m pretty sure OS X rendering is hardware accelerated using a mechanism similar to the DRI. Without that, it would be really slow.”
Maybe now you believe me that Apple’s system *does* use the hardware accelerator to draw into the window buffers <grin>
This is the best article I’ve read in a long time..well done Eugenia/Rayiner!
Some very good points raised, I look forward to what freedesktop.org has in store for us in the future.
Havoc didn’t understand anything about the autopackage project; that’s really a shame, because it really is a great project.
Also, autopackage will use native .debs or .rpms if they exist.
You need gail, libgail-gnome, at-spi, gnome-speech, festival, and gnopernicus for this to work. Haven’t tested it myself, but there you have the deps at least
I have gnopernicus and gnome-speech on my Slackware, I don’t know about the rest.
Actually, no I don’t. I don’t think that Keith Packard was stating that as known fact, but rather just offering a conjecture. Apple does have a DRI-like model, but only for OpenGL, not Quartz rendering. Quartz 2D rendering, unless something has changed drastically in Panther without Apple hyping it, is still done via the CPU. In fact, the big improvement in 10.2, besides Quartz Extreme (which we know is just the compositor), was hardware acceleration of window scrolling (essentially a bit-blit). Nowhere in Apple’s literature is anything about hardware acceleration of Quartz 2D mentioned (and it really is rather complicated — you can’t, for example, use the card’s regular 2D line operations because they aren’t anti-aliased). In fact, their SIGGRAPH presentation confirms that 2D rendering is done via the CPU.
Anyway, if the new X server gives us 2D via hardware, that will put it on par with Longhorn. I especially like the 6-month estimate that Jim Gettys gave.
PS> One further clarification. Raster was measuring the performance of Render vs. imlib for a number of compositing operations. While a simple composite was much faster via Render (thanks to NVIDIA’s hardware acceleration), anything that involved scaling was much slower. My guess is that NVIDIA’s acceleration only covers the simple case of non-scaled compositing. This would certainly make sense — since the primary user of Render (Xft, which uses it to compose AA text) doesn’t use any scaling features.
One question that has really been bugging me is: when will the new X server be ready for the public (estimated time)?
and
How long will the release cycle for KDE 3.3 be? (I really hope it is no longer than 6 months).
1. About a year, I would bet.
2. About six-eight months after the 3.2, I would think.
No one has hard numbers yet.
Havoc’s statements are exactly right with regard to RPM. Nothing wrong with the file, package, or format. It’s the UI and implementation that are shoddy.
@Alex: Well, KDE 3.2 will take at least until late January from the looks of it, so I’d peg KDE 4.0 at about a year from now.
Anyone up for a build to get this out to the masses? Not CVS access, but some debs or rpms?
I love the talk about the new X server and the HAL project (and finally a real device manager, yay!)
Great article!
OK… um, could someone explain exactly what the new X server is trying to accomplish? I’ve heard performance and better graphics, but I’m not really sure what that means.
[rb0.7]# festival
Festival Speech Synthesis System 1.4.3:release Jan 2003
Copyright (C) University of Edinburgh, 1996-2003. All rights reserved.
For details type `(festival_warranty)'
festival> (tts "/usr/share/apps/LICENSES/GPL_V2" nil)
[… listen and enjoy …]
SIOD ERROR: control-c interrupt
closing a file left open: /usr/share/apps/LICENSES/GPL_V2
festival> (quit)
[rb0.7]#
The way you present it here, this text2speech is _not_ part of the desktop, embedded automatically in all apps (which is what the question asks); you have to give it a text file to read.
BTW, my Fedora has a festival driver app, but not “festival” itself.
Well, the MIME type issue, even for DnD, isn’t as desperate as it sounds. Both KDE and GNOME understand MIME types and have their own MIME type databases; they also tend to use generally the same MIME type settings for DnD, more so internally than between the various implementations. And therein lies the problem: each desktop has its own set of mimetype definitions. So if you use apps from just one desktop, or are selective in which apps you use, you have few to no problems.
What has been lacking is a MIME type database/registry and a set of standardized types for DnD (and by extension cut ‘n paste) that everyone shares. FD.o is currently working on defining those standards. This is just like how FD.o has standardized other (meta)data registries and definitions, such as the .desktop file hierarchy or icon themes. This will lead to both better interop between the UNIX desktops as well as more consistency between individual apps, as we’ll then have a common definition to point to when some app gets it wrong.
The UNIX desktop is coalescing (as opposed to fragmenting). Tomorrow’s versions of KDE and GNOME will work even better together than today’s versions. This seems to be the opposite direction from the one closed source systems generally go in, isn’t it?
Sorry, there is no such thing as text2speech (that was my invention).
festival can do more than my example, not only read text from a file; for example:
festival>help
[…]
Doing stuff
(SayText TEXT) Synthesize text, text should be surrounded by double quotes
(tts FILENAME nil) Say contexts of file, FILENAME should be surrounded by double quotes
(voice_rab_diphone) Select voice (British Male)
(voice_ked_diphone) Select voice (American Male)
festival> (SayText "Great interview from OSNews")
festival>
I can’t comment on how it is embedded in Gnome or KDE, or whether it is present on Fedora (I use none of them).
I just presented the basic tool which does the work.
Hi
Yes. You are right. Gnome and KDE are converging. One significant area for which I find no roadmap for convergence is KParts/Bonobo. I am happy that fd.o’s work is being accepted by the free desktops without many ego clashes. End users must be able to use KDE or Gnome apps without worrying about any interoperability problems. That is the goal we are working towards.
Regards
Rahul
You need gail, libgail-gnome, at-spi, gnome-speech, festival, and gnopernicus for this to work. Haven’t tested it myself, but there you have the deps at least
I just got it to work on my FreeBSD 4.9 box. I needed to install, on top of my relatively standard install:
festival-1.4.1_1, festlex-oald-1.4.1, festvox-kal16-1.4.0, and I upgraded to libaudiofile-0.2.4
Possibly some other ports were installed along with this; I didn’t keep a log.
Also, after installation I needed to modify /usr/local/share/festival/lib/init.scm to select the correct audio routines (commenting out the options for Windows and Linux) - this avoids those SIOD ERROR messages.
All that I am saying is that Linux claims to be ahead on the innovation front, yet the simplest things that Windows has had for 15 years seem to escape Linux.
Today I remembered why I keep osnews on my bookmark bar: the interviews with people in the trenches. Thanks to both the interviewers and the interviewed for a great read and a great exposé of what’s up in the Open Source desktop world. =)
It really seems we’ve turned a pretty important corner: up ’til now we’ve been largely playing catch-up and “clone the leader”. With enough work done to have created a viable desktop, we’re now able to stop and do some introspection. It’s visible in this article how the community is consciously addressing the issues of interop and consistency while also probing into new areas of functionality and capability. This bodes well and makes for exciting times.
This is SO much better than anything I’ve seen in a long time on OSNews. After seeing “review” after review of what writers do and don’t like about every distribution, it’s really nice to see something on such a wide variety of important topics. It’s also nice because it’s just not one person droning on subjectively. Really a nice article, and it doesn’t make me think the site should have been named OSOpinions.com. More factual technology articles and fewer opinionated ones are the way to go.
So, mime-types then? What exactly is wrong with using mime.types like everything else? And these folks are supposed to lead us to linux desktop nirvana?
First of all, thanks to Eugenia and Rayiner for a great read. Let’s take a look at where autopackage was mentioned.
The argument that RPM is sufficient is worth examining. The theory goes that you can have a single RPM that works on all distros. That’s sometimes true, and in those cases, great. Of course it won’t integrate quite as nicely on Debian or Gentoo but tools such as alien do exist and using them is not such a hardship.
The problems come when it’s not possible to have a single RPM. Typically that’s because:
1. The metadata used by the distributions is different.
2. Different glibc versions are used.
3. Files and libraries need to be put/registered in different places.
(2) is kind of a red herring; there’s nothing inherent in RPM that makes this a problem, but the RPM build tools don’t really warn you about it or make it easy to avoid. Maybe in the future apbuild will be more widely used, and this will become less of an issue (it will always be a compromise).
(1) and (3) can cause problems. RPM cannot have multiple sets of metadata; the format and tools don’t allow it, and extending them to do so would be problematic. If distro A calls the Python runtime “python” and distro B calls it “python2”, then you have a thorny problem.
Let’s not even go into the issues of Epochs and such.
(3) is not often so much of an issue, but it can be sometimes, and RPM doesn’t make dealing with it easy. You could move/munge the files in post-install scripts, but then the file ownership metadata is wrong, and so on.
These things cause the RPM culture to be one of “one RPM for one version of one distro”. Part of the hope is that a new installer framework will start with a new culture – one of binary portability. While it’s true that RPM works fine on Solaris, how many Solaris RPMs have you seen on the net? I haven’t seen any. I’m not interested in making a new package manager with a new format, I’m interested in making installers that work “anywhere” (restricted to linux/gnu).
There are other advantages to using autopackage I guess, none of them seriously important. You can mix source and binary installs more easily. I think the specfile format is nicer. I’m not interested in getting into a pissing match over features though, it’s not worth the effort.
The final thing to note is that autopackage is an installer framework, not a package manager. Yes, at the moment you have the whole “package remove” command thing going… in the future I’d like to eliminate that stuff and go 100% with RPM/DPKG/portage integration. It’s technically possible to have “rpm -e whatever” work fine with stuff not installed by RPM, so we might as well remain consistent and do it.
It’s been years and we still have RPMs being built and rebuilt constantly, with millions of different files for different distros - if it’s possible to unify that so the job only has to be done once, is that not worth a shot? I think it is. Maybe one day Havoc will agree with me.
It’s not a matter of just using MIME types, but of agreeing on what MIME types to assign to different data.
And in response to Aaron J. Seigo: MIME types are registered with IANA (http://www.iana.org/assignments/media-types/index.html). There is already a standard.
Hi,
“All that I am saying is that Linux claims to be ahead on the innovation front, yet the simplest things that Windows has had for 15 years seem to escape Linux.”
We can’t all be first, right? Linux != XFree86.
But perhaps you could have roughed up the HAL guy a bit more; there were some things, like when you asked him how portable HAL is to other kernels and he said it’s OS-agnostic but didn’t really give any details. Also, if it requires a kernel interface, how are they going to port it to closed source and/or non-Unix OSes?
Also, if you’re going to ask them to compare against Quartz, you could also have asked them about Fresco and DirectFB.
Another thing that comes to mind would be asking the Gnome and KDE guys when they are going to make their software independent of the graphics lib (for the X haters among us 😉.
But anyway, great interview, keep up the good work.
Great article. It’s great to see that such an all-star programming cast has rallied around FD.org.
I just bought a video card with 128MB of video RAM, so bring on the composition engine! What’s really neat is that it’s still the same ole’ X, so if you want to use your grandfather’s XFree86 3.3.6 you can, and all applications will work. But if you’ve got the hardware, go with the FD.org X Server. Wouldn’t it be ironic if by the time Longhorn hits the street composition engines were considered old hat? “That’s so 2004, even Debian has had one for years.”
This is one of the very best articles I’ve read on OS News. Well done 😉
I’m perfectly aware of the IANA MIME type registry. KDE has a few MIME types of their own registered there. This isn’t a problem of saying “OK, if it is a MS Word file we agree to use
application/msword and expect the file to have a .doc extension.” It goes much deeper.
For instance: when we aren’t dealing with a file, but a chunk of data, what mimetypes do we use? E.g., if I drag and drop an image that is also a clickable link, what MIME types should be used, and in which order should they be presented? If I drop it on a graphics app, it should probably display the image for editing. But if I drop it on a web browser, perhaps it should load the URL in the link. In this case, both a link and an image MIME type should (IMHO, anyways =) be provided in the DnD information, and that behaviour should be standardized. Furthermore, should all DnD’d graphics have both a raw bitmap type available (which all apps should be able to use, theoretically) as well as the actual graphic format it’s stored in (e.g. image/jpeg or image/png)? The questions aren’t that hard, but the answers need to be standardized so that we don’t have multiple answers that may all be slightly different and therefore a block to interop (which is the current situation).
Another example: when I have a MS Word document, which application should be launched? mime.types doesn’t provide that information (and shouldn’t). Currently if I want to open it in KWord I need to define that in each desktop environment I wish to use separately. That’s absurd. Or, when I use Open Office Writer and it looks to the MIME type associations for what to do with a PDF file (for instance), it certainly doesn’t look at the same information contained in my KDE settings. Ditto for Mozilla. Internally each is (generally) consistent, and they (generally) use the same IANA-defined MIME types (except for data types not registered with IANA), but between the various systems they aren’t consistent. IANA can’t fix that; mime.types won’t address it; ergo the FD.o MIME type standardization.
Nice article. Good work Eug and Ray.
I hope it becomes a trend.
I want to add my voice to those congratulating OSNews for this fine piece of journalism.
Me too.
Great job, and thanks for a fantastic article.
Excellent interview with extremely interesting technical insights… I want more !
Howdy
Hmmmm, this has always seemed rather hard in a Linux distro. I mean, I click on an HTML page in Mandrake 9.0 and get an HTML editor! I don’t quite know why they did this, when 99% of the time you would want to view it and not edit the thing *sigh*
Then the fun begins: you want to change the association, but good luck, as I’ve yet to make it work (maybe this has become more refined now; I’ll buy a new distro soon).
It’s little things like this that we should be fixing as well as big things like X.
@Anon E Moose
Well, it’s good to see my relatives are interested in this too :p
Just my two cents, thank you all for your time.
“All that I am saying is that Linux claims to be ahead on the innovation front,”
If the kernel’s talking to you and making boasts about itself, I think it’s won the claim to innovation. If you’re talking about ‘people’ making that claim, no shit. I don’t care what platform it is; people spouting off about how innovative they are almost inevitably overestimate both themselves and whatever they’re talking about, no matter if we’re speaking of Windows, Linux, or almost any other field in the world. I don’t think I’ve seen anything in the computing field that I’d actually call innovative on any operating system since Clippy burst onto the scene.
“All that I am saying is that Linux claims to be ahead on the inovation front,”
Linux developers never claimed to be innovative. They’re just working on what they think they should. They’ve never bragged, “OMG, look at us, we’re so innovative!!!”
This is in contrast to Microsoft, which does claim to be innovative but really isn’t.
“but yet the simplist things that Windows has had for 15 years seem to escape Linux.”
And in other news, the simplest things that Unix has had for 15 years seem to escape Windows.
All I can say is: so what? What matters is that it’s here NOW. Whining about how Windows has had it for x years won’t change a thing.
I seem to recall a while ago that there was a very public falling out between Keith P and the XFree86 team. I’m curious as to whether the changes/improvements to X being discussed here will be merged back into the main X tree, or if it will become a completely separate release…..
Gogs
It will be a separate release, but not a fork. The core of the new X Server (the server is called XWin I think, the core is called Kdrive) is rewritten by Keith, and other major parts are also rewritten, so they are not forked off XFree86. In fact, there are some new parts now that XFree86 doesn’t have at all. Other parts of XFree86 though, like drivers, some extensions etc., will be forked off indeed.
Appears to be a complete rewrite actually. All this new stuff is based on kdrive, Keith Packard’s own X server. I don’t know what the plans for folding it back into XF86 are. All of this is still very experimental.
Any idea about how fonts are going to be handled? i.e., will there have to be a rewrite of FreeType, Qt, and GTK+ as well, or will kdrive be backwards compatible?
Let's see who can build the new Linux UI first.
1) Fonts will almost certainly be handled through Xft2 + the Render extension. There is no point rewriting FreeType — it’s damn good already. (A minimal Xft2 sketch follows below.)
2) There will be no rewriting of Qt or GTK+. If that was the case, this new server would never make it. KDrive is just another X server, it speaks the same X protocol, so all X apps will be compatible with it. However, toolkits will need to be modified to take better advantage of the new server’s functionality. For example, it would be nice to have GTK+ and Qt render through Cairo natively. According to the interview answers, some additional coordination will probably also be necessary to fix the opaque resize problem.
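To illustrate point 1: text already goes through a client-side path that the new server keeps — FreeType rasterizes the glyphs, and Xft2 hands them to the Render extension. A minimal sketch, assuming an existing Display and Window; the font name and colour are placeholders:

```c
#include <X11/Xlib.h>
#include <X11/Xft/Xft.h>
#include <string.h>

/* Draw a line of anti-aliased text into an existing window using Xft2,
 * which renders through the RENDER extension when the server supports it. */
static void
draw_label (Display *dpy, Window win, const char *text)
{
    int       screen = DefaultScreen (dpy);
    Visual   *visual = DefaultVisual (dpy, screen);
    Colormap  cmap   = DefaultColormap (dpy, screen);

    XftDraw *draw = XftDrawCreate (dpy, win, visual, cmap);
    XftFont *font = XftFontOpenName (dpy, screen, "Sans-12"); /* resolved by fontconfig */
    XftColor color;

    XftColorAllocName (dpy, visual, cmap, "black", &color);
    XftDrawStringUtf8 (draw, &color, font, 10, 30,
                       (const FcChar8 *) text, strlen (text));

    XftColorFree (dpy, visual, cmap, &color);
    XftFontClose (dpy, font);
    XftDrawDestroy (draw);
}
```

The same code should work against a plain XFree86 server too, since Xft can fall back to core rendering when Render isn't available — another reason no toolkit rewrite is required.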
This article is linked at /.
http://slashdot.org/articles/03/11/24/1751227.shtml?tid=104&tid=121…
Contains a few (imo) interesting comments.
If you’re running Gentoo and would like to run/test it, check this out:
http://www.gentoo.org/news/en/gwn/20031124-newsletter.xml#doc_chap4
At the bottom of the interview with Havoc Pennington, Eugenia notes that screenreader is greyed out in Fedora. I am running Fedora, and saw the same thing. It says that gnopernicus must be installed. Therefore,
yum install gnopernicus
and text-to-speech works.
While it’s cool, it’s also annoying as hell — talks too much, you could say. Does anyone know of another way to access the underlying text-to-speech software? It would be incredibly useful if I could paste text into a textbox and press read. (Like SimpleText on Mac OS 7+.)
This is why I mentioned Mac OS X: the way OS X does it is far better than text-to-speech in a separate app. The speech itself can be triggered from within the app. For example, in TextEdit, you can select a piece of text, right-click on it, and select “Start Speaking”. It feels more integrated into the OS, and doesn’t make people who need accessibility features feel forgotten and forced to use a third-party app.
Nice article, very interesting in-depth information.
Shocking for me to notice that OS X doesn't use 2D hardware acceleration! Why is Apple throwing away the speed gain?
> Another thing that comes to mind would be asking the gnome
> and kde guys when are they going to make their software
> independent from the graphics lib (for X haters of us 😉
Whoosh.
That was the sound of the point of the article going over your head.
Eugenia, you need the “gok” package for the screen reader functionality to work. Your distro should be able to resolve its dependencies for you. Good luck.
Ah silly me, I just realized you need gnome-speech, gnome-mag and gnopernicus. Sorry if this has already been mentioned.
Great article.
About the XServer/XFree86/Kdrive confusion:
Kdrive was initially a heavily modified version of XFree86 to allow it to run on low-memory devices such as handhelds. It uses as little memory as possible by performing a lot of calculations at runtime instead of storing precomputed data in memory. Due to the high load latencies (relative to clock speed) of modern hardware, this actually becomes a speed optimization in some cases. It has less code duplication with the kernel (there’s still work to do in that area, but it needs syncing with the kernel guys). The internals are also cleaned up, and it’s supposed to be easier to work with and modify.
For these reasons, Keith Packard chose to base his new efforts on Kdrive rather than just copying the source tree from XFree86.org. At some point it was renamed to Xserver (not XWin; that is just a website, as the title at xwin.org says. And it looks pretty dead now, but fd.o is hosted there).
Xserver doesn’t support everything XFree86 does. Most of these features are useless anyway (like PIE, Ximage and a bunch of other obsolete stuff). The driver modules are gone too (Xserver is more like Xfree86 3.x with separate servers for each card). That’s not too bad since the graphics driver infrastructure in linux is in great need of an overhaul anyway.
X bloatedness was overrated to begin with, but with the new Xserver + the new X C bindings things will get even leaner.
Another nice fact is that the “unsnappiness” of X is being solved on two different fronts, using two different approaches, and we will end up with the benefits of both.
As XDirectFB shows, reducing the number of expose events significantly improves the feel of the desktop. Xserver will have that, and combined with the kernel CPU scheduling efforts by Andrew Morton, Nick Piggin, Con Kolivas and others, things will get supersnappy and ultrafast. Like running TWM on a supercomputer.
Am I the only one that thinks that Keith Packard looks like Steve Ballmer (minus the sweaty armpits and the monkey dance)?
That is an incredible summation. Thank you very much.
Does the Xserver conform to standard X guidelines?
This is interesting because, if not, it might be difficult for the standard commercial Unix folks trying to get some of the GNOME stuff to compile around specific Linux hooks and such.
It sounds very interesting and I really hope they get the card support rocking and distros start switching.
Speed is king.
Apple does use hardware acceleration, but in very limited cases. It uses bit-blit acceleration for stuff like scrolling, so the CPU isn’t stuck moving large blocks of pixels around. They also use OpenGL, of course, to composite those transparent windows together.
However, they don’t appear to use acceleration for stuff like line or polygon drawing. Current hardware has a sharp divide between 2D and 3D components. The 2D components traditionally used by Windows, Mac OS, and X don’t support anti-aliasing, alpha blending (transparency), gradients, or anything like that. Since OS X uses very high-quality vector graphics, with everything anti-aliased and transparent and whatnot, it can’t use the existing 2D acceleration, except for the aforementioned bit-blit functionality.
That’s why everyone’s going 3D, because even the cheapest 3D hardware supports that functionality. Unfortunately, taking advantage of that hardware is complex. Consumer-level graphics cards are game-oriented — they often make quality compromises (unacceptable for high-quality 2D), and aren’t designed to handle more than one rendering application at a time. Overall, it’s just a hard problem to solve.
Longhorn is 3 years away, and I really do hope that the OSS community can throw out something just as good, UI wise.
First of all, great article!
How about gaming, though? If some of the video RAM is taken, will this reduce the available video RAM when running a game? Or will it just automatically swap out the unused stuff when necessary until the game is closed and the window contents are required again? I also wonder where it would swap it to and how much of a performance penalty this would be if it happened during a game.
I hope Keith and freedesktop.org can figure out what Mike is trying to accomplish with Autopackage. It seems obvious from Keith’s comments that he doesn’t get it. Installing packages was the first wall I hit when I first tried out Linux. If we want people to take Linux seriously as a desktop system, something like Autopackage is sorely needed. My hat’s off to Mike Hearn and all the contributors for all their great work. Keep it coming!
Err it was Havoc who commented on autopackage not Keith. My bad!
>> Am i the only one that thinks that Keith Packard looks like Steve Ballmer (minus the sweaty armpits and the monkey dance) ??
You mean Jim Gettys?
Yeah, that’s what I thought!
I think GNOME and KDE should focus on kdrive more and more, and make it difficult for even the UNIXes not to use it.
I am hoping this will become the best desktop and high performance workstation solution.
My reason is simple: make X.org and Xfree86.org irrelevant.
make X.org and Xfree86.org irrelevant.
Especially the X.org folks, but I feel you on the XFree thing too. People, however, emphasize X itself too much and ignore that everything from the application to the window manager back to the widget toolkit itself can make things feel slow in the X world.
That is not to ignore the problems with XFree86 at all. I just hope we do not get too wrapped up in this whole thing only to realize very small eye-candy gains and be left with hardware support worse than before.
OK, that being said, I think we can only hope the commercial *nixes jump on board, but you have to realize the network model is actually a part of X that is used in the corporate *nix world.
The old stuff that people always complain about as cruft is commonly used out there in the old-school corporate Unix world and has to be supported before the big boys will play ball.
With my display exported while logged into a box over the LAN, I commonly brought up GUI admin tools like the NetBackup GUI or the Veritas system tools.
That’s why everyone’s going 3D, because even the cheapest 3D hardware supports that functionality. Unfortunately, taking advantage of that hardware is complex. Consumer-level graphics cards are game-oriented — they often make quality compromises (unacceptable for high-quality 2D), and aren’t designed to handle more than one rendering application at a time. Overall, it’s just a hard problem to solve.
Any idea if any of the major PC 3D vendors have any plans to support virtualized GPUs the way SGI did? This would seem to solve the multiple apps rendering issue…unless I’m thinking of something entirely different.
Yep, it would, but I have heard absolutely nothing about this. The nice thing, though, is that it’s more a matter of what software developers want rather than what hardware developers plan. If 3D-accelerated desktops take off (with Longhorn), hardware manufacturers will put in support. The only catch is that we’d have to wait till 2006, and this seems like it will be ready before then!
> I love the talk about the new X server and the HAL project (and finally a real device manager, yay!)
Me too, I agree!
Virtualized GPUs?
I am really getting tired of people bitching about Linux software installation.
The average Windows user has a way of messing up their Win* installation by installing/uninstalling a bunch of crap. More than half of this software never really gets uninstalled, and Win* uninstallers most of the time leave files behind and entries about the software in the registry.
With that said I am sure rpm/deb could be improved on:
Standardize labeling, versioning, and dependency conventions for software. Some packagers label, organize, and declare varied dependencies for the exact same piece of software they package.
Check out freshrpms, atrpms, and ximian to name a few.
We could adopt the Java package namespace convention to solve this problem: org.gnome.* or org.kde.*
The label, version, and (internal) dependencies must come from the developers/organizations that create/provide the software.
Let's solve the real problem here: no standards.
SGI systems have virtualized their GPUs for a while now. Basically, a virtual GPU means that the real graphics hardware is fully abstracted from the application, and the OS has full control over managing the HW. Just as Windows and Linux have virtual memory and a virtual CPU (preemptive multitasking), IRIX machines have a virtual GPU. You can throw more graphics pipes into the virtual GPU pool, and automatically get increased performance. Also, the abstract nature of the interface makes it easy to share the GPU among many concurrently rendering applications, just as preemptive multitasking makes it easy to share the CPU among concurrent applications.
Congratulations to OSNews for this excellent article!
While I do think that some of the questions could have been a bit different, I really liked this article!
To make it short: This is why I really like to read OsNews! Keep it up!
cu Martin
Thanks for the info Rayiner.
It makes me wonder if nVIDIA or ATI have this functionality in their non-consumer (workstation-class) GPUs.
Standards: tried it, didn’t work out too well. There was little appetite for such a thing. While Red Hat were enthusiastic, Debian were cautious, and Gentoo were also enthusiastic until internal politicking ended it; getting standards for package metadata was compared to me in private to “asking for world peace”. There are archives on the net if you want all the gory details.
The autopackage approach doesn’t require everybody to suddenly relabel millions of packages, good though that would be. Modern dependency trees are huge, and standardising them all is very hard indeed. Maybe somebody else can do this; I don’t know. If they can, then best of luck to them.