It seems there are too many windowing systems intended to replace X Windows these days. Names such as DirectFB, Berlin/Fresco, PicoGUI, Cosmoe, and the fork (really?) Xouvert are getting a lot of news coverage. But are any of them really ready? I don't think so, since the only windowing system that works well and gets the job done on an open source OS is still X Windows.
This project shows good promise in terms of compatibility with Xlib; I'm hoping they can pull it off and make it available and usable as soon as possible. However, I still think Microwindows is a good alternative, since it provides a compatibility library for existing Xlib code; the pity is that not many developers seem interested in it. Anyone who wants to know more can visit http://www.microwindows.org
From what I understand it complements their own OS as well.
Well, this is not a replacement for X. This is the windowing system of JourneyOS, and it just happens to be Xlib-compatible.
I mean, ask anyone in the Linux dev world and they will tell you that X is still around more because of the video drivers than because of the need for Xlib compatibility (though that is needed, it is less difficult to recode some apps, or write a wrapper, than it is to write a driver for hardware you have absolutely no idea how to program).
The design of this system is really great. These guys seem to have a good grasp of the issue, and are avoiding some key mistakes made by others.
0) Retains the client/server mechanism, and doesn’t go for some “direct access” nonsense that won’t help anyway. This also fits the overall design pretty well. Vector graphics are quite compact, and it’s much cheaper to send the vector representation over the wire than to send a prerendered image. (For example, a 200x100 button prerendered at 32 bpp is about 80 KB, while the SVG rectangle-plus-text that describes it is a few hundred bytes.)
1) Complexity is left on the client side. The client usually knows better than anything else how it wants its window to look, so it’s best from a performance standpoint to put most logic client-side. All the server does is draw images — the client decides *what* to draw.
2) Has the server act on SVG documents specified in vector form. This is a key advantage over the way OS X does it. In OS X, all drawing is done by the client into a memory buffer; acceleration is used only for the final compositing. In this model, the client clearly specifies how the canvas should look, and the server does all the drawing, taking advantage of any acceleration mechanism available. Experiments with rendering SVG graphics via OpenGL (which this is theoretically capable of) have shown up to a 100-fold increase in performance.
3) Does away with mandatory huge window buffers. Another weakness of the OS X design — each window has to have a chunk of memory the size of the window allocated to it, even if it is behind other windows. This is so the display server can repaint the window without calling the app’s repaint function. With this model, the server can cache each window’s *vector* representation (usually much smaller than an entire window buffer) and redraw from that representation.
Overall, it seems like a very nifty project implemented by some very level-headed people. I really wish these guys the best of luck.
Caveat> The “HyperQueues” mechanism they mention is pretty nifty, but has a critical weakness — it depends on the x86 segmentation mechanism. The segmentation mechanism can be abused to allow apps to access memory contexts other than the current one. Other CPUs have formal mechanisms for doing this, but not all. The AMD64 architecture abandons the segmentation mechanism, but doesn’t provide a replacement for this sort of use. Clearly, HyperQueues are not something feasibly implementable in a general purpose OS.
I stopped reading as soon as they mentioned that the protocol is XML-based.
Um, SVG is XML-based, hence the XML-based protocol. The data stream is compressed, so the speed hit should be minimal. SVG is also highly optimized to be fast to parse. Any vector representation would have some level of parsing overhead, so the whole complaint is completely inane.
This document doesn’t seem to delve into a number of specific implementation problems, and I am curious how they are going to resolve them…
First, they suggest using XML as the communication protocol, and then using WBXML to compress it. Yet they also mention that they have engineered a high-speed IPC system for their operating system, so shouldn’t they simply use normal text XML documents rather than go through a needless conversion? All I can think of is that instead of ever producing a textual XML document, all messages are generated in WBXML format in the first place, and processed the same way. But if so, why move to ASN.1 and give up the advantages of XML such as schema validation? ASN.1 parsing problems were the source of some of the recent OpenSSL vulnerabilities. They mention computational complexity, so either they are converting XML to WBXML, or, if they are skipping the intermediate step of XML generation, directly creating WBXML documents is perhaps a non-trivial process. I certainly can’t say I’m a fan of the use of SVG, or of XML at all for that matter…
Next, there is absolutely no mention of font handling. One can only hope they will be using FreeType 2. I think this is a rather important aspect of any window system, and it’s been completely omitted from their description.
The document is highly critical of MacOS X’s approach to using hardware-accelerated compositing. In the case of MacOS X, this was done for very good reasons: upgrading memory is trivial compared to upgrading a processor, especially in the case of portables. Quartz consumes a large amount of memory to minimize overhead on the processor.
I’m not sure if I like the direct conversion of SVG to OpenGL entities. First, this would only be beneficial if windows were rendered entirely in vector form. I also worry about how they will intelligently handle constantly changing SVG documents and convert those into OpenGL primitives. This server will also be placing a great deal of load on the 3D subsystem, so they will require excellent drivers for high-performance 3D hardware, a task which is decidedly non-trivial. It also means that they won’t be able to take advantage of the SNAP graphics architecture, which would give them instant access to a number of 2D graphics drivers.
It’s certainly an interesting idea, but there are a lot of implementation problems that will require highly complex solutions, and I wonder if they’ll be able to pull it off.
You’ve pointed out some very good features of this; the unfortunate thing, however, is that the likelihood of it replacing X on Linux is pretty low. In a perfect world, distributors would view this as a “great leap forward” and get behind its development.
Coupled with the nifty Frontiers GUI System, Linux could move forward and overtake Windows and MacOS in terms of GUI responsiveness, consistency, and slimness (non-bloat).
The other platform that could benefit would be FreeBSD. If there are enough programming gurus out there, I am sure they would be interested in making it available as a port for those who wish to run it instead of XFree86.
So, shouldn’t they simply use normal text XML documents as opposed to a needless conversion?
>>>>>>>>>>>>
SVG was designed to be transmitted in some compressed form, because the text form is a bit verbose. It’s almost always used that way in situations where transmission time matters.
All I can think of is that instead of ever producing a textual XML document, all messages are generated in WBXML format in the first place, and processed the same way.
>>>>>>>>>>
There is no indication that they are not doing this, but remember that WBXML is a very compact format, and some conversion might be required anyway.
I certainly can’t say I’m a fan of the use of SVG, or XML at all for that matter…
>>>>>>>>>>>>
The nice thing about SVG is that it’s standardized, and a lot of work has been put into SVG renderers. There is even a prototype OpenGL SVG renderer already available.
Next, there is absolutely no mention of font handling. One can only hope they will be using FreeType 2.
>>>>>>>>>>>>>
It can be assumed, since FT2 is pretty much the only choice as far as font renderers go. It’s also very fast and easy to use, so it’s a no-brainer.
Quartz consumes a large amount of memory to minimize overhead on the processor.
>>>>>>>>
On the other hand, manipulating large memory buffers puts a lot of pressure on the cache, and with SVG rendering being as fast as it is, there is really no point in dedicating all that memory to screen buffers when it could be better spent on (for example) the disk cache.
I’m not sure if I like the direct conversion of SVG to OpenGL entities. First, this would only be beneficial if windows were rendered entirely in vector form.
>>>>>>>>>>
I don’t really understand this statement. OpenGL (with the appropriate texture-handling extensions) handles bitmaps as fast as any 2D system. It handles vector graphics a whole lot faster. So there is really no downside to using OpenGL. And their goal *is* to have most windows be rendered in vector form, unlike Quartz, where most things are still bitmaps.
I also worry about how they will intelligently handle constantly changing SVG documents and convert those into OpenGL primitives.
>>>>>>>>>
If I were them, I would cache all scenes as display lists. The first time a scene is displayed, you’d compile the display list and then display it, which would cause a minor hit on the first frame but would allow subsequent frames to be displayed very fast. I’d also provide a flag the application could set for rapidly changing canvases; if that flag was set, the server would just render the canvas in immediate mode each time. This shouldn’t be a big deal, because immediate mode is highly optimized in OpenGL, and you can usually get at least 50% of full performance in that mode.
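Roughly what I have in mind, as a minimal sketch in legacy OpenGL. The scene struct and the render_svg_scene() call are hypothetical names I made up for illustration, not anything from the white paper:

    /* Sketch: compile a window's SVG scene into a display list the first
     * time it is drawn, then replay the list on later frames. */
    #include <GL/gl.h>

    typedef struct {
        GLuint gl_list;    /* 0 means "not compiled yet" */
        int    fast_path;  /* app-set flag for rapidly changing canvases */
    } scene_cache_t;

    extern void render_svg_scene(const void *svg_doc);  /* assumed SVG renderer */

    void draw_scene(scene_cache_t *scene, const void *svg_doc)
    {
        if (scene->fast_path) {
            /* Canvas changes every frame: skip caching, draw immediately. */
            render_svg_scene(svg_doc);
            return;
        }
        if (scene->gl_list == 0) {
            /* First frame: compile the display list and draw in one pass. */
            scene->gl_list = glGenLists(1);
            glNewList(scene->gl_list, GL_COMPILE_AND_EXECUTE);
            render_svg_scene(svg_doc);
            glEndList();
        } else {
            /* Later frames: just replay the cached list. */
            glCallList(scene->gl_list);
        }
    }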
This server will also be placing a great deal of load on the 3D subsystem, so they will require excellent drivers for high-performance 3D hardware, a task which is decidedly non-trivial. It also means that they won’t be able to take advantage of the SNAP graphics architecture, which would give them instant access to a number of 2D graphics drivers.
>>>>>>>>>>>
Software OpenGL is rather slow, so they should provide an optimized software SVG back-end. An optimized SVG back-end can be extremely fast (try the AntiGrain renderer demos). SNAP would be largely useless for rendering SVG, because no 2D cards really accelerate the operations SVG requires.
An obvious problem with using OpenGL to render everything is what happens with graphics involving text. How do you do the compositing? It would be silly to render the text in OpenGL via its outlines, so you need to at least render the text onto a texture and compose it with the other graphics. Then there is the question of anti-aliasing. What gets done where?
Doing the fonts as high resolution textures that get downsampled will waste texture memory on the graphics card, and hinting and grid-fitting of the font will be impossible (the graphics card will only see a bitmap texture.)
If the fonts are rendered as a texture, and that texture is positioned with sub-pixel precision, then again, the font will not be drawn correctly (you handle rendering, anti-aliasing, etc. of the outline after you’ve positioned it.)
This, and similar considerations, mean that a PDF viewer, for example, must do all its compositing in software, rendered to a texture, and then use OpenGL only for compositing, as done with Apple’s Quartz Extreme.
So we are left with anti-aliasing text by rendering to a texture and using the alpha channel (which I presume is what they’re going to do.) Possibly using a single texture per glyph, depending on how many small textures the graphics card can handle effectively.
>> because immediate mode is highly optimized in OpenGL, and you can usually get at least 50% of full performance in that mode.
lol, that’s so untrue..
Immediate mode is dog slow (function overhead, setup of variables, …) and should be killed asap.
(the reason being that many (beginning) people use it to render their graphics while things could be much much faster by using the real meat)
Killing it is one of the very few things that Microsoft ever did correctly with their D3D.
Sigh. That’s not true for the type of relatively simple geometry (textured triangles) that would be used for an SVG renderer. Check out the benchmarks on the CAVE website:
http://www.evl.uic.edu/pape/CAVE/linux/nVidia/charts.html
Check the textured triangle benchmark on the GeForce2, which would be the representative case. For < 8 pixel triangles, the performance for display lists is 15M tri/sec, while the performance for immediate mode is 9M tri/sec. For 16 to 64 pixel triangles, the performance difference between immediate mode and display lists is less than 1M tri/sec.
These numbers mesh well with my experience using OpenGL.
Remember, OpenGL is an immediate mode API, and the immediate mode entry points are highly optimized on good implementations.
What gets done where?
>>>>>>>>>
It’s really very simple. Say you want to render ‘A’ in 12-pt Helvetica. Using the FreeType API, you ask the font renderer to render exactly that character, at that size, at a given DPI. The renderer hands you an 8-bit pixmap that represents the exact image of the glyph, with all hinting and anti-aliasing already performed.
Up to this point, an OpenGL system is the same as existing systems. Now, with existing systems, you take this glyph, and use it as the alpha mask for an alpha-blend operation to compose the glyph onto the window. In an OpenGL system, you just use the pixmap as the alpha-texture for an appropriately colored quad. OpenGL then does the alpha-blending for you.
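To make that concrete, here is a minimal sketch assuming FreeType 2 and legacy OpenGL, with error handling omitted and non-power-of-two texture support assumed. The glyph_to_texture() and draw_glyph() names are mine, not anything from the white paper:

    /* Sketch: render one glyph with FreeType 2, then compose it as an
     * alpha-blended quad in OpenGL. */
    #include <ft2build.h>
    #include FT_FREETYPE_H
    #include <GL/gl.h>

    GLuint glyph_to_texture(const char *font_path, unsigned long ch,
                            int pt_size, int dpi)
    {
        FT_Library lib;
        FT_Face    face;
        GLuint     tex;

        FT_Init_FreeType(&lib);
        FT_New_Face(lib, font_path, 0, &face);
        FT_Set_Char_Size(face, 0, pt_size * 64, dpi, dpi); /* 26.6 fixed point */
        FT_Load_Char(face, ch, FT_LOAD_RENDER);            /* hinted + antialiased */

        /* The 8-bit coverage bitmap becomes the alpha channel of a texture. */
        FT_Bitmap *bm = &face->glyph->bitmap;
        glGenTextures(1, &tex);
        glBindTexture(GL_TEXTURE_2D, tex);
        glPixelStorei(GL_UNPACK_ALIGNMENT, 1);
        glTexImage2D(GL_TEXTURE_2D, 0, GL_ALPHA, bm->width, bm->rows, 0,
                     GL_ALPHA, GL_UNSIGNED_BYTE, bm->buffer);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
        return tex;
    }

    /* The "appropriately colored quad": OpenGL does the alpha blend. */
    void draw_glyph(GLuint tex, float x, float y, float w, float h)
    {
        glEnable(GL_TEXTURE_2D);
        glEnable(GL_BLEND);
        glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
        glBindTexture(GL_TEXTURE_2D, tex);
        glColor3f(0.0f, 0.0f, 0.0f);   /* text color */
        glBegin(GL_QUADS);
        glTexCoord2f(0, 0); glVertex2f(x,     y);
        glTexCoord2f(1, 0); glVertex2f(x + w, y);
        glTexCoord2f(1, 1); glVertex2f(x + w, y + h);
        glTexCoord2f(0, 1); glVertex2f(x,     y + h);
        glEnd();
    }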
This, and similar considerations, mean that a PDF viewer, for example, must do all its compositing in software, rendered to a texture, and then use OpenGL only for compositing, as done with Apple’s Quartz Extreme.
>>>>>>
Almost all font systems render text via software, because font rendering is so complex as to defeat any practical acceleration. However, performance is fine because glyphs are cached, so the only operation involved in drawing text is composing some glyph pixmaps. Your hypothetical PDF viewer, however, could still use OpenGL to draw non-text elements.
Possibly using a single texture per glyph, depending on how many small textures the graphics card can handle effectively.
>>>>>>>>>>
In practice, you’d almost always use a single texture per glyph. There are only 30-some possible glyphs for most languages, and even with multiple font sizes and styles, that makes up a paltry few hundred pixmaps in the average cache. Thanks to AGP, you can keep most of these pixmaps bound to OpenGL textures, and the card will automatically keep the most-often-used glyphs in local card memory.
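A toy version of such a cache, assuming the glyph_to_texture() helper sketched above; the hash-slot eviction policy here is purely illustrative:

    /* Sketch: one small GL_ALPHA texture per (character, size) pair,
     * created lazily and reused on every subsequent draw. */
    #include <GL/gl.h>

    #define CACHE_SLOTS 512

    typedef struct {
        unsigned long ch;       /* character code */
        int           pt_size;  /* point size */
        GLuint        tex;      /* 0 = slot empty */
    } glyph_slot_t;

    static glyph_slot_t cache[CACHE_SLOTS];

    extern GLuint glyph_to_texture(const char *font_path, unsigned long ch,
                                   int pt_size, int dpi);

    GLuint cached_glyph(const char *font_path, unsigned long ch, int pt_size)
    {
        unsigned idx = (unsigned)((ch * 31u + (unsigned long)pt_size) % CACHE_SLOTS);
        glyph_slot_t *slot = &cache[idx];

        if (slot->tex == 0 || slot->ch != ch || slot->pt_size != pt_size) {
            if (slot->tex != 0)
                glDeleteTextures(1, &slot->tex);  /* evict on collision */
            slot->ch      = ch;
            slot->pt_size = pt_size;
            slot->tex     = glyph_to_texture(font_path, ch, pt_size, 96);
        }
        return slot->tex;
    }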
I do not have any problems running XFree86; it is fast & stable. What causes me problems is the big bloated desktops such as Gnome & KDE. Get rid of those and/or install a lighter desktop such as WindowMaker, XFce4, or IceWM and you will see a significant performance boost. I use a computer to run applications such as a web browser & email client, OpenOffice, Evolution PIM, and a few games, etc… and I am not going to bog down the use of those applications with a BIG BLOATED and useless desktop environment/application framework…
Anyone get the impression that trashing X is just the latest vogue movement? To me, people seem to want to see X replaced not because of a real need, but because it’s a ‘trendy’ way to think.
Just my 2 cents…
Why else are XFce and WindowMaker so fast? KDE especially is very bloated.
Let’s all reinvent the wheel. Over and over.
Wouldn’t it be nice if everyone just started improving the big, already-working wheel we already have?
Or is “X is slow” the next buzzword?
Well, of course blackbox and all those minimalist WMs run fast. But we want Gnome. We want KDE.
Let’s talk again about the so-called “Linux on the desktop”. Do you think blackbox is as user friendly as Windows/Mac? Hum. I don’t think so. That’s why we *need* Gnome running as fast/responsive as a Windows box.
Victor.
As new technologies are created, sometimes it is just too much work, or impossible, to improve old code. The thinking is that X needs a complete revamp so it can implement the technology of today and not the technology of umpteen years ago. Does it really matter anyway? If you aren’t coding anything, then who cares what OSS does; either way you will enjoy the results… eventually.
Does X need a pretty heavy re-write of major pieces?
Yes.
Is it inherently flawed?
No.
Is it the only piece of software to blame for the slowness of the windowing environment of linux and BSD?
Not just no, but hell no.
The folks designing the widget toolkits, whether it is Qt, GTK2, or XUL, all share some of the blame. The folks designing the window managers share some of the blame too, and so do the people designing the desktop environments. Half of performance is perception. You can’t treat X as if it were the whole of GUI performance in a *nix environment.
Lock the GTK, Qt, Gnome, KDE, Mozilla, OpenOffice, and X developers in a room and tell them they can’t come out without coming up with ideas for speed improvements that can be implemented quickly and in parallel. Guess what? You would see improvements.
People focus too hard on just X. IMHO and all that shite.
I like the ideals of the JourneyOS team, and they seem to have some good ideas, but they are taking on an awful lot. I do performance-sensitive realtime GL work, and I’m thinking that just getting this windowing system as proposed up and running with GOOD performance on a reasonable assortment of OpenGL hardware is probably a man-year in itself. This is a concept white paper, not a design spec, and it omits all the details that will be essential to performance, giving me the impression that the author has not actually done much real-time GL coding and will thus have a long learning curve to climb.
The windowing system would be great as a little project, but combine it with writing The Perfect Filesystem, and The Perfect Posix Layer, and The Perfect Kernel Personality, and for some reason a fork of Fiasco to presumably make it More Perfect than Dresden has managed, and the scale of the project becomes kinda overwhelming.
The endeavour reminds me of the Alliance OS project, which I was involved in. We all had vast and revolutionary ideas, which we spec’ed and posted to the web site without end, but when it came to actual work it was just too overwhelming to bear. The Free Software movement is littered with the corpses of projects like that.
Hopefully the JourneyOS team recognizes these tendencies and will pick some manageable focus and do great things with it.
-braddock gaskill
I think you may have a point, and it would be a shame if this design languished for lack of support. To tell the truth, this aspect of the project is a whole lot more interesting than the kernel bits and pieces, especially since it is unlikely that any such new kernel can be competitive with Linux and *BSD in any reasonable time frame. Of course, this work might provide a nice foundation for further efforts. There is really nothing stopping somebody from taking the overall design, rolling it up into an X extension, and implementing it that way.
Immediate mode rendering in OpenGL is VERY VERY slow on every single platform except Windows compared to using things like display lists.
This has almost always been the case, and is the reason one of the most important optimizations you can make when working in OpenGL is to NOT USE OPENGL IN IMMEDIATE MODE.
It really depends on the drivers. As the benchmarks I linked to indicate, immediate mode can be quite fast on simple scenes. Note that NVIDIA’s drivers are the same on every platform, so this performance is not OS-dependent.
They have an interesting concept that may prove difficult to implement, but what I worry about is the possibility that they may get shut down for using the album cover images from the band Journey.
Would they have to go through a whole renaming fiasco like Firebird from Mozilla? Would they even survive long enough to worry about renaming?
Absolutely. Geeks are not the only ones that use Linux. It’s got to be user-friendly for the, shall I say, normal folks… thus without KDE and Gnome, Linux would remain in roughly the same neighborhood as Minix. Well, maybe not THAT bad…