The X Window System, which is the foundation of graphical displays on Linux, Unix, BSD, and optionally on Mac OS X/Darwin, has long stayed submerged in the public consciousness just as it has been submerged under window managers and heavyweight desktops.
X is crap because some idiot made a beautiful network display system and then NAILED a crude, horrific keyboard-and-mouse input pair directly onto the heart of what could have been perfect(ly expandable) software. And thus X is crap, because it tried to be something it was not: a user input platform.
All I want is two separate cursors on my desktop, damn it.
Agreed. Plus any game dev will tell you that good FPS is hard to achieve under X. I could name dozens of flaws, but when I do I have others going "you're ignorant, it's fast".
Point: if it appears slow, it is slow.
Nuff said.
What is slow a protocol or incomplete drivers?
Sorry, I should have said:
What is slow? An implementation of a protocol or incomplete drivers?
BTW, is there a FAQ somewhere about the major implementations? I still get confused about XFree86/X.Org, and was there an original "X.org"?
I've never heard anyone say, "can't wait to dig in and do some X programming this weekend!". I've read lots of folks saying how much they liked, say, the BeAPI.
Also related: I think I recall once seeing those books (O'Reilly?) on X. You know the series of thick volumes I'm talking about? Who's got time to read and absorb all that?
Graphical application developers don't use the X implementations directly. Instead they use a graphics toolkit like GTK+ or Qt. These toolkits have APIs comparable to any toolkit you would find on any operating system.
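For illustration, a minimal sketch of what that looks like in GTK+ 2.x (plain C, stock API only, not from the article): a complete windowed program with no raw X calls in sight.

```c
#include <gtk/gtk.h>

/* Minimal GTK+ 2.x sketch: the toolkit hides every Xlib call behind a
 * portable, higher-level API.
 * Build with: gcc hello.c `pkg-config --cflags --libs gtk+-2.0` */
int main(int argc, char *argv[])
{
    gtk_init(&argc, &argv);                   /* also opens the X display */

    GtkWidget *win = gtk_window_new(GTK_WINDOW_TOPLEVEL);
    gtk_window_set_title(GTK_WINDOW(win), "Hello");
    g_signal_connect(win, "destroy", G_CALLBACK(gtk_main_quit), NULL);

    gtk_container_add(GTK_CONTAINER(win), gtk_label_new("Hello, X!"));
    gtk_widget_show_all(win);

    gtk_main();                               /* toolkit event loop */
    return 0;
}
```

Each of those calls expands into many Xlib requests under the hood, which is exactly the point being made.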
Learning an API is as arduous as learning a programming language. If you haven't got time to read and absorb books, tutorials, guides and documentation, forget about programming.
X is a jungle, and it's bullshit that it's the standard. X.Org, X11, XFree86: it's a jungle.
I think 99% of the users of the *BSDs, GNU/Linux and other *nixes have difficulty with the GUI of their OS.
It's X's everywhere and it's confusing; I always have trouble configuring it with XF86Config, xf86cfg… (I don't care to remember the commands, because they're different every time).
It should be rewritten from scratch with no backward compatibility, except maybe a standard remote display protocol to connect to an old X server.
The companies who want to make money on free OSes must recognize that the X environment (server) is the trouble spot for their future moneymaking OS.
I don't see the slowness, seriously. I can see the GTK 2 text rendering delays, but X delays aren't really noticeable (not any more than I'm used to from the Windows platform; OS X is not a fair comparison).
There are a lot of X nay-sayers. And I'm sure they have good reason… But they should shut up or start coding/testing/helping/funding.
That’s right, put up or shut up.
> Learning an API is as arduous as learning a programming language. If you haven't got time to read and absorb books, tutorials, guides and documentation, forget about programming.
Bullshit. The only arduous API is a poorly conceived one. GTK#’s API is brilliantly done in most places, for instance, despite the GTK+ C API being less than stellar (in my arrogant opinion).
For a C-based API, GTK+ is great – really well thought out. Seriously. The problem is that you're still working in C, which isn't all that great for GUI apps.
Personally, I love PyGTK. It's even got bindings for GTK+ 2.4, which C# doesn't.
People, listen. It's not X that is slow, it's the freaking toolkits and all the other crap that is going on on your desktop. I hear Qt is pretty fast, and hopefully Pango will get a fast path for Western languages, for those who can't afford a hundred bucks or so for a machine running at 1 GHz+.
That’s not to say that various X implementations aren’t a mess. Hopefully X.org can clean it up.
> People, listen. It's not X that is slow, it's the freaking toolkits and all the other crap that is going on on your desktop.
But it can't only be the toolkits. About gaming: I have been looking at SDL and Allegro lately, and it seems that you can only get 2D hardware acceleration (with X) when using DGA (fullscreen), never in windowed mode. I've also had many problems with fullscreen X, and when using DGA you have to be root.
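For context, a minimal SDL 1.2 sketch of what's being described: request a hardware surface in fullscreen (which on X generally means the DGA backend, hence the root requirement) and check what you actually got. The 640×480×16 mode is just an example.

```c
#include <stdio.h>
#include <SDL.h>

/* Sketch (SDL 1.2): ask for a fullscreen hardware surface. On X this
 * generally only succeeds via the DGA backend (root only); in a window
 * you get a software surface that X must copy to the screen. */
int main(int argc, char *argv[])
{
    if (SDL_Init(SDL_INIT_VIDEO) < 0)
        return 1;

    SDL_Surface *screen =
        SDL_SetVideoMode(640, 480, 16, SDL_FULLSCREEN | SDL_HWSURFACE);

    if (screen && (screen->flags & SDL_HWSURFACE))
        printf("got a hardware surface\n");
    else
        printf("software surface only\n");

    SDL_Quit();
    return 0;
}
```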
And then we have http://forum.zdoom.org/viewtopic.php?t=1913
Seems he’s using opengl.
Somewhat depressing: http://www.amdzone.com/modules.php?op=modload&name=Sections&file=in…
Especially since they say that nvidia basically uses the same code for their linux and windows drivers.
There are performance problems and it’s not only the toolkits.
You can play fully accelerated 3D in a window in X (not as fast as fullscreen, but the same is true on Windows).
For the most part, OpenGL acceleration is not as fast on Linux as it is on Windows, but it's still pretty fast.
My problem is mouse movement in games like Quake 3. It just never seems as crisp and controllable as on Windows. I've tried tweaking SampleRate and some other parameter in XF86Config-4, but it doesn't seem to help.
Whether an API is ill-constructed or well-designed is irrelevant. You are still going to need references of some sort before you can begin using any library. Of course, an experienced programmer can make informed guesses about how an API should work, but you are still going to spend months, even years, figuring out the quirks of an API. Yes, they all have issues. And of course, I'm referring to large libraries like GTK+, Qt and Xlib.
Many experienced programmers can pick up a new language in less than a week. You have those same programmers struggling with GTK+ for months. I have used both GTK+ and GTK#. GTK# is still buggy. I can also tell you that GTK+ is one of the best C APIs I have ever seen for GUI applications. PyGTK is still my favorite GTK+ binding, though. I still maintain that learning a new API is as challenging as learning a new language, if not more.
> There are a lot of X nay-sayers. And I'm sure they have good reason… But they should shut up or start coding/testing/helping/funding.
>
> That's right, put up or shut up.
Agreed. Or rather, prove with benchmarks that it 'sucks' (a very subjective, meaningless statement anyway). I don't give a rat's ass for the next widiot who whines that something is 'slow'. Show us your numbers and testing circumstances already! Far more interesting…
Testing circumstances means that, on the software side for example, you define precisely what software and what version of that software you used, what distribution, how you compiled it (or what binaries), and other noteworthy aspects. Keep in mind I haven't even touched the hardware side. If you cannot or do not understand this, you are NOT qualified to even discuss it, hence you STFU and either start learning or leave this to people who are qualified. Thank you.
I can state that X is slow too, while I'm referring to Cygwin's XFree86 4.3 on Windows 98 on a P1/133 with 32 MB RAM. Of course, whether I define these details or not is of huge importance. Just because Mozilla starts up more slowly on 'Linux' than on 'Windows' doesn't mean X sucks. *Sigh*…
You know, it is kind of pathetic, the people sitting here bashing x.org. Here are the reasons why.
1) None of these people have contributed code and/or money to x.org, or helped them in any way, shape or form. Most of them have probably just downloaded it bundled with a Linux distro for free. It's sort of like a bunch of homeless people bashing the managers of a homeless shelter. Most decent human beings (i.e. ones that aren't complete fucking losers like a lot of the people here) don't bash people that give them stuff for FREE.
2) None of you back this up with any numbers, benchmarks, etc.
3) In regard to Doom 3… how many of you honestly believe id has spent as much time and money getting it to run on Linux as they have on Windows????
Maybe you guys should run out and donate some money to these non-profit organizations… then you'll have a reason to bitch.
Although I agree with what the author of the article is writing about, i.e. the lack of attention that X (x.org) receives, I feel that this article does little other than decry the situation. I would far have preferred to read an article in which the author actually did the work of providing those of us who use X with more information about the people and organizations behind X. As it stands, the article was so cursory that it did little beyond complaining.
Of course, I would also enjoy reading threads where two thirds of the posts were not ignorant rants by people who have little, if anything, of interest to say. But then again, that would be asking too much; it's much easier to just say "X sucks".
I certainly hope that with the re-forming of x.org, X will finally get the attention it deserves. And I believe we are already witnessing this: more is currently afoot in X development than has been seen in the last 10 years. I think one factor which plays a crucial role in the amount of attention X receives is the length of time it takes for changes in the underlying X system to propagate up to the end-user application layer. There usually seems to be a 2-3 year gap between fundamental changes to X and end-user applications properly implementing them to everyone's benefit. It has been somewhat sobering to realize that the initial enthusiasm concerning Composite, Fixes and Damage was rather untimely, given that we as end-users will not directly benefit from these advances for at least another 12-18 months, with only minor exceptions.
What I am saying here is not a critique of X or of X development; x.org can do little to nothing about this situation. Freedesktop.org, which is now hosting x.org, thankfully provides a common focal point to help propagate such changes in X upstream. The positive response of the desktop environment communities (GNOME, Qt/KDE) and application communities (Novell's Mono, Mozilla) to these changes is very encouraging.
Of course, I also suspect that yet another factor in the lack of attention X (x.org) receives is that X is so solid, reliable and dependable that it simply recedes into the background of perception. Sure, some people have difficulties initially setting up X, and quite frequently there can be difficulties involved in setting up X forwarding etc., but by and large X is so stable that people usually only complain about the applications which run atop it. If X itself were not stable and reliable, the whole plethora of DMs and DEs would never have arisen, for they necessitate and presuppose a working X configuration.
True, but in ordinary 2D graphics you've got to go through the slower system RAM instead of writing to the card's framebuffer, unless you're root and can use DGA. It's annoying not to be able to do fast 2D graphics on cards that don't support hardware-accelerated OpenGL. It shouldn't really be necessary to use OpenGL for simple 2D games either.
You guys keep babbling on and on about X's plugins. X was not meant to make things go to your screen fast. That's not its purpose; it has plugins for that. X is just a framework.
As for your performance issues, http://www.linuxhardware.org/article.pl?sid=04/10/12/1725246&mode=t… has the less-than-stellar (Doom 3) truth.
There are a lot of very uninformed people posting in this thread.
1) X is not slow. Go benchmark it against GDI and tell me it's slow. X is not a GUI, it's an API for creating windows and drawing to them. For what it does, it's fast. Even the Athene folks' own benchmarks show that their Athene window system is only 25% faster than X, and that the difference between X and Windows is less than 10%. Toolkits are at fault here, and you'd be really surprised how stupidly some toolkits act.
2) X is not designed to be easy to use. Nobody in their right mind programs to X directly. It's not a GUI API, it's a low-level windowing API. For example, lots of people find the BeOS API a joy to use, just as lots of people find Qt a joy to use. Have you ever heard anybody talk about how wonderful programming to the internal app_server protocol is on BeOS? Because that's exactly the level of abstraction at which Xlib operates! (See the raw-Xlib sketch after this post.) Indeed, recent steps are to make the programming interface to X even more low-level (XCB), to make it more efficient for toolkits to use.
3) OpenGL acceleration, on NVIDIA cards, is not slower in general on Linux. Certain apps might be slower, due to different optimizations, but apps that are important on *NIX machines, for example the apps in the Viewperf benchmark, perform equally well on Linux and Windows. When the folks at ILM moved to XSI on Linux, they noted how Linux’s performance in XSI compared favorably to NT’s performance in XSI.
The only legitimate complaint I heard on this board was about configuration. Yes, X.org's configuration is less than ideal. There is no reason it can't be auto-detected entirely. However, if you're on a recent mainstream distro (e.g. Red Hat, SuSE), you shouldn't even have to touch X's configuration mechanisms, because they're handled for you by the installer.
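To make point 2 concrete, here's the promised sketch: raw Xlib window creation and event handling in C, the kind of boilerplate every toolkit hides. A minimal sketch using only core Xlib calls.

```c
#include <stdio.h>
#include <X11/Xlib.h>

/* Minimal raw-Xlib sketch: open a display, create a window, and pump
 * events by hand. This is the low-level layer GTK+ and Qt wrap. */
int main(void)
{
    Display *dpy = XOpenDisplay(NULL);        /* connect to the X server */
    if (!dpy) { fprintf(stderr, "cannot open display\n"); return 1; }

    Window win = XCreateSimpleWindow(dpy, DefaultRootWindow(dpy),
                                     0, 0, 200, 100, 0, 0,
                                     WhitePixel(dpy, DefaultScreen(dpy)));
    XSelectInput(dpy, win, ExposureMask | ButtonPressMask);
    XMapWindow(dpy, win);

    for (;;) {                                /* raw event loop */
        XEvent ev;
        XNextEvent(dpy, &ev);
        if (ev.type == ButtonPress)
            break;                            /* click to exit */
    }
    XCloseDisplay(dpy);
    return 0;
}
```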
> Somewhat depressing: http://www.amdzone.com/modules.php?op=modload&name=Sections&…
>
> Especially since they say that nvidia basically uses the same code for their linux and windows drivers.
From the article:
"For our Linux distribution, we used Gentoo Linux for AMD64, the 64-bit version of Gentoo. This is the fastest Linux setup to date, and although Doom 3 is a 32-bit game, we saw small speed boosts from running a 32-bit game under 64-bit Linux in our last review. We installed Windows XP SP2 for our Windows installation. We did not go for the 64-bit version of Windows."
Since they used a 64-bit version of Gentoo and compared the results with a 32-bit Windows, the results are more in favour of Windows than the raw numbers show.
1. X is very slow. Benchmark the whole path from your right hand's movement, through mouse interrupts, through the X event buffers, to the window finally being dragged (for example, an xterm in twm). It is ugly and buggy compared with what I want to get. GDI + user + kernel + whatever has done the same very predictably since Win 3.1. Whether they use internal floating-point coordinates and filter them, or use a physically correct model of the arm, the result is perfect. Three years ago I looked at X's "threshold mouse acceleration" algorithm, and sorry, it is so childish that I immediately forgot about the X Consortium's big names and imagined 12-14 year old kids writing such code. Sorry, 30+ years of development and that "threshold algorithm"? The mouse is such a common device; everyone, including the top managers at IBM, SGI, HP, whatever, uses one every day. Maybe their boxes are not *nix based? Of course, two years ago I tried to contact them about this, and got "do it yourself because we're busy, and it is not an issue anyway".
2. X was designed around the 30-year-old "reduce memory at all costs" idea (palette-based hardware) and similar legacy technical tricks, and it is too bloated. Do you really need a cool benchmark result on a funny stencil checkerboard bitblt? How does that help draw subpixel AA text (99.9% of the desktop)?
3. OpenGL is slow, first because of GCC's poor optimizer (even 4.0 is way slower than its competitors), and second because of the lack of developers who optimize further by hand-writing native assembler. Lots of supported platforms is great, but first make a decent GCC optimizer, then make benchmarks.
4. Hardware OpenGL does not save X from slowness, because of the lack of a proper composition operation. A*V1 + (1-A)*V2 must be performed in linear color space, not on gamma-precorrected values. You get a "transparent-like" picture, but the error is too big to ignore; you must do pow(A*pow(V1, gamma) + (1-A)*pow(V2, gamma), 1.0/gamma) (see the sketch after this post). And intermediate results must be at least 16-bit to avoid losing precision in dark areas. There are no video cards that support that, so the only way to implement it is to compose on the CPU, and the GCC optimizer strikes again.
5. There is no portable way, if any way at all, to track and sync output with the vertical blank interrupt.
If that is not enough, I can continue endlessly. I know that a network-transparent, platform-independent protocol is very useful, but please do not tell me "X is not slow. Go benchmark it against GDI and tell me it's slow" or suchlike. I have used Linux and Windows on different hardware long enough (Linux since RH 6.2, Windows since Win 3.0), writing OpenGL programs, surfing the net, compiling kernels in xterm. X is the Achilles' heel of Linux. Only a consolidated effort and unbiased reviews will help all of us.
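For what it's worth, a small C sketch of the gamma-correct blend from point 4 above, next to the naive one; gamma = 2.2 and normalized [0,1] channel values are the assumptions here.

```c
#include <math.h>

/* Sketch of the gamma-correct blend from point 4: decode the two
 * gamma-encoded values to linear light, blend there, re-encode.
 * Channel values v1, v2 and alpha a are normalized to [0,1]. */
double blend_gamma_correct(double v1, double v2, double a, double gamma)
{
    double linear = a * pow(v1, gamma) + (1.0 - a) * pow(v2, gamma);
    return pow(linear, 1.0 / gamma);
}

/* Naive blend on encoded values, for comparison; when displayed it
 * comes out visibly darker in the midtones than the correct result. */
double blend_naive(double v1, double v2, double a)
{
    return a * v1 + (1.0 - a) * v2;
}
```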
> Benchmark the whole path from your right hand's movement, through mouse interrupts, through the X event buffers, to the window finally being dragged (for example, an xterm in twm).
What exactly are you talking about? I really couldn’t parse this statement.
> How does that help draw subpixel AA text (99.9% of the desktop)?
Again, what are you trying to say? Are you trying to say that it’s not useful for X to be fast at drawing sub-pixel AA text, or what? Because that certainly is useful.
> 3. OpenGL is slow, first because of GCC's poor optimizer (even 4.0 is way slower than its competitors)
It really depends on what benchmark you're running. If you've got a high-end card with hardware T&L, your CPU isn't going to be the bottleneck, so GCC isn't going to be a factor. Even when it is, your bottleneck is likely going to be integer processing (since the GPU is handling most of the floating-point calculations), and it's been shown that GCC's integer performance is quite competitive with ICC's. With regard to GCC 4.0 — any numbers you get from it are invalid. GCC 4.0 is pre-release software, based on an entirely new optimization architecture. Lots of optimizations haven't been ported to the new framework yet, and it's certainly not release quality. It's futile comparing it to other compilers at this point.
> and second because of the lack of developers who optimize further by hand-writing native assembler.
It depends on the application in question. In markets where *NIX is important, *NIX versions do get optimized properly. Go load up Viewperf or XSI on a Quadro and watch how it’s just as fast as a Windows box. If games aren’t optimized as well on *NIX, well, that’s not a problem with X, but a problem with the developer.
> 4. Hardware OpenGL does not save X from slowness
X *isn’t* slow! Go run the benchmarks yourself! Take something like Qt Designer. It’s every bit as fast in X as it is in Windows. If X were slow, that would not be possible. So X isn’t slow. QED.
> because of the lack of a proper composition operation.
Composite is accelerated on NVIDIA and Radeon. Acceleration will only get better when the X server is retargeted on top of OpenGL. But that’s functionality needed to compete with Longhorn, not XP.
> There are no video cards that support that, so the only way to implement it is to compose on the CPU
For the desktop, you aren’t doing multi-pass composition, so you don’t need a high-precision color buffer for intermediate results. It’s sufficient that you use high-precision internally when you do the composition operation. Recent high-end graphics cards use 96-bit to 128-bit precision internally, so that’s sufficient. If you want to do multi-pass composition on HDR images, both NVIDIA and ATI support 64-bit floating-point color buffers.
> There is no portable way, if any way at all, to track and sync output with the vertical blank interrupt.
You can do this in OpenGL (on NVIDIA, anyway) on X. Besides OpenGL apps, who else needs it? Even 2D games use OpenGL these days.
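For the record, here is roughly how an OpenGL app requests vblank-synced swaps under X. This is a sketch assuming the GLX_SGI_swap_control extension is exposed (NVIDIA's driver does this); real code should check the extension string first.

```c
#include <GL/glx.h>
#include <GL/glxext.h>

/* Sketch: ask GLX to sync glXSwapBuffers to the vertical blank.
 * Assumes the GLX_SGI_swap_control extension is available. */
void enable_vsync(void)
{
    PFNGLXSWAPINTERVALSGIPROC swap_interval =
        (PFNGLXSWAPINTERVALSGIPROC)
        glXGetProcAddressARB((const GLubyte *)"glXSwapIntervalSGI");
    if (swap_interval)
        swap_interval(1);   /* at most one buffer swap per vblank */
}
```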
> I have used Linux and Windows on different hardware long enough (Linux since RH 6.2, Windows since Win 3.0)
I've used Red Hat since RH 5.x, and Windows since 3.1. Linux GUIs *can* be slow, but it's not the fault of X. The numbers simply don't support that conclusion. Besides, it's not like GDI is very fast either. I remember writing software polygon rasterizers that totally trashed what NT 4.0 could do (even though it had the advantage of hardware acceleration). I remember being blown away when I found X giving me the same sort of bit-blit performance as I got under DirectX. However, Windows toolkits take advantage of a lot of perceptual tricks (like priority-boosting the foreground application) that make Windows *seem* faster.
The goal is to eliminate the need for xorg.conf and other things. In fact, I only use xorg.conf for minimal configuration now; X is able to detect my video just fine.
X works fine thanks, it’s people who are ignorant about it that complain.
We’ll get there, and with X.org now, we’ll get there faster.
> > Benchmark the whole path from your right hand's movement, through mouse interrupts, through the X event buffers, to the window finally being dragged (for example, an xterm in twm).
>
> What exactly are you talking about? I really couldn't parse this statement.
I think he's trying to say that you should include (mouse) input events in the benchmark, and not just blitting time or whatever.
Why would you put mouse input events in the benchmark? The mouse produces data at a rate of hundreds of bytes per second — there isn't a windowing system in existence that can't handle that rate. Also, more mouse input events does not equal better response. Indeed, throttling input events can improve responsiveness, by decreasing the amount of event handling that needs to be done. That's why most toolkits do things like motion compression, compressing multiple mouse movement events into a single event.
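A minimal sketch of that motion-compression idea in raw Xlib, coalescing queued MotionNotify events so only the latest position gets handled:

```c
#include <X11/Xlib.h>

/* Sketch of "motion compression": when handling a MotionNotify, drain
 * any further motion events already queued for the same window and
 * keep only the most recent one, so one repaint covers the whole
 * burst of mouse movement. */
void compress_motion(Display *dpy, XEvent *ev)
{
    XEvent next;
    while (XCheckTypedWindowEvent(dpy, ev->xmotion.window,
                                  MotionNotify, &next))
        *ev = next;   /* discard the intermediate positions */
}
```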
There are definite issues with mouse responsiveness under X. I went ahead and installed Quake 3 on my Ubuntu partition a few days ago and there's something not quite right there. Even tweaking SampleRate and some other parameter in the X config didn't resolve the issue.
The one surprising thing I did notice, though, was how much better Quake 3 looked at 1600×1200 with everything maxed out than on Windows. It was quite startling. Don't know what's going on there. I've got an ATI 9600 Pro. But I was also surprised that I was probably only averaging 40-45 fps with those settings on a P4 3.2 GHz.
First, sorry for the ugly spelling. For a long time I told myself never to comment, but "X is not slow. Go benchmark it against GDI and tell me it's slow" turned me, a usually peaceful human, into something furious.
Users do not need a "low latency kernel", "super-fast bitblt", "high-precision mouse (MX510)", or such. They want to be fooled well enough that the mouse+monitor FEELS like a continuation of the arm. I very much like "fast click" games, such as the good old StarCraft and now the excellent PopCap Zuma Deluxe. If you need to click, double-click and drag very fast, then the MS desktop runs circles around the Linux desktop. Example: you must left-click one monster, then right-click the destination, and repeat ASAP because your base is under heavy attack (if you do not like games, imagine different menu items that need to be activated quickly to finish the job and go home early). Try it yourself and you will discover how hard it is to just click without any movement. So, a good X must have a "recognizer" that tolerates such human errors and "filters" mouse events using a complex, well-designed algorithm.
The same goes for the vertical blank interrupt. If I cannot determine exactly when my frame becomes visible, I can never do decent smooth animation, period. Not only games and useless screensavers, but menus appearing and text scrolling. Firing some commands at X and HOPING that MAYBE some of them finish by the nearest display scan sweep and the others by the next+1 is not good. I cannot even get feedback on what happened to them. OpenGL 2.0 introduces "counters", but the VB interrupt will never be implemented on my hardware (Matrox G200, ATI 9600XT and 9800Pro at home, lots of Intel 815 and S3 Trio3D/2X at work).
About gamma. Read this twice: the numbers in the frame buffer are not colors, but pow(color, 1.0/gamma). Typical gamma is 2.2, so for example if you want half gray you must store not 128 but pow(128/256, 1.0/2.2)*256 = 186!!! So any direct arithmetic calculations are wrong. Any hardware alpha-blending or compositing is a dirty trick with an unpredictable result. Conclusion: do not expect to see a good desktop even with Fedora Core 4. Maybe ATI or Nvidia will make a fast, cheap 16-bit or better floating-point frame buffer with linear gamma correction, but for now it is only a dream. Of course, user-land CPU-based composition/blending works well, but the speed always depends on the GCC optimizer.
Another X showstopper: subpixel AA font kerning. It seems X uses only integers when calculating glyph positions and loses the subpixel information; the result is in front of me on my 15″ LCD. People have stopped talking to the developers about kerning because there are no answers. They just boot Windows with its shiny ClearType. Subpixel AA has been around since 2001, so 3+ years of not fixing it pushes users to alternative OSes.
BTW, why are there no big X-based 2D/3D independent comparative reviews, with huge cool full-color graphs, like at ixbt.com? With AF and AA, different color depths, resolutions, picture quality tests? At least TuxRacer and glxgears? Reviews a newbie can read to choose which video card is best for him? We all know the answer: the real numbers would shock him (hint: most tests do not start at all, or crash within ~1-2 minutes). Bad Nvidia/ATI binary drivers cause most of the regressions, but part of the responsibility is still on the X developers.
@Lumbergh: I notice mouse issues under heavy load, but then again, the only OS in which that didn’t happen was BeOS. I don’t play games, though, so I can’t comment on that. Of course, I’m pretty sure Quake III
@Anonymous:
> Users do not need a "low latency kernel",
They certainly do. That’s what makes skip-free music possible.
> if you do not like games, imagine different menu items that need to be activated quickly to finish the job and go home early). Try it yourself and you will discover how hard it is to just click without any movement. So, a good X must have a "recognizer" that tolerates such human errors and "filters" mouse events using a complex, well-designed algorithm.
Are you saying that it’s hard to click the mouse without moving it a little? Because I’m having no problem doing that right here.
> If I cannot determine exactly when my frame becomes visible, I can never do decent smooth animation, period.
You can. You know that the frame is done when XSync() completes. You just can’t sync that to the vertical blank.
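(A sketch of that pattern, for the record; plain Xlib, nothing assumed beyond the core API:)

```c
#include <X11/Xlib.h>

/* Sketch: after issuing drawing requests, XSync() flushes them and
 * blocks until the X server has processed them all, so the frame is
 * complete -- though, as noted, not synced to the vertical blank. */
void finish_frame(Display *dpy)
{
    XSync(dpy, False);   /* False = don't discard queued events */
}
```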
> but menus appearing and text scrolling.
This is potentially useful, but there are only two OSes I know of that sync the desktop to the vertical blank — OS X and BeOS Dano. Windows XP certainly doesn't do it. Since 95% of people live just fine without vblank-synced text scrolling, I'm going to say that this complaint is a bit hollow — if X still lacks it when Longhorn comes out (which will have it), then you can complain.
> About gamma. Read this twice: the numbers in the frame buffer are not colors, but pow(color, 1.0/gamma).
Nearly everyone treats the framebuffer as storing values in linear intensity space.
> Typical gamma is 2.2, so for example if you want half gray you must store not 128 but pow(128/256, 1.0/2.2)*256 = 186!!!
You’re forgetting to factor in the fact that graphics hardware has gamma correction tables that apply this correction before the image is sent to the DAC. So if the frame-buffer holds 128, the graphics hardware will transform this to 186 (if the LUT is set up for a gamma of 2.2), before sending it to the monitor. Thus, you don’t have to store gamma-corrected values in the color buffer, you can store everything in linear space.
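A small C sketch of the lookup table being described; 8-bit channels and the 2.2 gamma are just the example's assumptions:

```c
#include <math.h>

/* Sketch of the per-channel LUT: the framebuffer holds linear values
 * F, and the table applies V = F^(1/gamma) on the way to the DAC.
 * For F = 128 and gamma = 2.2 this yields ~186, matching the example
 * in the post being replied to. */
void build_gamma_lut(unsigned char lut[256], double gamma)
{
    for (int i = 0; i < 256; i++)
        lut[i] = (unsigned char)(pow(i / 255.0, 1.0 / gamma) * 255.0 + 0.5);
}
```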
> So any direct arithmetic calculations are wrong.
No, they are correct, because colors are linear values. The reason people do gamma-correct compositing is that relying on hardware gamma correction causes you to lose intensity resolution in the low end. You can easily see this in the example above. If the first 128 linear values have to map to 186 corrected values, that means the second 128 linear values only have to map to 69 corrected values. That's about 3x the resolution in the upper half of the spectrum as in the lower half.
Gamma-corrected compositing can prevent this loss of resolution. However, if you're compositing with a 16-bit-per-channel color buffer, the resolution loss becomes insignificant, and you can treat all colors as linear values. That's why OpenEXR represents everything as 16-bit values in linear space, and why Shake (a compositing app) tutorials tell you to disable gamma-correct compositing for 16-bit images.
> subpixel AA font kerning.
This has nothing at all to do with X. X’s core text routines are effectively deprecated. It doesn’t do text rendering at all, the client does. X just composites the resulting glyphs together. This is something that needs to be fixed, but it can’t be fixed in X. Rather, it needs to be fixed in userspace libraries like Pango. Owen Taylor did a paper about grid-fitted kerning (presumably for Pango) not too long ago, and Qt-4 will have proper kerning support as well.
> BTW, why are there no big X-based 2D/3D independent comparative reviews, with huge cool full-color graphs, like at ixbt.com?
Because ixbt, from what I can see, is the Russian equivalent of ZDNet? It's no surprise that consumer magazines don't really cover *NIX graphics, especially since *NIX graphics are mainly relegated to the workstation market.
> We all know the answer: the real numbers would shock him (hint: most tests do not start at all, or crash within ~1-2 minutes).
Really. That’s news to me. Man, I’m surprised those ILM artists could do Star Wars with their X-based Linux machines crashing every 1-2 min! Maybe that’s why Jar Jar sucked so much!
> Nearly everyone treats the framebuffer as storing values in linear intensity space.
Yes, and it is a very evil and widespread mistake.
> You're forgetting to factor in the fact that graphics hardware has gamma correction tables that apply this correction before the image is sent to the DAC.
It seems you just do not program any hardware and have not checked this yourself. Find gamma.gif on the net, or see http://ppewww.ph.gla.ac.uk/~flavell/www/gamma.html, and see how the stored values differ from a "dithered emulation" using 0 and max_color. You will be surprised, I promise. Anyway, "hardware gamma correction" is nothing but lookup tables that nobody touches except "software color calibrators" like Colorific or Adobe Gamma, for the purpose of fine-tuning a real monitor close to 2.2 gamma. A linear gamma ramp would solve the problem, but you would have to repaint all the hand-drawn desktop art, and you would still get a bad image because of the loss of dark color precision.
Gamma-corrected compositing is just compositing. There is compositing, and then there is BS like doing arithmetic in non-linear space, which nobody knows the meaning of. It LOOKS like blending two pictures, but the errors are too big to ignore.
Another gamma-related trick: white-on-black text. It seems the Pango/mango/Xft/X/GTK/xterm developers think exactly like you, and the result is very funny. xgamma -gamma 2.2 returns white-on-black AA to linear space, but of course with a loss of color resolution in the dark areas.
I developed a software-only OpenGL driver on Win32 and did huge amounts of research, and of course "wheel reinventing", in gamma-correct dithering, and you would be surprised. The end result is very cool, especially at low-bit color resolutions (3:2:2). Sorry, I cannot post a screenshot; if you are interested I will send the sources (very ugly, but they still work).
> Owen Taylor did a paper about grid-fitted kerning (presumably for Pango) not too long ago, and Qt-4 will have proper kerning support as well.
That sounds like music to me. I am a GNOME fan, but if Qt 4 does it I will never install the GNOME desktop again.
> Jar Jar
In the past I tried to clone SoftImage|3D + MentalRay; that is the reason I reinvented the wheel by writing a software OpenGL, to have all the sources from the high-level editor down to the low-level libs. Do not talk to me about ILM; I have my own hand-written, patent-free ILM (small, slow, ugly, buggy, without motion at all, but it can produce nice raytraced NURBS bottles-on-big-shiny-balls and such).
The fundamental problems discovered while rewriting the clone for Linux, I mentioned above. Unfortunately, the main stopper is the not-so-efficient GCC optimizer. I gave up after YEARS of comparing the tracer's speed under gcc X.XX and VC6. I am a very experienced x86 asm coder, but I think hand-written assembler is a bad idea given the wide range of different CPUs/platforms that Linux supports, which is a good thing.
About the 1-2 minute test crashes: yes, I have never seen them myself, because the Internet is too expensive for me to download Linux Doom 3, America's Army, NeverWinter Nights and such (>70 MB), so I use the rage3d.com Linux forum and its links as my source.
> Yes, and it is a very evil and widespread mistake.
If it’s such a mistake, why does OpenEXR do it? Storing linear values in the framebuffer is only a mistake if you don’t gamma-correct the framebuffer before sending it to the DAC. However, nearly all current hardware allows you to gamma-correct the image.
> Anyway, "hardware gamma correction" is nothing but lookup tables that nobody touches except "software color calibrators" like Colorific or Adobe Gamma, for the purpose of fine-tuning a real monitor close to 2.2 gamma.
Wrong. Hardware gamma correction is just a lookup table, yes, but that’s all that is required to convert the linear space values in the framebuffer to the response curve required by the monitor. And it’s not just color calibrator software that uses it — a gamma adjustment tool is built into NVIDIA’s control panel, and is accessible in X via the xgamma utility, or the KControl gamma utility.
> A linear gamma ramp would solve the problem
Linear gamma means that gamma = 1, which is the same thing as linear intensity space. And that’s precisely what I said — if you store framebuffer values in linear space, and use the lookup table to gamma-correct the image before it’s sent to the DAC, your image looks correct.
> but you would have to repaint all the hand-drawn desktop art
You don't have to redraw them; you can use existing software to convert from the gamma in the file to linear gamma.
> and you would still get a bad image because of the loss of dark color precision.
Yes, you lose dark color precision, but this loss is no problem if you use a 16-bit per-channel framebuffer. Even if you’ve got a 10-bit per channel DAC, you’re not going to notice the loss.
> BS like doing arithmetic in non-linear space, which nobody knows the meaning of.
If you store color values in linear space, and you have a high-precision color buffer, you can do uncorrected compositing without loss of quality. That’s not BS, that’s just how it works.
> xgamma -gamma 2.2 returns white-on-black AA to linear space, but of course with a loss of color resolution in the dark areas.
And gamma-correcting on Windows doesn't result in a loss of color resolution in dark areas? Of course it does! Neither X nor Windows (nor Mac OS X) is set up to do high-quality composition at the moment. In Longhorn, Windows will be, and there is nothing in the architecture of OpenGL or X that prevents X from doing it too.
> I developed a software-only OpenGL driver on Win32 and did huge amounts of research, and of course "wheel reinventing", in gamma-correct dithering, and you would be surprised. The end result is very cool, especially at low-bit color resolutions (3:2:2). Sorry, I cannot post a screenshot; if you are interested I will send the sources (very ugly, but they still work).
You don’t need all those gamma correction tricks if you use linear color values with a high-precision framebuffer. And that’s precisely the model OpenGL is moving to today. NVIDIA cards, for example, support the OpenEXR color model directly in hardware.
> (comments about software OpenGL clipped)
Nobody cares about software OpenGL. Nobody cares how GCC’s optimizer affects software OpenGL. When you’ve got a $400 graphics card that does 50-100 gigaflops in hardware, nobody cares about software GL. Besides, it’s not like GCC is the only compiler on Linux. If you’re writing high-performance code, and find that GCC is holding you back, just buy Intel’s or Portland’s compiler. Both are way better than Visual C++.
> so I use the rage3d.com Linux forum and its links as my source.
Hah, rage3d! ATI simply does not count. ATI’s drivers are absolute shit, they’ve always been shit, even on Windows, and these days, they are just slightly less shitty. Nobody who uses Linux seriously for 3D work uses ATI hardware. ILM (and everyone else), uses NVIDIA for their work. If X on NVIDIA is stable, and X on ATI is not stable, that suggests that it’s ATI that’s the problem, not X.
http://www.povray.org/ftp/pub/povray/Official/gamma.gif
Too many words. Just try it yourself. If you do not believe your eyes, get a colorimeter and measure the real brightness of the values 0, 128 and 255 stored in the framebuffer of your box. Stop spreading BS; too many people think as you do. Ask Owen Taylor or Keith Packard if you think the colorimeter is misconfigured. Get the ATI/NVIDIA tech docs. THE FRAMEBUFFER STORES INVERSE-GAMMA-CORRECTED VALUES. Only your great contribution to OSS motivates me to write this a third time.
I mention my software OpenGL lib only to explain why I am so aggressive on this topic, to show that I know the OpenGL pipe well, and to protect myself from being moderated down.
> just buy Intel's or Portland's compiler
I have never tried either of them. Maybe it is good advice. But do not expect a GPL version of my program if development requires a lot of money (of course, only if I finish it).
OT:
ATI's Linux drivers are permanently in alpha, but the Windows versions are fast enough. I have used ATI cards at home since the Rage128 (now a 9800Pro) and very much like their output image quality compared with a noname MX440 from two years ago. Times change, and maybe my next card will be a 6600. Anyway, it is all very disturbing; X is the face of the Linux desktop, and we are all hostages to proprietary driver quality.
> THE FRAMEBUFFER STORES INVERSE-GAMMA-CORRECTED VALUES.
Okay, let’s approach this another way. Assume the following:
F = value in the framebuffer (0-255)
V = voltage at the DAC output (Vmin to Vmax)
I = intensity of pixel
y = gamma (2.2 for CRT)
I ~ V^y
If F ~ V, then I ~ F^y. For a y of 2.2, that results in an F of 128 being about 1/4 as bright as an F of 256. However, it is not necessarily true that F ~ V. If you've got correction hardware (both OpenGL and X define that), then V can be some function of F. If you set up the LUT so that V = F^(1/y), then I ~ (F^(1/y))^y ~ F. Thus, intensity is directly proportional to the value in the framebuffer. In that case, the framebuffer stores values in linear space. Since the compositing operators are defined in linear space, you can do direct compositing with framebuffer values.
Yes, this trades off color precision. However, when you've got 16 bits per channel, the lost color precision doesn't amount to much. See this page:
http://www.teamten.com/lawrence/graphics/gamma/
Specifically, this quote (emphasis mine):
> A solution that Silicon Graphics machines have used for a while is to have the DAC generate voltages non-linearly. There's a command called "gamma" on SGIs that many people use to brighten their displays…
>
> The great advantage of this scheme is that the graphics math stays simple and intuitive. A 50%-covered pixel has a color value halfway between the foreground and the background. Lighting, blending, and anti-aliasing stay linear…
>
> Note that if your frame buffer had more than 8 bits, say 16 bits per component, then this scheme would be perfect. That's a few years away and hopefully after that I can delete this page and never hear of gamma again.
Well, it’s the latter model we’re moving to. Neither X nor OS X nor XP does gamma correct compositing right now, and by the time the bar is raised to the point where it needs to, we’ll have 16-bit framebuffers and gamma correction will be unnecessary anyway. The requisite hardware is already available in the GeForce6, and by the time Longhorn comes out, they’ll be $50 on pricewatch.
> But do not expect a GPL version of my program if development requires a lot of money (of course, only if I finish it).
Intel’s compiler is free for open source development.
I agree, X is slow and bloated. Gosling wrote a plan for how he would write a new GUI system which outlines many of the problems of X fairly well; everyone should check that out.
I think this article, trying to hail the greatness of X and make us all feel guilty for not pledging allegiance to it, was fairly lame and pointless. X is old technology, designed with old assumptions (like the existence of a font server, which is 100% irrelevant today).
Try running Mozilla Firefox or Thunderbird under XP on the same machine you run them on under Linux, and feel the difference. X is slow, no doubt about it, and driver vendors are not the whole reason.
> Hah, rage3d! ATI simply does not count. ATI's drivers are absolute shit, they've always been shit, even on Windows, and these days, they are just slightly less shitty.
HOW shitty? Yeah, I agree that the days before Catalyst(tm) were pure shit, but not anymore these days.
But for Linux, yeah, I think I gotta agree with you… it's nearly shitty.
BTW, IMHO, nVidia was doing complete shit making cheating drivers for Windoze for extra bench scores. (Do you remember the notorious 3DMark/GeForceFX incident?)
I'd just like to comment that Blender runs better for me in Linux than in Windows.
And it's a 3D app.