Good information at MacNN: “A PDF slideshow used at Siggraph 2002, written by Peter Graffagnino of Apple, provides information on Quartz Extreme. It will be available with Jaguar (10.2) on August 24th, and is the first OpenGL-based windowing system. It will feature true system integration (Quartz2D on a texture, video on a texture), improved texture upload performance, programmable shaders, many new extensions, and new tools.” Read more to find out whether your Mac can support Quartz Extreme. Quartz Extreme functionality is only supported by the following video GPUs: NVIDIA GeForce2 MX, GeForce3, GeForce4 MX or GeForce4 Ti, or any ATI Radeon or ATI Radeon Mobility GPU.
A minimum of 16 MB of VRAM is required for lower resolutions (up to 1024×768); higher resolutions need more. The more graphics memory the better, especially if you have many windows open at the same time, even if they are minimized.
Because of the demands of this kind of technology, the board needs to be at least AGP 2x (AGP 4x recommended). PCI cannot be supported because of its bandwidth constraints: PCI only does 133 MB/s, while AGP 2x, the minimum the technology needs, does 533 MB/s and AGP 4x does 1066 MB/s.
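To put those bus figures in perspective, here is a rough back-of-the-envelope sketch in C. The 60 Hz compositing rate and the single full-screen layer are assumptions for illustration, not Apple's numbers:

    #include <stdio.h>

    int main(void) {
        const double MB = 1024.0 * 1024.0;
        /* One full-screen 1024x768 layer at 32-bit colour. */
        double layer_bytes = 1024.0 * 768.0 * 4.0;   /* ~3 MB */
        double refresh_hz  = 60.0;                   /* assumed compositing rate */

        printf("one 1024x768x32 layer: %.1f MB\n", layer_bytes / MB);
        printf("re-sending it %.0f times/sec: %.0f MB/s\n",
               refresh_hz, layer_bytes * refresh_hz / MB);
        /* ~180 MB/s for a single layer already exceeds PCI's ~133 MB/s,
           and a desktop full of window textures needs far more headroom,
           hence the AGP 2x (533 MB/s) floor. */
        return 0;
    }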
The ATI Rage Pro/128/Ultra found in the older G4s, Cubes and classic iMacs, iBooks bought more than a couple of months ago, and any other Mac that does not match the configurations above exactly will not be supported by Quartz Extreme.
Mac OS X 10.2 Jaguar has optimizations, especially if you own a G4, that will make your desktop almost twice as responsive compared to OS X 10.1.5, even if you do not have a Quartz Extreme-capable graphics board. With QE enabled, performance goes up to three times faster than the current 2D-only desktop (provided the resolution is not ultra high and not a zillion windows are open or even minimized).
It seems as if Apple is offsetting their relatively poor CPU with the graphics card.
Too bad I can’t try it out (too expensive for me).
Cool, now we can watch an OpenGL rendered spinning beach ball (3D rendered hourglass for Windows folk). Three cheers for Apple.
Now you can *literally* measure your desktop in FPS (frames per second)! (no, this is not a joke)
The desktop will still look the same as it looks today (2D, that is), but underneath, all the work of rendering the desktop will be done by the 3D part of the graphics chip.
I believe Microsoft is also going to use a 3D-accelerated windowing system with Longhorn and offload some or all of its rendering to the graphics card. This is actually a really good idea; I am surprised it didn’t happen earlier. This should not give a performance hit for gaming because the desktop is not displayed. I think that’s the case anyway.
> I am surprised it didn’t happen earlier.
I am not surprised at all. You need truly fast 3D GPUs in order to do that kind of stuff. Older cards like Voodoos, Matrox G4xx, older GeForces or TNT2s wouldn’t be able to handle it; they would be really slow. Even the current GeForce4 Tis will be kinda slowish for the job. This is why MS is waiting a few years before introducing its own implementation.
I mean, when you compare the speed of running a 3D game with rendering a desktop, you would expect that rendering the 2D desktop would be easier and faster for the card. Surprisingly, this is not true. I won’t go into technical details; I will just say that I recommend a GeForce4 Ti with more than 64 MB of RAM in order to be able to drive your Mac at high resolutions “faster” (I won’t say “fast”, just “faster”).
Yeah. In Longhorn. In 2005. It’s like: yeah, we’ve got that. Well, we don’t, but we *will* some day. Perhaps. If we don’t change our mind, as we often do. Anyway, that’s the way MS works. When they saw the first GUI on a personal computer (Lisa?) they said ‘yeah, we’ve got that, it’s coming out in 1983’. And they didn’t have a half-decent GUI until late 1995. Whoo, that’s thirteen years, quite a lot! I hope you Windows users don’t have to wait that long again. Oh, and, slightly off topic: when Apple came out with QuickTime 6 they had an announcement: Windows Media 8 (or nine, or whatever version is next) will reach alpha stage in about six months, and won’t support MPEG-4. They’re always the same. I mean, use their products, they’re cool (or at least usable), but please don’t listen to them when they are like that.
My hat is off to Apple on Quartz Extreme. A very good effort and it is super to see Apple leading the OpenGL movement.
Based on what I read it will definitely benefit from big iron graphics hardware, especially since shaders/textures for OpenGL work much better when you have lots of graphics RAM and a killer graphics processor.
The Radeon 9700 should ship on the Mac before too long… hopefully in the September PowerMacs. So get your video card upgrade requests to Santa early :)
#m
Please do not be childish. It is very well known that Microsoft tries to support legacy hardware and software better than Apple does. Apple has many times rendered 1.5-2 year old Macs obsolete. Microsoft would never do that, as a lot of businesses use this older hardware.
MS is doing the 3D composition for Longhorn in 2005 because it makes more sense for THEIR business model. By then, graphics cards will be much faster and they will be able to deliver the technology better, plus the current high-end cards (which by 2005 will be “old”) will still work with that technology.
The right thing, at the right time, depending on the business models of the two companies.
” I am surprised it didn’t happen earlier.”
Unless you heard of PicassoGL. I know, I know, it never even left the alpha stage, but had Be, Inc. thrived, it might have been out by now, or RSN ™. Oh well, all in the past now.
Forum Administrator’s Note: This is not correct. Be’s PicassoGL had nothing to do with 2D/3D composition; we have already talked about it in an earlier discussion. Pretty much, it was about putting the 2D and 3D accelerant driver together – under a single lock. It was a developer’s issue; it had nothing to do with what the user sees on the screen or how things are done on the desktop. It was just a change in the driver model. Consequently, it would have made it possible to use 3D-accelerated primitives for the Interface Kit, especially for anti-aliased text, bitmap scaling, etc., but that work was never done, as Be was sold to Palm and all such development ceased. There is a good thread on BeGroovy’s “general discussion” forum if you want to learn more about it.
…if I could buy a current iBook and be able to run Quartz Extreme well. Now I’m very excited about getting an iBook. =)
The spinning beachball is actually supposed to be some aqua glob or something now in Jaguar. But I think it’s unfair to criticize Apple for not having Quartz Extreme work with all cards. They’re pushing for a new top-end improvement; they might as well take advantage of what they’ve got now. And it’s not like Quartz Extreme is the only improvement in Jaguar, or even the only speed-up.
That doesn’t make sense either, though. Why do it later when you can do it now? It’s not a “must have” feature, just a nice improvement. That’s like saying you shouldn’t release a vaccine for a disease because right now it’s more economical to market a treatment.
Microsoft can’t do it because Microsoft doesn’t control what hardware you run, and as such they have to support a MUCH larger set of possible video cards. Apple can press ahead much faster because there are only a handful of hardware configurations to support. It’s not because Microsoft is “waiting for faster hardware”; trust me, if they could find a way for you to pay them some more money TODAY they would surely do it. Microsoft’s weakness has always been in supporting legacy technology that didn’t make them money anymore; such is the price to be paid when you own 98% of the market. Do you have any idea how fast graphics cards will be by 2005? Exponentially faster than they are today, yet Apple is reaping the benefits of a relatively current crop of cards. The lesson is that it can be done; it’s just harder for Microsoft to do it.
It has to be said: while Apple and possibly Mickeysoft are innovating, the Linux desktop isn’t really moving ahead at all. Nobody seems to be addressing the infrastructure and integration issues that Apple have tackled. Instead we’re stuck with endlessly theming X Window-based toolkits.
OK, there are disparate efforts like DirectFB, SDL, Berlin/Fresco etc., but X is definitely where all the momentum is. I mean, is it possible to achieve an OpenGL-rendered desktop using X? What complexities would need to be tackled? Would X’s architecture be compatible with this new approach?
Respect to X, but I think it’s had its day. It was designed principally to support GUIs in a networked environment by decoupling the display client from the server. Nothing wrong with that of course, except that was the primary design goal (someone correct me if I’m wrong please).
I think it’s time now to have a complete rethink and redesign and rebuild the Linux GUI from the ground up.
Talk to Thomas —
http://www.protocopy.com/osgui.html
#m
This could be really big if the graphics card vendors got behind it more, providing drivers as optimized as their Windows versions. Looking through the slides (too bad I couldn’t attend the presentation), I get the impression that Apple is taking OpenGL a lot more seriously than all the other OS vendors.
I want a new TiBook. And the Shader builder. NOW!
Yes, it is entirely possible.
All you need is someone to do it.
Make an alteration to X so that the graphics are rendered to an OpenGL texture and drawn.
Pass the event through as per normal.
If you want to do fancy effects you need to make a window manager and implement a new event manager, because mouse-click events work differently in 3D than in 2D.
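For what it’s worth, the basic mechanism is simple enough to sketch. This hypothetical C fragment (GL/GLX setup assumed to exist elsewhere; the XGetImage copy is the slow, illustrative path a real compositor would avoid) grabs one window’s contents and draws it as a textured quad:

    #include <X11/Xlib.h>
    #include <X11/Xutil.h>
    #include <GL/gl.h>

    /* Illustration only: copy one X window into an OpenGL texture and
       draw it as a quad. 'dpy', 'win', 'w' and 'h' come from elsewhere. */
    void draw_window_as_texture(Display *dpy, Window win, int w, int h)
    {
        XImage *img = XGetImage(dpy, win, 0, 0, w, h, AllPlanes, ZPixmap);
        if (!img)
            return;

        GLuint tex;
        glGenTextures(1, &tex);
        glBindTexture(GL_TEXTURE_2D, tex);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
        /* Assumes a 32-bit ZPixmap; GL_BGRA needs GL 1.2 (older headers call
           it GL_BGRA_EXT), and old GL also wants power-of-two sizes, so a
           real implementation would tile or pad the image. */
        glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, w, h, 0,
                     GL_BGRA, GL_UNSIGNED_BYTE, img->data);

        glEnable(GL_TEXTURE_2D);
        glBegin(GL_QUADS);                /* one window = one textured quad */
        glTexCoord2f(0, 0); glVertex2f(0, 0);
        glTexCoord2f(1, 0); glVertex2f((GLfloat)w, 0);
        glTexCoord2f(1, 1); glVertex2f((GLfloat)w, (GLfloat)h);
        glTexCoord2f(0, 1); glVertex2f(0, (GLfloat)h);
        glEnd();

        glDeleteTextures(1, &tex);
        XDestroyImage(img);
    }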
I would be very leery of getting any Apple computer with a 16MB video card. The presentation slides recommend 32MB. Since “recommended requirements” are always the real minimum requirements, a card with 64MB or more video RAM is what you want to get. As “AGP 4X” is also the “recommended” requirement, make sure you get AGP 4X or higher.
That rules out every currently shipping Apple machine except for the PowerMac G4.
Apple only sells one video card on one system that offers good 3D — the Geforce4 Ti 4600 128MB on the PowerMac G4.
TiBook – 32MB Radeon 7500… not a 3D powerhouse by any means.
iBook – 16MB ATI Mobility… scary for 3D… only AGP2X
iMac2 – 32MB GeForce2MX … 3 years old 3D AGP2X
iMac2 – 32MB GeForce4MX… today’s crippled 3D card AGP2X
eMac – 32MB Geforce2MX … 3 years old 3D AGP2X
PowerMac G4 – 32MB Radeon 7500 … slow 3D AGP4X
PowerMac G4 – 64MB Geforce4MX … decent RAM, slow 3D AGP4X
PowerMac G4 – 128MB Geforce4 Ti 4600 … this is good AGP4X
The September G4 PowerMac’s should have better choices.
For a laptop, I’d definitely wait. All the top-end PC laptops have 64MB video RAM and higher-performance NVidia chipsets. Apple’s next TiBook should have the same. And when that happens, the iBook will be bumped to 32MB.
#m
Heh, I knew it was only a matter of time that someone started slagging off Xfree86 & Linux. It always happens, no matter what the topic under discussion is supposed to be.
Perhaps OSNews should change its name to comp.xfree.endless.complaints so the anti or pro-Xfree86 zealots can leave us alone to discuss the actual article in question (which is a bit radical I know).
Sort of off topic but would 10.2 make my machine run much smoother? Thanks!
>>Sort of off topic but would 10.2 make my machine run much smoother? Thanks
If this is not smooth enough yet then probably not. The Aqua on top of BSD is just slow. All unix systems are like that, they are not really for the average user’s desktop.
SGI has many super fast graphics workstations. While they can do wonders with OpenGL, the rest of the environment outside of the graphics environment feels sluggishly slow. It’s the unix!
ciao
yc
unix slows the desktop? strange to me that posix + bash = slow desktop.
>If this is not smooth enough yet then probably not. The
>Aqua on top of BSD is just slow. All unix systems are like
>that, they are not really for the average user’s desktop.
That’s pure BS! If a windowing system is slow on today’s OSes then it means that its implementation is suboptimal *or* it just does a whole lot of things for which the machine is too slow. Unix systems do *not* slow down any desktop.
>SGI has many super fast graphics workstations. While they
>can do wonders with OpenGL, the rest of the environment
>outside of the graphics environment feels sluggishly slow.
>It’s the unix!
Again, pure BS. Did you ever work on an SGI machine? I do, every day, and I do not experience a slow environment at all.
BTW there is no such thing as ‘the unix’, there are a lot of different unix kernels out there, a lot of different X11 implementations and other windowing systems and so on.
Stop trolling, that’s no fun anymore…
-GnorpH
> Heh, I knew it was only a matter of time
I doubt it.
> …that someone started slagging off Xfree86 & Linux.
As a Linux Desktop User, I can see huge benefits in any approach which innovates in the way Apple are doing with Quartz/OpenGL. I’d love to see this kind of technology available and usable on Linux systems. Therefore, the next question is “can this be achieved under the status quo” but as always, there seem to be people who won’t countanance any free thought outside the X-Windows box.
> It always happens, no matter what the topic under discussion is supposed to be.
Rubbish.
> If this is not smooth enough yet then probably not. The Aqua on top of BSD is just slow. All unix systems are like that, they are not really for the average user’s desktop.
Once again, YC is talking crap. Remember, this guy thought the Evil-la was cool! I think he even bought one, mwuhahaha :)
-fooks
“If this is not smooth enough yet then probably not. The Aqua on top of BSD is just slow. All unix systems are like that, they are not really for the average user’s desktop.”
I’ll keep that for any time I need to die of laughter.
As someone else mentioned above, this is nice for desktops with lots of system memory, a fast processor, a fast video card and lots of video memory. But what about older systems and laptops? Nonetheless, OpenGL rendering in the OS is a novel idea and one more reason I’d consider buying a Mac now.
I can imagine that the rendering might take up more processor time, and with heavier memory and video card requirements, I don’t think this will be good for battery life. I’ve noticed significant improvements in XP by turning off all the eye candy. Will there be an option to turn off OpenGL rendering in OS X?
I’m very interested to see how this works with 10.2. We have a flat panel iMac and eMac and both have the bare minimum 32 MB cards.
Actually, I’m more interested to see what 10.2 does for Macs like Eugenia’s Cube and our 700 MHz G3 iBook. I have to believe that there will be a speed increase of some degree for all; otherwise, even the majority of people who switched to OS X from 9 will be out in the cold as far as the Big Issue (speed) is concerned. I know Jobs wants people to buy OS X and new hardware, but he isn’t stupid. He’s also run out of hot air on the MHz Myth, so I would be stunned if 10.2 doesn’t speed everyone up, at least to a degree (:::preparing to be stunned…<g>).
Your arguments don’t entirely make sense. If there are video cards available right now that can support 3D composition, why not simply have the OS installer decide which system to use based on the graphics card or GPU detected?
After all, there are games that have tons of fancy features,
e.g. fog, transparency, etc., that are not supported on older graphics cards: disable those features and the game will run (and look ugly, perhaps).
Also, hardware drivers in the PC world are written by the manufacturers, whereas Apple does the bulk of the driver-writing for their systems. So M$ has only to release a reference platform and let the manufacturers worry about making their hardware support it.
Rafa may be trolling, but I don’t think he’s entirely off the mark in this case. Microsoft is playing catch-up and is using the “waiting for faster/more advanced hardware” line to buy itself time.
The new Enlightenment, E17, uses Evas which is a hardware-accelerated canvas API for X-Windows that can draw anti-aliased text, smooth super and sub-sampled images, alpha-blend, as well as drop down to using normal X11 primitives such as pixmaps, lines and rectangles for speed if your CPU or graphics hardware are too slow. So, before you start screaming “X can’t do it. X is too outdated!”, remember that Linux/X has support for a little technology called DRI with which you can achieve 3d performance that’s very close to what you’d get under Windoze.
who needs a gui?
E17 may never be out officially… when will it really be an official release everyone can count on?
Has Rasterman given up on the desktop scene? I read an interview last week or so… hmmm..
From my very limited understanding of Evas, it looks like apps will need to be specially written to use the acceleration. That just won’t fly, although it’s a start.
Way to go Apple, you managed to do what the Linux community has been wanting in a very small amount of time. Just goes to show that concentrated efforts on something can really pay off… how many DIFFERENT OpenGL WMs are in the works for Linux now? I’ve heard of at least 4. Let’s FINISH one!!!
>>That’s pure BS! If a windowing system is slow on todays
>>OSes then it means that its implementation is suboptimal
>> *or* it just does a whole lot of things for which the
>>machine is too slow. Unix systems do *not* slow down any
>>desktop.
Pure BS my a$$! Most unix systems use X for windowing and it’s slow as sh*t! Have you ever used BeOS? Have you ever used Windows even? I *have* used Solaris (SPARC & Intel), HP-UX on PA-RISC, Linux on Intel & PPC, and all the BSDs on Intel, and I have played around with a variety of SGI boxes. X-Windows is slow on all of them when compared to Windows NT or BeOS! Unix is a very heavy OS and IMHO should stay the hell off the everyday user’s desktop!
>>…did you ever work on a SGI machine? I do, every day – and I do not experience a slow environment at all.
Yes, I have, and the user environment is slow just like all the other unix-based OSes! You should try using something else and compare! Try Windows 2000, that should be easy enough!
Unix belongs on servers, period! They never have and never will make it big on the desktop. Why? The user interfaces are relatively unresponsive! Most of them are slow as sh*t. Mac OS X is probably one of the fastest GUIs around for unix and it’s still slow.
>>BTW there is no such thing as ‘the unix’
Get a life man! “the unix” refers to the BSD Unix on which the MacOS X is based!
>>there are a lot of different unix kernels out there, a lot of different X11 implementations and other windowing systems and so on.
In terms of responsiveness they are f&ck*ng! slow! when compared to Windows 2000 or BeOS. They are not nearly as friendly either!
I like Macs. They are soooo stylish, innovative and soulful. I will never use Windows Longhorn, Palladium etc., so it does not interest me one bit. : )
Honestly, I thought Jaguar was going to be more impressive. What it appears to be now is nothing more than an accelerated compositing engine. If you look at slide 10, you’ll see that all Quartz 2D content is still software-rendered into a window buffer. The only place the GPU is used is to composite these window buffers to the screen.

This is sub-optimal in two ways: first, the GPU was not designed to deal with triangles and textures that big; modern GPUs are optimized for small triangles and small textures. Second, rendering Quartz is quite expensive, and in Jaguar that part still does not use the GPU. That might explain why the performance increase is a measly 3x (even through Apple’s reality distortion field). Contrast this to E17’s EVAS, which is getting 200+ fps on a relatively ancient Riva TNT2.

A far better design than Jaguar’s would have been to render window contents with OpenGL as well. For example, when rendering a PDF path, one could use OpenGL’s support for Bezier curves. By treating each window object as an OpenGL object, you would get a design that utilized the GPU far more thoroughly (accelerated more stuff), that gave developers access to stuff like vertex shaders for their 2D objects (rather than the Jaguar model which can only possibly use vertex or pixel shaders to do window-level special effects), and lastly one that was far more suited to current 3D hardware.
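To make that last point concrete: classic OpenGL can evaluate Bezier segments on its own, which is the sort of primitive a PDF path could be mapped onto. A rough sketch in C (the control points are invented, and whether Apple’s actual pipeline could be mapped this way, or how anti-aliasing would be handled, is an open question):

    #include <GL/gl.h>

    /* Draw one cubic Bezier segment with OpenGL's 1D evaluators.
       Control points are made up for the example. */
    void draw_bezier_segment(void)
    {
        static const GLfloat ctrl[4][3] = {
            { -0.8f, -0.5f, 0.0f },
            { -0.3f,  0.9f, 0.0f },
            {  0.3f, -0.9f, 0.0f },
            {  0.8f,  0.5f, 0.0f },
        };

        glMap1f(GL_MAP1_VERTEX_3, 0.0f, 1.0f, 3, 4, &ctrl[0][0]);
        glEnable(GL_MAP1_VERTEX_3);

        /* Let the GL tessellate the curve into 30 line segments. */
        glMapGrid1f(30, 0.0f, 1.0f);
        glEvalMesh1(GL_LINE, 0, 30);

        glDisable(GL_MAP1_VERTEX_3);
    }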
Actually, rendering a 3D game is much more complex than any 2D desktop. Besides, E17 is getting huge framerates even from older hardware like the TNT2 you mention. Thus, the flaw is in Apple’s design, not in the speed of today’s hardware. Think about it: a modern graphics card has fill rates on the order of gigapixels per second and triangle rates into the millions per second. A 1000×1000 window has only 1 million pixels, and even a complex one would have less than 10,000 textured triangles worth of stuff in it. Thus (taking into account that you can’t get peak performance all the time), you’d have hundreds of full-size windows before hitting either the fill rate or triangle rate limits.
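To sanity-check that with some assumed, ballpark figures for a 2002-era card (my own rough numbers, not measurements):

    #include <stdio.h>

    int main(void) {
        double fill_rate  = 1.0e9;           /* assumed pixels per second */
        double tri_rate   = 30.0e6;          /* assumed triangles per second */
        double win_pixels = 1000.0 * 1000;   /* one 1000x1000 window */
        double win_tris   = 10000.0;         /* generous per-window estimate */

        printf("windows of fill per second:       %.0f\n", fill_rate / win_pixels);
        printf("windows of geometry per second:   %.0f\n", tri_rate / win_tris);
        printf("full-size layers per 60 Hz frame: %.0f\n",
               fill_rate / (win_pixels * 60.0));
        return 0;
    }

Even at a steady 60 Hz compositing rate that leaves room for well over a dozen full-screen-sized blended layers per frame, so raw fill and triangle throughput is not the obvious bottleneck.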
> who needs a gui?
Well, nobody actually NEEDS one. But they do make it
a little easier to interact with the machine. Not
everyone likes the CLI.
> The Aqua on top of BSD is just slow.
Well as long as you’re trolling blame the right thing. The particular component you should be blaming is CoreServices WindowServer. Aqua sits on top of that.
> All unix systems are like that, they are not really for the average user’s desktop.
This is actually true to a certain extent, and I know I’ve rambled on about this in the past. When it comes to IPC requirements, there’s nothing quite like a display server. Most unices just don’t have good IPC channels, whereas Windows actually does.
With OS X, this isn’t the case. XNU, the OS X kernel, is partially Mach based and thus has Mach messaging, an incredibly sweet IPC mechanism.
> Unix is a very heavy OS and IMHO should stay the hell off the everyday user’s desktop!
See once again you’re kinda blaming the wrong thing. You should try to look for where the real bottlenecks are before you start making random accusations. Part of being a good troll is people have to think about your post a bit before realizing you’re a troll.
> Unix belongs on servers, period! They never have and never will make it big on the desktop. Why? The user interfaces are relatively unresponsive! Most of them are slow as sh*t. Mac OS X is probably one of the fastest GUIs around for unix and it’s still slow.
And what of QNX/Photon?
You’re on crack.
A) X is slow? Prove it. On my old 300 MHz PII with a Riva TNT, X will blit about 3000 10,000-pixel bitmaps (roughly 100×100) to the screen per second. Via DirectX, that number is closer to 2000. Don’t even get me started on how much slower the GDI is. Given all the pixmaps the average desktop uses (icons, buttons, scrollbars, text glyphs), that’s a damn important number.
B) There are loads of different UNIXes. QNX is extremely lightweight, more so than even BeOS. Linux is more heavyweight, but runs like a bat out of hell. In most of the important kernel benchmarks (speed of primitive operations, filesystem throughput, etc.) Linux whips Windows NT’s ass.
C) I find Windows XP (with the new Luna GUI) to be significantly slower than KDE 3.0.2. I’m talking about stuff like resizing windows, etc. True, it’s not as fast as BeOS, but it’s also got far more features. Besides, if you want BeOS-level performance, you can get pretty close with Fluxbox and some well-written applications (like ROX-Filer, etc.). Under load, Linux (properly tweaked, of course) blows WinXP away. Running big compiles in the background doesn’t even affect the responsiveness of the UI.
Now, I know my experience is probably very different from yours. Given that you sweepingly refer to UNIX and X (which encompass dozens of very different systems), I don’t know exactly how coherent and deep your experience with these machines has been. If you’re tired of living in the dark, send me an e-mail and I’ll show you how to properly set up a Linux machine for the optimal user experience.
> For example, when rendering a PDF path, one could use OpenGL’s support for Bezier curves. By treating each window object as an OpenGL object, you would get a design that utilized the GPU far more thoroughly (accelerated more stuff), that gave developers access to stuff like vertex shaders for their 2D objects (rather than the Jaguar model which can only possibly use vertex or pixel shaders to do window-level special effects), and lastly one that was far more suited to current 3D hardware.
See, the problem with this argument is you’re missing the point. The main thing hindering Quartz’s speed is alpha blending. As they already have astoundingly fast code for everything else, all they would’ve accomplished is requiring even higher end 3D hardware.
For those wondering about speed…
… especially on hardware that does not take advantage of QE, see below. Please NOTE: This is 6c106, NOT 6c115, which is rumored to be GM.
Here’s the link (at arstechnica)
http://arstechnica.infopop.net/OpenTopic/page?a=tpc&s=50009562&f=83…
Here’s the text
Just installed and daim, there’s lots and lots of little enhancements that really make a difference, like
-ability to display file info, eg number of items in a folder, along with the name in icon view
-much, much faster UI, as in moving windows around, using the genie effect on IE and iTunes windows while playing songs, right-clicking on folders in the dock to display a list (although this is disk I/O too) – I am using a Rev2 B&W G3 with 192mb ram and an ATI 128 (no QE for me)
-a lot more options in the preferences, eg simple editing of inetd.conf and ipfw settings from the sharing pane, built-in internet sharing (rather than having to do the CLI natd dance, or gNAT); software update now shows installed updates; option to decide what to do when a CD or DVD is inserted (different options for blank/audio/video/photo); great Universal Access options for all kinds of things
-Several new apps in Utilities – “ODBC Administrator”, “Bluetooth File Exchange”, “Audio MIDI Setup”
-Copying between disks seems to have sped up a little bit
-Terminal.app has been changed a fair bit, you can now enable/adjust transparency without using TinkerTool
-“update_prebinding” seems to have changed, maybe they have just changed it to be more verbose, but it seems to throw up fewer Kerberos errors (in 10.1.x every time I did ‘sudo update_prebinding -root /’ it chucked up lots of errors that had something to do with Kerberos, and failed to update the prebinding for a lot of files), and “Optimising”, ie updating prebinding done in the installer, now shows the percentage of optimisation complete, even if it does take a fscking long time
-Directory Access now has a whole lot more than just LDAPv2 and NetInfo: AppleTalk, LDAPv3, BSD Configuration Files, Rendezvous, SLP, SMB, most of which have an option to configure
-OMFG I’VE DIED AND GONE TO FAPLAND TERMINAL.APP NOW DRAWS TEXT PROPERLY!!!!(if anyone uses BitchX from the Terminal, you’ll know what I mean, but no there are no more ??:3:?1m crap, they have proper characters, w00t!!!!!!!!!!!!!!!
Um, that’s about all I’ve noticed in the first 30mins of using it, if anyone wants screenshots of pref panes or utilities, or to check something,
just ask me
Chiddler
> From my very limited understanding of Evas, it looks like apps will need to be specially written to use the acceleration.
This is incorrect. Evas provides an API that ties into whatever backend rendering engine you want to use. In a typical application, the user can select OpenGL / software based / whatever… In your actual code, you don’t have to change anything. There’s an Evas programmers guide on enlightenment.org that explains the whole deal. In the future there will be additional backends such as postscript, pdf, etc…
> E17 may never be out officially…
Well, you can always grab it from CVS and hack on it yourself. That’s the beauty of open source. If people really want it, then they’ll give rasterman a hand, and it will get done faster than just one person working on it.
Just thought I’d point out when you compile BitchX configured to stick to basic ASCII, it looks perfectly normal and still runs fine. *learned this trying to get BitchX installed and NOT fugly*
>>Well as long as you’re trolling blame the right thing.
>>The particular component you should be blaming is
>>CoreServices WindowServer. Aqua sits on top of that.
Ok. I’ll take your word for it. I’m not really familiar with Aqua internals.
>>When it comes to IPC requirements, there’s nothing quite
>>like a display server. Most unices just don’t have good
>>IPC channels, whereas Windows actually does.
Agreed! Thank you!
BeOS also has an awesome display server, making for the most responsive desktop UI I have ever used.
>>With OS X, this isn’t the case. XNU, the OS X kernel, is >>partially Mach based and thus has Mach messaging, an >>incredibly sweet IPC mechanism.
I think OS X’s speed problem comes from other issues. From what I know of the architecture, the issues stem from the complexity of rendering in Aqua; the file system also takes a long time to load files, and the size and complexity of the underlying BSD OS makes it relatively heavy.
>>And what of QNX/Photon?
While QNX/Photon is POSIX compliant, it does not usually carry all the baggage of typical unix systems. It’s POSIX compliant but I think it’s implemented very differently. It’s a very light OS compared to Solaris on Intel or OS X on PPC.
>>Part of being a good troll is people have to think about
>>your post a bit before realizing you’re a troll.
It’s good to troll once in a while.
“””very light OS compared to Solaris on Intel”””
Why would you ever consider Solaris on Intel to be a desktop system? It’s not designed for such (as indicated by Xsun’s plethora of drivers); its sole purpose is as a development platform for shops without the money to fork over for Sun’s own equipment.
Chances are as well you were running x86 Solaris with XFree which I can’t imagine providing any significant performance gains across platforms (*BSD, Linux, x86 Solaris, et al) without some sort of external driver (e.g. NVidia’s).
Sun’s Xsun is reasonably fast; the only Suns I’ve ever seen run slow are those aging Ultra 10s (which seem to be ubiquitous) and a few Blades running on a busy network.
>Actually, rendering a 3D game is much
>more complex than any 2D desktop
You are wrong here, take it from someone who has done this. I think Eugenia probably knows what she’s talking about, as well.
OpenGL-only desktops are pretty hard to render. In my experience, the number of polygons can be higher than most games. Getting clear text is particularly tough.
Games can afford to be “sloppy” in many ways. You’ll notice that most “splash screens” on videogames actually have lots of movement, which hides the imperfections. But a desktop has to look perfectly clear, at any resolution, even when sitting still.
-Ben
>>You’re on crack.
Am I?
>>A) X is slow? Prove it. On my old 300 MHz PII with a Riva
>>TNT, X will blit about 3000 10,000 pixel bitmaps to the
>>screen per second. Via direct X, that number is closer to
>>2000. Don’t even get me started on how much slower the
>>GDI is. Given all the pixmaps the average desktop uses
>>(icons, buttons, scrollbars, text glyphs) that’s a damn
>>important number.
It’s all relative.
OK, given a fine-tuned X environment on a powerful box with super fast graphics (the Riva TNT2 *is* a very fast graphics card, by the way), once loaded (which may take a while because X is complex!) it can blast pixels onto the screen very nicely. Even over a fast network it works OK.
However, when you factor in the time it takes to load on a desktop, when you compare it with lighter-weight display servers (e.g. BeOS’s), when you use it over a shared network, and when you factor in the other complexities of typical unix boxes, you can understand why, for general-purpose office environments, Windows 2000 or BeOS will be more responsive.
I agree that if you have *a window* in which you want to blast pixels, once you have X loaded and fine-tuned it can blast pretty well, but that’s not what most people use their desktops for every day. Right?
ciao
yc
> Make an alteration to X so that the graphics are rendered to an OpenGL texture and drawn.
> Pass the event through as per normal.
You can’t pass DRI messages normally in X. They never go through the client/server socket like the rest of the messages do. They go straight to where they need to go. That’s why you cannot do OpenGL over a network in X (oh, I’ve tried).
I’m sure it would be possible to make an OpenGL-accelerated desktop for X, but then you lose just about the only good part of X, the possibility to run remote applications.
> They go straight to where they need to go. That’s why you cannot do OpenGL over a network in X (oh, I’ve tried).
Sure you can, that’s what GLX is for! You are probably confusing it with Direct/Indirect Rendering. When your OpenGL client runs on the same machine as the server/3D hardware you can do Direct Rendering, meaning commands can be DMA’d directly to the 3D hardware. When an OpenGL client is run remotely, all the commands are transferred over the network, after which the server pushes them to the 3D hardware (after decoding, of course). The bottleneck is the network connection. I can run Quake 3 over my 100Mbit network at an acceptable framerate (not great, but acceptable), i.e. the quake.x86 executable is running on a box across the room, while my main workstation is displaying the 3D graphics (NVidia).
The URL below should explain things in more detail:
http://www.opengl.org/developers/documentation/Direct/direct.html
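And to see the difference from code, here is a minimal, hypothetical GLX check (no error handling; the display comes from $DISPLAY, so running it locally and then remotely shows direct vs. indirect):

    #include <stdio.h>
    #include <X11/Xlib.h>
    #include <GL/glx.h>

    int main(void)
    {
        Display *dpy = XOpenDisplay(NULL);      /* honours $DISPLAY */
        if (!dpy)
            return 1;

        int attribs[] = { GLX_RGBA, GLX_DOUBLEBUFFER, None };
        XVisualInfo *vi = glXChooseVisual(dpy, DefaultScreen(dpy), attribs);
        if (!vi)
            return 1;

        /* Ask for a direct context; GLX silently falls back to indirect
           (GL commands encoded over the X connection) when it can't. */
        GLXContext ctx = glXCreateContext(dpy, vi, NULL, True);

        printf("rendering is %s\n",
               glXIsDirect(dpy, ctx) ? "direct (local DMA path)"
                                     : "indirect (GLX protocol over the wire)");

        glXDestroyContext(dpy, ctx);
        XCloseDisplay(dpy);
        return 0;
    }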
-fooks
> X is slow? Prove it.
Okay. Try using my Celeron 366 with 128 megs of RAM running KDE. Stock. No changes to the source code, no changing obscure settings that only Linux/unix gurus would know. It’s SLOW. Anyone who says otherwise is simply lying.
> QNX is extremely lightweight, more so than even BeOS.
Lightweight… yeah, it is. But in my experience I’ve found BeOS to be more responsive. Still, QNX is a nice OS.
> Linux whips Windows NT’s ass.
That’s not saying much.
> Linux (properly tweaked, of course) blows WinXP away
Properly tweaked Linux vs stock XP is not a fair comparison. I don’t disagree with you, though.
See, the thing is, saying Linux is faster than Windows is not really helping you any. It’s like saying a Dodge Viper is faster and more responsive than a Toyota Corolla. Well, yeah. That’s not hard to do. But how does it compare to a Mustang (or Camaro or whatever)? There’s the real test.
> who needs a gui?
> Well, nobody actually NEEDS one.
Ya, Sure. Uh huh. Right. Yeah. Where did ya think that one up.
Trying to do realtime video editing, 3d graphics with, and other media intensive tasks without a GUI would just be stupid.
BeOS 5 Pro is one of the eight desktop OSs that I use for at least an hour each week. I like BeOS and hope that OpenBeOS can get a noticeable market share of desktop computers.
I also have a top of the line 800 MHz iMac with the SuperDrive. Looking at both BeOS and Mac OS 10.1.5 you can see a noticeable difference. The BeOS desktop is very stark and simple and the OS doesn’t have to worry about legacy applications (OS 9 compatibility mode).
Keep in mind that Apple is going where no other group of programmers have ever gone before with production off the shelf computers. Yes others have tried doing this and/or are working on it. But none have gotten as far as Apple has yet. And it always takes a lot more time to build something from scratch than copying what others have done before you.
It would not have mattered in any way, shape, or form if OS X had been fast but very simple. It is much easier to make something look the way you want in the first place and then keep working on making it more efficient (read: faster) than to take something that is simple and fast and keep tweaking it to look better while trying to keep it running fast. Besides, you really make a lot of people mad when you keep making things look different. Look at all the flack that Apple took for ONE change in looks from Mac OS 9 to Mac OS X.
“Look at all the flack that Apple took for ONE change in looks from Mac OS 9 to Mac OS X.”
That was nothing compared to the flack from OS 6 to OS 7. Now that was a flame war.
Eugenia, you take some awfully hard lines sometimes, so I’m pretty shocked by your MS apologizing:
“MS is doing the 3D composition for Longhorn in 2005 because it makes more sense for THEIR business model. By then, graphics cards will be much faster and they will be able to deliver the technology better, plus the current high-end cards (which by 2005 will be “old”) will still work with that technology.”
Baloney! MS is doing GPU-assisted GUI rendering in 2005 because their “advanced” XP interface wasn’t advanced and barely uses their own allegedly advanced GDI+, DX9, and ClearType. But MS always responds to what they don’t have and they will need a more advanced GUI for Longhorn to stand up to Aqua.
It’s not a limitation of current cards because they only started to think about this, seriously, in recent months. It’s not as if they had been pursuing this for years but gave it up because it couldn’t be done.
On the other hand, as soon as the PB of OS X came out, there were rumors that this would be accomplished, and it has been. Who knows to what extent Apple has been working on this, what part is from Raycer tech, how long they have been collaborating with Nvidia and ATI. It will improve over the next 3 years while MS is catching up.
“Now you can *literally* measure your desktop in FPS (frames per second)! (no, this is not a joke)
The desktop will still look the same as it looks today (2D, that is), but underneath, all the work of rendering the desktop will be done by the 3D part of the graphics chip.”
No, this is wrong. This has nothing to do with constantly redrawing textures; this is the *compositor* refresh rate. Why would you have to continuously redraw the composed “image” of 2D texture windows unless the UI was being updated or “in motion”? You don’t. You are making an inappropriate criticism. Look at 2D window moving: 400 operations per second… Guess what, 360 redraws per second is fast enough for my eyes, optic nerves, and brain to deal with.
“Keep in mind that Apple is going where no other group of programmers have ever gone before with production off the shelf computers. Yes others have tried doing this and/or are working on it. But none have gotten as far as Apple has yet. And it always takes a lot more time to build something from scratch than copying what others have done before you.”
First of all, Apple is *not* using off-the-shelf computers. Apple custom designs its hardware to run its software, something Microsoft and Linux developers do not have the luxury of doing. Windows and Linux have to actually run on “off the shelf” computers, running hardware that god-knows-who built. OS X runs on what, 5 different machines?
Also, about the eye candy in osx. Eye candy is *not* a necessity. I would much rather have a desktop that is fast and responsive than one which looks sooo beautiful, but takes 3 minutes to redraw the mouse cursor.
Chris, what sort of apology is this hardware claim? I would imagine that what’s of primary importance is the 3D spec. If the graphics cards support OpenGL (or in the case of MS, DirectX), what does it matter where it’s coming from. On the system side, I see even less reason to be concerned about hardware. So doesn’t MS actually have the advantage by having full control of their own 3D spec? Yes, they do.
As for eyecandy, no, it’s not necessary, but there’s the present day criticism that software doesn’t even use the advanced capabilities of chips… Why not design a system that will evolve to do so? And, XP was primarily a cosmetic release, but in a completely unadvanced manner.
I’d much rather have a system that pushes the limit on visualization, which means the design moves quicker to GPU-assisted GUI display, and which will get faster and faster with new CPUs and GPUs, rather than something that layers on crap to make it look like it might be advanced.
> A) X is slow? Prove it.
“Prove it?” It’s called using X on a lower end system on a daily basis. Every day I come to work and xscreensaver is running on my K6-2 500MHz running FreeBSD. I use Window Maker which doesn’t exactly have the world’s greatest virtual desktop implementation. Anyway, I’ll usually have maybe 10 xterms open across 4 desktops, along with Opera. After xscreensaver exits, the entire system sits there processing redraw events for a good 10-15 seconds. You call that fast? 10-15 seconds to handle exiting a screen saver?
> On my old 300 MHz PII with a Riva TNT, X will blit about 3000 10,000-pixel bitmaps to the screen per second. Via DirectX, that number is closer to 2000. Don’t even get me started on how much slower the GDI is. Given all the pixmaps the average desktop uses (icons, buttons, scrollbars, text glyphs) that’s a damn important number.
It sure is; however, the problem is how those pixmaps are passed to the X server. In the case of X, it’s through a SOCKET. So you kinda have this little bottleneck thing going. It wouldn’t be an issue if MIT-SHM hadn’t been tacked on as a complete afterthought, but unfortunately it was, so nothing uses it.
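For reference, the shared-memory path itself is not much code. A hypothetical sketch of the MIT-SHM dance (error handling omitted, links against Xext; ‘dpy’, ‘win’ and ‘gc’ assumed to be set up elsewhere):

    #include <sys/ipc.h>
    #include <sys/shm.h>
    #include <X11/Xlib.h>
    #include <X11/Xutil.h>
    #include <X11/extensions/XShm.h>

    /* Push one image to the server through shared memory instead of the socket. */
    void shm_blit(Display *dpy, Window win, GC gc, int w, int h)
    {
        if (!XShmQueryExtension(dpy))
            return;                             /* no MIT-SHM on this server */

        XShmSegmentInfo shminfo;
        int scr = DefaultScreen(dpy);
        XImage *img = XShmCreateImage(dpy, DefaultVisual(dpy, scr),
                                      DefaultDepth(dpy, scr), ZPixmap,
                                      NULL, &shminfo, w, h);

        shminfo.shmid    = shmget(IPC_PRIVATE, img->bytes_per_line * img->height,
                                  IPC_CREAT | 0600);
        shminfo.shmaddr  = img->data = shmat(shminfo.shmid, NULL, 0);
        shminfo.readOnly = False;
        XShmAttach(dpy, &shminfo);              /* server maps the same segment */

        /* ... draw pixels into img->data here ... */

        XShmPutImage(dpy, win, gc, img, 0, 0, 0, 0, w, h, False);
        XSync(dpy, False);

        XShmDetach(dpy, &shminfo);
        XDestroyImage(img);
        shmdt(shminfo.shmaddr);
        shmctl(shminfo.shmid, IPC_RMID, NULL);
    }

Whether toolkits actually take this path is another question, but the bottleneck is not baked into the protocol.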
After I hit submit I knew that I should have worded that one part differently. Instead of saying “off the shelf” I was trying to say that Apple is the only group of programmers that will actually have it out there on the shelves. Totally my fault for the way I worded it. Hopefully this makes more sense and is more correct.
You say eye candy isn’t important. It’s not whether you or I disagree on this. Apple, MS, KDE, GNOME, OS/2, BeOS, Geos, Palm, etc., are all giving eye candy in one way or another on their “desktops” and it is important. Why do you think we aren’t all dealing strictly with text? That day has gone and now we use desktops with graphics. With (more or less) pretty icons.
I’ll agree that things flashing around the screen just to be flashing (animated ads, for instance) are more than just very irritating. But eye candy (transparent icons and windows, for instance) isn’t just eye candy, but useful eye candy. If you don’t want eye candy, don’t use a GUI that hit the shelves on production computers after 1992.
Am I the only one who’s noticed that 32MB is only the recommended card? That’s because Apple realizes not everyone wants to work in 1024×768 normally. It takes more video RAM to run at higher resolutions and Apple is accounting for that. People with lesser cards should still take advantage of Quartz Extreme, just not as far up the res. scale. And on most bottom end macs, they may only have 16mb video ram, but that should be enough for 1024×768, the bottom default res in OS X on current systems.
> And on most bottom end macs, they may only have 16mb video ram, but that should be enough for 1024×768, the bottom default res in OS X on current systems.
Provided that the user won’t have many windows open or even minimized (because they are all stored in the gfx memory individually). Personally I believe that 16 MB of VRAM is not enough for QE, especially if you have more than 6-7 windows open at the same time. Not even at 1024×768.
Why’d you pull my comment, Eugenia? It wasn’t obscene or insulting. I’m just curious: why are you making stuff up? You’ve said that you do not have current builds of Jaguar, you have a wimpy Mac and you seem to misunderstand QE, but somehow you know what QE can and cannot handle: that it needs more than 32MB of memory and that it degrades if there are more than 5 windows open with 16MB. Come on. I’ve seen you get all over people for speaking about something they don’t know about… Doesn’t the same apply to you? Or do you just like having the power to do what you want with your site?
First of all, I do have the power.
Second, QE is not something that I need the source code for in order to understand what it is doing and how. 3D composition is not exactly alien technology, you know. It just takes some basic knowledge to understand that it needs AS MUCH VRAM as possible, and if you have many windows open it matters (the “6-7 windows” I mentioned was just a number, and this number DOES change depending on the SIZE of the windows). *For example*, if your windows are very small (e.g. 200×200), you might be able to have 10-15 windows open with 16 MB of VRAM without QE falling back to software rendering. If your windows are “normal”, 600×600 etc., the number of windows you can have open at the same time *in QE hardware mode* GOES DOWN. This is a fact, I did not make it up. It is how 3D composition works, and not Apple, not Microsoft, not even God can make it work differently without hitting this limitation.
I am not talking crap, but it seems that you are. You are insulting me in this forum without *you* understanding the technology. Be as insulting and as much of an ass again, and you will see your geomatrix.com domain banned.
I do not run OSNews to be treated this way, I do it for fun, and I try to be as direct, *truthful* and explanatory on all my articles/comments. I do not try to be kind though. Eugenia in Greek means “kindness”, but that is not something I can offer here. If you do not like it, you can go and fuck yourself. Alternatively, you can apologize.
What I am trying to say is that with the traditional 2D rendering, you only have one huge *flat* bitmap in the graphics card memory, no matter how many windows you have open. One picture of the whole desktop is stored each time.
With 3D composition, EACH window is RENDERED and STORED *individually*, so you can soon hit memory limitations when you have more than (*about*) 6-7 medium-sized windows at 1024×768 with 16 MB of VRAM. The more memory the better, especially if you open many windows to do your job and if you are working at high resolutions. Even minimized windows are still stored and take up memory.
This new technique has good and bad things as you can see. If we all had a Matrox Parhelia 256 MB VRAM we probably wouldn’t care much, but not all of us do.
But AGP can. This is why QE requires an AGP card. That which can’t be stored in VRAM will spill over into main memory.
Sure, and the faster the AGP slot the better. An AGP 8x slot and a fast 3D graphics card with 128/256 MB of VRAM would do the trick perfectly for making sure you never hit limitations and slowdowns with QE.
But you are missing the fact that the windows are still 2D bitmaps “wrapped” in OpenGL so that they can be hardware accelerated… We aren’t talking about complete and full 3D image RENDERING, 3D effects (these are involved, but for what, 10% of the UI?), and 3D transformations (only 2D, and again, 10%); we are talking about the same window bitmaps as before dropped into an OpenGL 2D layer, not image rendering. This is not the same thing as rendering a screen image with millions of polygons which is textured, effect-ed, and transforming through animation, requiring those objects to be rendered for every frame. We are talking about bitmaps stored in layers with some effects and some transformations, not Quake. The amount of “drain” on the GPU is a complete unknown to you, me, and everyone else.
Also, your initial framerate comment makes no sense to me–I wonder if you can clarify. You could always measure any rendering system in framerates, true? But are you suggesting that the QE-enhanced UI would be slower than the current Aqua implementation? Or that each object and frame is being rendered through OpenGL? If so, you are misinformed about QE.
Ban me if you must, you obviously aren’t interested in discussion.
>Ban me if you must, you obviously aren’t interested in discussion.
If there is one person in this forum who is not interested in discussion and is instead spitting poison, that person is you. You only started “discussing” in your last comment. The rest was bitching and moaning about things you do not comprehend, and you refusing to accept that Apple has created something that has a natural VRAM limitation, because “Apple would not allow that”. Yeah, right.
The more windows you have and the bigger they are, the more VRAM you need with QE, especially for fast operations. Once you start to bleed over into main memory via AGP, things start to go bad.
It’s irrelevant whether you use 2D or 3D acceleration. Blitting is faster from local (GPU) memory than it is from main memory. If you want to use back-buffering for your windows *and* you want them to be fast, you want them to fit in video memory. QE or regular Quartz.
No matter how hard you try, internal fragmentation is bigger with a 2D allocator than with a 1D allocator. The 2D accelerator can deal with a 1D allocator. The 3D accelerator needs a 2D allocator.
QE *is* faster than the current Aqua. No one said the opposite. The problem starts when you are out of VRAM (especially if you only have 16/32 MB; 16 MB sounds like an absolute, half-assed minimum, and Jobs had initially said that it would really require at least 32 MB), and then things get really ugly (and slow). In short: QE is great, as long as you do not run out of its VRAM. If you do, there will be many cases where you would be better off on the traditional 2D path.
My first comment asked you where you got your “specs” from since you have admitted to not using Jaguar, that’s all.
You still haven’t explained or even suggested an explanation for how you know what the load per layer would be.
And before things get any testier, could you clarify: it IS your supposition that if your GPU is “saturated” with UI data the UI actually becomes slower than if there was no acceleration, correct?
This is 100% contrary to every bit of information I have heard about QE.
The minimum requirements for good Quartz Extreme are beyond what ships on any Mac except a PowerMac.
And with the PowerMacs due for imminent upgrades, it makes sense to wait.
Quartz Extreme was built for the future. There are still two pipelines (Quartz 2D and Quicktime) that are not even hardware accelerated yet.
Wait until Apple updates its line with current hardware. The entry level machines are still running 3 year old Geforce2MX graphics on AGP2X bus. The iBook has only 16MB video RAM, not much for double/triple buffering + textures.
#m
>You still haven’t explained or even suggested an explanation for how you know what the load per layer would be.
These were *suggested* numbers, but they are really close. I do not know *exactly* how much memory each 1024×768 window uses without knowing more about their implementation.
But it can’t be less than 3 MB per such 1024×768 window, because that’s how much a 1024×768 frame takes in graphics memory at 32-bit color (it would need 2.25 MB per window at 24-bit color). (unless they use on-the-fly texture compression, which I highly doubt)
>if your GPU is ‘saturated” with UI data that the UI actually becomes slower than if there was no acceleration
Yes. Like there would be no 3D acceleration, that is. The rendering would be done via main memory, which is naturally slower for this kind of thing (especially the SDRAM that all the consumer G4s have). They might use the plain Quartz system as a fallback, but still, it will be slow just because it will blit via main memory and not from VRAM. And that is exactly where the AGP transfer speed really matters; and if your CPU is really slow, well, the whole thing will become slow in general. You just hope you never fall out of QE and back onto main memory.
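To make the VRAM arithmetic concrete, here is a rough model in C. It assumes uncompressed 32-bit window textures and a double-buffered 1024×768 screen; these are my own illustrative assumptions, and if the window textures are double-buffered as well (as suggested further down the thread), the count drops again:

    #include <stdio.h>

    int main(void) {
        const double MB = 1024.0 * 1024.0;
        double screen = 1024 * 768 * 4 * 2;   /* double-buffered framebuffer */
        double window = 1024 * 768 * 4;       /* one full-screen 32-bit texture */
        double vram   = 16 * MB;              /* the low-end iBook/iMac case */

        double left = vram - screen;
        printf("framebuffer: %.0f MB, leaving %.0f MB of a 16 MB card\n",
               screen / MB, left / MB);
        printf("full-screen window textures that still fit: %.0f\n",
               left / window);                /* about three before spilling over AGP */
        return 0;
    }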
Michael, Quartz (PDF) and QT are unlikely to be built into NVIDIA or ATI GPUs, so I doubt that they’ll ever be hardware-optimized in the same way that OpenGL objects will be. This is part of my point: there is very little that QE is doing besides offloading composition (not rendering in the true sense, since these are primarily 2D layers containing bitmaps) to the GPU when it is able to. If the GPU is occupied or insufficient to ACCELERATE the UI, the UI performs in the traditional way. Repeat: the UI still functions in the traditional way if QE cannot be used. It is an ACCELERATOR, it is an ENHANCEMENT.
> (unless they use on-the-fly texture compression, which I highly doubt)
And to keep my friend dp happy, more explanation on why they don’t use on-the-fly texture compression:
The PDF says that: “Everything is a textured polygon and it is compositing via blending and multitexture.”
So they have to use either complex allocators or tiling – or both. They might do texture *de*compression. But in that case, texture compression has to be done by the CPU, which means that hardware acceleration to draw inside the windows becomes out of the question – so I don’t think they’d do that, it would not make any sense technically.
It would be possible in theory to use texture compression, but I don’t know if it would translate into a speedup in the real-world.
The smallest cut of 1024×768 is 2 textures, 4 triangles, I think. (and it’s a very very nice number to work with. 800×600 is a real PITA).
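To unpack the “2 textures, 4 triangles” remark: OpenGL of that era generally wants power-of-two texture dimensions, so one plausible reading is that a 1024×768 surface is covered by a 1024×512 strip plus a 1024×256 strip, each drawn as a quad. A small C sketch of that greedy split (my own illustration, not Apple’s actual allocator):

    #include <stdio.h>

    /* Greedily cover one dimension with power-of-two pieces. */
    static int pieces(int len, int out[16])
    {
        int n = 0;
        while (len > 0) {
            int p = 1;
            while (p * 2 <= len)
                p *= 2;                 /* largest power of two that fits */
            out[n++] = p;
            len -= p;
        }
        return n;
    }

    /* How many power-of-two textures (and triangles) does a w x h surface need? */
    static void tile_cost(int w, int h)
    {
        int ws[16], hs[16];
        int nw = pieces(w, ws), nh = pieces(h, hs);
        printf("%dx%d -> %d textures, %d triangles\n", w, h, nw * nh, nw * nh * 2);
    }

    int main(void)
    {
        tile_cost(1024, 768);   /* 1024x512 + 1024x256: 2 textures, 4 triangles */
        tile_cost(800, 600);    /* much messier, hence "a real PITA" */
        return 0;
    }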
See, the problem with this argument is you’re missing the point. The main thing hindering Quartz’s speed is alpha blending. As they already have astoundingly fast code for everything else, all they would’ve accomplished is requiring even higher end 3D hardware.
>>>>>>>
I agree that alpha blending is a major bottleneck in the current model. However (and note the title of my post), I’m disappointed that Apple did not make nearly as effective a use of Quartz Extreme as they could have. Apple’s PDF algorithms aside, software-rendered Quartz will always be an order of magnitude slower than hardware-accelerated Quartz. There is no arguing it. Not only is the hardware optimized to do this kind of drawing, but it has access to its graphics memory at something like 8 GB/sec while the CPU has access at a measly 1 GB/sec. Not only that, but because the AGP bus has higher latency than the graphics card’s local bus, anything (like anti-aliased line drawing) that needs to do both reads and writes will have terrible performance over AGP. Even if Apple has totally tricked-out algorithms, it is a bad design, and no amount of optimization will help a bad design. Instead of batching together drawing calls and shipping them over in bulk transfers (something the AGP bus is good at) to the GPU to draw, it tries to do a major part of the work using the local CPU, going over the AGP bus to access the framebuffer. What sucks here is not that QE won’t be faster (because it will) but that it could be so much more. Apple could have gone with a totally vector-based display model with OpenGL display lists that were sent en masse to the GPU to be rendered whenever something needed redrawing. Instead, it has giant window bitmaps that are rendered to via software and shipped over the AGP bus to the GPU, with its multiple vertex and pixel shaders, just for compositing. It’s like using a Ferrari to drive around a golf course.
It took me some time to find in the archives that very intelligent guy who had posted on the MacNN forums explaining QE some months ago. And yes, we did not mention that the windows are double-buffered, so a 3 MB window will actually take 6 MB of VRAM for a single window.
Now you know why I say that I want that Matrox Parhelia with 256 MB of VRAM and AGP 8x for use with QE, and why that pesky 16 MB card won’t cut it. I don’t say that because Apple didn’t do the job adequately (on the contrary); I say it because that is the reality of this technology. It needs a speedy card, lots and lots of RAM and fast AGP. And this is one of the reasons why Microsoft is waiting to do this in the future, as their business model is different from Apple’s.
Quoting:
“What the Extreme part of Quartz does is use the CPU to render the display information of a window to an output image (RGBA) and send it to the video card’s memory. The GPU on the card then applies that image to a surface and renders it. This concept has been batted around for a long time in CS departments and probably more than a few companies. The problem has always been the video chip: up until the GeForce and Radeon series chips, all they did was take a scene and rasterize it into a 2D image that could be sent to a monitor. The important capability of the GPUs is the ability to do geometric transforms on polygons. When you press the minimize (–) button on a window and it genies into the Dock, the GPU is going to transform the mesh and render it so it sinks down into the Dock. If a drawer pops out it can do a live transform on that mesh, like you’d tell something in Lightwave or Maya to form out of somewhere. Due to the advanced MIP mapping of textures to a mesh, as the window deforms so will the texture, and blamo, you’ve got the smooth genie effect. Using the GPU and OpenGL in such a way also allows for “windows” that aren’t your normal 2D rectangles. You can have all manner of shapes be windows, each having its own transforms. A circular window could fold up like a Chinese fan when minimized or bead down into the dock like rain water dripping off a plastic surface. Anybody who uses a 3D modeler and animator is probably salivating over the possibilities of having desktop windows treated as 3D meshes.
As for the memory, several threads are whining and crying if not outright bashing Apple and its programmers for such a hefty memory requirement. First, QuartzExtreme is not required to run 10.2; the cool features of it won’t be available unless you’ve got a capable video card. Tough cookies. Your Mac SE doesn’t run Quake or even OS9, you’re not *****ing about that. Remember that the desktop under Extreme is now a 3D scene; there’s got to be a texture stored for every mesh displayed on the screen. If you’re running a 1280×1024 pixel display at 32-bit colour that’s 1,310,720 pixels at four bytes per pixel which puts us at 5,242,880 bytes for a single frame, double buffered to make for a smoother display (10,485,760 bytes now) for JUST the framebuffer. Add to that 4 byte/pixel textures for each surface being displayed and a z-buffer and you’re talking a lot of megabytes. Any less than that and you’re going to be hitting up your main memory aperture which is far slower (limited to a 2x AGP bus or about a GB/s, which is far slower than a video card’s onboard VRAM) and in most cases higher latency. For a smaller display 16MB is a possibility but 8MB is just going to be far too low. Besides, cards with 8MB of VRAM don’t come with capable GPUs. You can’t compress the surface textures or else you’d just be using up the processing power freed by offloading all the rendering to the GPU. Unfortunately you’ve got to use uncompressed textures, which has a massive memory and bandwidth requirement.”
More from that guy and the whole thread can be found here:
http://forums.macnn.com/cgi-bin/ultimatebb.cgi?ubb=get_topic;f=46;t…
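For readers trying to picture the mechanism the quote describes, here is a rough sketch in plain OpenGL of the general bitmap-compositing technique: the software-rendered RGBA backing store of a window is uploaded as a texture, and the GPU then just blends it on screen as a textured quad. Function names and details are invented for illustration; real 2002-era code would also have to deal with power-of-two texture limits and pixel-format conversions, and this is not Apple’s actual code.

/* Rough sketch of bitmap compositing: upload the software-rendered
 * window buffer as a texture, then let the GPU blend it on screen.  */
#include <GL/gl.h>

void composite_window(GLuint tex, const void *backing_store,
                      int w, int h, float x, float y)
{
    glBindTexture(GL_TEXTURE_2D, tex);

    /* The expensive part: pushing w*h*4 bytes of RGBA over the AGP bus. */
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, w, h, 0,
                 GL_RGBA, GL_UNSIGNED_BYTE, backing_store);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);

    glEnable(GL_TEXTURE_2D);
    glEnable(GL_BLEND);
    glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);

    /* The cheap part: the GPU composites the textured, alpha-blended quad. */
    glBegin(GL_QUADS);
    glTexCoord2f(0.0f, 0.0f); glVertex2f(x,            y);
    glTexCoord2f(1.0f, 0.0f); glVertex2f(x + (float)w, y);
    glTexCoord2f(1.0f, 1.0f); glVertex2f(x + (float)w, y + (float)h);
    glTexCoord2f(0.0f, 1.0f); glVertex2f(x,            y + (float)h);
    glEnd();
}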
People seem to have many misconceptions about a 3D UI or a GPU-accelerated UI. Apple will stay with primarily a 2D interface for many years to come, so why would it be a good idea to make the entire UI presentation OpenGL? Now, that is way too processor intensive… If it’s a 2D UI, it makes sense to restrict the use of OpenGL to effects and transformations. Keep the rest bitmapped and save the memory.
> Even if Apple has totally tricked out algorithms, it has a bad design, and no amount of optimization will help a bad design.
You are not necessarily right – and not necessarily wrong.
Graphics is never black-and-white; it’s always (ALWAYS) about finding a middle ground.
I can give you a way to draw AA lines super-quickly in software, but then fills will be super-slow. The idea that HW AA will be better than software, generally speaking, is ludicrous. Even the Parhelia’s AA wouldn’t do better than the very crappy font AA in BeOS.
One thing *is* true in graphics, it’s a constant: whenever you write code such that the CPU reads from graphics memory, you’d better have a damn good reason. At the very least, you’ll want the graphics card to be the one writing to the CPU’s memory, rather than the CPU reading from the card’s. Look at BeOS in VESA mode. It’s super-slow, because it does reads. My husband’s prototype toy OS doesn’t do the reads, and is significantly faster, but uses significantly more memory to do so.
BeOS has managed by keeping only one copy of the framebuffer in main memory. 1024x768x16bpp is 1.5MB, not the end of the world. It’s nice, it’s an interesting engineering challenge, but the Be approach (which is the same as the Apple approach with QE) is perfectly valid.
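To illustrate that rule of thumb, here is a tiny sketch of the shadow-framebuffer idea being described (names and buffer layout are invented for the example): all read-modify-write work, such as alpha blending, happens in a copy kept in main memory, and the CPU only ever writes to video memory, in one bulk transfer.

/* Sketch of a shadow framebuffer: blend in system RAM (fast to read),
 * then copy the finished pixels to VRAM (write-only from the CPU).   */
#include <stdint.h>
#include <string.h>

void blend_and_flush(uint32_t *shadow,        /* copy kept in main memory */
                     uint32_t *vram,          /* mapped video memory      */
                     const uint32_t *src, int n, uint8_t alpha)
{
    for (int i = 0; i < n; i++) {
        uint32_t d = shadow[i];               /* read from RAM, never VRAM */
        uint32_t s = src[i];
        uint32_t r = d & 0xff000000u;         /* keep destination top byte */
        for (int shift = 0; shift < 24; shift += 8) {
            uint32_t dc = (d >> shift) & 0xffu;
            uint32_t sc = (s >> shift) & 0xffu;
            r |= (((sc * alpha + dc * (255u - alpha)) / 255u) & 0xffu) << shift;
        }
        shadow[i] = r;
    }
    /* One bulk, write-only transfer to the card. */
    memcpy(vram, shadow, (size_t)n * sizeof(uint32_t));
}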
>If it’s a 2D UI, it makes sense to restrict the use of OpenGL to effects and transformations.
Sweetheart, in order to do these “effects and transformations” via GL you HAVE to save/store the windows in memory and treat them as GL objects that the GPU can handle. The UI doesn’t become entirely OpenGL, but you don’t get to save that memory either; that is just the way it works.
Okay. Try using my celeron366 with 128 megs of ram running kde. Stock.
>>>
Sissy. I bet your car’s stock too. Besides, I never said Linux was *easy.* I said it was fast.
No changes to the source code, no changing obscure settings that only Linux/UNIX gurus would know. It’s SLOW. Anyone who says otherwise is simply lying.
>>>>>>
True. But I’m not a Linux/UNIX guru and I figured it out. It’s some relatively minor stuff. Renice the KDE and X processes to -10 and -11 respectively. Patch the kernel with the preempt and low-latency patches. Set konsole to start at priority 0 (so any batch process starts with a lower priority than GUI processes). Note: why all the manual priority stuff? Well, Linux is more general-purpose than either Windows or BeOS. Windows and BeOS automatically give higher priorities to GUI threads; in Linux it has to be done manually. But because of the nice UNIX priority inheritance model, you only have to change some initscripts.
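For the curious, the renice command is essentially a front end for the setpriority() system call; a minimal, hypothetical equivalent in C (run as root, with the nice value and pid given on the command line) would look something like this:

/* Minimal equivalent of "renice <nice-value> <pid>": change the nice
 * value of an already-running process such as X or kdeinit.
 * Negative values require root.                                      */
#include <stdio.h>
#include <stdlib.h>
#include <sys/types.h>
#include <sys/resource.h>

int main(int argc, char *argv[])
{
    if (argc != 3) {
        fprintf(stderr, "usage: %s <nice-value> <pid>\n", argv[0]);
        return 1;
    }
    int prio = atoi(argv[1]);
    int pid  = atoi(argv[2]);

    /* setpriority() adjusts the nice value of the target process. */
    if (setpriority(PRIO_PROCESS, (id_t)pid, prio) != 0) {
        perror("setpriority");
        return 1;
    }
    return 0;
}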
Lightweight… yeah, it is. But in my experience I’ve found BeOS to be more responsive. Still, QNX is a nice OS.
>>>>>
Only because QNX is a strict real time OS. The priority tricks that BeOS’s scheduler does automatically don’t get applied in QNX.
However, when you factor in the time it takes to load on a desktop,
>>>
Which you do how often? And X loads in a few seconds if you run a lightweight window manager.
when you compare it with lighter weight display servers (i.e. BeOS), when you use it over a shared network, when you factor in other complexities of typical unix boxes
>>>
Let me guess. You don’t turn off sendmail in your Redhat desktops? I bet you also have 30-billion things sitting in your systray and RealPlayer and AIM sitting in your windows startup folder…
you can understand why for general purpose office environments Windows 2000 or BeOS will be more responsive.
>>>>>
Nope? What UNIX complexity? Windows does a hell of a lot more under the hood and behind your back than Linux does. I agree that KDE 3.x isn’t as responsive as BeOS, but IMHO that has a lot more to do with the antiquated ideas that UNIX developers have about multithreading than any fault of the OS or windowing system.
I agree that if you have *a window* into which you want to blast pixels, once you have X loaded and fine-tuned it can blast pretty well, but that’s not what most people use their desktops for every day. Right?
>>>>>>>
X is the graphics interface on Linux. Its sole purpose in life is to blast pixels to the screen. It does this job very well. Any other problems you may be having are related to your desktop environment setup.
You are not necessarily right – and not necessarily wrong.
Graphics is never black-and-white; it’s always (ALWAYS) about finding a middle ground.
>>>>
Increasingly, that middle ground can be found on the other side of the AGP bus, sitting next to the quad pipe vertex shader…
I can give you a way to draw AA lines super-quickly in software, but then fills will be super-slow. The idea that HW AA will be better than software, generally speaking, is ludicrous. Even the Parhelia’s AA wouldn’t do better than the very crappy font AA in BeOS.
>>>>>>>
I don’t see why the statement that HW AA is better than software AA is ludicrous. At least in the current time frame, with current and near-future hardware and software. From a speed perspective there is no question. The GPU has access to RAM at anywhere from 8 GB/sec to 19 GB/sec (in the new Radeon 9700). The CPU has access to its own RAM (if you’re willing to render to a temporary buffer and then blit later) at 2.1 – 4.2 GB/sec. The GPU has hundreds of gigaflops worth of parallelized floating-point hardware. The CPU is lucky if it has a few gigaflops. From a quality perspective, you’re closer. While the quality of the AA on the current crop of cards might not stand up to the quality of new-fangled software AA algorithms, it’s not a deficiency inherent in the GPU design. 3Dlabs’ new P10 has a programmable AA pipeline, so you can get speed *and* high-quality AA. As for the font rendering, you point out a special case. Since text is highly regular, it makes perfect sense to pre-render all the glyphs on the host CPU. Still, it is not inconceivable that with the next GPU architectures, one could write a TrueType renderer with high-quality hinting implemented as a vertex and pixel shader.
As for the second part, I think you misunderstood my comment. I don’t fault Apple for keeping window surfaces in main memory. I fault them for keeping window surfaces at all. It is entirely possible within the scope of an OpenGL UI to just keep track of a vector representation of each window. That cuts down memory costs dramatically (because the only bitmaps involved are actual bitmaps used in the window) and speeds things up dramatically (because the only thing you’re sending over the AGP bus is the vector representation, which is almost always more compact than the pixel representation).
“If you’re running a 1280×1024 pixel display at 32-bit colour that’s 1,310,720 pixels at four bytes per pixel which puts us at 5,242,880 bytes for a single frame, double buffered to make for a smoother display (10,485,760 bytes now) for JUST the framebuffer. Add to that 4 byte/pixel textures for each surface being displayed and a z-buffer and you’re talking a lot of megabytes. Any less than that and you’re going to be hitting up your main memory aperture which is far slower (limited to a 2x AGP bus or about a GB/s, which is far slower than a video card’s onboard VRAM) and in most cases higher latency.”
So, isn’t this primarily a matter of throughput (easily handled by AGP), and not of the RAM requirements on the GPU? Wouldn’t the GPU only be doing “heavy lifting” when I/O triggers a change in the UI display, and only with respect to how the 2D texture layers are being transformed? That’s my interpretation of this statement, but I’d like to hear from someone who may know more.
> So, isn’t this primarily a matter of throughput (easily handled by AGP), and not the requirements on the RAM of the GPU?
No. Reread what the guy said. Really, dp, you don’t get it… Rayiner, that guy over at MacNN and I are not enough to make you understand.
The AGP is not “enough”. Shifting data over the AGP to main memory all the time is slower, no matter how fast an AGP you have: “hitting up your main memory aperture which is far slower and in most cases higher latency”, as the guy said.
>Wouldn’t the GPU only be doing “heavy lifting” when I/O is triggering a change in the UI display and only in respect to how the 2D texture layers are being transformed?
I don’t quite understand what you mean there. But if you mean that the GPU will be used only to do the “transformations and effects”, you are wrong. The GPU will do the whole thing, from A to Z. Therefore, it needs to be able to do it without hitting main memory when its own VRAM is not enough. Because:
1. The AGP is slow for the job: 1 GB/sec is nowhere near VRAM throughput, and there’s a truckload of bandwidth involved. AGP isn’t the all-out answer because it’s also limited to 256MB; that’s the size of the AGP aperture (it might be chipset-dependent). It’s similar to the limit that x86 can only see 4GB at a time even if the motherboard can support 64GB.
Still: AGP, at its fastest, is 10x slower than local memory. This means you will experience slowdown not only when the data are in main memory, but also while the data are on their way to main memory over the AGP.
2. The main memory has higher latency, and all this shifting of data will result in a slow experience.
If you do not understand all this stuff we all wrote in the last 20 messages, I recommend you take this with you and then go to bed:
“QE is good. As long as you don’t have many windows open/minimized and you do have enough VRAM.”
Which is exactly what I wrote in the article, trying to make it as simple as possible for the boneheads to comprehend. Too bad that when we give more explanations it all goes to waste, and they continue asking questions without having understood what we already wrote.
AGP is fast. However, NOT for the kind of thing that Apple and Ms want to do for composition.
A full-screen window at 1600×1200 or 1920×1024 is somewhere around 8MB. AGP 8x can “only” transfer 8MB at 250fps. 250 fps sounds like enough at first glance; it is more than enough for a 3D game. But it is not enough for a desktop that has many such windows open. It is like running many games at the same time!
Try putting up 2 or 3 such windows. You’re using about 70% of the bus time for just those 2 windows, assuming the 85Hz refresh rate of many modern monitors. And you’re sucking up 50% of your main memory bandwidth, assuming *333MHz DDR*.
If you’re running on regular PC-133 SDRAM with AGP 4x (like the current Macs), you don’t even want to think about it.
Think today’s top-of-the-line Mac.
1600×1200, 32bpp, 85Hz refresh.
That’s 622MB/sec. That’s your graphics chip spending 60% of its time waiting for AGP, and 60% of your main CPU’s memory bandwidth gone. Those are numbers that are way too high.
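For anyone who wants to check the arithmetic behind those figures, here is a throwaway calculation; the AGP and memory bandwidth constants are the usual ballpark numbers quoted above, not measurements:

/* Back-of-the-envelope check of the full-screen compositing numbers:
 * bytes per frame, MB/sec at 85 Hz, and what fraction of AGP 4x and
 * PC133-class memory bandwidth that stream eats.                    */
#include <stdio.h>

int main(void)
{
    const double width = 1600, height = 1200, bytes_per_pixel = 4;
    const double refresh_hz = 85;

    const double MB         = 1024.0 * 1024.0;
    const double agp4x_mb_s = 1066.0;   /* AGP 4x, roughly 1 GB/s   */
    const double pc133_mb_s = 1064.0;   /* PC133 SDRAM, ~1 GB/s     */

    double frame_mb    = width * height * bytes_per_pixel / MB;
    double stream_mb_s = frame_mb * refresh_hz;

    printf("one full-screen window: %.1f MB\n", frame_mb);
    printf("at %.0f Hz:              %.0f MB/s\n", refresh_hz, stream_mb_s);
    printf("share of AGP 4x:        %.0f%%\n", 100 * stream_mb_s / agp4x_mb_s);
    printf("share of PC133 memory:  %.0f%%\n", 100 * stream_mb_s / pc133_mb_s);
    return 0;   /* prints ~7.3 MB, ~622 MB/s and ~58%, i.e. the ~60% above */
}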
Again, in short:
QE is good, as long as you have a Parhelia or another card with freaking lots of VRAM on it, that is. If you start transferring data to main memory via the AGP because you don’t have enough VRAM, you are dead in the water performance-wise.
The 16 MB minimum Apple quotes is pretty much an absolute rock-bottom minimum. You start talking about good, steady QE performance when you have more than 64 MB of VRAM.
And that’s the bottom line, so don’t dare ask yet another such question without re-reading everything we said before. Jeesh…
Thanks for the insults, Eugenia. Obviously you haven’t read your own T&Cs lately. And I really don’t believe you have a very strong handle on this yourself… Quartz doesn’t consume as much memory as you are predicting now on my 400MHz G3 with 256 MB of RAM with 30 windows open, so whatever. Ban me. Your arrogance will keep me from returning her again.
> Quartz doesn’t consume as much memory as you are predicting now on my 400MHz G3 with 256 MBof RAM with 30 windows open, so whatever.
Quartz is not Quartz Extreme.
> Your arrogance will keep me from returning her again.
And that is the thanks I get for spending my afternoon trying to explain everything to you.
What an ass you are. I mean, really.
The latest Jaguar builds are apparently readily available on the various P2P services (not that I would know, of course). I wonder if anyone who has used QE on computers with 16MB or 32MB of VRAM firsthand could give us some details as to how well it runs, and what it’s like, in their experience.
“Your arrogance will keep me from returning her again.”
I think he meant to say
“My arrogance will keep me from returning herE again.”
and we say
“Good”
Glenn
It is fairly clear that cards with less memory will run into a wall faster than those with more. However, even rendering across AGP should be faster than Quartz rendering with the CPU in local memory.
The real factor here is that windows that are off-screen won’t be occupying video memory; they will be swapped out. And just as with any limited system, users will get a feel for how much they can be doing before they hit the wall.
As for how often the interface has to update, it will be every time anything changes. Hopefully Apple is using hardware-accelerated cursors, so moving the mouse doesn’t prompt a redraw. But older cards only support 1-bit cursors, so any movement on those systems will cause a redraw (this is about the only area where I can see QE not being as good as Q, which can perform an update to only the affected regions).
I think it’s best to wait and see what kind of real-world performance Quartz Extreme actually delivers. I mean, no one who has posted here has actually used or seen it first-hand, so in the end we’re all doing (even if logical) guesswork.
> However, even rendering across AGP should be faster than Quartz rendering with the CPU in local memory.
This is highly debatable… It might depend on the size of the window, the bpp, the resolution, how fast your AGP can transfer the data to system memory without the user noticing it, and how many MB of data that would be (because don’t forget that an OSX machine with 256 MB of RAM is not enough for assisting QE; you will need at least 512 MB to make sure you never hit system swap).
As you can see, there are a lot of “ifs” in there before you can say that it will always be faster.
I have a beta build of Jag running on a 32 meg VRAM setup here now. What test would you like to see, alexd? I assume you want me to launch more than 6 windows to see if it slows down, or what? I’m having trouble following the ‘debate/fight’ there. LOL. Take a deep breath and breathe, you two, the world hasn’t ended yet. :) Cheer up… be happy… life will go on.
The voice of reason… :)
With 32 MB VRAM you have more “space” to launch more than 6 windows. Launch 15-20 TextEdit windows. Resize all 15-20 of them to different sizes, from big/fullscreen down to the default TextEdit size. Because TextEdit is a fast application that doesn’t do much, it won’t hog your CPU and main memory, so you will be able to see more clearly whether or not QE can keep up rendering all 20 windows quickly.
If you minimize the windows, wouldn’t the effects be enough to cover for the lag you’d get if Apple swapped the window into main memory?
Hmm. I just tried that. I opened TextEdit. Opened and resized 30 windows, all different sizes, to make sure. Everything is just as fast as if I had 1 window open. I even have IE open to type this and also a transparent terminal window. I even minimized a few of the windows to see if the ‘genie effect’ would choke; it also runs as well as when I had only 1 window open.
It seems that Apple has conducted some sort of voodoo in there, since all logic dictates to the ones in here who are in the know about 3D that even with 32 MB and AGP 4x, all that would slow down a bit or even choke… I wonder if Apple has done what I asked above and hidden the AGP transfer lag in some sort of transition, or even found an innovative way of using the graphics card…
I have no idea. I wonder myself, J. I just tried it again with 50 windows and I have a screenshot as well. Still runs fine for me. No noticeable slowdown at all. Minimizing is just as fast as before.
[URL=http://homepage.mac.com/excaliburgraphix/jag.jpg]Test Shot[/URL]
http://homepage.mac.com/excaliburgraphix/jag.jpg
sorry ’bout that all
Window buffer compression? It can be enabled in Mac OS X 10.2, although with some performance hit.
My theory is that each window is made into a texture. The textures are copied into the video card. When the OS wants to move a window, it just tells the video card to draw the texture in a different position. Docking involves a texture transform. You do not need to constantly copy the windows across to the video card unless you run out of video memory or you alter a window’s output. Then you would need to copy that window back into the video card.
Video bandwidth is not the limiting factor; memory and video card drawing speed are.
Because mipmapping is used, you need to store the window and smaller instances of it in video memory, eating it up a bit more than normal.
Unfortunately I don’t have a Mac to test my theory.
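A hedged sketch of that theory in plain OpenGL (function names and the dirty-rectangle bookkeeping are invented here, and this is only one plausible reading of how such caching could work): each window owns a cached texture, the full bitmap with its mipmap chain is uploaded once, and afterwards only the rectangle that actually changed is re-sent.

/* Sketch of the "window as a cached texture" idea: the whole bitmap
 * crosses the bus only once, and later updates send just the dirty
 * rectangle.  (Real code would also refresh the mip levels and set
 * the unpack state; that is glossed over here.)                     */
#include <GL/gl.h>
#include <GL/glu.h>

void create_window_texture(GLuint tex, const void *pixels, int w, int h)
{
    glBindTexture(GL_TEXTURE_2D, tex);
    /* Builds the smaller mip levels used when the window is scaled
       down, e.g. while it shrinks into the Dock.                    */
    gluBuild2DMipmaps(GL_TEXTURE_2D, GL_RGBA, w, h,
                      GL_RGBA, GL_UNSIGNED_BYTE, pixels);
}

/* Called only when part of the window's content changes; dirty_pixels
 * is assumed to be a tightly packed copy of just that sub-rectangle.
 * Nothing is transferred when the window is merely dragged or genied. */
void update_dirty_rect(GLuint tex, const void *dirty_pixels,
                       int x, int y, int w, int h)
{
    glBindTexture(GL_TEXTURE_2D, tex);
    glTexSubImage2D(GL_TEXTURE_2D, 0, x, y, w, h,
                    GL_RGBA, GL_UNSIGNED_BYTE, dirty_pixels);
}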
Eugenia wrote: Please do not be childish. It is very well known that Microsoft tries to support legacy hardware and software better than Apple does. Apple has many times rendered 1.5-2 year old Macs obsolete. Microsoft would never do that, as a lot of businesses use this older hardware.
Ms is doing the 3D composition for Longhorn in 2005 because it makes more sense for THEIR business model. By that time, graphics cards will be much faster and they will be able to deliver the technology better; plus the current high-end cards (which by 2005 will be “old”) will still work with that technology.
The right thing, at the right time, depending on the business models of the two companies.
Excuse me, but that sounds childish. 1) Apple never rendered 1.5-2 year old Macs obsolete. That’s just plain wrong. There are very many people using really “old” Macs. 2) Introducing Quartz Extreme doesn’t make older Macs obsolete. It just improves performance on newer Macs. Even older Macs will benefit from the 10.2 upgrade. Something you can’t really say about WinXP… My Mac is 4 years old and runs OS X beautifully…
Again, *no* machines become obsolete by the introduction of Quartz Extreme. Fact is, nothing speaks against introducing this technology now. If you can’t take advantage of it, it’s still *no disadvantage*. Sorry, but everything you say is twisted. M$ is just waiting to see how things work out because they never had the guts to do anything first.
Would you care to specify any instance where Apple has made 2-year-old machines obsolete? I’ve been around this business a long time and I can’t remember seeing that.