DCE is short for Desktop Composition Engine, the video-accelerated graphics engine in Longhorn that will be part of the Aero user interface. A few screenshots and a video are available here.
IBM would probably do a better job than Sun if they had control over Java.
quartz extreme
Nah. Seeing what they did to OS/2, I would not bet money on that.
Keep in mind that this build is from 2002; it doesn’t reflect the current state of DCE.
…download the video. Is it worth waiting?
>quartz extreme
No, you can’t say that, because Longhorn takes one step further beyond QE. Besides, this is the natural evolution; Be, Inc. had similar plans as far back as 1999.
> No, you can’t say that, because Longhorn takes one step further beyond QE.
Eugenia – I’ve heard this now from a few people (saying Longhorn takes it one step further), but I haven’t found any information as to what the “extra” step is. If you don’t mind, would you explain, or point me toward somewhere I can read about the differences between QE and DCE?
Thanks
Longhorn’s version of hardware acceleration does full composition in 3D; QE only does partial composition, in specific cases.
At the same time, there really isn’t a need for a 2D desktop to do everything in 3D. It’s somewhat of a waste of computing power.
Plus, does it matter to the consumer if it’s only partial 3D vs. full 3D? If they can both do the same effects, then I don’t think it does.
It does matter, because when you do full composition, you can have performance gains.
DCE is short for Distributed Computing Environment, an early integration middleware from The Open Group based on work by Digital and others.
Raven: “Eugenia always makes it sound as though Apple is sitting on its a** while MS is busy “innovating”. By the time Loooooonghorn actually makes it to the desktop, the incremental improvements in OS X will send MS back to the drawing board once again to “innovate” some more.”
Yes – that’s exactly what happens inside MS and inside Apple. It is a result of competition.
However – one of the benefits of what MS is doing, over what Apple is doing (and over what MS *used* to do), is that we *know* what their plans are.
So, when I’m looking at my medium- and long-term development horizons, I can architect for today, while preparing for what’s coming down the pipe in two years’ time.
Typically, I get one major iteration of my product every 18 months, and if I want to have LH technology available in the LH timeframe, I need to know what I need to do *right now*.
It does matter, because when you do full composition, you can have performance gains.
That depends on your basis for comparison.
Do you mean productivity performance gains? I doubt the move to 3D will help, in that sense.
Performance for 3D objects? Yes, definitely here. Pure 3D in a pure 3D environment will be faster than having anything be in 2D.
Performance for 2D objects? No, most likely not. 2D performance, especially in Windows, has been mostly tapped out for years. Translating them into 3D and mapping them onto a 3D object is not going to make them much faster, if any faster at all.
I think there are opportunities for 3D desktops, but they will not be that great. Most of it will be eye candy for a while, plus some visual aids.
2D desktops work so well because they are inherently simpler. It’s a lot easier to find a place on a map than it is to drive to the place.
>Longhorn’s version of hardware acceleration does full composition in 3D; QE only does partial composition, in specific cases.
But is this always better?
One great thing about the Aqua environment is that, since windows are NOT drawn directly onto the video card in all cases but rather into main RAM, you don’t get the nasty window-tearing effects that Windows gets today.
If Windows is going to do ‘full compositing in 3D’, that means they are going to have some interesting performance problems…
If all rendering is done on the video card, then you’ll need a monster video card with tons of RAM. Remember that video cards can’t address memory with the same flexibility that CPUs can (which is why they are so fast), so you will be forced to have huge amounts of RAM for compositing the windows.
On OS X, where all compositing is done in main RAM, you have the advantage of virtual memory, so if you open too many windows it will swap automatically (slow, but it works). If all compositing were done on the video card, it would be very slow when the card runs out of RAM. (First copy from video RAM to main RAM, then to disk – and copying from video RAM is slow.)
Of course, I have no idea if they are actually doing this. But if you’re going to imply they are doing full compositing in 3D, that implies they are using the video card for all drawing operations. If they draw directly to the frame buffer, then we are left with a system that is, IMO, no better than the one they left, because you still have window tearing.
A hung app makes Windows look bad. While Apple gives you the spinning beachball of death, Windows gives you a screen that cannot repaint itself. Which is better? I say the beachball.
Flexbeta is dead.
🙁
If all compositing were done on the video card, it would be very slow when the card runs out of RAM. (First copy from video RAM to main RAM, then to disk – and copying from video RAM is slow.)
No – if the video card runs out of memory, the card throws the data away.
If the video card needs the data again, a repaint event is sent to the application, just like in the old days.
>> steve: Of course, I have no idea if they are actually doing this. But if you’re going to imply they are doing full compositing in 3D, that implies they are using the video card for all drawing operations. If they draw directly to the frame buffer, then we are left with a system that is, IMO, no better than the one they left, because you still have window tearing.
Where did you get the idea that 3D drawing is done directly to the frame buffer? 3D cards have been doing accelerated double and triple buffering for ages – I believe since the first public version of DirectX was released way back on Windows 95. Sheesh!
>> steve: A hung app makes Windows look bad. While Apple gives you the spinning beachball of death, Windows gives you a screen that cannot repaint itself. Which is better? I say the beachball.
I do not understand the basis of these claims. Could you please explain further? Specifically, it was my understanding that QE stored the windows as textures in video RAM, too, so why would DCE be any worse?
I think some of you are confused. Both OS X and Longhorn use 3D graphics accelerators to accelerate 2D graphics. Quartz Extreme only uses the frame buffer of the graphics card. Every window is drawn traditionally (using the CPU), the only advantage you get is in composition: it’s faster and you get cheap transparency effects (using the graphics hardware). Longhorn Aero/DCE uses the GPU of your graphics card to draw the window contents, including vector graphics, per pixel effects, etc (it uses what used to get called the “T&L” – transform and lighting – hardware, which only became available on more recent cards). This allows for far greater speed advantages and more complex effects than simple alpha blending. None of this has anything to do with 3D on the desktop, although DCE can do that.
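To make the distinction concrete, here is a rough illustration in plain OpenGL 1.x. This is only a sketch of the two models, not either system’s actual code, and the function names are made up:

#include <GL/gl.h>

/* Quartz Extreme model: the CPU rasterizes the window into a pixel
   buffer; the GPU is only used to composite that buffer as a texture. */
void qe_update_window(GLuint tex, const void *cpu_drawn_pixels, int w, int h)
{
    glBindTexture(GL_TEXTURE_2D, tex);
    /* re-upload the CPU-drawn contents whenever the window changes */
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, w, h, 0,
                 GL_RGBA, GL_UNSIGNED_BYTE, cpu_drawn_pixels);
}

/* Avalon/DCE model: the window contents are themselves drawn as GPU
   primitives (vectors, gradients, per-pixel effects), so there is no
   CPU rasterization step and no upload of the finished contents. */
void dce_draw_window_contents(int w, int h)
{
    glBegin(GL_QUADS);  /* e.g. a gradient fill evaluated by the GPU */
    glColor3f(0.2f, 0.3f, 0.8f); glVertex2i(0, 0); glVertex2i(w, 0);
    glColor3f(0.9f, 0.9f, 1.0f); glVertex2i(w, h); glVertex2i(0, h);
    glEnd();
}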
I bet that OS X 10.4 or 10.5 will have that capability, though you will need the right card for it.
But is this always better?
Usually.
One great thing about the Aqua environment is that, since windows are NOT drawn directly onto the video card in all cases but rather into main RAM, you don’t get the nasty window-tearing effects that Windows gets today.
There would be no reason for Aero to draw directly into visible video memory. Cards have supported accelerated off-screen rendering for a while now. You can draw via OpenGL into a video memory buffer, and then composite all the buffers during the VSYNC interval.
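A sketch of what that loop might look like, assuming each window has already been rendered into its own off-screen texture (the window_t struct and the function are hypothetical):

#include <GL/gl.h>

/* one entry per window: its off-screen texture and screen rectangle */
typedef struct { GLuint tex; int x, y, w, h; } window_t;

/* Composite every window buffer into the back buffer, back to front.
   The buffer swap that follows is timed to the vertical blank, so the
   visible frame is never half-drawn – hence no tearing. */
void composite_desktop(const window_t *win, int count)
{
    glClear(GL_COLOR_BUFFER_BIT);
    glEnable(GL_TEXTURE_2D);
    glEnable(GL_BLEND);                 /* cheap transparency for free */
    glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
    for (int i = 0; i < count; i++) {
        glBindTexture(GL_TEXTURE_2D, win[i].tex);
        glBegin(GL_QUADS);
        glTexCoord2f(0, 0); glVertex2i(win[i].x,            win[i].y);
        glTexCoord2f(1, 0); glVertex2i(win[i].x + win[i].w, win[i].y);
        glTexCoord2f(1, 1); glVertex2i(win[i].x + win[i].w, win[i].y + win[i].h);
        glTexCoord2f(0, 1); glVertex2i(win[i].x,            win[i].y + win[i].h);
        glEnd();
    }
    /* then SwapBuffers() or the platform equivalent, synced to VSYNC */
}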
If all rendering is done on the video card then you’ll need a monster video card with tons of RAM.
Yes, you probably will. Though most buffers will be idle, so they can be swapped out to main memory (this will help a lot when we have PCI Express, which has the same bandwidth in both directions), and they can be compressed.
Remember that video cards can’t address memory with the same flexibility that CPUs can (which is why they are so fast)
That’s not necessarily true. 3Dlabs’ P10 architecture basically has an MMU for the graphics card. It’s not a significant performance issue (they use it on their Wildcat cards too).
Apple can probably match Aero/DCE but I don’t think they have the time or the resources to match Longhorn feature for feature. I think it’s important to realise the significance of Longhorn. Love or hate Microsoft, this is a massive release – a real release, even if it is two years off – and it will take substantial effort for Apple and open source alternatives to match it. Some analysts have said that Apple can’t match it; the development cost of Longhorn simply exceeds anything Apple can budget. How that translates for Linux’s chances I don’t know.
umm….
If that is not a bunch of pure, unadulterated fanboyism on the verge of trolling, I don’t know what is.
Apple is already ahead of where XP is, and Longhorn is not some revolutionary new OS. The features you speak of are not really features that are needed by other OSes.
The managed API system is there to make the OS more stable and less infectable by malware.
WinFS is probably going to cause more pain than good, and even if it is a good feature, all Apple needs to do is add smart folders to their OS and you get the same thing WinFS gives you.
Aero is not some gigantic leap past QE. In fact, like I said, Apple could have full composition in 3D hardware in the next iteration of OS X. The reason it is not there now is that the graphics cards needed are not yet affordable for most people.
.NET is just a programming environment for Windows. Big deal. Xcode and Cocoa, along with AppleScript Studio and Java, are perfectly fine for code development.
speculations.
One great thing about the Aqua environment is that, since windows are NOT drawn directly onto the video card in all cases but rather into main RAM, you don’t get the nasty window-tearing effects that Windows gets today.
I presume you are talking about the way Windows’ windows sometimes don’t redraw.
This has nothing to do with whether or not the windows are “drawn directly onto the video card”; it happens because in Windows it is the application’s responsibility to keep its window(s) up to date, not the OS’s. Whenever an application’s window(s) are exposed, the OS tells the application to redraw them appropriately. Should the application be unable to respond to this message, the window(s) won’t be redrawn and will simply be painted over by the OS in the default background colour. In short, Windows doesn’t do double buffering.
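For the curious, the contract looks roughly like this – a minimal window procedure, nothing Longhorn-specific, with an illustrative handler body:

#include <windows.h>

LRESULT CALLBACK WndProc(HWND hwnd, UINT msg, WPARAM wp, LPARAM lp)
{
    switch (msg) {
    case WM_PAINT: {
        /* This only runs if the app is pumping messages. A hung app
           never gets here, so the exposed area is never redrawn. */
        PAINTSTRUCT ps;
        HDC dc = BeginPaint(hwnd, &ps);
        FillRect(dc, &ps.rcPaint, (HBRUSH)(COLOR_WINDOW + 1));
        EndPaint(hwnd, &ps);
        return 0;
    }
    case WM_DESTROY:
        PostQuitMessage(0);
        return 0;
    }
    return DefWindowProc(hwnd, msg, wp, lp);
}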
If all rendering is done on the video card then you’ll need a monster video card with tons of RAM.
Not really. Well, not in contemporary terms at any rate (I still consider more than 16MB of VRAM “lots”, but then again I don’t do much that involves 3D rendering). These days the average video card has 64MB of RAM (it’s rare to see less except on low-end laptops and bottom-end budget PCs). A 1600x1200x32bpp screen (the absolute top end of the typical user desktop) only takes up about 7 megabytes. So, on the average video card today, there’d still be ~57MB of RAM left over to do double/triple buffering, composite windows, etc. And that’s assuming no sort of compression at all.
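For anyone who wants to check the arithmetic, a throwaway C snippet (the resolutions are just the common ones):

#include <stdio.h>

int main(void)
{
    /* bytes needed for one full-screen buffer at 32 bits per pixel */
    const int res[][2] = { {1024, 768}, {1280, 1024}, {1600, 1200} };
    for (int i = 0; i < 3; i++) {
        int w = res[i][0], h = res[i][1];
        double mb = (double)w * h * 4 / (1024.0 * 1024.0);
        printf("%dx%d @ 32bpp = %.1f MB\n", w, h, mb);
    }
    /* 1600x1200 x 4 bytes = 7,680,000 bytes, about 7.3 MB; even
       triple-buffering the whole screen costs only ~22 MB of a
       64 MB card */
    return 0;
}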
Remember that video cards can’t address memory with the same flexibility that CPUs can (which is why they are so fast), so you will be forced to have huge amounts of RAM for compositing the windows.
Video cards *already have* “huge amounts of RAM”. Heck, I bought a new video card a couple of weeks ago for AU$180 (probably about US$120) with *256MB* of VRAM (cheap & nasty, but it’s only for playing games). 256MB! The average consumer machine has only recently started shipping standard with that much *system* memory.
On OS X, where all compositing is done in main RAM, you have the advantage of virtual memory, so if you open too many windows it will swap automatically (slow, but it works). If all compositing were done on the video card, it would be very slow when the card runs out of RAM.
They came up with something to alleviate this problem some time ago – it’s called AGP.
(First copy from video RAM to main RAM, then to disk – and copying from video RAM is slow.)
Compared to paging to disk, it’s *blazingly* fast. Orders of magnitude faster.
Of course, I have no idea if they are actually doing this. But if you’re going to imply they are doing full compositing in 3D, that implies they are using the video card for all drawing operations. If they draw directly to the frame buffer, then we are left with a system that is, IMO, no better than the one they left, because you still have window tearing.
I do not understand why you think “video tearing” is somehow related to “drawing directly to the frame buffer”. It’s not, because it’s handled by a higher level subsystem which is not concerned with how the data is being stored and retrieved (or shouldn’t be, at any rate), merely whether or not it is there.
In fact, like I said, Apple could have full composition in 3D hardware in the next iteration of OS X. The reason it is not there now is that the graphics cards needed are not yet affordable for most people.
It would be a very significant architectural change for them to do things that way. Currently, in OS X, nearly all drawing is done directly by applications into software buffers. The OS itself doesn’t see the Quartz2D command stream — Q2D is a library in the application’s process space. The simple evolution of this model would be for each app to have its own OpenGL context, rendering into its window buffer. Unfortunately, this is a terrible model to use for current graphics cards, which don’t do a good job of handling more than a couple of clients at a time (they are optimized for a single client). A better way to proceed would be to have a single graphics server that got sent the Q2D command stream and rendered it with a single OpenGL context, but that would represent a major change in the overall architecture of the OS X graphics subsystem.
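To illustrate the shape of that single-server design (every type and name here is invented; this is a toy, not Apple’s or anyone’s actual API): clients would hand an abstract command stream to one server, and only that server owns an OpenGL context, so the card only ever sees a single client.

#include <GL/gl.h>

typedef enum { CMD_FILL_RECT } cmd_kind;
typedef struct { cmd_kind kind; float x, y, w, h, r, g, b; } cmd_t;

/* Runs on the server’s one and only GL context; current cards are
   optimized for exactly this single-client pattern. */
void replay_command_stream(const cmd_t *cmds, int n)
{
    for (int i = 0; i < n; i++) {
        if (cmds[i].kind == CMD_FILL_RECT) {
            glColor3f(cmds[i].r, cmds[i].g, cmds[i].b);
            glRectf(cmds[i].x, cmds[i].y,
                    cmds[i].x + cmds[i].w, cmds[i].y + cmds[i].h);
        }
    }
}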
Well – Longhorn ‘does’ quite a lot, actually. The PDC alpha lets me see how I will be expected to architect my apps. My developers can learn how to wield the new UI and visualization technologies. We can experiment with integrating our existing assets, and seeing how they play in the new environment. We can look at how we can architect our current Web Service implementations such that they can a) interop and b) be used in the implementation layer of Indigo-era services.
So, as a developer (or IT specialist, or analyst), LH does nearly everything I need it to right now – it allows me to prepare for the next wave of technology, such that my investment today has value 5-10 years hence.
That is a crucial business advantage that Apple is not offering me, and that some OSS-space contributors arguably are (Miguel and the Mono/Gnome folks, possibly; IBM, maybe).
By the time Longhorn comes out, who will care about these minor performance hits? Both systems will have the capability to do eye candy that is useful and helpful to the user, and the hardware of 2006 will chew through it without breaking a sweat.
Gates said PC’s would be virtually free in, what…. 2006? 2008?
Apple doesn’t need to change its compositing system, and Aero’s advantages will be press that they can use when they release. Everyone is happy.
I swore that WinXP would be my last MS OS, and I have to say nothing seems great about Longhorn so far. Many of the “features” I’ve already seen in KDE, GNOME, and OS X, so it’s all old. I’m gradually making my way to Linux full-time, and I believe this may be the final push.
There are quite a few things in Longhorn that I find interesting, but man, the horrible waste of space in those Explorer windows…
Hence it showing up in 10.4 or 10.5.
It does matter, because when you do full composition, you can have performance gains.
But one asks, what happens if one is already satisfied with what Apple has delivered 😉
I’ve yet to see a case that would justify wanting to composite everything in the manner you describe. The fact is, I’m very happy with MacOS 10.3.3 performance; why would I need/require better?
Looking Glass is more innovative and practical than this DCE.
All this talk about a neato drawing system, but I’ve seen those screenshots and the UI is still horrible. It reminds me of the new MS Office; the windows and menus are BLUE, for god’s sake. It makes no sense at all. MS simply can’t produce a system which feels natural to use.
This will kick ass… Getting a woody now.
Well, if you’re happy, don’t upgrade/switch. Don’t be surprised when the rest of the world keeps moving forward, though. I still drive a manual transmission car, but that doesn’t mean the rest of the world will.
The new UI hasn’t even been put into the main OS build yet, not to mention that these are alpha builds. WinXP’s UI changed quite a bit between beta 1 and final… and this isn’t even beta yet.
A better way to proceed would be to have a single graphics server that got sent the Q2D command stream and rendered it with a single OpenGL context, but that would represent a major change in the overall architecture of the OS X graphics subsystem.
Quartz already works mostly like that today. The Quartz Compositor is the “single window server” that renders the desktop as an OpenGL scene. The compositor can composite Quartz2D, QuickTime, and OpenGL buffers into the desktop today.
It wouldn’t be too hard for Apple to make all Quartz2D APIs render using OpenGL.
One more thing: when Quartz was designed, video cards didn’t have gobs of memory, so Apple had to make design decisions; they decided to keep most of the buffers in main memory, because video memory was limited.
Apple demoed Quartz Extreme at SIGGRAPH in 2001, the year the Avalon team was formed. Avalon is designed with the video cards of 2006 in mind, with the minimum memory for all the pretty effects to be had with good performance.
The fundamental architecture of Quartz and the Quartz Compositor doesn’t prevent Apple from moving the rendering that the Quartz2D API does from the CPU to the GPU. Apple just started shipping 64MB cards with lower-end Macs. In a few months to a year, when 128MB cards are cheap enough to include in the low-end Mac, I wouldn’t be surprised to find Quartz rendering everything in hardware.
Quartz and Avalon have similar design goals: device-independent graphics. Quartz already achieves that, and has since its inception. The hardware acceleration is an implementation detail that was conceived with the hardware available at the time of design, in 1999-2001.
So before claiming that Longhorn will do it better, understand the decisions the engineers behind Quartz’s architecture had to make in 1999. They achieved most of the goals Avalon wants to achieve in 2006: device- and resolution-independent rich graphics and rich drawing APIs.
Videos:
http://board.iexbeta.com/ibf10/index.php?showtopic=40950
Screenshots:
http://www.neowin.net/forum/index.php?showtopic=161454&st=0
Edonkey link:
ed2k://|file|Windows.Codename.Longhorn.build.3718|535867392|6CE1AF48C617BB21B5450FE7EDB7FBFE|/
http://board.iexbeta.com/ibf10/index.php?showtopic=40988
(I didn’t download it, so I don’t know if it’s a fake.)
@Raptor –
Actually, just to correct a point that Raptor has just made, the current MacOS X UI is built chiefly on resolution-dependent bitmaps (e.g. all the buttons etc.), not resolution-independent (i.e. vector) imagery.
M
Actually, just to correct a point that Raptor has just made, the current MacOS X UI is built chiefly on resolution-dependent bitmaps (e.g. all the buttons etc.), not resolution-independent (i.e. vector) imagery.
I don’t believe I ever claimed anything about the UI (e.g. buttons being vector-based). Quartz is capable of both vector-based and bitmapped rendering. Just because the UI is implemented that way today (a pre-1999 decision) doesn’t mean it is incapable of using vectors.
The point I made above directly supports why the UI elements would be bitmap-based. Vector images for UI make no sense unless you are drawing the elements using the GPU; it is in fact slower if you use vector images, because you have to do a two-step conversion. As I illustrated, full hardware acceleration would require gobs of video memory, which was not available in 199x when Aqua was developed, or in 2001 for Quartz Extreme. Today’s displays do not require vector images; bitmaps serve fine. Avalon is based on the assumption that at the time of release there will be very high resolution displays and cheap 3D graphics cards with at least 64MB of RAM, or 128MB for better performance (with Microsoft’s recommendation track record, I would say 256MB).
Aqua looks just as stunning on an 800×600 12″ display as on a 23″ display at 1920×1200. I haven’t tried it on any other displays. But for today’s Apple systems, bitmaps are fine and perform well.
3c. (vector level) retention automatically fast(er)
Once you are not drawing retained vectors with the GPU, there is no particular reason why it should be faster than immediate mode drawing, and experience with DPS, for example, showed that it almost never was.
The reason for this is that you are doing extra work: building up the retained graphics representation from the application-level representation, and then drawing that. Drawing directly from the application-level representation eliminates one step. -mweiher http://www.ondotnet.com/pub/a/dotnet/2004/03/08/winfs_detail_3.html…
My point was that the architecture is already in place, because Raynier claimed it would need a wholesale redesign. Also, people claim that Apple’s UI being bitmap-based is a “bad thing”. It is a set of implementation compromises given the state of hardware at the time of design and release. But the fact that the architecture is in place is a testament to the Quartz design team. When the hardware catches up, Quartz will implement full vector-based rendering. The hardware is already catching up in 2004 (PowerBooks with 128MB cards).
1. High DPI displays
Quartz will be quite capable of adapting to high DPI displays, just not using the mechanism you envisage. In fact, Quartz uses the PDF/Postscript imaging model, which as you may recall, is device independent and theoretically arbitrarily scalable, and practically scales at least to imagesetter and film-printers, with around 2500 dpi, and has done so for a couple of decades.
The mechanism is quite simple and has been in use for several decades: apply a device transform with a higher scale factor before any other rendering is done. Voilà – you get a bitmap rendered at a higher resolution.
Your proposed mechanism of applying a compositing transform to the completed Window is unlikely to work well even in the Windows world, because it assumes that *all* drawing by *all* applications will use the retained vector API. This assumption seems unlikely unless non-retained APIs are removed, and completely hopeless once you consider legacy applications. -mweiher
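To make the scale-factor mechanism quoted above concrete, a minimal CoreGraphics sketch – it assumes a CGContextRef obtained elsewhere (e.g. a bitmap context), and the function name is mine:

#include <CoreGraphics/CoreGraphics.h>

/* Apply the device transform first: everything drawn afterwards in
   abstract points is rasterized at twice the device resolution. */
void draw_at_2x(CGContextRef ctx)
{
    CGContextScaleCTM(ctx, 2.0, 2.0);

    /* Ordinary drawing code is unchanged; this 100x20 point rect
       now covers 200x40 device pixels. */
    CGContextSetRGBFillColor(ctx, 0.0, 0.0, 1.0, 1.0);
    CGContextFillRect(ctx, CGRectMake(10.0, 10.0, 100.0, 20.0));
}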
You really must read the rest of the discussion from which I pasted these sections.
When the hardware catches up Quartz can be made to implement full vector based rendering if there is a need for it.
Does anyone know if there is a torrent? I’d like to play.
First of all, sorry if my English isn’t very good.
I think Longhorn will be better than XP, but not everyone has the money to buy more RAM. I have a lot of friends who have 128-192MB of RAM and an Athlon XP 1700+, with no cool 3D video card; some have one, but it isn’t a powerful one. I think Longhorn will need more than this “power”. And if you think about 64-bit processors… an Opteron is still expensive.
That’s my personal opinion. By the way, I’m very happy with my Mandrake 9.2, except that I still can’t use my MiniDisc with it.
bye!
(Chile rules!)