“While the next major revision for DirectX is not expected until Longhorn’s launch, Microsoft’s DirectX group has been briefing developers on what’s in store for “DirectX Next” with presentations at Microsoft Meltdown and other developer conferences. Recently, this presentation was made available to the public via Microsoft’s Developer Network. The intent of this article is to give a more thorough treatment of the features listed for inclusion with DirectX Next and hence explore the types of capabilities that DirectX Next may be offering.” Read the article at Beyond3D.
Microsoft’s DirectX group is the only decent one at the company. They’ve been innovating for a long time, continually pushing consumer graphics hardware forward by making each new DirectX spec just a little bit beyond the reach of existing graphics cards. In contrast, it was just recently that the OpenGL ARB got off its duff and started cranking out specs. The changes in this new version look very far-reaching.
Interestingly, the graphics market seems to be rehashing CPU history, except in compressed time. First you had more general programming interfaces, then higher-level languages, and now you’re getting reduced restrictions on things like reads and writes (the generalized I/O model), along with virtual memory for the GPU. They’re even bumping up against having to schedule shader programs of indefinite length. A lot of these changes are definitely needed if consumer GPUs are to go from being “run this game only” pieces of hardware, as they are currently, to “handle all graphics for all apps” pieces of hardware. In the near term, getting consumer hardware to properly handle concurrency is going to be important for Longhorn.
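The “virtual memory for the GPU” bit is basically demand paging for textures, the same trick every OS kernel already plays with process memory. Here’s a toy sketch in Python (all names and numbers are hypothetical illustrations, not anything from the actual DirectX Next spec):

```python
from collections import OrderedDict

class ToyTexturePager:
    """Demand-pages texture tiles into a fixed-size pool of VRAM slots,
    evicting the least-recently-used tile when the pool is full --
    the same policy an OS typically uses for process pages."""
    def __init__(self, vram_slots):
        self.vram_slots = vram_slots
        self.resident = OrderedDict()  # tile id -> slot, kept in LRU order
        self.faults = 0

    def touch(self, tile):
        if tile in self.resident:
            self.resident.move_to_end(tile)    # hit: mark most recently used
            return self.resident[tile]
        self.faults += 1                       # "texture fault": fetch over the bus
        if len(self.resident) >= self.vram_slots:
            self.resident.popitem(last=False)  # evict the LRU tile
        self.resident[tile] = len(self.resident)
        return self.resident[tile]

pager = ToyTexturePager(vram_slots=2)
for tile in ["grass", "rock", "grass", "sky", "rock"]:
    pager.touch(tile)
print(pager.faults)  # 4: everything but the second "grass" access misses
```

The interesting engineering question is what happens on a fault mid-frame, since the fetch has to cross the AGP/PCI bus instead of the card’s local memory bus.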
I just dream of the day when we get rid of GPUs entirely (maybe Cell?). Just put a couple of extra high-performance vector processors in a high-bandwidth HyperTransport mesh and get the power of dedicated hardware along with the ease of programming of a unified-memory/unified-ISA system. A lot of the nastiness that comes with the architectures of current window systems just goes away when all you have are CPUs. Concurrency? The kernel scheduler handles that. High-level languages? Heck, you could write your graphics code in Lisp if you wanted (hi Roy!). Texture copies? Not in a unified memory architecture!
True, eventually it would be nice if graphics were handled by a processor similar to a CPU, one that could do more arbitrary operations rather than just simple movement of memory, which for a long time was all graphics chips could do. Now they can not only move memory, but also scale and interpolate!
Graphics are an interesting problem, since everything has to happen in real time: an operation cannot take an undefined amount of time to complete. When you ask a card to draw a rectangle, it had better take the same amount of time to draw it every time (within a small margin of error), or else the CPU will not be able to make the result look right.
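To put rough numbers on that real-time constraint, here’s a quick frame-budget calculation (the draw-call count is just an assumed figure for illustration):

```python
# At a target frame rate, every operation has to fit into a fixed time
# slice, which is why wildly variable per-operation times wreck the
# illusion of smooth motion.
target_fps = 60
frame_budget_ms = 1000 / target_fps           # ~16.7 ms per frame
draw_calls_per_frame = 500                    # hypothetical scene complexity
budget_per_call_us = frame_budget_ms * 1000 / draw_calls_per_frame
print(round(frame_budget_ms, 1), round(budget_per_call_us, 1))  # 16.7 33.3
```

One draw call blowing its ~33 microsecond slice by a few milliseconds is the difference between a smooth frame and a visible hitch.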
My theory is that GPUs will get so complex that they will run some kind of kernel themselves.
I think 3Dlabs cards can do some of the stuff you mention, including multitasking. There was an article on Tom’s Hardware about it, but I can’t seem to find it.
3Dlabs was also the company behind OpenGL 2.0 (what’s the current status on that one?).
“Microsoft’s DirectX group is the only decent one at the company. They’ve been innovating for a long time, continually pushing consumer graphics hardware forward by making each new DirectX spec just a little bit beyond the reach of existing graphics cards.”
Yeah, and when they do the same thing with their OS to CPUs and hardware, people complain.
…Roy! You make your graphics code in Lisp? Way to go! You’re da man!
🙂 <- I forgot
Heh Roy hates it whenever I manage to sneak Lisp into a discussion.
@Brad: You’re missing the point. DirectX’s code doesn’t get slower and more bloated with each release. Rather, DirectX exposes support for new features not accelerated by existing graphics cards, which prods manufacturers to implement new hardware features in their cards. Windows just gets slower with each release. It’s not fundamentally pushing forward CPU technology (current x86 CPUs are largely the same, at the programmer level, as the 386!). Rather, it uses faster CPUs as a crutch for slower code.
Rayiner, no, it is much the same effect: if Windows didn’t need more CPU there would be less drive for faster CPUs. And Windows isn’t getting slower or more bloated with each release; it’s just getting more useful. I understand what you’re saying about new features in the cards, and that’s fine. But at the same time I don’t really want an OS dictating the need for new bits in a CPU. That would just make support more of a headache and obsolete hardware faster.
“DirectX’s code doesn’t get slower and more bloated with each release”
Au contraire, mon frère!
I have two boxes, both GHz Athlons, both with GeForce 2 MX 400/64MB cards. Oodles of horsepower and VRAM. One runs XP, the other ME. DirectX 9.0 was buggy, and 9.0b was unusable: it cut most frame rates in half (Dungeon Siege, 4×4 Evolution). If this version is NOT for my cards (why not?), then why does Windows Update insist on cramming it down my throat? I had a bear of a time removing it, and luckily had the DX8.1 installer (no longer available) on backup.
Why? Ohhh, they want me to buy brand new graphics cards to run their crummy NEW software? Right, I bought Windows in the first place to slow these machines to a crawl, and this would finish them off. A new motherboard too? Sorry:
DirectX Eradicator 1.09 Beta 2: I certainly owe this man money, and thanks for a good solid piece of software that works, and works well. Thanks Thanks Thanks Thanks!
“If Windows didn’t need more CPU there would be less drive for faster CPUs, and Windows isn’t getting slower or more bloated with each release; it’s just getting more useful.”
Clearly the USE of Windows is to sell us software that gets slower and more buggy. Ever see Win3.1 on a 1.3GHz Athlon? WOW! Kinda like running Windows XP on a 25GHz Pentium 4. Oh, you can’t afford a 25GHz Pentium 4? Sorry.
Killmofasta
You forgot one thing…
The old $oftware doesn’t work 100% on new hardware because it has shoddy support for all the new USB/FireWire/PCI/BIOS/etc. chips.
So the new $oftware doesn’t work on “old” systems and the “old” $oftware doesn’t work on new systems.
Micro$oft didn’t make $55 billion in ca$h money by mistake.
Every day, more and more people and companies are seeing that Micro$oft Window$ is a one way ticket to the poor farm. Over the years Micro$oft has forced companies into spending over 200 billion on upgrade$ they didn’t need.
Some might call it “techno terrorism” …
You need DX9-compatible drivers too; don’t use the crappy ones that come with Windows.
“I have two boxes, both GHz Athlons, both with GeForce 2 MX 400/64MB cards. Oodles of horsepower and VRAM. One runs XP, the other ME. DirectX 9.0 was buggy, and 9.0b was unusable: it cut most frame rates in half (Dungeon Siege, 4×4 Evolution). If this version is NOT for my cards (why not?), then why does Windows Update insist on cramming it down my throat? I had a bear of a time removing it, and luckily had the DX8.1 installer (no longer available) on backup.”
I love it when people say they have plenty of graphics power and an MX card (a GeForce 2 MX, no less) in the same paragraph. Nothing like cutting the memory bandwidth to neuter a card, but I’m already off on a tangent. As the previous poster said, make sure you download DirectX 9-compatible drivers from nVidia’s site (or the site of whichever manufacturer made your card) rather than using the drivers from Microsoft. Once you have DX9 and the manufacturer’s drivers installed, Windows Update should stop telling you to download the MS drivers, and the games should work at least as well as they did before, if not better.
“Why? Ohhh, they want me to buy brand new graphics cards to run their crummy NEW software? Right, I bought Windows in the first place to slow these machines to a crawl, and this would finish them off. A new motherboard too? Sorry:”
With enough system RAM those machines should be quite fast with XP (I wouldn’t load WinMe on any system, but YMMV). Also, considering that the games you’re complaining about are older, even the aging video card shouldn’t be a problem. Maybe when you feel like spending some money, though, you should consider a GeForce 4 card for $50-100 (or less for an MX).
Nobody’s forcing you to upgrade your version of DirectX. Very few games, even now, require it … and none that I’m aware of that your graphics card could handle anyway.
There’s progression in technology, and software progresses just as much as hardware. And other than the desire to run the “latest and greatest”, I don’t see where anyone is forcing anyone to upgrade.
I wouldn’t recommend a GeForce4 MX card; the GeForce3 Ti 200 is faster and cheaper.
The GeForce4 MX’s are just rebranded GeForce2 MX’s with a few updates, literally.
“I wouldn’t recommend a GeForce4 MX card; the GeForce3 Ti 200 is faster and cheaper.”
Normally, neither would I, but it just doesn’t matter much to me any more. That being said, right now you can get the low-end GeForce 4 Ti cards for about $50, so the MX line is becoming mostly irrelevant. I upgraded to the mid-range 128MB GeForce 4 Ti almost a year ago and haven’t had any reason to upgrade yet (Doom 3 and HL2 will probably be the ultimate test of that). The “forced” upgrade cycle has been entirely in people’s heads for a while now, even for gamers, unless your ultimate quest is to play the latest games at 1600×1200 rather than 1024×768 or 800×600. (Hell, I remember when it was impressive to even be able to play a game at 1024×768, and that wasn’t very long ago.)
The big thing seems to be virtual video memory. This is BAD! It is nice for the programmer, who no longer really needs to worry about texture size, but for the gamer it just means more lag! Yes, let’s move textures that overflow video memory from the card to regular memory, which then gets paged out to virtual memory on your hard disk. Can anyone say BOTTLENECK?
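To put some rough numbers on the bottleneck (all figures are approximate, era-appropriate bandwidths assumed for illustration, not measurements):

```python
# Back-of-envelope: how long does it take to move one texture set,
# depending on where it currently lives in the memory hierarchy?
texture_mb = 32                      # one hypothetical texture set

local_vram_gbps = 6.4                # e.g. a mid-range card's local memory bus
agp8x_gbps = 2.1                     # AGP 8x peak, shared with everything else
disk_mbps = 50                       # optimistic sustained ATA throughput

ms = lambda gbps: texture_mb / (gbps * 1024) * 1000
print(round(ms(local_vram_gbps), 1))         # ~4.9 ms from VRAM
print(round(ms(agp8x_gbps), 1))              # ~14.9 ms over AGP
print(round(texture_mb / disk_mbps * 1000))  # ~640 ms from disk!
```

Even the AGP fetch eats most of a 60 fps frame budget, and a hit to disk-backed virtual memory stalls for well over half a second, which is exactly the lag being complained about here.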