“AMD worldwide developer relations manager of graphics Richard Huddy has blamed Microsoft’s DirectX and its APIs for limiting the potential of GPUs in PCs. ‘We often have at least ten times as much horsepower as an Xbox 360 or a PS3 in a high-end graphics card, yet it’s very clear that the games don’t look ten times as good. To a significant extent, that’s because… DirectX is getting in the way.'”
Then dump this piece of s..oops… and use OpenGL instead.
Simple!
If only people would just do this and write proper cross-platform games to begin with. But that won’t happen.
One of the problems is that DirectX is nowadays quite a bit ahead of OpenGL. OpenGL is lagging behind badly.
Secondly, if you read into the actual matter, it's not really that DirectX is so bad or that they want to get rid of it. Quite the contrary: they still want to use DirectX, they just want certain functions implemented in the hardware itself so the CPU doesn't get involved at all in certain operations. Handling large lists of objects is an example of a case where the current implementation gets in the way: either you pass all the objects to the card as one large batch, which is fast, but manipulating those objects independently afterwards is tedious and slow, or you pass the objects separately, and then the CPU is involved for every single draw call.
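To make that trade-off concrete, here is a minimal OpenGL sketch of the two approaches. It is only an illustration, not anything from the article: it assumes a valid GL 3.3+ context, an already-bound VAO and index buffer, a shader set up for the instanced path, and the availability of GLM; names like u_model and instanceVbo are made up.

// Sketch only: assumes a valid GL 3.3+ context, a bound VAO/index buffer,
// a shader program with a "u_model" uniform, and (for the instanced path)
// an instance attribute already configured with glVertexAttribDivisor.
#include <GL/glew.h>
#include <glm/glm.hpp>   // assumption: GLM is available for mat4
#include <vector>

// Slow path: one uniform upload and one draw call per object.
// The CPU and driver are involved for every single object, every frame.
void draw_per_object(GLuint program, GLint u_model,
                     const std::vector<glm::mat4>& models, GLsizei indexCount)
{
    glUseProgram(program);
    for (const glm::mat4& m : models) {
        glUniformMatrix4fv(u_model, 1, GL_FALSE, &m[0][0]); // CPU -> GPU per object
        glDrawElements(GL_TRIANGLES, indexCount, GL_UNSIGNED_INT, nullptr);
    }
}

// Fast path: upload all per-object transforms in one go, then issue a single
// instanced draw. Cheap to submit, but changing one object afterwards means
// rewriting (part of) the big buffer again.
void draw_instanced(GLuint program, GLuint instanceVbo,
                    const std::vector<glm::mat4>& models, GLsizei indexCount)
{
    glUseProgram(program);
    glBindBuffer(GL_ARRAY_BUFFER, instanceVbo);
    glBufferData(GL_ARRAY_BUFFER, models.size() * sizeof(glm::mat4),
                 models.data(), GL_DYNAMIC_DRAW);           // one big upload
    glDrawElementsInstanced(GL_TRIANGLES, indexCount, GL_UNSIGNED_INT,
                            nullptr, (GLsizei)models.size());
}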
Basically, this is again a rather sensationalistic "news article" about devs calling for more direct access to certain hardware functions, which doesn't really have anything to do with DirectX in particular, as OpenGL suffers from the same issues.
What planet do you come from? Khronos has been doing an excellent job since it took over OpenGL, and GL 4.1 is pretty much at the same feature set as DirectX 11, plus it works on Windows XP (DX11 doesn't).
The main features GL lacks are an intermediate pre-compiled bytecode format, and its thread support is messier (requiring multiple contexts). Neither is a vital feature for today's games or applications (bytecode shaders are recompiled for every card anyway, and thread support doesn't gain much, as the article itself says). Most games nowadays are also written for OpenGL ES, so little by little, as portable devices get better, DirectX is becoming irrelevant.
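For what it's worth, this is roughly what the "no bytecode" point means in practice: GL shaders ship as GLSL source text and the driver compiles them at run time for whatever card is in the machine, so a pre-compiled bytecode stage mostly saves parsing, not the per-card backend compile. A minimal sketch, assuming a valid GL context and with error handling kept to a single check:

#include <GL/glew.h>

// Hand raw GLSL source to the driver; the driver compiles it for *this* GPU.
GLuint compile_fragment_shader(const char* glslSource)
{
    GLuint shader = glCreateShader(GL_FRAGMENT_SHADER);
    glShaderSource(shader, 1, &glslSource, nullptr);
    glCompileShader(shader);

    GLint ok = GL_FALSE;
    glGetShaderiv(shader, GL_COMPILE_STATUS, &ok);
    return ok == GL_TRUE ? shader : 0;   // 0 on compile failure
}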
From planet Earth, actually. Go ahead, do some research and ask around. One of the most prominent heads in 3D gaming technology, John Carmack, defended OpenGL for years too, but has now himself admitted that DirectX has surpassed OpenGL.
Feel free to read if you don’t believe me:
http://www.bit-tech.net/news/gaming/2011/03/11/carmack-directx-bett…
Irrelevant in this context.
Not even nearly "most games." Mobile games, yes, since on mobile DirectX is only available on Windows Mobile and WP7, but not console games nor PC games. Besides, that too is irrelevant.
The article you've linked to mixes up a family of APIs (DirectX) with a 3D-only API (OpenGL). The journalist should have compared Direct3D with OpenGL. Carmack would never make such a mistake, hence the article probably contains more journalistic interpretation and freedom than Carmack's exact words. The proof is still in the pudding: Carmack is still using OpenGL for the idTech5 rendering engine, not Direct3D.

There was a period in 2006-2008 where OpenGL got entangled in a dispute among its members regarding the future direction of OpenGL. However, ever since they got their act together, OpenGL has been moving faster and adopting new technologies more aggressively than the Microsoft offering. There have been three more OpenGL spec releases in the last 18 months compared to Direct3D, and each spec matches a hardware requirement (what's the point of doing a release if it doesn't target hardware support?). Hence, OpenGL 4.1 supports more hardware features than Direct3D.
Disclaimer – I write 3D rendering engines professionally for a living.
Actually, both the PS3 and the Wii use OpenGL ES.
PS3 does _support_ OpenGL ES 1.0 with additions from 2.0 for smaller games, but all the bigger ones actually use libgcm, not OpenGL ES. Libgcm provides direct access to the RSX, OpenGL ES 1.0 simply ain’t got nearly enough features to power modern games.
This is not true.
Most Sony consoles _support_ one form of OpenGL or another but that doesn’t mean that developers really _use_ the API. The Wii graphics hardware is accessed through a proprietary API that doesn’t resemble OpenGL that much.
The article you cite only confirms my point. OpenGL is *not* lagging behind badly. It’s only now a few months behind the latest DirectX specification and ready to use when new hardware is released.
This is miles better than when the ARB decided on each new specification, which would take years. Carmack's point is that vendors implement DX first; we know that. But the reality is also that since DX took its huge lead with DX9, the API has taken almost a decade to incorporate new technologies: DX10 is pretty much a DX9 cleanup, and DX11 is tessellation plus a DX10/Compute cleanup.
WTF. I have a hard time believing that you’ve ever written DX10 code, or you’d realize that the entire programming model is fundamentally different between DX9 and DX10. DX9 is largely fixed-function. DX10 is a constructive model that lets you build up chains of render/compute shaders. The models couldn’t be more different.
This is not the case currently, and hopefully it will stay that way. OpenGL 4.1 provides the same feature set as DirectX 11.
Agree. It’s not as if AMD has some alternative waiting in the wings. So, they should stop being so dramatic, and work with MSFT and others to build the technology they want.
Read the article, the complaint is that DirectX is too high-level, and OpenGL is at least as high-level throughout.
Well, since Windows basically IS the PC desktop games market, I can't really blame devs for going with DirectX, particularly since it's been geared at games development from the get-go. In other graphically demanding segments, like 3D content creation, OpenGL is the de facto standard.
Yeah, right! Ever heard of the Humble Indie Bundle? Windows users contributed less than 55% of the cash; the other platforms, Mac OS with 5% and Linux with 1%, made up the rest. So your premise is false: Windows' presumed 90% market share does not mean you can ignore the rest. Well, you can ignore them, at your own loss.
Yes, I certainly have. I actually used this fact as a defence against someone on this forum who was basically claiming that Linux users are cheapskates.
Actually, you obviously can, or else we'd be seeing a ton more games for Linux. As for the indie games sector, there's certainly a different mentality there, since we are generally talking entirely different sales numbers. There are also likely technical reasons behind this, since the majority of indie games seem to be developed using cross-platform frameworks like SDL, GameMaker and OpenGL rather than Windows-only solutions such as DirectX.
That’s also a single case that was buoyed by a massive amount of free marketing.
A lot of those Linux purchases were likely politically motivated rather than by a strong desire to play the games. It was after all a fundraiser that many Linux users viewed as a competition.
Is there unserved demand for games in the Linux world? Probably. But I still think there is a strong case for the DX Windows/360 route. You have to consider productivity benefits when choosing a development technology. Part of that decision depends on experience as well.
But with that said I don’t think the OSX and Linux markets should be immediately discredited based on size.
Eventually there will be a really good 3D browser plug-in that will likely be built around OpenGL and only the heavy games will have local ports.
They fail in exactly the same ways. It's probably too difficult to explain the technical details here, but it can best be described like this: most of the operations that APIs such as DX and GL perform should be done by the cards themselves, by running higher-level game or rendering code on them; otherwise the CPU <-> GPU data exchange is an enormous bottleneck.
What is that even supposed to mean? How many times better do the graphics on my Wii look than the graphics on an N64?
It is just an attempt at conveying a point to the less technically-inclined audience. I.e., his point is just that even though PCs have many times the processing power of consoles, the actual graphics and features don't go hand in hand with the increased power.
Though I'd rather blame it on companies insisting on creating lame console ports for PC nowadays, instead of blaming an SDK.
Hmm. Given how bad Wii games look, I'd say "there's hardly any difference."
SSBB looks great!
Okay, that’s about all…
I think the main problem with Wii games is the rendering resolution (480p); if you've ever seen Wii games emulated in HD (using Dolphin), you'd be pretty impressed by how nice many of them look.
Well, my own way of doing that is playing the game and forgetting about graphics. After all, I love playing Ocarina of Time in an emulator, and recent Bethesda games make me feel like they have ditched some depth of universe, gameplay, and scenario in order to make them look better.
I must say that many GC/Wii games have a pretty polygonal look and old-looking textures for current-gen hardware, though. The Wii is better played on a CRT screen, where low-res games don't suffer from LCD pixelation, but that's not all there is to it.
Well, it's all subjective of course, but personally I really like the graphics in games like Mario Galaxy, and also what I've seen of the upcoming Zelda Skyward Sword (or whatever it was called). Not realistic by any means, but that's not really my cup of tea anyway. Obviously the hardware is generations behind the PS3 and Xbox 360, given that the Wii is pretty much a souped-up GameCube, but hardware power is not the only determining factor in making good graphics.
I have no idea what the point of his statement is; the PC is a fairly open platform, and AMD can provide any and all APIs to their hardware that they want. It is not as if DirectX fails at what it sets out to do: being a fairly high-level abstraction that can support all hardware without exposing details about threads and memory management.
Well, the article claims that your "fairly high-level abstraction" is too high-level for programmers to tap into the real power of the hardware in question. As always, while high-level abstraction makes it faster to write code, it relies on generic solutions which, by their nature, can seldom compete with low-level programming on performance.
As for how much of an impact high-level vs. low-level has, it obviously depends on the flexibility of the interface and how much of it is obscured by the high-level implementation. And from what I gather, there's quite a lot of it obscured by high-level APIs like DirectX.
OpenCL is very interesting technology. From what little I've seen, it's basically C with an API that abstracts at a much lower level than DirectX and OpenGL, with the kicker being that code written in it runs on both the CPU and the GPU without any changes.
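A minimal sketch of what that looks like on the host side, assuming an installed OpenCL runtime and omitting error handling; the kernel and function names here are illustrative, not from any article. The same kernel source string is built for whichever device type you ask for, CPU or GPU:

#include <CL/cl.h>

static const char* kSource =
    "__kernel void scale(__global float* data, float factor) {"
    "    size_t i = get_global_id(0);"
    "    data[i] *= factor;"
    "}";

// Build the same kernel source for a chosen device type
// (CL_DEVICE_TYPE_CPU or CL_DEVICE_TYPE_GPU).
cl_program build_for(cl_device_type type)
{
    cl_platform_id platform;
    clGetPlatformIDs(1, &platform, nullptr);

    cl_device_id device;
    clGetDeviceIDs(platform, type, 1, &device, nullptr);

    cl_context ctx = clCreateContext(nullptr, 1, &device, nullptr, nullptr, nullptr);
    cl_program prog = clCreateProgramWithSource(ctx, 1, &kSource, nullptr, nullptr);
    clBuildProgram(prog, 1, &device, "", nullptr, nullptr);  // same source, any device
    return prog;
}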
Found this recent debate between id Software's head and bit-tech.
http://www.bit-tech.net/news/gaming/2011/03/11/carmack-directx-bett…
This has been answered in this thread, in http://www.osnews.com/thread?467079
AMD Exec Says C++/Java/C# Getting in the Way
Let's all write everything in assembler, then. DirectX is used to speed up development, not to get the highest possible performance. Sure, there is room for improvement, and that is what the guy was actually talking about. But this headline is pure sensationalism.
Also, for 10 times better graphics performance you don't need 10 times faster hardware; you need much more than that. 1920 x 10800 wouldn't be a proper resolution now, would it? (Yeah, I know, overly simplified.)
Seeing as how LLVM is getting good results in abstracting CPU/GPU operations, I wonder if a similar approach wouldn’t work for abstraction of 3D graphics. Hardware needs to be free to evolve independently of software APIs, but if you don’t reap the benefits, an API isn’t doing what it should. How about an optimizing compiler instead?
I am wondering if Microsoft is purposely holding back the PC market to keep it in line with the console market, i.e., ignoring advances in hardware because they aren't available until the next console version gets designed. Really, there is less need for DirectX now than there was. There are really only two graphics chipset makers now, Nvidia and ATI. Time to start optimizing things to take advantage of that fact.
Hi,
I still think the reason the console market got so big so fast is that PC games had compatibility issues. No one wants to spend $80 on a game only to find out it needs "XYZ video card" and your "WXY video card" is useless. For consoles, there isn't any compatibility problem, because the consumer can't upgrade any of the hardware (they're stuck with whatever the console came with).
Providing lower-level access to the video capabilities in PCs is just going to make things worse – the best PC games will be the ones that use low-level access for a very limited number of video cards; and if you upgrade your video card to something “too new” you can kiss all your games goodbye.
What I want is to go back to "dumb" video cards that don't have any hardware acceleration at all (just video mode setting and a framebuffer), plus some sort of generic co-processor/s that conform to a specification anyone can get their hands on (like Intel's Larrabee could have been), so anyone can use the co-processor for anything they like (not just accelerating graphics), without caring what the video card/s actually are.
– Brendan
Yeah, that would be close to the greatest thing on Earth for OS developers.
I wonder, though: looking at OSdev topics, it seems that one problem with unaccelerated graphics like VESA is that blitting is pretty slow. Could this be fixed if the GPU were only a coprocessor?
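For context, an unaccelerated (VESA-style) blit boils down to the CPU pushing every scanline across the bus into video memory itself, which is why it is so slow; whether a separate coprocessor could take that work over is exactly the open question above. A rough, generic sketch (types and pitches are illustrative):

#include <cstdint>
#include <cstring>

// Naive software blit: one memcpy per scanline. Every byte still has to
// travel CPU -> framebuffer, so performance is bounded by bus/memory bandwidth.
void software_blit(uint32_t* dst, int dstPitch,            // pitch in pixels
                   const uint32_t* src, int srcPitch,
                   int width, int height)
{
    for (int y = 0; y < height; ++y) {
        std::memcpy(dst + y * dstPitch,
                    src + y * srcPitch,
                    width * sizeof(uint32_t));
    }
}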
DirectX a problem? Maybe, but the problem is in the hardware too. When graphics cards become something like a Cell on steroids (a model which is the way to go, despite Cell itself not being a successful example), then a programming model more similar to SMP will be the way forward. Until then, batch buffering, DMA and all the rest (including drivers which contain _many_ optimizations for high-level APIs) will be used.
These days even rasterization and everything related to it is just "shader" code running on general-purpose GPUs. All the "3D" is a kind of code in the drivers, so at the other end games are isolated from doing the real programming. In fact, drivers are libraries of code that runs on GPUs. In the distant future, these libraries will become similar to our usual libraries, which run in a virtualized environment on CPUs and are supervised by the kernel. So the GPU, from being an isolated coprocessor, becomes part of the CPU (I believe this is what Fusion is about, at certain stages).
And of course you lose all the raw speed of fixed pipelines, but we don't want to pack everything into the silicon, even if it's standard, as rasterization methods are.
The article was not about Direct3D per se, but about how there exists a faction of development studios who would be interested in having lower-level access to the GPUs, like they do on the consoles.
It's incontrovertibly true that console games often look nearly as good, if not as good, as their PC counterparts. I'm not talking about identical, but easily within the same magnitude, despite having GPUs which now have an order of magnitude fewer shader resources and are ~4 generations behind. The perfect example is the PS3, which has a modified GeForce 7600 GPU. You would expect a PC, with so many more resources at its disposal, to blow the PS3 and 360 out of the water clearly and definitively, and yet it does not.
There are several reasons for this — the rest of the architecture is built around game workloads (for example, large numbers of smallish DMAs), being able to design and profile around a single platform, removing unnecessary abstraction layers, and having more control of the GPU hardware (being able to build command buffers manually, reserving GPU registers without the driver butting in, explicit cache controls). With clever programming, this kind of stuff can gain you back that order of magnitude (or so) that the hardware itself lacks.
Some are asking to get back to software rendering with something resembling the Cell processor's SPUs, or Intel's ill-fated Larrabee (Knights Ferry), and I think that moving in that direction will play a part, but a purely software solution cannot be the whole answer: things like texture sampling simply cannot be done efficiently enough in software, and have sufficiently different memory access patterns that purpose-built caching hardware is a requirement.
The PC ecosystem is simply too diverse to remove most of the abstraction layers we’ve invented through OSes, graphics drivers, and APIs like Direct3D and OpenGL. At any one time, a PC gaming title needs to support a minimum of 8 wholly-different GPU platforms (4 generations from 2 vendors) — not to mention the various speed grades and shader-counts for each distinct platform. On the console side you’ve got precisely 2 GPU platforms (ignoring the Wii, which doesn’t compete with PC-style games) and you know exactly how fast they are, how many shaders they have, and their precise behavior all the way through the pipeline. You can actually spend the resources to optimize around those two platforms knowing exactly the size of the market they represent.
Now we can talk about "virtualizing" the GPU, in the same sense that someone already mentioned an LLVM-like platform for GPUs, and plenty of parallels could be drawn to the x86 platform being CISC externally while translating on the fly to an internal RISC-like architecture. It's not an approach without merit, but a couple of points are worth making. First, this is somewhat like what the driver/GPU already does: the shader opcodes don't necessarily have a 1:1 mapping to hardware instructions, and the GPU takes care of batching/threading things across whatever hardware resources are available. Second, the cost of the more flexible software model you just gained is a less flexible hardware model that the GPU vendors now have to design around. This hardware model may be better, at least for now, but it stands to impose similarly on innovation in the future, just as one might argue that the x86 model has stifled processor innovation to a degree (notwithstanding the fact that Intel has done a great job of teaching that old dog new tricks over the years).
Greater programmability will play some role here, and possibly some amount of virtualizing the platform as well. Another factor is the consolidation and commoditization of the graphics and game engine market. In some ways, Direct3D and OpenGL aren't a solution anyone is *really* looking for. You have the guys who want an even higher-level solution (scene graphs, graphics engines, game engines), who account for maybe 75% of the target audience. And you have the guys building those scene graphs and game engines, who are a small part of the audience but have the knowledge necessary to put lower-level access to good use and, because they support the other 75% of the market, can make a living by providing good support for the 8 or so GPU platforms. Of course, then the "problem" becomes that "they" will be the only ones driving graphical innovation and research, while the rest of the market turns out products that can't go very far beyond what they've been provided.
This is basically what the article said, in a roundabout way: some developers stand to benefit from lower-level, console-style access to the GPU, and Direct3D (as well as OpenGL) is in their way. What it failed to mention is that the other part of the market, the other 75+ percent, wants something that's *even higher level* than Direct3D or OpenGL. You know, the folks buying Unity, Unreal, id Tech or Source.
Ultimately, I think one of two things will happen: either Direct3D and OpenGL will continue to evolve and become thinner as GPUs become more general (and perhaps virtualized), or an essentially new programming model and API will arrive that only graphics specialists and the strong of heart will be able to wield, and they will make their living selling solutions to the rest of the market. We'll still see DirectX/OpenGL implemented in terms of this new API, so it won't go away, but it will probably be relegated to use by serious hobbyists and small studios who need something that isn't provided by the popular engines and who need to support a wide range of hardware without an exponential increase in effort.
I want to vote for you but can't… I already posted a comment.
In my opinion this comment, like many other OSnews comments, could be a front-page feature; that's how good I think it is.
…provided all three of the major graphics vendors (Nvidia, AMD and Intel) agreed on a standard low-level API. I bet the AMD spokesperson was thinking of an AMD-only low-level API, which wouldn't really work.
I could, however, see the low-level API getting quite messy and having new APIs added regularly if it had to cover every evolution/revolution of the 3 vendors’ latest hardware releases.
If the low-level API actually worked, then I could ultimately see DirectX and OpenGL being written as higher-level wrappers to the low level API calls, so developers can have the best of both worlds (high-level for convenience when speed isn’t critical and low-level to squeeze the last drop of performance out of the hardware).
I'd also like to see the low-level APIs (and ideally OpenGL too) kept in sync across all three platforms: Windows, Mac and Linux. It might actually make games more portable between platforms if we're lucky (assuming the developer doesn't code in DirectX, of course).
Anyone else getting a warning from their AV when clicking on that article link?
Yup, me too.
Astaro gateway reports a HTML/Infected.WebPage.Gen virus and blocks access.
Yay for Dos Games!
Simple: use OpenGL to develop games. Problem solved.
AMD and other hardware companies are just frustrated with the current situation, where games are baselined to consoles, which results in PC gamers not having to upgrade as often as they used to.
Blaming high-level APIs is inane. Game developers do not want a hardware-level API for PC games; that would just result in a significant increase in costs with few benefits. You might as well suggest a return to ASM. DX is highly refined, and companies have huge teams of developers with years of experience with it. They aren't going to throw away all that experience and start over.
The real problem is not console popularity but piracy. It's the dead rhino in the room that no one wants to address. PC gamers have the theoretical numbers to support graphically demanding exclusives, but such games have piracy rates of over 70%. So game companies focus on consoles and release corresponding PC ports that don't require continually newer hardware, or they go after casual gamers with graphically light games. In either case, a low-level API isn't going to change the economics of the market.
Blaming high-level APIs is inane. Game developers do not want a hardware-level API for PC games.
Don’t speak for everyone. Games are tuned for a range of specific graphics chips anyway.
Certain features of DX9 hardware were exposed only in the DX10 API: depth buffer reads, for example, or access to the individual samples.
Consoles can do deferred rendering with MSAA easily, while PC can’t achieve that in DX9.
Some features are only exposed through direct access, because they do not map to the DX feature set.
I guess the “AMD exec” did not read his own website before opening his mouth.
Let me quote from it: “DirectX® 11 is the very latest in high-speed, high-fidelity gaming and computing…”
Have a look: http://sites.amd.com/us/game/technology/Pages/directx-11.aspx