AMD’s position in the graphics market continues to be a tricky one. Although the company has important design wins in the console space – both the PlayStation 4 and Xbox One are built around AMD CPUs with integrated AMD GPUs – its standing in the PC space is a little more precarious. Nvidia currently has the outright performance lead, and perhaps more problematically, many games are to a greater or lesser extent optimized for Nvidia GPUs. One of the chief culprits here is Nvidia’s GameWorks software, a proprietary library of useful tools for game development – things like realistic hair and shadows, and physics processing for destructible environments – that is optimized for Nvidia’s cards. When GameWorks titles are played on AMD systems, they often run with reduced performance or graphical quality.
To combat this, AMD is today announcing GPUOpen, a set of tools comparable to GameWorks. As the name would suggest, however, there’s a key difference between GPUOpen and GameWorks: GPUOpen will, when it is published in January, be open source. AMD will use the permissive MIT license, allowing GPUOpen code to be used without any practical restriction in both open and closed source applications, and will publish all code on GitHub.
Great move by AMD, and definitely a step up from Nvidia’s questionable closed tactics that only seem to harm users. HotHardware has more information on AMD’s extensive plans.
The million dollar question in my mind is: Does this make it easier to get higher quality open source graphics drivers for alternative OSes?
Assuming they use Mesa and can port Linux kernel GPU drivers, sure.
Yeah, that’s my guess too. If AMD opened their specs to enable better Linux/Haiku/whatever graphics drivers, I bet many people would be happy to choose their cards over nVidia’s, which currently also ships its drivers closed source. But at least nVidia’s drivers are good.
It’s a shame, because I hold AMD in great respect; they still have the technical mojo that grants them some leadership in the hardware business. But on the software, driver, and “openness” sides…
PS: owner of an Athlon XP 1700, an E-350, and an A8-3500M, so I know the territory. And yeah, pretty much low-end CPUs, chosen for low power consumption. The APU is a fantastic thing, and it can be supplemented with a full-fledged graphics card in desktop computers.
Uhhh, AMD has ALREADY opened their specs and has gone so far as to pay devs to work on the open driver. They also opened their CPU specs a couple of years back, and have been saying for over a year that their ultimate goal is to replace the proprietary driver on Linux with the FOSS driver. As you can see, AMD is pretty committed to FOSS.
http://developer.amd.com/tools-and-sdks/open-source/
I think this will be a great chance for all the hardware makers out there to see if the FOSS community actually walks the walk or just talks the talk. You have two companies here: one that has done everything the community asked for, including opening ALL their specs and docs, that has spent a ton of money to support FOSS, and that puts out its code under a FOSS license that lets everyone use it; and another that has been absolutely hostile to FOSS, hostile enough that there is a famous video of no less than Linus Torvalds flipping them the bird and cursing them.
If the FOSS community doesn’t support AMD after all this, but instead keeps recommending Nvidia? Then they have no right to complain when companies refuse to support them, because they will have shown themselves to be hypocrites when it comes to actually supporting companies that embrace the four freedoms.
But, but, but… I’m all for this, and my next rig will be completely AMD-powered. If only they could consume a bit less power… my VIA C7 is a real “power” horse in that department.
If you want ULV, look at the socket AM1 chips: you can get the AMD Athlon 5350 for $53, which gives you four cores at 2GHz and a Radeon GPU capable of 1080p over HDMI, all in a package that maxes out at 25W and averages under 10W.
If you want to go even lower, they have the 1.4GHz Sempron quad, which also has a max of 25W; in my tests with one I built for a customer as a ULV HTPC, it averaged less than 6W for most tasks like web surfing.
So if you want ULV, look at AM1: good performance, cheap prices, really nice chips for everything from office boxes to media centers.
I wonder if AMD has come up with (or will come up with) a competitor to nVidia’s streaming services, as well as to its set-top boxes and tablets that let you stream games wirelessly from a PC to a TV. I don’t do a lot of gaming myself, but several of my friends love this feature.
I have done this using Moonlight on my Raspberry Pi. It’s pretty awesome.
https://github.com/irtimmer/moonlight-embedded/wiki
Let me get this right: NVidia made a library available to improve graphics, and that harmed users?
And now AMD makes a similar library available, with no mention of quality or performance, and that’s a step up… just because it is open source?
I would say that NVidia understands much better that they have to provide the best hardware AND the best software to make a product that people want.
(There was a similar complaint about the Android emulator being optimized by Intel for Intel CPU’s a few posts ago)
It seems to me that AMD needs to up their software game…which they have started doing in this case.
They bribed companies with actual money into using their libraries and not optimising for AMD cards, thus harming those users. They also knowingly lowered performance on other cards even though that was technically not necessary.
The emphasis in this article is not on quality or results, but on not excluding a significant share of the market. It’s funny how you fail to see that. Do you also turn to stone when touched by sunlight?
AMD has done the same in the past (see Tomb Raider’s TressFX, see DX:HR’s 3D), so that’s a moot point.
What do you mean? TressFX is implemented using DirectCompute, which is compatible with every DX-compatible GPU, unlike PhysX or any other proprietary feature from nVidia.
I admit I have read that AMD did not allow nVidia to do prerelease optimisations, but that’s the only nasty thing they did. nVidia went (and still goes) much further than that.
Why are they using DirectCompute instead of OpenCL? I hope that once it’s opened, all of that will be rewritten to use Vulkan anyway.
Vulkan? You mean Mantle, the ‘revolutionary’ stuff AMD created and released for free? Dude, I never noticed that by changing a few characters in AMD you get NGO…
Vulkan and Mantle are two entirely different APIs. Mantle is AMD’s API, Vulkan is Khronos Group’s API — https://en.wikipedia.org/wiki/Vulkan_%28API%29 Vulkan was based on Mantle in the beginning, but it is no longer anywhere near the same beast.
Vulkan was created from Mantle (AMD gave Mantle to Khronos group to kickstart the effort). But that’s really irrelevant to the above. Unlike Mantle, Vulkan isn’t tied to AMD.
Vulkan is cross hardware and cross platform. DirectCompute and Mantle aren’t.
1) The major gaming API on Windows is Direct3D.
2) DirectCompute maps to OpenGL 4’s compute shaders and not OpenCL.
3) OpenCL is less ideal for games because it doesn’t easily share command lists with the graphics API. OpenCL also defaults to favoring precision over speed.
4) OpenGL 4 has poor Intel support and poor Apple support.
Thus using OpenCL and/or OpenGL 4 would miss the target audience: game engine developers on Windows.
Translating HLSL to GLSL is actually fairly easy, so apparently they figured that supporting Linux was not interesting enough at this point.
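As a rough illustration of how mechanical much of that translation is, here is a toy Python sketch (my own example, not anything AMD or any real translator ships) that applies a few of the well-known token-for-token renames between the two shading languages:

```python
import re

# Toy illustration only: a handful of the mechanical HLSL -> GLSL renames.
# A real translator must also handle semantics, cbuffers, entry points,
# texture sampling, and many other differences.
RENAMES = {
    "float4x4": "mat4",
    "float3x3": "mat3",
    "float2": "vec2",
    "float3": "vec3",
    "float4": "vec4",
    "lerp": "mix",
    "frac": "fract",
    "rsqrt": "inversesqrt",
    "ddx": "dFdx",
    "ddy": "dFdy",
}

# Longest tokens first, so "float4x4" is matched before "float4".
_TOKEN = re.compile(r"\b(" + "|".join(sorted(RENAMES, key=len, reverse=True)) + r")\b")

def hlsl_to_glsl(src: str) -> str:
    """Apply the simple token-for-token renames above to a line of shader code."""
    return _TOKEN.sub(lambda m: RENAMES[m.group(0)], src)

print(hlsl_to_glsl("float4 c = lerp(a, b, frac(t));"))
# prints: vec4 c = mix(a, b, fract(t));
```

Of course, the hard parts (resource binding, semantics, coordinate conventions) are exactly what such a table can’t capture, which is why full translators are real projects rather than one-liners.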
And does DirectCompute have any Apple support? Isn’t it MS-only?
Supposedly Vulkan has good compute applicability, so hopefully these libraries will be switched to Vulkan going forward (or Vulkan will be added as an option there).
Their target audience won’t be only developers on Windows anymore.
Well, I guess that depends on what they do with any pull requests that add support for other platforms.
I expect that’s what will happen, yes.
This is why I love OSNews. I respond to what is written in the article and get caught up in the dirty history of AMD vs NVidia, of which I only very vaguely remember some shards from articles ages ago. If you cannot tell yet… I am not a gamer (although the Keen article struck a chord).
Thanks for schooling me, and let’s see how good this library turns out to be, how well it gets adopted, and how well the code will be optimized across hardware.
Intel is rapidly catching up on OpenGL 4 compliance. 4.3 is about to be released via Mesa (4.3 is 2 extensions away from completion, 4.5 is another 4 extensions away, and 4.4 is 7 extensions away; source: http://mesamatrix.net/). I expect OpenGL 4.5 to be reached around the time that Vulkan launches. Once Vulkan is out, OpenGL won’t be receiving updates, so once implemented, OpenGL is complete: no more chasing a moving target. To my knowledge, Vulkan is also a much easier API to implement; it’s a lot closer to the Gallium3D layer and doesn’t require a lot of the build work that OpenGL has required.
If you look at what Intel actually ships ( http://www.intel.com/support/graphics/sb/CS-033757.htm ), you can see that even their flagship products only support OpenGL 4.0. Unfortunately, compute shaders were introduced in OpenGL 4.3.
To make it even worse, if you want to support slightly older Intel graphics you have to make sure your code works all the way down to OpenGL 3.1.
Contrast this with Direct3D, where all Intel GPUs support the Direct3D 11 API. Granted, not all their cards support compute shaders, but you can stick to the same API. Using OpenGL has been hell for decades, mostly thanks to Intel.
Yes, I am bitter about it. As for them doing a better job with Vulkan: as it’s Intel, I wouldn’t put it past them to fuck it up somehow.