“AMD is on the heels of releasing the next set of GPU programming documentation to aid in the development of the open-source R500/600 drivers (xf86-video-ati and xf86-video-radeonhd). It’s already been discussed what this NDA-free documentation release will have, but one of the questions that has repeatedly come up is if/when AMD will release information on accelerated video playback. AMD’s John Bridgman has now stated what they plan to release in the video realm as well as a new requirement for their future graphics processors: being open-source friendly while avoiding DRM.”
The Open Graphics Project aims for a completely documented graphics card with 3D acceleration.
Of course the drivers will be open source, as well as the board layout and the hardware definition (Verilog code).
Maybe AMD sees the demand for open source drivers, and sees no real downside to making these drivers possible by releasing the specs of their hardware.
Maybe they figured out that all of their real inventions are patented, and whatever is not patented is no real danger to them in case Nvidia finds out about it.
So their management finally figured out what the first in-house techie they could have asked would have told them.
Heh, as much as I appreciate the vision of projects like OGP, they’re never going to be commercially viable, let alone on the same radar screen as the big three. Not gonna happen.
AMD beginning to head down the long road toward open graphics specifications has much more to do with convergence and globalization than it does about their current and future competitors in the PC market.
From subnotebooks and smartphones to set-top boxes and grid computing, free software will play a more influential role on post-PC platforms than it did on the PC. Intel is pushing aggressively into these environments in part by embracing Linux and open graphics drivers. AMD has to open up in order to win contracts in this evolving competitive landscape where free software is crucial to the ongoing incursion of PC hardware into the consumer electronics industry.
I think that most hardware vendors now realize that free software is going to play a major role in emerging technologies and emerging markets. Smart CEOs look at transformational effects of free trade on the global economy and realize that demand-side stimulus from emerging middle classes in the developing world will be the predominant engine of growth in the 21st century.
Vendors that attempt to sell the same old stuff into the same old markets will have trouble competing with those that are positioned to exploit the incomparable economic force of hundreds of millions of people being pulled out of poverty. Free software is helping to flatten the world, empower society, and strengthen the global economy.
Not OGP, but Intel. Among the Linux users I know, most bought, or are considering buying, Intel PCs because the drivers are open source. This is less for political reasons than for practical ones. Closed source drivers are not maintained forever, and after a few years you might find yourself stuck with an older OS, unable to upgrade without replacing hardware, because the old driver simply doesn’t work with newer OS versions.
I don’t know how many users are really affected by that problem and for how many it’s just a psychological thing. Maybe it’s enterprises that just consider that they might switch to Linux on the desktops in the future and — better safe than sorry — bet on Intel chipsets.
Maybe it’s even simpler: after the PR disasters with the Phenom CPUs, AMD just wants good publicity.
That’s the boat I’m in. If I were to buy a new PC today, it would be Intel based, just for the graphics drivers (I don’t do much PC gaming).
I would say this could definitely be a PR move, and they need it more than ever. While the Radeon 3870 looks to be a great card at a much lower price than the 8800, the Phenom release was an utter bust, and Intel has a whole new wave of chips waiting to strike.

In any case, Intel has contributed a lot to the open source community, and not only graphics drivers. If you have an Intel mobo, the Intel site also has sound drivers and some other chipset goodies for Linux. They also have applications like PowerTOP and the website lesswatts.org to help Linux users maximize battery life on laptops. The OLPC, by contrast, is only questionably a contribution from AMD to the open source world: AMD did not develop the Geode, they bought it from National Semiconductor, who had basically given it up for dead, and AMD made out on the deal by getting it into the OLPC.

At any rate, with AMD looking to be hurting in the CPU wars in the near future, the ATI branch is what really needs to shine to keep the company afloat. Thus we are seeing a wave of new hardware that is going to compete on par with Nvidia’s best, most likely at a lower price, and an outreach by AMD/ATI to users they have alienated in the past, a.k.a. Linux users. Hopefully they find a way to skirt the DRM and can win back the Linux fanbase, who are a big deal. Linux users are not a large group (relatively speaking) but are passionate and VERY VERY vocal, and getting their support means tons of good PR on the web. Best of luck to ATI on this one.
“””
Linux users are not a large group (relatively speaking) but are passionate and VERY VERY vocal, and getting their support means tons of good PR on the web.
“””
“Very vocal” and “influential” are two different things. “Not a very large group” is true and probably outweighs the over-vocalization.
Actually, I think this is not such good news. They say that most likely the r600 and r700 cards (the current and upcoming series) won’t have accelerated video with the open source drivers, due to it being entangled with DRM.
That is, you either get an old card now or wait a couple of years to buy a new one if you want to have accelerated video with open source drivers.
I really hope they find a solution to release information about accelerated video without touching DRM, because now that most people were willing to jump on the AMD bandwagon for its open source drivers, it’s not easy to recommend buying an ATI card that will be crippled with open source drivers and that has worse closed source drivers than NVIDIA.
I can’t blame AMD completely for this, since it’s all just legal problems, but God do I hate DRM…
I can’t blame AMD completely for this, since it’s all just legal problems, but God do I hate DRM…
Agreed. But well, _if_ they can’t find a way around DRM then there’s always the proprietary drivers for those users who really want the accelerated video playback functionality. Also, even with accelerated video playback we still won’t get accelerated h.264 or VC-1 decoding, since there is no standard way of doing that under Linux yet. XvMC != accelerated decoding of those formats. There have been some discussions going on about that, though, and various suggestions. For example, not all hardware supports decoding h.264 natively, but it could be implemented as a shader program. All that is needed is to define some standard way of accelerating video decoding so that all the drivers can use it, and not only drivers/cards from a specific vendor.
But well, _if_ they can’t find a way around DRM then there’s always the proprietary drivers for those users who really want the accelerated video playback functionality.
Incidentally, video playback is where the closed source driver is at its worst. The videos are horribly pixelated and the diagonal tearing is gruesome. They have made progress, though: a few driver versions back it used to crash the X server when playing back videos.
Between the last three driver versions, one could choose either an incredible memory leak (hundreds of MBytes in a few minutes when running an OpenGL app) or a driver that is not capable of driving common resolutions like 1400×1050 or 1680×1050. That is how bad the state of the closed source AMD driver is.
Incidentally, video playback is where the closed source driver is at its worst. The videos are horribly pixelated and the diagonal tearing is gruesome. They have made progress, though: a few driver versions back it used to crash the X server when playing back videos.
I don’t own any recent ATI/AMD cards so I can’t confirm, but well, XvMC isn’t really that useful anyways. It can only be used for MPEG-2 streams, not for e.g. DivX, MPEG-4 or anything like that. It’s Xv that matters more, but I imagine the open-source drivers will have a working Xv implementation. Xv doesn’t mean hardware-aided decoding, though.. :/ Nowadays, when graphics hardware is so fast and powerful, and even programmable, it is quite amazing that there is not yet support for hardware-aided decoding of video under Linux, at least not without some very specific drivers and proprietary software (and I personally don’t even know of any such combination).
Windows and Mac OS X have had support for hardware decoding of video streams for.. umm, god knows how long :O It’s unfortunately one of those things where Linux is still lagging behind..
>XvMC isn’t really that useful anyways
True, but this standard will be replaced by the VA (Video Acceleration) API, which does expose the modern features that GPUs have (and not only for MPEG-2, but also for MPEG-4 ASP, MPEG-4 AVC and VC-1). That’s why it is really a pity that most likely all r600 and r700 AMD GPUs won’t be able to take advantage of it with the new open source driver.
About VA API:
http://en.wikipedia.org/wiki/VaAPI
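For the curious, here is a minimal sketch (my own, not from the article) of how an application could probe VA-API for H.264 decode support. It assumes libva with its X11 backend is installed, and links with -lva -lva-x11 -lX11:

    #include <stdio.h>
    #include <X11/Xlib.h>
    #include <va/va.h>
    #include <va/va_x11.h>

    int main(void)
    {
        Display *x11 = XOpenDisplay(NULL);
        if (!x11) { fprintf(stderr, "cannot open X display\n"); return 1; }

        /* Bind VA-API to the X connection and initialize the driver. */
        VADisplay va = vaGetDisplay(x11);
        int major, minor;
        if (vaInitialize(va, &major, &minor) != VA_STATUS_SUCCESS) {
            fprintf(stderr, "vaInitialize failed\n");
            return 1;
        }
        printf("VA-API %d.%d, vendor: %s\n", major, minor,
               vaQueryVendorString(va));

        /* List the profiles the driver supports, looking for H.264 High. */
        int max = vaMaxNumProfiles(va), num = 0;
        VAProfile profiles[max];
        vaQueryConfigProfiles(va, profiles, &num);
        for (int i = 0; i < num; i++)
            if (profiles[i] == VAProfileH264High)
                printf("H.264 High profile decoding is available\n");

        vaTerminate(va);
        XCloseDisplay(x11);
        return 0;
    }

The nice part is that a player only has to code against this one API; whether the work ends up on dedicated decode hardware or on shaders is the driver’s business.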
Thanks, that’s exactly what I had in mind but couldn’t remember the link/name! I’m sure that once the libraries are in place and drivers start to implement those features, Linux will become quite a capable multimedia platform indeed. And the best thing is that it will also benefit older cards, as they can implement at least some of the functionality as shaders and as such still provide some boost in performance ^^
Incidentally, video playback is where the closed source driver is at its worst. The videos are horribly pixelated and the diagonal tearing is gruesome.
With the fglrx driver (maybe the same with other ATI cards) video output has to be set to use OpenGL (in the video player you’re using) or else it will look horribly pixelated. I suffered under this for a while before I somehow found out about it.
only accelerated video playback; there will still be 3D accel, so that is ok
“””
That is, you either get an old card now or wait a couple of years to buy a new one if you want to have accelerated video with open source drivers.
“””
Isn’t that ATI’s middle name? Still sitting on my Radeon 9200 and waiting. They talk the talk. But their walk still looks like a shaky DVD of someone else’s toddler trying to get past the crawling stage.
I don’t really care how a company decides to deliver a product, I only care about the final thing and how it works (if it works).
So with that, I will only choose what will function best, and not choose a solution that will not fully function or will be slower simply because it’s ‘open’.
I’ve also been following and dealing with 3D cards since the Voodoo 1, all the way back in the Linux 2.0.x days, and the ‘open’ solutions to this day have never been as good as far as features or performance go… and the GPUs these days are far more complex than they were in 2000.
Well, you name it.
For me all that really matters is a working computer, and of course how to get there.
And that is where open source comes into play. I like Linux and related applications and environments, therefore I want to install a Linux distribution on my computer. Whenever I update the kernel or another component, I want to know that the distribution can also handle the influence this update may have on the drivers. Remember, Linux is not necessarily binary-compatible, at least not for drivers that hook into the kernel.
With an open source driver, I can be confident that the hardware will be recognized and configured correctly, AND that updates will not break the system. In that regard open source is a feature, and a usability advantage for the company which offers the free drivers. I therefore think AMD is doing the right thing here (or at least they are on the right trail).
… And when nVidia/ATI decides that it’s time for -you- to dump your (what they perceive as) aging ATI/nVidia driver and drop support for it, you’ll be…
… Doing what – exactly?
Don’t get me wrong, I use closed drivers when I must; but I’d rather spend twice the money, get 1/2 the performance, and use open drivers. Nobody is going to force an upgrade down -my- throat.
– Gilboa
That’s part of the reason I was hoping that one day we’ll see a video card vendor come out with a dedicated PCIe video card loaded with dedicated memory using an Intel GPU.
I had a look at the Nvidia drivers, and basically: got a problem? Tough luck. Same goes for the ATI drivers – look how long it took for them to fix issues with their drivers, and they’re still crappy. So bad, in fact, that I’ve decided to stick with the open source ATI driver over the proprietary one; the performance is that bad.
I like AMD processors the best; I have had the best luck running them, and overall they seem to handle processes better, in my opinion.
I don’t think you can sit there with a straight face and tell me there is an advantage to AMD’s latest over Intel’s latest Core 2 Duos or Quads in terms of price/performance/heat.
Hmm, the Athlon 5200 X2 EE would be my CPU of choice for a new system. It is very cheap (like 80 Euros). Boards are cheap too (even with PCI Express 2.0).
It does not have the performance of more expensive CPUs, but in terms of price/performance and heat it is VERY competitive. (IMHO even better than any Intel for that kind of money.)
At the moment, the Intel Core 2 Duos pretty much spank the AMD X2s on most everything I’ve tested (I participate in some ~80 distributed computing projects of various types) – but prior to that, the AMD Athlon series easily clobbered the Pentium 4’s and Pentium D’s… Both the AMD X2s and Intel C2Ds are very energy efficient (compared to prior technologies), with the C2Ds slightly better at the moment IMO.
I’m guessing it’s just a matter of time before AMD unleashes some new innovation. Intel may have the dual and quad-core battle won for the moment, and they have a lot more fab locations under their belt – but that doesn’t mean AMD is permanently out of the picture.
In any case, I hope AMD makes a strong comeback
I agree, but please don’t call chip designs “technologies”; it’s pretty wrong. Major things like, perhaps, digital computers, BJT transistors, self-aligned gates, or even PMOS/NMOS/CMOS could all pass as “technologies”. A particular chip (family) could not, however – not even a microarchitecture would be a “technology”.
take care
/Mr semantic
what a strange thing to complain about
Are you testing the X2s in x64 mode? I imagine that a native x86 processor would perform better than a native x64 running in x86 mode.
I imagine that a native x86 processor would perform better than a native x64 running in x86 mode.
Nope.
Why not? The AMD x64 architecture is optimised to run in 64-bit mode. Running it in 32-bit mode will never be as performant.
Are you testing the X2s in x64 mode?
I have run both my Intel C2D and AMD X2 in 32-bit and 64-bit mode… trust me, the C2D spanks the X2 on computation power.
edit: and if you don’t believe my personal experiences mean much – just look around – there are plenty of benchmarks. Believe whichever you feel most inclined to believe.
What exactly is it that you think is “optimised” to run in 64-bit mode?
If you stop and think for a moment, you’ll realize how silly that idea actually is. If 64-bit mode is so much faster, then why wouldn’t the 32-bit mode simply add 32 zeroes to the front of every number, thereby automatically transforming it into 64-bit code and gaining the extra performance?
In reality, the 64-bit code often tends to be slightly slower in a straight comparison with 32-bit because the larger pointers fill up the caches a bit faster. The extra registers available in the AMD64 architecture tend to more than make up that difference, and of course some programs are able to take advantage of using all 64 bits at once. But that tends to be program specific rather than cpu specific.
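To put a rough number on the pointer-size effect, here is a tiny sketch (a hypothetical struct, plain C) you can compile as both 32-bit and 64-bit:

    #include <stdio.h>

    /* A typical linked-list node: two pointers plus a small payload. */
    struct node {
        struct node *next;
        struct node *prev;
        int key;
    };

    int main(void)
    {
        /* ILP32: 4 + 4 + 4 = 12 bytes per node.
           LP64:  8 + 8 + 4 = 20 bytes, padded to 24 for pointer
           alignment. The very same list roughly doubles its cache
           footprint just by being recompiled as 64-bit. */
        printf("sizeof(struct node) = %zu bytes\n", sizeof(struct node));
        return 0;
    }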
Erm:
double the number of general-purpose registers, and double the number of XMM registers;
the facility for parallel operation on up to 8 bytes at a time using a full 64-bit register (see the sketch below);
RIP-relative addressing = much faster code jumps/lookups.
So, in the hands of a decent compiler, a native AMD64 CPU will be faster than one running in x86 compatibility mode.
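As a concrete illustration of the “8 bytes at a time” point, here is the classic SWAR “does this word contain a zero byte?” trick (a sketch of a well-known idiom, not anyone’s shipping code); 64-bit string routines use it to test 8 bytes per loop iteration instead of 1:

    #include <stdint.h>
    #include <stdio.h>
    #include <string.h>

    /* Nonzero iff any byte of v is 0x00: the subtraction borrows into
       the high bit of every zero byte, and masking with ~v discards
       bytes whose high bit was already set. */
    static int has_zero_byte(uint64_t v)
    {
        return ((v - 0x0101010101010101ULL) & ~v & 0x8080808080808080ULL) != 0;
    }

    int main(void)
    {
        uint64_t chunk;
        memcpy(&chunk, "abcdefgh", 8);        /* eight non-zero bytes */
        printf("%d\n", has_zero_byte(chunk)); /* prints 0 */
        memcpy(&chunk, "abc\0efgh", 8);       /* one zero byte */
        printf("%d\n", has_zero_byte(chunk)); /* prints 1 */
        return 0;
    }

A 32-bit build can only do this 4 bytes at a time; in 64-bit mode the same scan covers twice the data per iteration.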
No one is claiming that the x86 architecture is superior to the x86-64 architecture. That’s all about the architecture though, and has nothing to do with a specific chip. AFAIK, there are no optimizations specifically made by the A64 chip (for 64-bit ops) that the Core 2 chip doesn’t also have. It’s a mistake to even call those optimizations at all, IMO, because they are fundamental to the new architecture.
You will generally find that AMD’s CPUs gain a lot more by running in 64-bit mode than Intel’s CPUs do. Generally, AMD’s CPUs are significantly faster in 64-bit mode than they are in 32-bit mode. However, Intel’s CPUs show virtually no performance advantage in 64-bit mode compared to 32-bit mode.
On the old 64-bit P4s, it could be argued that the CPU was designed to run x86 code and wasn’t “optimised” for running 64-bit code, but that’s not the case with the newer Core 2 CPUs.
Obviously there’s a trade-off going on here – more registers and some more efficient opcodes versus increased memory usage (and therefore cache, and memory bandwidth) caused by 64-bit pointers.
My guess is that AMD’s CPUs aren’t affected as much by the larger pointers. They tend to have better memory latency than Intel’s CPUs, largely thanks to HyperTransport and the on-die memory controller. That’s why Intel’s CPUs have such monstrously large caches. When Intel move over to QuickPath, that advantage will likely disappear.
You will generally find that AMD’s CPUs gain a lot more by running in 64-bit mode than Intel’s CPUs do.
I know that used to be the case, but I don’t think it is anymore. Do you have any benchmarks to back that up?
The original Prescott 64-bit processors were a joke – I think they may have actually emulated some 64-bit instructions by using two 32-bit ones. So yes, you can obviously do 64-bit badly. But just because you do it well doesn’t necessarily mean you’re doing 32-bit badly. You can do both. That was my point.
I don’t think the increased cache pressure hurts either chip too much; we’ve seen that doubling the cache size only increases performance by maybe 5%, and the larger pointers aren’t using anywhere near double the old cache. The other benefits nearly always outweigh the negatives here.
Benchmarks comparing 64- and 32-bit mode are pretty hard to come by. Benchmarks comparing AMD and Intel CPUs in either mode are almost impossible to find.
Potentially dodgy benchmark (it’s benchmarking a specific piece of software running under .NET), but it’s all I could find at the moment:
http://blogs.msdn.com/rickbrew/archive/2006/07/13/664890.aspx
That tends to match my own experiences with AMD CPUs in 32- and 64-bit mode, and matches most of the benchmarks of the Core 2s I’ve seen around the ‘net.
Hmm, that’s interesting. Some things to note – the Intel system was using a beta BIOS, and I’m almost certain I’ve seen a few other benchmarks showing no significant difference, so it may be something that affects certain programs and not others. I can’t remember where I saw that and I don’t have any experience myself, though, so I could be wrong.
It’s also possible that the Conroe chip was running into some other bottleneck, given that it was the highest performing part. Maybe at that speed it was running into FSB limitations? Or maybe not, it’s hard to tell with just 1 data point.
Still, it’s interesting.
I’m not so sure about your choice.
The AMD Athlon 5200 X2 EE is ~£76 (~104 Euros) at one of the most competitive UK online retailers. In terms of price this seems to compete with the Intel Core 2 Duo E4500, which is a ridiculously overclockable chip that most likely kicks the X2’s arse on any playing field.
No matter which way you cut it, Intel’s current core architecture is more efficient than AMD’s and Intel have a good line up across the entire budget range.
“handle processes better”?
yeah, right…
Oops,
some people just think that Intel doesn’t invent anything… while it is outperforming AMD in most areas, including price, dissipation and speed…
There’s no way a current AMD is the better option for most of us.
Technically speaking, of course you are right.
But it doesn’t always matter which is ‘the better option’.
Sometimes all that matters is, is it sufficient.
For most things most people end up doing on their PCs (disregarding here the people with eight gigs of RAM), mere CPU speed doesn’t matter anymore. RAM matters, and so does hard drive speed.
My brother got himself a new, cheap Athlon X2 PC which he thought was really fast. I told him that, these days, Core 2s outperform them. Guess what: he doesn’t care, he’s used AMD for years and it works for him.
Then there’s the energy efficiency thing.
Sure, very important.
Then again, a guy at work bought a Core 2 machine to replace his single-core Athlon, for “a little extra power” – I asked him why he didn’t upgrade just his CPU; it would have been cheaper and it would have worked with his particular motherboard (socket 939; the Athlon 64 X2 is still available for it).
He said it was, besides being faster, also an investment in energy savings…
He also got himself a new 20″ screen that draws 25 watts more than his present 17″, which immediately kills the savings from his new PC.
And a wireless mouse and keyboard.
Not to mention the energy that goes into the production of all the new things he got himself, which some (most?) people tend to forget. A huge amount of energy and resources goes into the production of CPUs and the like.
Sure, that’s how the economy works, but it makes me a bit more sceptical than I used to be, when it comes to this energy efficiency thing.
That’s why we have competition, you know. Sometimes one option is better, other times another option. It tends to create innovation, higher quality and lower prices.
We would be worse off without AMD, VIA etc.
“””
We would be worse off without AMD, VIA etc.
“””
Agreed. It is also best if the competition is fairly evenly balanced. I worry about AMD. They’ve done well in the past, but Chipzilla has tremendous resources and influence. I hope we really do see them leapfrogging each other technologically. I felt more comfortable when the smaller vendor had the better chips.
Can someone please tell me which generation these chipsets belong to??
(Are the Xpress 200M and X300SE r300, r400, r500 or r600??)
Thanks in advance, and sorry for my English.
BTW, ATI’s latest fglrx drivers S_UCK_S (really suck)
The X300SE and Xpress 200M are both based on the RV370 core, an r300-based chip.
What’s so bad about the latest drivers? They supposedly fixed that memory leak.
What’s so bad about the latest drivers? They supposedly fixed that memory leak.
Like I already said, the latest driver fails to support most widescreen resolutions and some 4:3 resolutions (like 1400×1050), which makes it unusable for quite a number of people.
the latest driver fails to support most widescreen resolutions and some 4:3 resolutions (like 1400×1050)
Really? That’s pretty bad, I thought you meant the older versions of the driver didn’t support it.
All this time, it has been Linux vs. Windows.
Maybe it will become Wintel vs. LinAMD?
Kinda exciting to contemplate if you ask me.
Not likely. Intel contributes heavily to Linux too.
While I rather like AMD, and use a -lot- of AMD workstations / servers, Intel contributes far more code to the OSS community.
If anything, AMD has a far tighter relationship with Microsoft (than Intel does).
Nevertheless, AMD’s move to release all the GPU specs is a -very- good thing.
Hopefully nVidia will be forced to follow suit.
– Gilboa
I didn’t know that, how so?
You mean, Linux-running-OLPC-powering AMD?
Well.. if you mean that generic AMD laptops generally suck more with Linux, I agree.
You mean, Linux-running-OLPC-powering AMD?
Hmm, there’s an AMD Geode in the OLPC, but that’s just hardware. AMD did not donate all the Geode chips for use in OLPCs, nor have they done much else either. Intel, on the other hand, is working on several open-source projects; they have released open-source drivers for their graphics cards and some wlan cards, etc. So, what were you trying to say?
I was merely wondering what arguments Gilboa has to claim that AMD is somehow more friendly with Microsoft than Intel is.
I’m not saying it’s not the case, maybe it is, I was just wondering what reasons there are to make this a fact.
My guess is that both Wintel chipmakers in some way need friendly relations with Microsoft, because its operating system should run nicely on their chips (and of course, MS needs them too).
What makes this need bigger for AMD?
If anything, AMD has a far tighter relationship with Microsoft (than Intel does).
I didn’t know that, how so?
AMD is a bit closer to them just by default. MS is dealing with them from a position of strength, which makes them happy and AMD can’t afford to piss them off like Intel can. Meanwhile, Intel has tons of cash to throw around trying to expand into the emerging open source market, while AMD has to rely on getting the most for the limited amount of money they have which currently means going to where most customers are.
I think both companies are fairly friendly with the open source community, Intel just has a little more ability to do stuff on their own.
Video acceleration reminds me of the early MPEG-2 decoder PCI cards that were needed because the CPUs of the day were so weak that software DVD players could never decode at a reasonable quality. Zoom to today, and software DVD players are commonplace.
The video acceleration of today is focused on h.264, which is an incredibly complex codec; even gigahertz machines struggle at times to decode it at a reasonable quality, but as with the evolution of DVD, the need to offload the work onto a dedicated video card will go away.
Then there is the new AMD GPU/CPU hybrid, which will have video extensions accessible to all, without needing access to the patent-protected Macrovision that current AMD GPUs use for their content protection.
With that being said, there will still be OpenGL GPU acceleration, and hopefully with that, interesting things will develop. Don’t expect games to suddenly appear, though; then again, as a Mac user, I prefer having my Wii to play games and keeping my MacBook for everything else.
Too true. I remember the DXR2 and DXR3 cards I used to have to support while I was working for Creative Labs. They were great cards but ass processors advanced they became unnecessary.
Frankly, this type of solution is only of interest to me if the tech filters back to just the GPU. Don’t get me wrong, I’m all for CPU/GPU integration for those that want it; it’s just that I like to be able to pick and choose a graphics upgrade without having to splash out for a processor while I’m at it.
More and more of my Windows-only friends are starting to just use consoles for their gaming. Considering the number of high-quality titles that are either console-only or take much longer to be released on the PC, methinks I should start looking into it!
ass processors?
I dread to ask .
Somewhat off-topic, but count me in that camp, for several reasons: 1) I don’t have to worry about buying a new graphics card at practically the price of a new console every 6-12 months just to be able to play the latest games, 2) no need to fiddle with buggy driver revisions, and 3) the PC game industry pretty much stopped making all the game genres that I actually cared about. It seems as if very little besides FPS and RTS games comes out for PC any more, and neither of those holds my interest.
If you don’t game, then the Intel GPU is the way to go. My only problem with them is that you can’t get dual-monitor setups very easily at all. Supposedly there is an add-in card, but I have never been able to find one for purchase.
For desktop PCs with Intel graphics, I guess you’ll have to find a PCI video card to add extra monitors.
You are probably looking for SDVO add-on cards for the Intel i915 and greater chipsets, something like this:
http://www.directron.com/pegasusadd2dl.html
It uses the onboard Intel graphics and simply provides DVI out using the x16 PCI Express lane. Just be sure to get one of the ADD2 cards, and not one of the ADD2-R ones.