“Now that the kernel mode-setting page-flipping for the ATI Radeon DRM kernel module has been merged into the Linux 2.6.38 kernel and the respective bits have been set in the xf86-video-ati DDX, we’re in the process of running new open-source ATI graphics benchmarks under Linux. Our initial results (included in this article) show these latest improvements to cause some major performance boosts for the open-source ATI driver as it nears the level of performance of the proprietary Catalyst driver.”
Not that I’ve had an ATI card in years, but it’s still good to hear of the progress.
Did someone mod you down or something? Well, I modded you back up to 2 with a “+1 Informative” since there is no “+1 Agree” because, well, I agree with your statement. I’ve been using nVidia for nearly the entire decade-plus that I’ve been computing, and have been thankful for it since switching to Linux in 2006, given what I’ve heard about ATI’s cards. It’s good to see them making some serious improvements, from going open-source to finally catching up with ATI’s own proprietary drivers. The future is looking even brighter for them in open-source operating systems.
Since AMD/ATi publishes open-source GPU specs, shouldn’t they try to integrate these developments? It would be silly to have two high-performance drivers; even though I guess there are competitive/administrative reasons for that not to be the case right now, I hope it’ll be possible to some extent.
They can’t, because of patent problems, and they won’t, to avoid leaking industrial secrets about their best algorithms and hardware tricks.
I’m not trolling, but this means that in a few years ATI drivers will be as good as NVIDIA drivers. Hope the driver will support older cards too, because I’m running the latest NVIDIA driver on an onboard GeForce 6150 and it works great.
The Nouveau NVIDIA driver is pretty bad, so it’s more like the other way around: the Nouveau NVIDIA driver is playing catch-up. Your post is nothing but trolling.
You may be interested to learn that the “latest nvidia driver” (presumably you are talking about the closed-source proprietary driver for Linux which is actually written by nvidia) won’t ever run KMS or kernel memory management (because the kernel developers won’t accept in-kernel code that is only part of a driver, with the rest kept closed), and hence will never be able to run Wayland or rootless X.
BTW, this article is not about ATI’s drivers (namely fglrx, aka Catalyst), but rather about the open-source drivers for ATI/AMD GPUs. This open-source driver, named xf86-video-ati, is an entirely different driver, and it does not belong to AMD/ATI.
http://wiki.x.org/wiki/radeon
Just like the proprietary closed-source nvidia driver, the proprietary closed-source fglrx driver will never be able to run Wayland or rootless X.
However, open-source drivers can run Wayland and rootless X, and in fact already do in beta versions. The open-source xf86-video-ati drivers for ATI/AMD GPUs have only recently begun to include optimisations, and as this article attests, their performance is approaching that of the closed-source drivers.
Not only are the open-source drivers more capable (being able to run Wayland and rootless X, for example), but there is also promise that they may soon become the best performing.
Edited 2011-01-14 03:47 UTC
Not only are the open-source drivers more capable (being able to run Wayland and rootless X, for example)
That’s not really what I would have in mind when talking about capabilities. The open-source drivers — both Nouveau and Radeon — lack things the hardware supports and you get those only with the proprietary driver. And this “the open-source ones will soon catch up to proprietary ones” is nonsense. That’s been touted for years and years and they’re not even close. Sure, I’d _love_ them to actually catch up both in performance and in features, but I simply do not see that happening.
Now, with Ubuntu and several others moving towards Wayland, ATI/NVIDIA users are getting screwed as they lose a bunch of features, and speed even more so.
Meh, I’m just annoyed. I recently ditched Linux on my desktop. It makes a great OS for the average user or for servers, but it simply doesn’t suit me. There is always something that’s broken or needs a bunch of extra steps or something and it really gets tiresome, and as I am a gamer the fact that my games don’t work is already enough.
Actually, the original article has some games for which the open drivers are already close:
http://www.phoronix.com/scan.php?page=article&item=ati_r500_pflippe…
and even a couple where it is ahead:
http://www.phoronix.com/scan.php?page=article&item=ati_r500_pflippe…
This is only after the first “low-hanging fruit” optimisation effort; there is a lot more improvement to come.
As for the contention that there are “missing features” … the only one missing now is an API to the dedicated hardware video-decode circuitry. Even that can be overcome by using GLSL instead.
OpenGL support is a bit thin for some GPUs also, but that will rapidly improve.
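For context on the GLSL fallback mentioned above: what a fragment shader would do there is essentially per-pixel colour-space conversion on the shader cores. A minimal CPU-side sketch of the standard BT.601 YCbCr-to-RGB math (the coefficients are the well-known ones; this is illustrative, not any driver’s actual code):

```python
def ycbcr_to_rgb(y, cb, cr):
    """Convert one BT.601 'studio swing' YCbCr pixel to 8-bit RGB.

    This is the kind of per-pixel math a GLSL fragment shader runs
    when video decode is finished on the shader cores instead of on
    dedicated decoder circuitry.
    """
    c = y - 16          # luma, offset from studio-swing black (16)
    d = cb - 128        # blue-difference chroma, centred on 128
    e = cr - 128        # red-difference chroma, centred on 128
    clamp = lambda v: max(0, min(255, int(round(v))))
    r = clamp(1.164 * c + 1.596 * e)
    g = clamp(1.164 * c - 0.392 * d - 0.813 * e)
    b = clamp(1.164 * c + 2.017 * d)
    return r, g, b
```

For example, studio-swing black (Y=16, Cb=Cr=128) maps to RGB (0, 0, 0) and studio-swing white (Y=235, Cb=Cr=128) maps to (255, 255, 255).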
Linux may not suit your particular uses, fair enough, I wouldn’t argue otherwise. However, that is no reason to dismiss it in general. It can be a perfect fit for some uses … for example, I am strictly NOT a gamer but I do like to browse the Internet, and I do not want my machine to be used as part of a botnet, or as a collection device for my Internet banking passwords.
Windows does not automatically become part of a botnet … stop spreading FUD.
Where did I say it did?
Do you deny that botnets are composed of Windows machines?
As I said, I do not want my machine to become part of a botnet. This is not FUD, it is fact … I really, truly do not want my machine to become part of a botnet.
You inferred it. You infer a lot of things on here and then say “I never actually said that”, when everyone knows what you meant.
It is really pathetic, tbh.
Also, Linux isn’t immune … after doing a quick Google search I found these on the first page.
http://lwn.net/Articles/222153/
http://www.itworld.com/security/77499/first-linux-botnet
http://blogs.computerworld.com/14723/no_more_linux_security_braggin…
Considering I have actually had to fix some Linux boxes (when working as a network admin) that were sending out spam emails because they had been compromised … nothing is immune.
He didn’t say “Windows automatically becomes part of a botnet”. And using a logic process, rules of inference and so on… nobody can “infer” that statement, of course.
No comments 🙂
http://dictionary.reference.com/browse/infer
“to hint; imply; suggest.”
It seemed quite obvious to me that it was suggested.
To accuse someone, you’ll need something better than an “it seemed” statement. That’s why I talked about “logic process”, “rules of inference” and so on.
If you read lemming2’s recent posts, you’ll see that he’s adopted a new talking point to justify that position. His argument is essentially that all closed source software should be presumed to contain malware, because you can’t prove with 100% certainty that it doesn’t. Which is exactly the reasoning used by people who say “if you value privacy, then you must have something to hide.”
A post so full of mistakes that it is beyond all recovery.
This story that proprietary drivers won’t work with Wayland is incorrect. They won’t CURRENTLY work, but the Wayland devs have already stated that it would be simple to add another backend which could call into the proprietary drivers. The KMS interface in Wayland could easily be extended by NVidia and AMD, just as they are already extending parts of X now.
KMS and kernel memory management are dependencies for Wayland support. The kernel developers will not accept KMS and kernel memory management pieces if the rest of a graphics driver is a binary blob. Wayland might be a different matter, but as it stands the graphics driver binary blobs will never be part of the kernel.
No they’re not. Modesetting in the kernel is a requirement, not specifically KMS. Same for memory management. Yes, *currently* Wayland is hardcoded to use DRI2, but like the other comment says, it’s a simple adjustment to allow for non-DRI2 drivers and the Wayland people are willing to do it.
Modesetting in the kernel is KMS.
http://www.phoronix.com/scan.php?page=article&item=wayland_maverick…
Wayland requires KMS, GEM, and a Mesa driver. KMS is kernel modesetting (or modesetting in the kernel, if you like), and GEM is graphics memory management, which also needs to be in the kernel.
The kernel developers will only accept parts of a graphics driver into the kernel if all of the driver is open source.
Edited 2011-01-14 12:58 UTC
No, KMS is one possible implementation of doing modesetting in the kernel. Nvidia uses another, their own. Wayland doesn’t care how the mode is set, just that it is set.
That article is wrong, which is not unusual for Phoronix. Or, to be fair, the article states the current situation. But there are comments in the Phoronix forums from people actually working on graphic card drivers. Nothing in Wayland’s design excludes binary drivers (Nvidia blob and fglrx), the changes necessary to make it work are small, and should Wayland gain traction, they will be made.
Exactly. KMS is used to name two separate things here: it literally stands for Kernel Mode Setting, but it is also the name of the implementation the OSS drivers use.
There can be many different implementations of KMS – the OSS drivers use one called “KMS” while the proprietary drivers implement their own versions in their kernel modules.
Wayland requires an implementation of KMS, but it won’t specifically require the same one that the OSS drivers are currently using.
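One concrete way to see which flavour of modesetting a driver uses: drivers using the open-source KMS implementation expose `cardN` nodes plus `cardN-<connector>` entries under `/sys/class/drm`, while a blob doing its own in-kernel modesetting typically does not. A hypothetical sketch that groups such entry names (the sample names are illustrative, not read from a live system):

```python
import re

def kms_connectors(drm_entries):
    """Group /sys/class/drm entry names into a card -> connectors map.

    Drivers using the open-source KMS stack expose 'cardN' plus
    'cardN-<connector>' entries; an empty result suggests no
    KMS-style modesetting information is published there.
    """
    cards = {}
    for name in drm_entries:
        m = re.match(r"(card\d+)-(.+)", name)
        if m:
            # connector entry, e.g. "card0-VGA-1"
            cards.setdefault(m.group(1), []).append(m.group(2))
        elif re.fullmatch(r"card\d+", name):
            # bare card node with no connectors seen yet
            cards.setdefault(name, [])
    return cards
```

For example, the illustrative listing `["card0", "card0-VGA-1", "card0-DVI-I-1", "version"]` groups into `{"card0": ["VGA-1", "DVI-I-1"]}`.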
Nothing is demanding the drivers to be integrated in the kernel code. Afaik they can still be loaded as modules as is the situation now.
The downside of these external, closed modules is that they need to be updated every time the kernel changes. AMD’s proprietary drivers are usually one or two Xorg versions behind, because they are slow to add new support.
That prevents people from using the latest version, which in turn can hold back other software.
Edited 2011-01-14 10:50 UTC
That is not a downside of the closed modules, it is a downside of the kernel with its moving-target ABI.
Edited 2011-01-14 12:18 UTC
Let’s stop raising that point. A stable in-kernel API/ABI isn’t gonna happen in the mainline Linux kernel, ever. It’s logistically infeasible, and most kernel developers would swear by the ability to change APIs as development progresses.
What’s possible and actually being done is maintaining a stable ABI within the same distro release, and for enterprise products this often spans many years. In that respect, it isn’t too different from commercial operating systems.
Mainline is a development branch. The Linux kernel is being developed way faster and in a very different way than proprietary operating system kernels. If for whatever reason a stable ABI is a must for you (it escapes me why it would even matter for usual desktop users, though), go ahead and use something else. Quit whining about something which isn’t feasible.
Edited 2011-01-14 13:19 UTC
Constantly having to change an interface reeks of poor software design.
The whole point of an interface is that it stays the same and the implementation behind it changes.
Enterprise != My Desktop/Laptop.
So according to your logic it is acceptable for a desktop machine’s hardware to stop working after an update (which does happen).
A driver released 10 years ago for Windows XP will still work with Windows XP Service Pack 3 with all the latest updates … In fact I am using Windows 2000 drivers on my old laptop because there are no Windows XP drivers for it … The interface has stayed the same, therefore the older code still works.
It is feasible. The Kernel maintainers have the power to do this anytime they want.
Anyway, none of this changes my original point. If there was a stable ABI, less effort would have to be spent on hardware support, whether that code was open or closed, since code wouldn’t have to be constantly changed.
It’s something called evolution, and an extremely fast-paced one at that. In the end what really matters is the scalability of development, and that’s the biggest reason why Linux got where it is today.
The same goes for RHEL and SLE SPs (for the most part). If you tell MS kernel engineers that they have to maintain an identical internal ABI across different kernel generations, they’ll probably give you a pretty silly look too. And, you know what? They don’t either.
The difference doesn’t primarily come from development itself. It comes from how they’re packaged and released. Linux distros are much faster paced, which has its benefits and drawbacks. A lot of that is by choice but at the same time with the current money flow (at least for desktop), it’s quite difficult for distros to sustain such long maintenance cycles. It takes a lot of money to do that and that’s why you see much longer cycles with enterprise distros.
Really, it’s not about kernel devs trying to screw everyone else. If it were that simple, don’t you think someone would already have come up with a branch or something which maintains the supposedly superiorly designed stable ABI? It’s about how the whole thing is structured and the economy around it is built and I personally think that it may be different from other but nevertheless a model which has potential for sustainable improvement over long period of time.
So, think a bit more about it. It’s okay to complain, but don’t draw conclusions when your understanding is very shallow. Just say that hardware working in one release and not in the next is very annoying or unacceptable. Don’t jump to the unwarranted conclusion that it’s a result of kernel developers’ whims.
Your own fault, if you upgrade packages that break compatibility.
And Linux 2.6.35.10 is 100% compatible with the ABI/API of Linux 2.6.35.1.
If you’re gonna compare Windows XP updates with each other, then you also must compare the same kernel revision updates with each other. Otherwise it’s meaningless.
It’s not like “Linux 2” or “Linux 2.6” were major versions that have to keep compatibility throughout their life spans. “Linux 2.6.36” and “Linux 2.6.37” are major versions, really.
If you have hardware that’s considered ‘unstable’, then you shouldn’t try to act like it was something better. It’s your own fault in the end.
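The versioning argument above can be stated as a tiny rule: treat the first three components (e.g. `2.6.36`) as the “major version” within which the in-kernel interfaces stay compatible, and expect breakage across them. A sketch of the commenter’s claim (not an official kernel policy):

```python
def same_stable_series(a, b):
    """Whether two kernel version strings fall in the same stable
    series, per the argument above: point releases within one
    2.6.N series keep the in-kernel ABI/API compatible, while
    2.6.N -> 2.6.(N+1) may break it.
    """
    # Compare only the first three dotted components.
    return a.split(".")[:3] == b.split(".")[:3]
```

So `2.6.35.10` and `2.6.35.1` count as compatible, while `2.6.36` and `2.6.37` do not.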
Only the legacy interface has stayed the same. Microsoft is constantly upgrading Windows’ interfaces, too.
Microsoft supports the old/legacy interfaces only because they have no choice. They aren’t supporting them because the interfaces are so god damn great and never become technically out-of-date.
MS also hides the incompatibility by delaying public releases of Service Packs and such. It gives the manufacturers time to check their stuff against the new version and fix regressions and other breakage in time for the masses.
Linux kernels are there for everyone to test at any time; it’s just that not many third parties give a crap whether their products work all the time or not. Probably partly due to the fact that a Linux release happens four times a year, whereas Microsoft rarely releases any real updates aside from bug fixes.
Edited 2011-01-14 14:37 UTC
You’re talking about freezing the kernel ala RHEL which then introduces a new set of limitations.
The unstable ABI is a philosophical decision made by the kernel team, and there isn’t much to show for it compared to FreeBSD or Solaris.
If a stable API is important to you, then feel free to keep using the same version of the kernel for as long as you want. No one is forcing you to update it.
Wrong, pretty much every point. Nvidia does modesetting in the kernel and has done so forever. Same for memory management. They don’t use KMS and TTM/GEM, but these are merely implementations, and Nvidia doesn’t use them because they have their own.
Second, the Nvidia driver doesn’t need root, and this is also nothing new. All it needs is access to the /dev/nvidia* nodes, which is usually done by putting users in the video group, where they need to be for open source drivers too (to access the /dev/dri/* nodes). It’s other stuff in X itself that right now still requires running it as root. Once the X people take care of that, the Nvidia driver will continue to work without modifications.
And third, there’s only a very tiny adjustment in Wayland necessary to make the Nvidia driver work with it. I’m too lazy to search for links where developers explain this, but if you really want, I will. It’s somewhere on the Phoronix forums.
Edited 2011-01-14 10:27 UTC
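The access model described above (no root needed, just group permissions on the device nodes) can be sketched as pure logic. The mode `0660` and the group name `video` are typical defaults for nodes like `/dev/dri/card0` or `/dev/nvidia0`, not guarantees; this is an illustrative sketch, not how any driver actually checks access (the kernel does that):

```python
import stat

def group_can_rw(mode, node_group, user_groups):
    """Whether a user can open a device node via group permissions
    alone, i.e. without root.

    `mode` is the node's permission bits (e.g. 0o660), `node_group`
    its owning group (typically 'video'), and `user_groups` the
    user's supplementary groups.
    """
    readable = bool(mode & stat.S_IRGRP)   # group read bit set?
    writable = bool(mode & stat.S_IWGRP)   # group write bit set?
    return node_group in user_groups and readable and writable
```

With the typical `0660` node owned by group `video`, a user in the `video` group gets access and one outside it does not, which matches how both the open and proprietary stacks are usually set up.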
This is good news. Now if only we could get some REAL (read: stable) drivers for Windows. There haven’t been stable drivers since the 8.x series (years?)
I don’t know what is wrong with your system, but I have a Radeon HD 4870 on Win7 64-bit, and I haven’t had the drivers crash on me even once in a year. They’re very stable, and I quite like the auto-overclocking feature, too.
Don’t worry, nothing is wrong with my system. I know ATI (before AMD pwned them) used to make stable drivers and hardware/cards. Tried them out recently, and hit tons of driver issues (and 3 DOA cards).
3 “DOA” cards? Sounds like a hardware problem to me. Probably something with your power supply.
Haven’t had Windows driver issues with ATI for years. Even the proprietary Linux driver is reasonably stable; it would be great if only ATI could follow X.org and Linux kernel development so that it didn’t break after every other upgrade.
Then you bought from the wrong company, and that is NOT the fault of AMD/ATI. AMD, just like Nvidia, doesn’t actually make cards; they make reference specs and chips, and then companies use those designs however they wish.
As a PC builder and repairman I’ve found the ATI cards to be just as stable and certainly more power efficient than Nvidia. Nvidia has an “all or nothing” approach with their cards, which is why they are so power hungry, whereas ATI uses a design where cells are turned off and on as they are needed.
But if you were really having that much trouble, you simply bought from the wrong company. I’ve found that especially with the low-end cards one has to be careful, as some (such as anything by PowerColor below the x5xx) tend to skimp on HSFs and use lower-quality materials. Of course, one can find the same on the Nvidia side, such as with BFG, which is why they are no longer in business.
I would suggest Gigabyte cards, as I have never had a problem with any card I’ve gotten from them, or PowerColor as long as one stays above the x5xx series, such as the 4830. But if you bought from a bad vendor you can’t blame ATI any more than you can blame MSFT because you bought an eMachine and it fell apart.
I keep hearing things like this yet every Windows system I’ve ever used with ATI hardware has been rock solid. The complete opposite for NVIDIA hardware.
First, Mesa is heavily CPU-limited when it comes to fast GPUs. The one tested is so slow that I don’t think that became much of a factor. It’s probably not realistic to expect this kind of performance from a high-end card, though.
Second, this is the r300g driver which supports r3xx-r5xx hardware. AMD dropped support for all those cards a while back, so it’s not like they can combine it with their current drivers. And they won’t for the more recent hardware, because A) still not as optimized as the r300g driver is, B) heavily CPU limited right now, C) doesn’t yet support even GL3, D) won’t ever support all of GL, at least by default, because of software patents, and E) they would lose all competitive advantage in the workstation market which is where they want to sell cards for linux anyway. They won’t simply release their current drivers as open source either, because of all the IP in them that they don’t even own.
All that said, this was a very encouraging sign. Hopefully the drivers will continue to improve.