“Nouveau is a community project that is working on producing open-source 3D display drivers for NVIDIA graphics cards. Nouveau is not affiliated with Nvidia Corp and is an X.Org project. While this project is still far from being completed, for this holiday special we are sharing some of our first thoughts on this project from our experience thus far. We would like to make it very clear, however, that the Nouveau driver is nowhere near completed and still has a great deal of work ahead for the 3D component. This article today will also hopefully shed some light on the advancements of this project so far.”
Quite an interesting project… perhaps this will be a fillip for Nvidia to open-source their drivers or, at least, to provide 3D-accelerated drivers for Nvidia cards when non-FOSS kernel modules are branded as “illegal”.
I’m positive there are too many patents/IP/third party matters for both Nvidia and ATI to completely open source their drivers, and I have nothing against closed source drivers, but I do hope they’ll be able to create the drivers with decent specs.
The biggest advantage I see is excellent integration with X.org for accelerated KDE/Gnome/XFCE/fluxbox/etc…
One more reason to stick with Nvidia hardware.
I have nothing against closed-source drivers, they’re fabulous! And I certainly will use them when I need to – but at the same time, these companies need to stop thinking that the *NIX world revolves around Linux.
What about those who run OpenSolaris? FreeBSD? OpenBSD? I mean, honestly, if OpenSound can support all those platforms with such a limited number of programmers, it shouldn’t be a hard task to provide at least an equal level of support for alternative platforms – yet right now, Nvidia and ATI do an incredibly crap job of it.
Nvidia isn’t the only company that can get pulled up for this; take Intel and their ipw3945 driver, and their stupid decision to release it under the GPL alone instead of dual-licensing it so that it could be ported to run on non-GPL operating systems like FreeBSD/OpenBSD/NetBSD/OpenSolaris.
And what happens when I want to write my own OS? With open source drivers, I can port over the open code (and pray it’s well documented). With binary drivers, I’m dead in the water.
//And what happens when I want to write my own OS?//
There are risks associated with doing practically anything.
I haven’t played with the BSDs lately, but Nvidia supports OpenSolaris quite well. In fact, the last couple of builds of SXCR have even had the latest Nvidia driver included and enabled by default.
But ‘officially’, the consumer line of graphics cards isn’t supported by the driver – so it’s hit and miss whether it works properly.
A 3945abg driver was promised on the mailing lists over three months ago, yet nothing has been delivered and there has been no feedback from Sun – I thought the whole purpose of open source was to have an open dialogue between Sun programmers, end users and third-party developers – but I guess it’s all smoke and mirrors for the media.
If non-FLOSS modules were ever prevented from being loaded into the kernel, then a FLOSS project would be started to make a module to ‘load’ non-FLOSS modules.
What is needed is projects like this and more consumer demand for the drivers to be opened. The problem is that the majority of consumers aren’t directly affected by closed-source drivers. They are only affected indirectly, through lower programmer output caused by time wasted dealing with closed-source crap, which leads to longer waits between new feature upgrades etc.
I’m all for open-source drivers, but operating systems should be free to use and distribute closed-source ones as well, so that users don’t have to wait before they can make maximum use of the system.
//If non-FLOSS modules were ever prevented from being loaded into the kernel, then a FLOSS project would be started to make a module to ‘load’ non-FLOSS modules.//
This is a very good description of what the binary Nvidia driver for Linux and its open-source “wrapper” already does.
http://www.nvidia.com/object/unix.html
This project will not make Nvidia open their drivers. However, it might garner support from Nvidia anyway – that’s what happened with the nforce ethernet drivers.
It’s hard enough to write a 3D graphics driver, but to write one without specs, now that’s amazing.
I must say when I read the title I got my hopes up that Nvidia had released their code. But this is still good news. It means that down the line I can switch from the proprietary one to this one in my quest to use more ‘free’ code.
These guys are heroes. I wish them success in their project.
I am considering a new computer purchase. At the moment I am only considering Intel-based motherboards with onboard Intel graphics so that I can get 3D acceleration with open-source drivers. If they are successful, a much broader range of hardware becomes acceptable.
…they will not be as crap as the open source radeon drivers.
I used to think the ATI open source drivers were crap too… then I had the same results with the binary ones from ATI themselves.
Games were OK, but things like Google Earth were not; it would crash or freeze. Firefox would crash on pages that showed videos or Flash.
I stuck an Nvidia card in the same machine, and everything worked sweetly.
From this I concluded 2 things.
1: Nvidia and Linux are a great combination, even if (so far) they use proprietary drivers.
2: ATI must be using some APIs in Windows, as their cards are amazing on that system.
The way I see things, there is very little chance that Nvidia is going to open their drivers. However, if they find the quality of Nouveau reasonable, there’s a slight chance they’ll start committing to it. Let’s hope for the best.
2.1.3 Limitations.
No Reverse Engineering. Customer may not reverse engineer, decompile, or disassemble the SOFTWARE, nor attempt in any other manner to obtain the source code.
http://www.nvidia.com/object/nv_swlicense.html
Not really.
“Using REnouveau is considered “clean-room” reverse engineering and is not in violation of NVIDIA’s license. This application is available from the Nouveau SourceForge CVS server.”
I don’t understand the distinction between “clean-room” and “regular” reverse engineering. I don’t see how developers can do anything except get “down and dirty” when no specifications are available.
Has Nvidia confirmed that Nouveau is not violating the EULA?
I totally encourage open-source driver development, but only as long as it respects EULAs. Nvidia was nice enough to respect us by creating a GPL shim kernel module that dynamically loads the binary driver blob.
Well, read about it, e.g. here: http://en.wikipedia.org/wiki/Clean_room_design . This kind of reverse engineering is not forbidden (and thus allowed) by law in most countries, and Nvidia has no way to forbid it AFAIK.
The simplest solution to the EULA’s prohibition of reverse engineering, decompiling, or disassembling would be to do the work in a place where you don’t have to care about those points of the EULA because the law allows them, such as at least some countries in the EU…
In clean room reverse engineering, you only observe the effects of the software with which you are working. In this case, the Nouveau developers are monitoring the values in the registers on the graphics cards when certain calls get made. They’re working off the observed effects of the program, not the program itself.
To put it in other terms, the developers are building a set of requirements that accurately describe the functionality of an existing piece of software, then they’re building (from scratch) new software that fulfills the requirements. When all is said and done, you have two, provably distinct pieces of software that do the same thing.
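To make that a bit more concrete, here is a minimal sketch, in C, of what the “observe only” side of this can look like. Everything specific in it (the PCI device path, the size of the register window, the one-key workflow) is an assumption made up for illustration; the real REnouveau tool is considerably more sophisticated, but the principle is the same: read the registers before and after a call, and never look at the driver’s code.

/*
 * Sketch of the "observe only" half of clean-room reverse engineering:
 * snapshot the card's memory-mapped registers, let the user trigger a
 * single GL call from another terminal, then list which registers changed.
 * The device path and mapping size below are assumptions for illustration.
 */
#include <fcntl.h>
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>
#include <sys/mman.h>
#include <unistd.h>

#define MMIO_PATH "/sys/bus/pci/devices/0000:01:00.0/resource0" /* assumed */
#define MMIO_SIZE (16 * 1024 * 1024)                            /* assumed */

int main(void)
{
    int fd = open(MMIO_PATH, O_RDONLY);
    if (fd < 0) { perror("open"); return 1; }

    volatile uint32_t *regs = mmap(NULL, MMIO_SIZE, PROT_READ, MAP_SHARED, fd, 0);
    if (regs == MAP_FAILED) { perror("mmap"); return 1; }

    size_t nregs = MMIO_SIZE / sizeof(uint32_t);
    uint32_t *before = malloc(MMIO_SIZE);
    if (!before) { perror("malloc"); return 1; }

    /* First snapshot of every register. */
    for (size_t i = 0; i < nregs; i++)
        before[i] = regs[i];

    printf("Trigger one GL call now, then press Enter...\n");
    getchar();

    /* Second pass: print only the registers that changed. */
    for (size_t i = 0; i < nregs; i++)
        if (regs[i] != before[i])
            printf("reg 0x%06zx: 0x%08x -> 0x%08x\n", i * 4, before[i], regs[i]);

    free(before);
    munmap((void *)regs, MMIO_SIZE);
    close(fd);
    return 0;
}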
Exactly.
There are methods to monitor the behaviour of a piece of running code and then try to replicate that behaviour with your own alternative code (i.e. clean-room reverse engineering), without ever disassembling the original code or copying it in any way (other than reproducing its behaviour in other equivalent code).
Rough example: I might learn calculus from someone’s textbook, but that does not mean that if later in life I write a calculus textbook of my own from what I have learned overall that I necessarily violate the IP of the author of the book from which I first learned.
To violate a copyright, I have to actually copy the original book’s text in substantial part.
With the application they’re using – which they wrote themselves – it would be interesting to see if they could apply it to other situations like ATI and Matrox.
hahahaha and there was me thinking the end of your post would be “Photoshop and Dreamweaver”
😛
I’d love Photoshop to be on Linux/*NIX, but quite frankly, Dreamweaver should be brought out into the street and shot multiple times for creating such hideous code.
Dreamweaver ranks right up there with Flash and Shockwave for ‘technology most commonly abused by morons, who then subject the world to the crap that they produce’ – bloated, ugly websites littered with crappy, bloated Flash that takes an eternity to load.
//I don’t understand the distinction between “clean-room” and “regular” reverse engineering. I don’t see how developers can do anything except get “down and dirty” when no specifications are available.//
“Dirty” reverse engineering would involve disassembling code and/or looking at the original source, then “plagiarising” it by writing different but equivalent code. There is more than one way to write any given “while loop”.
“Clean” reverse engineering would involve taking the original code, running it, stimulating it with known, controlled inputs in a systematic manner, and observing the output that the running code then produced. Then one documents the functional behaviour observed, and then other people write new code to replicate that same functional behaviour using the original recorded observations as a specification.
Dirty replicates the original work, but plagiarised.
Clean replicates only the functionality of the original, not the original itself.
It is the difference between making a fake look-alike copy of a watch and calling it a “Timeks”, versus coming up with one’s own distinct design for a timepiece that still performs exactly the same function as a real Timex … displaying the time.
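And, for illustration, the second half of that process: suppose the recorded observations say something like “writing a colour value to offset 0x40 and then 1 to offset 0x44 clears the screen” on some card. Someone who has never seen the original driver can then write fresh code purely from that note. The register offsets and the behaviour here are invented for the sake of the example; they are not real Nvidia registers.

#include <stdint.h>

/* Offsets taken from the (hypothetical) observation notes,
 * not from any disassembled driver. */
#define HYPOTHETICAL_REG_CLEAR_COLOUR 0x40
#define HYPOTHETICAL_REG_CLEAR_KICK   0x44

static void write_reg(volatile uint32_t *mmio, uint32_t offset, uint32_t value)
{
    mmio[offset / 4] = value;
}

/* A reimplementation written only from the documented behaviour:
 * set the colour, then kick off the clear. */
void clear_screen(volatile uint32_t *mmio, uint32_t rgba)
{
    write_reg(mmio, HYPOTHETICAL_REG_CLEAR_COLOUR, rgba);
    write_reg(mmio, HYPOTHETICAL_REG_CLEAR_KICK, 1);
}

If the new code ends up different in every respect except its observable behaviour, that is exactly the “provably distinct” outcome described above.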
A EULA does not override the law of the land. Reverse engineering is perfectly legal, whatever any EULA (itself of dubious legality) may say.
Reverse engineering for interoperability is perfectly legal.
Probably not a license violation. First, it would only be a license violation if you agreed to their license (which you wouldn’t need to if you weren’t using their driver). Second, at least in the USA, you cannot prevent someone from reverse engineering anything.
I wish only the best for this crowd: while the proprietary Nvidia drivers seem pretty decent, experience using multiple video cards (quad-monitors, eh) shows that they do not play well with other proprietary drivers, in this case Matrox ones.
I hope they succeed in providing source-level drivers for the Nvidia chipsets so the full features of the hardware can be used on OSes other than those supported by the binary blob drivers. Full support under Syllable, Haiku, SkyOS, etc. would be great.
Pledge: http://www.pledgebank.com/nouveaudriver
Now if the DRI guys would get some of the R5xx cards working; they don’t even work with the 2D Radeon driver, so you’re stuck with the less-than-useful VESA driver until you get the binary drivers installed.
Rudolf Cornelissen of the Haiku Project has already reverse-engineered Nvidia chips and written the FIRST USABLE open-source Nvidia driver.
Take a look by yourselves!
http://web.inter.nl.net/users/be-hold/BeOS/NVdriver/index.html
p.s. Beos ROCKS
The latest post seems to be from 2005, and it looks like they barely made any headway for 3D features above basic acceleration.
//The latest post seems to be from 2005, and it looks like they barely made any headway for 3D features above basic acceleration.//
And it suspiciously looks to match the functionality of the open-source driver in Xorg.
If you look deeper at the link, it eventually points to a developer’s page concerning driver development, which has a lengthy section on porting Linux drivers.
All the more power to BeOS/Haiku et al. for providing Nvidia functionality, but seriously, let’s not point to it as being the first open-source Nvidia driver when in all reality it was probably derived from the actual first open-source driver for Nvidia. Xorg is an fdo project that isn’t tied to Linux specifically, but even so, credit is due where credit is due.
They used an old link for the Haiku/BeOS Nvidia driver.
http://www.bebits.com/app/3636
It has up to version 0.80, updated April 11th, 2006.
> … their stupid decision to release it under a GPL licence instead of dual licencing so that it could be ported to run on non-GPL operating systems like FreeBSD/OpenBSD/NetBSD/OpenSolaris.
Why is this a problem? Under the GPL, the user has the liberty to study how the software works. Since the drivers are GPL, the non-GPL operating system developers (BSD, Solaris) are free to study how the Intel drivers work and use that knowledge to develop their own clone of the drivers.
Plus, if the GPL is compatible with the other license, who cares?
The GPL is incompatible with the MIT and BSD licenses if it is used with source under those licenses, in the sense that it would effectively change the license of that source code.
That is not what would happen at *all*. If I am the copyright holder of BSD-licensed code X, I don’t have the legal *right* to incorporate GPL-licensed code Y into code X and then distribute it. To do so would violate the copyright of Y’s holder. If I *chose* to release a version of X under the terms of the GPL, I could bundle Y.
There is a subtle (but HUGE) difference between the situation I described and your claim: in your case, you make the GPL the active agent which goes around changing the license of other source code. That is just not true; the GPL never changed anybody’s license.
In my case, I place the responsibility for a license change on the holder of the BSD-licensed code who *chooses* to integrate GPL code into his project. Big difference.
The GPL is *not* a virus. Developers who choose to ignore licensing issues are asking for trouble. But even if you accidentally include GPL code in your project, the GPL copyright holder can quite easily issue a cease-and-desist against your distribution of the code. She cannot, however, make all of your code magically GPL-licensed.
I was trying to point out that mingling source between the GPL and any other open source license is not allowed unless the other open source license allows its source to be wrapped by the GPL. To keep a project under a non-GPL license requires the project not to use GPL source. This makes the GPL incompatible in this scenario.
If you desire to create a GPL project out of non-GPL source, a BSD (2-clause) or MIT license would be considered compatible from the standpoint of the GPL.
//If you desire to create a GPL project out of non-GPL source, a BSD (2-clause) or MIT license would be considered compatible from the standpoint of the GPL.//
Where did you get this idea from?
I don’t believe it is correct.
AFAIK Linux itself (the kernel) contains BSD code.
It is correct. Check the FSF website for compatible licenses. The BSD 2-clause and MIT licenses may be used within a GPL project.
//The GPL is incompatible with the MIT and BSD licenses if it is used with source under those licenses, in the sense that it would effectively change the license of that source code.//
MIT & BSD licenses are promiscuous licenses and they allow this.
Specifically, those licenses allow anyone to take the BSD or MIT code and incorporate it into another program under a different license.
They even allow their code to be incorporated into a proprietary program.
That is why most open source projects are licensed under the GPL, and not BSD or MIT license. Under the GPL, no-one is allowed to “steal” the code into a closed-source product.
/* That is why most open source projects are licensed under the GPL, and not BSD or MIT license. Under the GPL, no-one is allowed to “steal” the code into a closed-source product. */
That is why so many people use a BSD license; they want to share with everyone without strings attached.
Intel has previously released some source under a BSD license. This allows it to be used much more easily within all operating systems. Why make a GPL driver in the first place if all you (hardware company) want is for everybody to use it? The same can be said about proprietary drivers. In this regard, I see both to have similar restrictions.
Which then requires a massive clean-room re-implementation, as happened with OpenBSD – resources tied up duplicating functionality, all because one group wants to be a Nigel-no-mates and refuses to share their code.
The GPL seems to split the community in half, with a solid wall around one group which refuses to play nice because of ideological issues rather than anything to do with pragmatic intellectual property protection.
“ideological issues rather than anything to do with pragmatic intellectual property”
It’s the wrong place to pull the pragmatic (sic) card. It sounds like an attack on GPL licensing in favor of BSD, rather than trying to understand what benefits the licence brings to the project.
I would love to see you argue anything *pragmatic* in this instance, because I would love you to point out the benefit to the project from a *pragmatic* point of view.
You seriously need to look at why someone might choose BSD over GPL, and there are benefits, and then look at how the project is affected when NVIDIA is not playing ball.
Again I’d love to see your statements justified.
Obviously, since NVidia isn’t going to give away its IP in the form of source code, it makes sense for developers to try to work around the problem. However, I think it’s reasonable to remind everyone that getting the current crop of cards working may be useful now, but NVidia isn’t exactly going to stand still, technology-wise.
The technology will continually evolve and, since NVidia is actively writing drivers for Windows, it stands to reason that reverse-engineered drivers on Linux will lag newer drivers on Windows by a considerable margin. Debugging video chips isn’t easy. It takes time. And resources.
I’m not pointing this out in order to discourage the creation of NVidia drivers. Rather, I think it’s a BETTER IDEA to encourage NVidia to distribute a binary driver for Linux. Having the source code seems, to me, more of an ideological limitation than a practical one. The kind of limitation that really doesn’t discourage driver innovation and availability on Windows.
How do you encourage NVidia to write Linux drivers? Simple: Make it worth their while from a financial standpoint. I’m suggesting that you pay them to do so. I’m quite sure that the mere suggestion of financial incentives will provoke an outcry from the more ideologically-driven folks who will throw around words like “purity” and “independence” and “philosophy”. But you can’t fully expect a profit-motivated organization to put philanthropy ahead of the bottom line, and you’re never going to keep up with the forward pace of innovation, if you’re constantly reverse-engineering.
So compromise. Set up a lab which works with all major video card manufacturers on behalf of Linux. Fund the effort. Raise funds. Hold bake sales. Whatever. Just do it. You’ll be much happier, in the long run. Because once you get NVidia and others to release binary Linux drivers, it just becomes that much easier to get them to write subsequent versions.
My two cents. Feel free to disagree.
//The technology will continually evolve and, since NVidia is actively writing drivers for Windows, it stands to reason that reverse-engineered drivers on Linux will lag newer drivers on Windows by a considerable margin. Debugging video chips isn’t easy. It takes time. And resources.//
It will evolve for a while longer, but not all that much. Eventually the evolution will slow to a point where it is easy to track with open source software, even if nVidia doesn’t want to participate in that effort. As an example, look how easy it is to create open source drivers for new IDE or SATA chipsets. The level of innovation in that area has just settled down and the differences between implementations are minor and easily handled by open source software. The same thing _will_ happen in the 3d hardware area as well. Just need some patience.
//So compromise. Set up a lab which works with all major video card manufacturers on behalf of Linux. Fund the effort. Raise funds. Hold bake sales. Whatever. Just do it. You’ll be much happier, in the long run. Because once you get NVidia and others to release binary Linux drivers, it just becomes that much easier to get them to write subsequent versions.//
Of course people who don’t care about free software are able to follow that path if they choose.
For people who understand and desire the long term benefits of removing themselves from the mercy (and continued existence) of software support from a hardware vendor like nVidia, it makes a lot of sense to support the creation of open source drivers.
After all, look at just how much open source software offers today. The “pragmatists” would have quit long ago telling everyone to “compromise” and just use binary windows because it was crazy to duplicate all that functionality in free software. Thankfully, nobody listened to them back then either. ;o)
//It will evolve for a while longer, but not all that much. Eventually the evolution will slow to a point where it is easy to track with open source software//
I strongly disagree. Specialized processors (such as video) and their specialized memory subsystems have outpaced standard CPUs by a wide margin — to the point that companies such as Intel are feeling left out in the cold (see recent OsNews article about Intel trying to buy talent for its own video processor campaign). If anything, technology development is going to accelerate, not slow down in the years ahead. You simply can’t compare the pace of innovation in video cards versus IDE/SATA chipsets. It’s night and day.
//For people who understand and desire the long term benefits of removing themselves from the mercy (and continued existence) of software support from a hardware vendor like nVidia, it makes a lot of sense to support the creation of open source drivers.//
Face it: You’re inevitably dependent on nVidia — regardless of which direction you go — because you won’t get software to reverse-engineer unless they prosper. So you might as well admit that you’re dependent and work with them rather than try to keep up with copying their technology. Plus, you get a hand in telling nVidia and others what YOU think is important, rather than having Microsoft set the agenda.
//After all, look at just how much open source software offers today. The “pragmatists” would have quit long ago telling everyone to “compromise” and just use binary windows because it was crazy to duplicate all that functionality in free software. Thankfully, nobody listened to them back then either.//
Apples and oranges. It wasn’t necessary to reverse-engineer anything to produce Linux.
//I strongly disagree. Specialized processors (such as video) and their specialized memory subsystems have outpaced standard CPUs by a wide margin — to the point that companies such as Intel are feeling left out in the cold (see recent OsNews article about Intel trying to buy talent for its own video processor campaign). If anything, technology development is going to accelerate, not slow down in the years ahead. You simply can’t compare the pace of innovation in video cards versus IDE/SATA chipsets. It’s night and day.//
It’s not night and day at all. There is only so much you can do with 3D hardware. There are only so many things you can add to the mix to help, and only so much hardware you’ll _ever_ need. Much of the reason for the constant flow of newer video cards is performance. A time will come when there is simply enough performance that newer cards won’t be as necessary or introduce nearly as much change.
Think of cars for instance. Yes, year after year they introduce slightly new features and design shapes. However, do you have to relearn how to drive a car each year? It is the same in computer technology.
There’s absolutely no reason to believe that 3d hardware is so special that it will evolve forever and ever at such a rate of speed as it does today. Eventually it will slow down and pose a much slower target to track.
//Face it: You’re inevitably dependent on nVidia — regardless of which direction you go — because you won’t get software to reverse-engineer unless they prosper. So you might as well admit that you’re dependent and work with them rather than try to keep up with copying their technology. Plus, you get a hand in telling nVidia and others what YOU think is important, rather than having Microsoft set the agenda.//
No, we won’t be dependent on nVidia; they’re not the only hardware vendor. If they go out of business we’ll have open source drivers that will continue to work with the hardware they produced while they were operating. Even if they signed a deal with someone to exclusively support their operating system (say Windows, for example), we would still have our open source drivers. It puts us in the driver’s seat; that’s really the point.
//Apples and oranges. It wasn’t necessary to reverse-engineer anything to produce Linux.//
Sorry you’re just simply wrong here. Many drivers in Linux were reverse engineered from Windows and other OS drivers.
> A time will come when there is simply enough
> performance that newer cards won’t be as necessary or
> introduce nearly as much change.
People also said that the limit of requirements for RAM, or CPU cycles, or hard disk space, or portable storage size, or screen resolution, or whatever had been reached. They said that all the time, and were proven wrong every single time.
As for the domain of graphics processing, and in a general sense specialized parallel processing for virtual reality implementation, I don’t see a sensible limit within the next 50 years.
//People also said that the limit of requirements for RAM, or CPU cycles, or hard disk space, or portable storage size, or screen resolution, or whatever had been reached. They said that all the time, and were proven wrong every single time.//
You’re right of course, but they made the mistake of specifying a specific quantity. You say yourself you can see a limit 50 years out, maybe it’s 100. But that really wasn’t the main point: the exponential _rate_ of change will diminish, just as we see Moore’s law starting to taper off in the CPU realm.
However, in the shorter term there’s lots of reason to believe the situation will improve even in the face of many advancements in the performance of 3D video cards. Much in the same way that CPUs continue to perform better and change large parts of the underlying silicon while maintaining an instruction set that hardly changes at all, there is every reason to believe that even while video card performance continues to improve, and offers VR etc., the underlying programming interaction with the card won’t need to change nearly as much.
But no matter what the technical challenges, doing the best we can is a better option than begging nVidia for a binary blob we have no control over ourselves, and then praying they don’t get bought up by Microsoft etc.
As an aside, once a community developed open source driver exists, it removes many of the reasons nVidia has given about why they can’t provide an open source driver themselves. Perhaps they will have a change of heart and offer patches here and there where they can. But we’ll never create the conditions for that to happen if we just accept the status-quo.
It’s a great project with lots of reason to think it will ultimately succeed, even if some people think we should give up before really getting started.
//You’re right of course, but they made the mistake of specifying a specific quantity. You say yourself you can see a limit 50 years out, maybe it’s 100. But that really wasn’t the main point: the exponential _rate_ of change will diminish, just as we see Moore’s law starting to taper off in the CPU realm.//
Nah. That’s a prediction that you’re basing on vapor. Air.
//Nah. That’s a prediction that you’re basing on vapor. Air.//
Nope, basing it on experience. Look at televisions, computers, any number of technologies. As they mature the rate of change just diminishes. Face it.
//Nope, basing it on experience. Look at televisions, computers, any number of technologies. As they mature the rate of change just diminishes. Face it.//
The reason that televisions had slowed in terms of resolution and quality had nothing to do with technology or capability limits or failure of vision. It had to do with the pipeline that was feeding the sets with bits; namely, the limited, least-common-denominator, low-resolution NTSC/PAL video signals that have become de rigueur. There’s really no reason to attempt to optimize technology that can’t get significantly better due to poor video signals. Even DVD’s relatively low resolution (720×480) has held back the technology. The introduction of HD, though, has caused a brand new growth cycle in this technology. But, again, televisions are slaves to the signals that are provided to them. The market recognizes this fact, and you’re not going to see things advance unless resolution standards improve.
As for computers, they haven’t slowed. Moore’s Law is still kicking along nicely despite lots of predictions from skeptics in recent years. People were saying, “How can Intel possibly increase transistor density and increase processor speeds?” Consequently, no one saw multi-core processors coming. But they’ve changed the technology equation substantially, to the point that people are now talking about 32- or 64-core processors in desktop computers in the not-too-distant future. No one is declaring Moore dead anymore, as it relates to CPU power.
Video chipsets are doing more and more every year. Tasks which were previously being done in software are moving steadily to hardware. Not only that, but the potential for parallelization of processing in both the 2D and 3D pipelines is enormous and has only begun to be tapped.
We’re going to have to simply agree to disagree on this matter. I’m quite optimistic, though, that this technology will still be evolving for at least the next decade — which in terms of technology is foreeeeeeeeeever.
//We’re going to have to simply agree to disagree on this matter. I’m quite optimistic, though, that this technology will still be evolving for at least the next decade — which in terms of technology is foreeeeeeeeeever.//
Well finally something we can agree on :o) Ten years sounds reasonable to me, in fact it might be quite a bit longer than that. But it seems even you agree that eventually, whenever it may be, things will slow down. So in the long term, the issue will resolve itself in such a way that open source will be able to cope quite nicely.
In the shorter term we’ll just have to do the best we can. It’ll mean that we’re behind the leading edge by a fair margin. However, those who actually need the bleeding edge capabilities are the minority. Most people only use a fraction of the video processing they have today. There are some obvious exceptions like gamers, and cad/cam users etc. But most people doing their email, some word processing and web surfing don’t need the leading edge anyway.
There is already really good open-source 3D support for some newish ATI cards. My X700 works _great_ with the open-source r300 3D driver; beryl/compiz plus a few games I tried just to see it do its thing all work better than I could have hoped. Now we need some good open-source support for some newish nVidia cards too.
With any luck, Intel will throw its hat in the ring soon, and maybe even with an open source driver from the get go.
Anyway, there’s no reason to give up the pursuit of a full open source operating system.
> You’re right of course, but they made the mistake of
> specifying a specific quantity. You say yourself you
> can see a limit 50 years out, maybe it’s a 100.
No I didn’t say that.
> But that really wasn’t the main point: the exponential
> _rate_ of change will diminish just as we see Moore’s
> law starting to taper off in the CPU realm.
But Moore’s law only deals with transistor density. The number of processor architectures still grows, and even more so when the transistor density comes to a halt. And it’s the number of different architectures that demands different software to interface with that hardware: drivers.
> Similarly, there is every reason to believe that even
> while video card performance will continue to
> improve, and offer VR etc, the underlying programming
> interaction with the card won’t need to change nearly
> as much.
The CPUs are ISA compatible because the market demands so – nobody would buy an incompatible CPU for his computer at home. In contrast, the different graphics cards are already ISA incompatible, and incompatible at register level. Just try a graphics card with a driver that wasn’t written for it. Currently, nobody demands that they are compatible, and as long as the vendor distributes the drivers with the hardware, that won’t change.
Just note how the whole Nouveau project deals with writing additional drivers for incompatible graphics cards, and *NOT* with making the cards compatible.
> It’s a great project with lots of reason to think it
> will ultimately succeed, even if some people think we
> should give up before really getting started.
Correct. But don’t delude yourself into thinking that Nvidia will only produce compatible cards in the future, and that the aim of the whole Nouveau project is to write drivers for a few cards and everything will be fine.
Open-source drivers for hardware components that lack well-documented standards are crucial for alternative OSes, and that for very pragmatic, down-to-earth reasons (not that “purity” and “philosophy” are bad things, quite the contrary).
– Binary drivers for NVIDIA graphics cards are currently only available for x86 and x86_64 platforms. PPC (and, in the future, Cell) processors may not have a large enough total install base for NVIDIA to consider them important enough to support, but with an OSS driver at least partial 3D support is possible.
– Other alternative operating systems can benefit either directly (if the license is compatible and porting the driver is achievable) or indirectly (by reading/learning/reimplementing, which is even more important for fringe non-open-source OSes) from this work. Given that reinvention of the wheel is a common criticism of FOSS, this is a very important point.
– Most users (desktop users; gamers are a different target group) will consider a significant performance penalty against the binary drivers acceptable, as long as current desktop enhancements (compiz/beryl et al.) work reliably and competitively. Since many users will expect these effects from OSes in the next years, FOSS drivers will allow the makers of distributions to provide a better out-of-the-box experience without the legal grey zones that surround binary-only modules (in the case of kernels covered by the GPL). This is especially important for Live-CD/DVD-based systems, since linking against the kernel currently has to happen essentially every time the system gets booted (although mechanisms like storing the linked module on the hard disk or a separate medium are possible).
– One thing that is imho too often overlooked in the binary vs. open-source kernel driver debate is the possibility to learn from open-source projects. Open-source drivers for current 3D hardware can be great learning grounds for computational science or engineering students, which is something that at least I consider very important.
– Finally, as long as Linux et al. have a market share smaller than 10–15% and the market for 3D graphics cards is left to essentially three large players, I consider open-source drivers a necessity, since binary-only support (or support for older kernels/architectures, etc.) can always be dropped because of economic pressure or changed marketing policies on the part of the vendor.
> I’m not pointing this out in order to discourage the
> creation of NVidia drivers. Rather, I think it’s a
> BETTER IDEA to encourage NVidia to distribute a binary
> driver for Linux.
But the best idea is to do *both*.
//I’m not pointing this out in order to discourage the creation of NVidia drivers. Rather, I think it’s a BETTER IDEA to encourage NVidia to distribute a binary driver for Linux.//
Did you realise that Nvidia already do exactly that?
http://www.nvidia.com/object/unix.html