“Bad architecture and the proprietary lock-in has frozen out innovation,” Tiemann, Red Hat CTO, said. “That makes it difficult to bring new ideas to market. The open-source platform with open-source standards has liberated the playing field.” Tiemann urged chief information officers to approach their planning from this perspective, and he provided more detail in this interview at SearchEnterpriseLinux.
Well, this can be debated…
Regarding proprietary systems: DirectX is a great example of how a proprietary architecture can thrive and innovate. On the other hand, there is the open standard of OpenGL, which hasn’t innovated for ages and is fragmented like hell (each gfx card uses its own GL implementation; see Tom’s benchmarks, which used different versions of Doom games in order to play well with each gfx card – that sucks).
Of course, Tiemann talks about proprietary Unix here, but putting all proprietary software in the same sack would be a mistake.
> That makes it difficult to bring new ideas to market. The open-source platform with open-source standards has liberated the playing field
If Tiemann is talking about patents here, then those patents can be just as problematic for an open source project as for a software company. There is no “stagnated innovation” in the closed source field. People in their cubes are coding whatever they are asked to code and getting paid at the end of the month. And if they stumble on a patent, they will have to pay up.
The same goes for the open source situation (and the GPL is very clear about patents). The problem is, almost none of the OSS hackers look into software patents each time they implement a new feature in a pet project that they then release for free. This could become a huge issue in the future, and Red Hat has already acknowledged the danger in the recent past…
On what basis do you make the assumption that OpenGL hasn’t been innovative? Have you actually read the DirectX specifications and seen the number of features that have finally been added which OpenGL has had for years?
As for the pace of development, are you expecting some sort of revolution every 6-12 months? OpenGL doesn’t simply sit on an island, oblivious to the world around it. OpenGL is a committee-based organisation whereby hardware and software vendors can come together and address the issues each of them wants resolved. Also, OpenGL is platform and architecture independent; it can’t simply add a cool feature that blocks 98% of the other members just because it *could* possibly give better performance.
Yes, the OpenGL specification process is long and sometimes frustrating; however, I’d rather have a community of hardware and software vendors come to a consensus than one company dictating what everyone else should do.
As for the other parts of DirectX, there has been very little mention of OpenML. Hopefully the OpenGL committee will embrace it and create a unified specification.
Regarding patents, he isn’t talking about that. In fact, IIRC, Red Hat has some patents of its own. No, what Tiemann is simply saying is this: “there are established open standards which the open source community can conform to, and if those standards need to be extended, the source code is available for others to create a compatible implementation”. The best example would be the Kerberos issue, or in fact Active Directory, which includes a large portion of LDAP’s features. Had Active Directory or the Microsoft Kerberos implementation been open sourced, there would never have been an issue, as it would have allowed others to create the appropriate interfaces so that interoperability could occur.
That is what Tiemann is trying to get at. Open source, by its very nature, promotes good interoperability, because it makes creating compatible interfaces possible.
> On what basis do you make the assumption that OpenGL hasn’t been innovative?
Just look around you. OpenGL was surpassed by Direct3D 2-3 years ago. Do I need to write pages and pages listing which functions have been implemented and which haven’t?
Please use the right subject line when replying.
OpenGL was surpassed by Direct3D 2-3 years ago.
Pardon me for saying this – that is complete nonsense. In terms of functionality, OpenGL as implemented by the major vendors (ATI and NVidia) is as complete as D3D, and in some cases better or faster implemented.
In terms of ease of use and conforming to standards, OpenGL is still ahead of D3D by some margin. May I also remind you that OpenGL 1.1 applications can still run today because the original standard was well defined, unlike the constantly changing mess that D3D has become.
Yes, OpenGL is a clear example of how good architecture and open standards are better than a bad architecture and closed standard, especially one that could only survive with the force of monopoly behind it.
I agree with most of what you say.
I love OpenGL for its clean and logical API, speed and portability.
That said, the main problem with OpenGL is that it is a standard that until a few years ago was unmatched (that ended with DirectX 8.0 or so).
However, OpenGL is moving to version 2.0, which will be able to handle modern graphics cards’ capabilities (Pixel Shaders, Progra) without relying on each card vendor’s extensions. And it is mainly these extensions that introduce incompatibilities between different ports of the same game, as some rely on the presence of these extensions with no fallback…
OpenGL 2.0 will be incompatible with previous specifications, so legacy applications will need to be ported; however, IMHO, it will continue to be an industry standard, not just a game-industry standard like Direct3D/X.
Portability is one main issue here (besides being a standard), as other gaming markets begin to emerge (Linux, OS X, embedded devices,…).
As for patent issues, I think there are things that should never get patented, such as the Recycle Bin and paper clips ;-).
Another thing (OOT): Eugenia, can you make this box where I’m typing a little bit wider? It feels a bit tight.
Well said…
>Eugenia, can you make this box where I’m typing a little bit wider? It feels a bit tight.
What browser are you using? It is quite wide already in most of my browsers… Email me about this issue please, don’t reply here.
Bad architecture and the proprietary lock-in has frozen out innovation
I agree about the bad architecture, but only as it applies to Red Hat Linux itself. Nasty legacy file system layout, nasty device namespace, nasty little config files scattered seemingly at random throughout the entire OS, nasty driver installation – who wants to recompile XFree86, a driver, the kernel, et cetera in order to upgrade a graphics card? Granted, sometimes you won’t have to recompile. Lucky you. However, there shouldn’t be ANY element of luck; it should be a simple matter, like on Win2K or OSX, or even Win95 for that matter.
Bad architecture requires the user to log in as root to burn a CD. Bad architecture requires root to install basic, necessary apps. Bad architecture requires tedious permission inspection and a command-line interface when things screw up. Bad architecture allows any app to crash the GUI, even a web browser (Galeon, anybody?).
Red Hat’s main focus is UNIX-to-Linux migration, but the problem (at least for Red Hat and other UNIX cloners) is that the future isn’t 1970s tech.
I wouldn’t trade a penny for Red Hat’s stock in two years.
Regarding proprietary systems: DirectX is a great example of how a proprietary architecture can thrive and innovate. On the other hand, there is the open standard of OpenGL, which hasn’t innovated for ages and is fragmented like hell (each gfx card uses its own GL implementation; see Tom’s benchmarks, which used different versions of Doom games in order to play well with each gfx card – that sucks).
Based on this statement, it seems quite obvious that you have either never developed any kind of visualization app with both OpenGL and Direct3D, or you are a glutton for punishment.
With respect to varying OpenGL performance/implementations: this has more to do with Microsoft’s almost total withdrawal of OpenGL support from Windows. MS no longer provides a basic MCD framework into which all OpenGL device-specific drivers latch to expose their unique functionality and optimizations. Instead, since Windows 2000, MS has forced each graphics card manufacturer to build its own complete ICD interface to its hardware, effectively forcing every graphics card maker to re-implement OpenGL on its own. Microsoft’s software OpenGL renderer and libs no longer allow acceleration hardware to latch into their graphics pipeline.
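For what it’s worth, an app can at least detect at runtime whether it landed on Microsoft’s unaccelerated software path instead of a vendor ICD. A minimal C++ sketch, assuming an OpenGL context has already been created and made current (Microsoft’s software implementation is known to report itself as “GDI Generic”):

#include <cstdio>
#include <cstring>
#include <GL/gl.h>

// Must be called with an OpenGL rendering context already current.
// Microsoft's software-only fallback reports "GDI Generic" as its
// renderer; a vendor ICD reports its own strings (NVIDIA, ATI, ...).
bool usingSoftwareFallback() {
    const char* vendor   = (const char*) glGetString(GL_VENDOR);
    const char* renderer = (const char*) glGetString(GL_RENDERER);
    if (!vendor || !renderer)
        return true;  // no usable context: assume the worst
    std::printf("GL_VENDOR: %s\nGL_RENDERER: %s\n", vendor, renderer);
    return std::strstr(renderer, "GDI Generic") != 0;
}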
Yeah… Right…
“(each gfx card uses its own GL implementation; see Tom’s benchmarks, which used different versions of Doom games in order to play well with each gfx card – that sucks).”
This is incorrect. Tom’s did not use different versions of Doom games. It was the same game with a different optimized rendering path for each generation of cards. Believe it or not, Direct3D games actually have to do the same thing – mainly because of bugs, or to avoid certain code paths that are really slow on a specific generation of cards.
> It was the same game with a different optimized rendering path for each generation of cards.
Same thing. Same result.
Regarding the “correct subject heading”, I am using one. I use a direct reply so that people are not confused about who I am referring my remarks to. If this chat continues, there will be RE: RE: RE: RE: RE: RE: and no one will bloody well know who anyone is replying to.
Just look around you. OpenGL was surpassed by Direct3D 2-3 years ago. Do I need to write pages and pages listing which functions have been implemented and which haven’t?
A better idea would be for you to actually write a program using OpenGL and DirectX and point out the so-called “failings” of OpenGL.
Doom doesn’t use OpenGL; it drew straight to the video card using mode X, and it has to be retrofitted to use OpenGL. You mean Doom III, which uses such complicated OpenGL instructions that different cards render at different speeds depending on which render path is used.
Yes, this is a problem with OpenGL, but only Carmack seems to be having it. I haven’t heard of any other programmers having problems with the extensions – probably because Carmack is at the leading edge, and others at the edge are either not talking or not using OpenGL.
http://www17.tomshardware.com/graphic/20030512/geforce_fx_5900-10.h…
As for DirectX vs. OpenGL or any other library, the truth is that whatever you are used to is better.
Carmack chose OpenGL because at the time Direct3D was doing things in a very stupid way; OpenGL was just better. Microsoft has had to work very hard to get DirectX where it is today – it hasn’t always been so good. Also, DirectX is horribly flawed in that it is not cross-platform. This is not good for the gaming market, because hobbyist programmers (the ones who become professionals later) can’t always change to a specific platform due to cost.
To start with DirectX you need Windows and a Windows compiler – not as cheap as a Linux GCC OpenGL solution, which can also port easily to OS X.
>>It was the same game with a different optimized rendering path for each generation of cards.
>Same thing. Same result.
Not really. You painted a picture of OpenGL as a fragmented and incompatible mess. This isn’t true. OpenGL extensions provide a clean and graceful mechanism for checking the capabilities of different hardware, from OpenGL 1.x and up. Compare that to Direct3D, where the *whole* interface changes between Direct3D versions – and that doesn’t even take into account driver quality across different versions. The “incompatibility” between different vendors that you refer to was a variation in shading tech that has now been resolved, not unlike the shader mess Direct3D found itself in. OpenGL isn’t fragmented; Direct3D is fragmented.
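To make that mechanism concrete, here is a minimal C++ sketch assuming a current OpenGL context. The extension name is real, but the path-selection logic is a hypothetical illustration in the Doom III spirit, not id Software’s actual code:

#include <cstring>
#include <GL/gl.h>

// GL_EXTENSIONS is one space-separated string; match whole tokens so
// that a name does not accidentally match a longer extension name.
bool hasExtension(const char* name) {
    const char* all = (const char*) glGetString(GL_EXTENSIONS);
    if (!all) return false;
    const std::size_t len = std::strlen(name);
    for (const char* p = all; (p = std::strstr(p, name)) != 0; p += len)
        if ((p == all || p[-1] == ' ') && (p[len] == ' ' || p[len] == '\0'))
            return true;
    return false;
}

// Hypothetical render-path selection: one game, one API, a fast
// per-pixel path where the hardware offers it, and a baseline
// OpenGL 1.x path that runs everywhere else.
enum RenderPath { PATH_BASELINE, PATH_FRAGMENT_PROGRAM };

RenderPath pickRenderPath() {
    if (hasExtension("GL_ARB_fragment_program"))
        return PATH_FRAGMENT_PROGRAM;
    return PATH_BASELINE;  // graceful fallback, no interface change
}

Old 1.x applications keep running untouched, and new ones opt into whatever capabilities they can see – exactly the graceful degradation that Direct3D’s per-version interface churn doesn’t give you.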
OpenGL 2.0 will be incompatible with previous specifications, so legacy applications will need to be ported
A design goal of OpenGL 2.0 is total support of OpenGL 1.3, to ensure “legacy” applications continue to run. It’s up to you whether you want to make the leap or not.
Just because Tom’s Hardware spouts off about OpenGL doesn’t mean it’s true or unbiased. The last major posturing, while informative, was a thinly veiled slur. As for the constant parroting by games sites that pretend to be written by “journalists”: it would be advisable if they checked once in a while with people who actually know about or use OpenGL before they start spouting their nonsense. Real journalists would be more concerned with getting at the truth than with spouting some half-remembered Microsoft FUD.
I don’t think there is any large corporation whose CEOs don’t take the majority of the profits through huge salaries in the millions and tens of millions of dollars.
These corporations can afford any technology because they can write it off in tax refunds. In the meantime, those foolish enough will stick up for them. I’m sure CEOs find it amusing that people fight for their small group to have all the money, instead of wealth being distributed to the groups of ordinary people who actually earn it through the work that they do.
One big advantage of open implementation is the decentralization of control over knowledge, which encourages innovation rather than vendor lock-in (the withholding of knowledge). Making knowledge available and accessible means it cannot be abused or controlled by a small group or individual. Instead, if knowledge is accessible to a large general audience, more ideas will be generated for applying it across a broad range of domains, which in turn will generate more requirements. Innovation is not just about competing business interests, and not just about which technology will be pushed to sell more units, because any quality of technology can be pushed to sell if there is no choice. The innovation I am thinking about is more about removing existing constraints, so that the architecture can support a more generic, decentralized, scalable, democratic, and accessible application platform.
The platform is supposed to serve the applications, so the applications should decide the platform. I think that open source is about the applications, not the platform, and people would be better off thinking that way. Now some would argue, but let me elaborate on this idea, because the situation is delicate when you only have half of the perspective.

For example, in an object-oriented framework of classes, you have the ability to inherit an object’s implementation and further specialize its behavior in a subclass by overriding a public method or adding new methods – in essence, reusing the design. That is an example of making knowledge accessible; it is not an example of making knowledge open. Yet having knowledge that is both open and accessible is required for users and businesses, because it is the actual implementation of those class objects that gives you control over your technology investment (the applications). If you do not control the implementation, then you do not hold the key to the future of your investment; you were merely provided with an accessible format for rapid development (reuse), and that is only one part of the solution.

If you put your applications ahead of the platform, you might finally realize that control over your application is more than the specialized behavior you added to the vendor’s class objects, and this might lead you to understand the importance of open implementation, because it is intimately tied to your own interests at a level you can control. It is all about putting the power into your own hands instead of the hands of a small group that will put its interests before yours. I think software platform vendors should be service-based, not complete providers, because they can’t be responsible over an extended period of time.
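To make the accessible-vs-open distinction concrete, a small hypothetical C++ sketch (the class names are invented for illustration): you can subclass and override the vendor’s public method, but if the base class ships only as a compiled library, you are reusing a design you cannot inspect or repair.

#include <iostream>

// Hypothetical vendor class: the interface is accessible to you,
// but imagine its implementation ships only as a binary library.
class VendorWidget {
public:
    virtual ~VendorWidget() {}
    virtual void draw() const { std::cout << "vendor default drawing\n"; }
};

// Accessible reuse: we specialize behavior by overriding draw().
// Open implementation would additionally give us the base's source.
class MyWidget : public VendorWidget {
public:
    virtual void draw() const { std::cout << "my specialized drawing\n"; }
};

int main() {
    MyWidget w;
    w.draw();  // our added behavior rests on code we do not control
    return 0;
}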
As more people participate in open source projects (applications), technology that is open and accessible will innovate through groups of people contributing to expand the knowledge base to meet their domain requirements, on top of an architecture which supports exponential reuse independent of controlling interests.
Hahah, do you actually know what a mess Direct3D used to be a few years ago? (execute buffers? aaaargh!)
It took Microsoft many versions to get things right (like anything)..
And, like anything, Microsoft did try to kill OpenGL; that’s probably why you (Eugenia) think OpenGL has been ‘surpassed’ by D3D.
The version of OpenGL that ships with Windows is still 1.1, thanks to Microsoft (a driver issue…), but the standard is already at version 1.4, and soon OpenGL 2.0 will be released (somewhere around summer, I believe), which is far better and way more powerful than Direct3D.
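That 1.1 ceiling is visible in practice: Windows’ opengl32.dll exports only the 1.1 entry points, so anything newer has to be fetched from the vendor’s ICD at runtime. A minimal C++ sketch, assuming a rendering context is already current (glActiveTextureARB is a real post-1.1 entry point; the typedef is spelled out because the stock Windows headers stop at 1.1):

#include <windows.h>
#include <GL/gl.h>

// opengl32.dll only exports OpenGL 1.1; newer functions must be
// looked up through the ICD with wglGetProcAddress at runtime.
typedef void (APIENTRY *PFNGLACTIVETEXTUREARBPROC)(GLenum texture);

PFNGLACTIVETEXTUREARBPROC pglActiveTextureARB = 0;

bool loadMultitexture() {
    pglActiveTextureARB = (PFNGLACTIVETEXTUREARBPROC)
        wglGetProcAddress("glActiveTextureARB");
    return pglActiveTextureARB != 0;  // 0 means the driver lacks it
}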
And lastly, Direct3D always seems to have problems, it’s _the_ reason that so many people have issues with their Windows systems.
OpenGL just seems to work – it always did, and it always will, even on flaky operating systems.
(i.e. try putting a not-so-stable AGP driver on your Windows system: every app that uses D3D will crash and lock up the machine, while OGL will not)
Now excuse me, I’m going to play some nvOpenGL-accelerated game on my ‘Xbox’ (‘X’ as in XFree86).
Question: Why is it that OpenGL looks so much better in rendering than DirectX?
I find DirectX seems to have washed-out textures and poor visual quality, whereas OpenGL visually looks so much better, with vibrant colours and more detailed/sharper objects. Just my observation, and seeing as I deal with pre-press work, I realise my eyes are fairly attuned to visual detail.
You aren’t the only one who’s noticed that
There was a guy at some game programming website who noticed that as well and made a basic app (displaying a triangle that showed 3 different colors – red, green and blue – smoothly interpolated between the 3 corner vertices).
One app used OpenGL, the other D3D (both with comparable state settings, etc).
The result of that discussion (I can’t remember the URL anymore) was that OpenGL does indeed compute its stuff somewhat differently.
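For the curious, the OpenGL half of such a test is tiny. A minimal C++/GLUT sketch, buildable with free tools (g++ plus GLUT on Linux or OS X – incidentally, the cheap GCC route mentioned earlier in the thread); with OpenGL’s default GL_SMOOTH shading model, the three corner colors are interpolated across the face:

#include <GL/glut.h>

// One triangle, red/green/blue corners; GL_SMOOTH (the default
// shading model) interpolates the colors across the face.
void display() {
    glClear(GL_COLOR_BUFFER_BIT);
    glBegin(GL_TRIANGLES);
        glColor3f(1.0f, 0.0f, 0.0f); glVertex2f(-0.8f, -0.8f);
        glColor3f(0.0f, 1.0f, 0.0f); glVertex2f( 0.8f, -0.8f);
        glColor3f(0.0f, 0.0f, 1.0f); glVertex2f( 0.0f,  0.8f);
    glEnd();
    glFlush();
}

int main(int argc, char** argv) {
    glutInit(&argc, argv);
    glutInitDisplayMode(GLUT_SINGLE | GLUT_RGB);
    glutCreateWindow("RGB interpolation test");
    glutDisplayFunc(display);
    glutMainLoop();  // never returns
    return 0;
}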
Guess that’s the reason why OpenGL is marketed as a standard for ‘High Visual Quality and Performance’, unlike D3D, which is/was only marketed as a standard for graphics in games..