At the end of February 2006, the Open Graphics Project team released schematics for their development board, OGD1. An article on KernelTrap was written about this, explaining the release under the GPL and the nature of PCB schematics (logical connections between chips) and artwork (physical component placement and circuit trace routing). Just last Friday, the first draft of the artwork was announced. For the most in-depth information, check out the OGD1 page on the OGP Wiki, which links to PDFs for each of the routing layers and a composite image of all of the layers.
I’ll be the first in line to pre-order my board.
If you want 3D graphics with an open source driver, you can save a lot of money (and fuss) and at the same time get much better performance by simply buying an el-cheapo board with integrated Intel graphics. Sure, the performance will suck compared with ATI and nVidia chips, but:
* Open Graphics Project’s hardware will suck a lot worse (even they say so).
* OGP hardware will be pretty expensive if it ever becomes available.
* If Intel’s driver availability does not make ATI and nVidia consider opening up their drivers, rest assured OGP’s won’t either.
* Intel chips are available now; they are very inexpensive and more than enough for desktop acceleration.
This project is redundant; I’m pretty sure it will never come to fruition, but the engineers doing it will have fun and learn a lot, which I guess is good.
It is easy to criticize other people’s work. Why don’t you go do something?
I’m not criticizing, though I would be free to if I saw it fit. I admire and envy their ability to do what they are doing: I wish I knew that much about graphics hardware and VLSI engineering.
However, I still think, and I’m free to express my opinion, that the project is made redundant by the fact that there are 3D graphics chips on the market, available everywhere at very reasonable prices, offering reasonable performance, with free drivers that already come included with the Linux kernel and X.org.
They want to develop a new family of 3D chips? Well, fine. But they are not going to solve the nonexistent problem of nonexistent chipsets with nonexistent open source drivers. You wish ATI and nVidia would open up their drivers? I do too, but this initiative won’t drive us any closer to that, **in my opinion**.
Edited 2006-05-30 23:02
Yes, it is still possible to get cheap graphics cards, like the Radeon 7000, that are supported by OSS drivers. The problems are that the supply of those cards is dwindling, and they’re not really totally supported by OSS drivers. They’ve never published quite enough information to FULLY support all of their features, and as you get into later generations like the Radeon 9000, the OSS driver acceleration code is okay, but doesn’t support all of the best features. nVidia have never published docs on their chips. Matrox also don’t publish everything.
Please, read my previous post. I’m not talking about ancient Radeons; I’m talking about Intel integrated chipsets.
Intel IGPs are not only current, they are by far the most popular 3D chipset on the market, present in many inexpensive motherboards and many inexpensive desktop and portable computers. Why, they are even present in all of Apple’s Mac mini and MacBook computers!
The performance of this chip is not good enough for games (though it seems great improvements are coming in upcoming models), but neither will OGP’s be. However, it is more than enough for all the AIGLX or Xgl effects you may want, it is very inexpensive, it is available now, and it’s got good open drivers.
In short: you want 3D, you want open drivers, you want to convey your message to ATI and nVidia? Go buy Intel! I know it’s unfashionable to defend the winning horse, but in this race it is doing the right thing.
Edited 2006-05-31 06:44
This is more than just drivers. And sure, the initial generation will probably suck, but as they improve I’m sure they will be more than good enough to challenge cheap GPU manufacturers like SIS, Via, Matrox, etc.
Edit to respond to your second post:
Can you legally take the design from one of the GPUs you refer to and manufacture new chips based on it?
Because as far as I know, this is going to be the strength of this project: that it can serve as a base technology for a new generation of GPUs. It’ll take time, but I think it’s possible.
Edited 2006-05-30 23:06
Hmm, a good move would be for Matrox to make its GXXX series available and allow the open source community to upgrade and update the designs. They’ve already fully open-sourced their drivers and support OpenGL. With the GXXX series, the only thing required would be an overhaul of the 3D engine to boost performance; apart from that, the GXXX series would provide a good base to build upon.
How do you know this about the final OGP performance?
Do you have chip level access to the 3D API of ATI or nVidia’s chips for that matter?
* The OGP hardware will start slow and get faster with open source software updates.
* OGP is already planned to be expensive for the first designs and cheaper later. Since when has it been news that video cards get cheaper over time? Also, clearly you have not looked at the high end video cards from the very people you are supporting. They are damn expensive too, and the 3D APIs are not open on those cards either; in other words, I can’t use them anyway.
* Oh, yes it will. When you mentioned other 3D chipsets out there, you forgot to mention that none seem to give out 100% of the information about their 3D APIs. Correct me if I am wrong. Also, the FPGAs planned for the first OGP cards can be replaced with faster ones in the future; this in turn means faster 3D cards, and the companies you are defending will have to contend with competition that has a feature their cards don’t: open APIs.
* If the Intel chips were good enough, how come those high end graphics cards are selling? Clearly the cheap chipsets are not good enough for a large number of people.
Working in BeOS, we already have only limited OpenGL hardware support. The difference is we could tune this card’s command set to that of BeOS and get far better performance than any off-the-shelf chipset whose manufacturer refuses to release the 3D API. We are certainly not getting a binary driver from them.
Additionally, with open access to the hardware, doing image processing on the card instead of the CPU is a natural solution for running on older, slower machines. For some reason you think the only use of an open design is as a video card. When I can get mine, I doubt very much that that is what I will be using it for, but I will still be using the video output. Right now I’m thinking high-speed logic scope.
> How do you know this about the final OGP performance?
Well, I’ve been reading interviews with OGP people. They made it very clear they could not, and did not intend to, compete with ATI and nVidia in performance.
> If the Intel chips were good enough how come those high end graphic cards are selling? Clearly the cheap chipsets are not good enough for a large number of people.
Intel chips are good enough for desktop use. High end cards are necessary for good gaming. OGP hardware will have cheap chipset performance.
> OGP is already planned to be expensive for the first designs, cheaper later
They will get cheaper if they sell enormous numbers, which I doubt will be the case.
> you have not looked at the high end video cards from the very people you are supportting
> the 3D APIs are not open
Intel has no high end video cards. Their APIs are OpenGL and DirectX, just like everybody else’s. Their drivers are fully open.
Please, stop talking about the companies “I am supporting”. I’m supporting nothing but free hardware drivers for Linux. I don’t want the kernel contaminated with proprietary modules that will hinder its progress. ATI and nVidia are not providing free drivers; Intel already is. So, to put my money where my mouth is, that is the hardware I should choose.
If what I want is good gaming performance, then I’m running proprietary games under proprietary Windows, so the whole subject of the free drivers is meaningless.
Edited 2006-05-31 09:43
Cool! Just keep in mind that OGD1 isn’t actually a graphics card. It’s an FPGA-based development board, to be used during design and testing of a graphics card, so it’s 10x the cost of a graphics card. But there’s a sizable market for FPGA-based development boards, so sales of this will be used to fund the graphics card development. The more of these we sell, the sooner we all get FOSS-friendly graphics cards.
Curiously enough, the KernelTrap article states that the board is PCI-X. Why would they work with something dated, with no future, that never really saw the desktop? Perhaps they meant PCIe? Or maybe they are focusing on other parts first? Anyone know more?
—
Edit:
Found in the Kernel Trap discussion
> We’ve decided that the simplest way to handle supporting PCIe is to support PCI-X and use a PCIe-to-PCI-X bridge chip. This will add some latency in accessing the chip, but since the GPU is being designed around bulk DMA transfers, this won’t be a problem.
Still doesn’t explain why they don’t design for PCIe directly (are they trying to be backwards compatible with PCI?). It seems against their mantra of being forward-thinking, and it’ll probably require a lot of work to take out later for efficiency’s sake. Still, I’m glad that they seem to be making progress and aren’t pure vapourware, though I’m still dubious as to the ultimate success of the project.
Edited 2006-05-30 16:25
No, they mean PCI-X. What they’re concentrating on for the first run of graphics boards is PCI/PCI-X, and later AGP. The controller core for all of these interfaces is similar, and PCI and AGP together cover almost all of the machines out there.
The beauty of FPGA designs is that you don’t have to design for the latest and greatest tech, which also tends to be the hardest to do in the first place.
Design for a lower standard, confirm your base design works, then start improving it in steps, as it is just a software update away. I will not be surprised if the first shipments only work as a framebuffer with a 33 MHz PCI interface, but then features will start to get added as needed, and as the programmers (because that is what you are doing, just at the hardware level) learn more, the board gets better and better, also supporting more buses as it advances.
> Design for a lower standard, confirm your base design works, then start improving it in steps as it is just a software update away.
You cannot software update a PCI-X card to PCI Express, because the connectors are physically different and electrically very different.
They got me the moment I read it was going to be a programmable FPGA board with hardware to support video.
For BeOS this is just what I need. Even if I never get around to using it as a true video card, I can think of so many projects where I would love having simple video output available.
There are a number of reasons to use PCI-X:
– Backward compatible with PCI
– FPGA prototyping board users are more likely to want to use PCI.
– We don’t have to use anyone else’s IP to do the host interface. PCIe requires signal speeds that we can’t do in an FPGA.
– Even PCI32/33 isn’t much of a bottleneck for desktop graphics performance.
Note that to bridge between PCI-X and PCIe requires someone else’s IP, but it’s an external chip, so we don’t care. Going for PCI-X gives us the broadest user base.
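To put rough numbers on that last point, here is a back-of-the-envelope sketch. The 50% bus efficiency and the 1600×1200 desktop are my own illustrative assumptions, not OGP figures:

```python
# Rough bandwidth check for 32-bit / 33 MHz PCI as a graphics bus.
# The efficiency factor and screen size below are illustrative guesses.
bus_width_bytes = 4                    # 32-bit PCI
clock_hz = 33_000_000                  # 33 MHz
peak_bw = bus_width_bytes * clock_hz   # 132 MB/s theoretical peak
usable_bw = peak_bw * 0.5              # assume ~50% achievable with bursts

frame_bytes = 1600 * 1200 * 3          # one full 24-bit UXGA frame, ~5.8 MB
full_redraws_per_sec = usable_bw / frame_bytes

print(f"peak: {peak_bw / 1e6:.0f} MB/s")
print(f"full-screen redraws/sec: {full_redraws_per_sec:.1f}")
```

Under those assumptions that is only about 11 full-screen repaints per second, so PCI32/33 would choke on constant full-screen animation; but ordinary desktop use repaints small dirty regions, which is presumably why it isn’t much of a bottleneck for desktop graphics.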
> PCIe requires signal speeds that we can’t do in an FPGA.
You must just be using cheap FPGAs; all the high-end ones have built-in serdes. But the point about IP stands and is probably much more important.
Yes, and we may in fact be able to use the built-in serdes (assuming the XP6 has them), but I just don’t want to deal with that complexity right now. We’ll get to it before it’s a problem.
We still want our own controller IP, so it can be under the GPL.
In any case, when can I order two? I want to build a system from scratch, and this is an obvious way to go.
Exciting to see this. It’s good to innovate in a very organized way with open hardware; it’s very fun. Imagine doing this with a space station, versioning very incrementally because it’s open. You would get pretty close to exactly what you wanted. Customizable hardware.
It took Ubuntu just 10 weeks to port cheaply to OpenSPARC!
No pay to play.
Also, I think the PS3 Cell chip has had its hardware opened non-commercially.
I have a lot of sympathy for the project, but I can’t really support it, since it is still vested in the current x86 infrastructure.
I just bought a D805 dual core at 2.666 GHz, and it isn’t yet any faster than my 2 GHz XP2400 Athlons, but it’s still early days. I couldn’t install W2K on it, so I tried BeOS 5 R1. Amazingly, it just worked, except the Intel mobo chipset graphics needs safe mode to get me up in UXGA mode. That has left a bad taste in my mouth for what is supposed to be state-of-the-art x86, although getting BeOS (almost) working on it was a nice bonus. No AGP bus, so I now have to look for a new BeOS-supported PCI or PCIe card.
From time to time I look up what’s available on the open market for FPGA boards (usually Xilinx) that have all the features I would need to build a full-blown PC, but without nVidia, ATI, and especially no x86 or any DRM in sight.
On comp.arch.fpga one can find a shipload of comments on all the current standard interfaces, and even soft core CPUs, to convince one that FPGA computing is viable, though probably expensive, and it won’t be like normal PC computing.
I would like to see PCI, PCIe, and a high-res video output DAC, at least UXGA or better (or two, please), plus bridges for HD controllers and networking, USB, PS/2, FireWire, etc. All these interfaces are supported but usually need an extra PHY chip for each type, though these are usually pretty cheap in volume. The IP for these interfaces may not be so cheap (some will be damn expensive), but many are available on openchip.org. Some of the edu boards include entry-level IP for all the onboard PHY ports and lots of good app notes.
Now, if I build to my dream specs, it won’t meet your specs, so I am doomed to build only one prototype, possibly for 100Ks of development work. Can you find in any computer store a PC board that has every single feature you want? You usually build it from components.
Giving up on the x86 really means the CPU performance is going to go to hell, since most FPGA CPUs are in the 100 MIPS region at best and crude by any standard, if one plays in the usual single-threaded computer architecture space. A multithreaded design, though, can really make a lot of difference if one is prepared to deal with possibly 40 or more hardware threads.
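A quick sketch of that trade-off, using the ballpark figures above; the single-core x86 MIPS number and the parallel-efficiency factor are my own assumptions:

```python
# Aggregate throughput of many slow hardware threads vs one fast core.
soft_core_mips = 100          # FPGA soft cores: ~100 MIPS each (per the post)
hw_threads = 40               # 40+ hardware threads (per the post)
parallel_efficiency = 0.7     # assumed loss to communication/synchronization

x86_mips = 2000               # assumed ballpark for one single-threaded x86

aggregate_mips = soft_core_mips * hw_threads * parallel_efficiency
print(f"aggregate: {aggregate_mips:.0f} MIPS vs x86: {x86_mips} MIPS")
```

The catch, of course, is that the aggregate only beats the single core if the workload actually splits across 40 threads.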
I see quite a few edu FPGA boards from Digilent, Xess, and others (Google for FPGA boards, education, etc.) that are theoretically almost complete, except the interfaces are really all junk. Many have VGA resolution, 640×480 with 3 or 4 bits of color, i.e. no RAMDAC. Many can barely drive a HD with an ATA controller, maybe at 33 MB/s. Many have 10/100 Mb Ethernet PHY components. Many have a USB 1.1 PHY layer too. But in reality they all suck, unless you want to build a 15-year-old PC. Some boards, from say Avnet, actually do have an array of high-performance interfaces, but cost several $K, and you still have to buy or develop IP for them.
The proposal I am working toward is to do things very differently: build an FPGA module about the size of a credit card that has space for a number of small soft core processors and uses its relatively few I/O pins to do a few interfaces well, with the addition of special chips per function. This lowers the risk by separating development board by board, and each board can be upgraded over time. Perhaps each board could be developed by other parties. It would look a great deal like the HydraXC board from Germany, but with extra custom ports.
The video board would use a RAMDAC up to 300 MHz or so, and might well support DVI as well as VGA, possibly with 2 heads.
The user interface board would have the usual PS/2 ports, USB 1.1 & 2.0 bridges, 10/100 networking, etc. It is relatively slow on its ports but focuses on lots of serial standards, maybe even FireWire and external SpaceWire.
The HD board would have 1 or 2 PATA ports for current HDs, etc., with maybe SATA PHY parts added later.
Every time you want a new interface, i.e. more video heads, more HD ports, more keyboard/mouse ports, you add these standard modules with their specialized interfaces; hence, as many of anything as you want. Since each FPGA board is mostly used to hold copies of the CPU, with extra IP for interfaces, CPU performance goes up with the number of boards, as it should.
The primary FPGA board, for computing grunt work, will use RLDRAM with more expensive Virtex FPGAs, mostly for multiple processors. RLDRAM has 20x the throughput of SDRAM for true random access and is the basis for my CPU work. The rest of the boards are more straightforward.
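The RLDRAM claim is plausible at the datasheet level; here is a rough model of where a number in that range comes from (the exact nanosecond figures are my assumptions, not measurements):

```python
# Random-access throughput: conventional SDRAM pays a full row cycle
# (tRC) per truly random access; RLDRAM hides tRC by interleaving its
# 8 banks, issuing a new access nearly every bus cycle.
sdram_trc_ns = 60.0           # assumed row-cycle time for SDR/DDR SDRAM
rldram_issue_ns = 3.3         # assumed issue interval at ~300 MHz

sdram_rate = 1e9 / sdram_trc_ns      # ~16.7M random accesses/s
rldram_rate = 1e9 / rldram_issue_ns  # ~300M random accesses/s
speedup = rldram_rate / sdram_rate
print(f"speedup: {speedup:.1f}x")    # in the neighborhood of the quoted 20x
```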
The peripheral boards can use Spartan-3s and SDRAM, since their CPUs are more of a bonus, used for protocol handling, buffering, etc.
All these boards hook up with high speed LVDS serial links, like SpaceWire links, which makes it into a Transputer card array. Is this a new idea? Well, it was done 25 years ago at 10x slower signal speeds; they were called TRAMs and were a bit bigger.
Now it starts to look easier to find boards that meet these specs, or build my own that do, and combine them all into the system I’d really like. Me, I’d include several of each to build the system I really want.
The PC104 standard is a stackable model, but a bit too low performance.
The OS won’t be MS, or even BeOS/Haiku, or Linux, but the OS I have in mind will look a lot like BeOS, distributed over these smaller cores. That was also done 25 years ago, but looked more like an Atari workstation.
So what’s the deal with FPGA graphics? I would much prefer to have a plain video buffer driven by lots of tiled processors in the same FPGA; the hardware only needs to share video RAM among them and let the CPUs do the algorithmic grunt work. There’s nothing to stop the CPUs from having specialized graphics coprocessors added permanently or reconfigured on the fly.
The FPGA processor side is described in a paper at wotug.org; the software paper will be out later.
I appreciate that you put in so much effort to write such a detailed write-up, but can you please polish it up? I mean, proofread it.
This is because I’m really having a hell of a time trying to read it. I simply gave up in the middle of the comment, since I still can’t make out your stance: do you want them to support FPGA, or x86/PCI/PCIe? You mention it more than 3 times already, and yet I cannot comprehend your case…
I am halfway torn on it. If a good FPGA board with UXGA on any of PCI, PCIe, or AGP were available for less than, say, $500, I’d order one too, as a stopgap to put my own hardware content in. It must have a decent-size Xilinx part on it that can be used with WebPack, or use Altera or Lattice equivalents. I haven’t seen any FPGA boards with decent VGA out there, and I know where to look. I would really prefer that one of the regular FPGA board vendors did this, for quality reasons and the knowledge that they will stick around.
I doubt the ASIC idea can ever fly; the costs are way higher than $2M and growing. There are some ways to lower the FPGA costs with conversion or by using special fixed, tested IP parts, but that removes the whole point of being reprogrammable. An ASIC-based board will be of little interest to me and others if it can’t be changed.
In the long term I do want to rid myself of most of the PC, starting with the CPU & video. The essential idea I am pushing is to break the PCB design into smaller parts and allow the customer to assemble them at will. The same idea as PC104+ stacked cards, really, but based on a much smaller form factor with faster parts, such as Nallatech’s DIME standard. PCI doesn’t even have to be in the system if fast serial links are plentiful. VGA FPGA on one board, soft processors and interfaces on another, and so on, possibly mounted on a simple backplane.
A problem with FPGA boards is that they date rather quickly, since new FPGAs come out rather often. I am still designing with Virtex-II Pro while Virtex-5 is being announced as shipping.
A bigger issue with FPGAs is cost vs. volume. The vendors talk in their marketing about parts for a few bucks in 250K volumes, but when Joe FPGA Designer tries to design something with those parts, they either don’t exist anymore or are priced 4-10x higher than marketing suggested. Then at the board level the parts need to be marked up 4x again to make some profit. In other words, low-cost FPGAs are now really for high-volume use as ASIC replacements.
Lurking or searching in comp.arch.fpga will give a fair bit of background.
One suggestion would be to pick one of the better board vendors with a variety of boards and some common method of adding a daughter board, and just do that daughter board with a high-quality RAMDAC and DVI interface. That allows the customer to choose the FPGA board they really want and upgrade it with video. Trouble is, there aren’t many choices that come to mind that are widely supported; maybe the DIME interface.
Hope that helps.
Quote: “I have alot of sympathy for the project but I can’t really support it since it is still vested in the current x86 infrastructure.”
You couldn’t be more wrong. I don’t know where you get your information from, but one of our top goals is to make this as broadly compatible as possible, so that any machine with any OS that has a PCI slot can use this board.
As long as PCs still have PCI slots, that’s a good goal, but the number of slots seems to be going down pretty quickly on the smaller mobos.
By x86 infrastructure I mean the whole Christmas tree architecture, with just one CPU socket at the base and north and south bridges handing out limited bandwidth to specialized interface standards, which are almost entirely serial these days, except for the memory channels.
This project uses the FPGA for building one of those peripherals.
I see FPGAs as being far more capable of replacing the whole system in a distributed fashion, a far more ambitious goal.
This is really cool to see — engineering in action, in public, from the ground up. Hopefully it’ll be something that universities can point their students to as an example of practical engineering.
For me, the downside (the “but”) is that I have no actual need for the hardware. Otherwise I’d definitely go and buy one (or two….)
So do I understand correctly that all 2D operations are done on the graphics card today, where they were done on the CPU before?
What does the graphics card offer other than native support for managed OpenGL or Mesa (better to use for this, since it’s open source)?
I can write my own graphics language, and have tried, but because of driver issues, especially working with the monitor and resolutions, it’s a no-go; and then I am locked into a proprietary graphics card.
I could try writing just 2D to make it easier, but all 2D is done on the graphics card now, so there are more problems from rented hardware. It’s just a nightmare going through the various standards, from GUIs like Qt to X.org, all fighting for that OpenGL spot. I just hope this creates more graphics languages, like 10 of them.
Do I really need a graphics card, or can’t I just use a multicore chip and use the other cores for graphics?
Ken Silverman wrote a voxel-based engine, VOXLAP, that tries to avoid these issues with simplicity: just creating tiny cubes everywhere, not having to deal with resolution, although he uses SDL as well. No graphics language; all 3D. Of course, that’s probably too simple.
A 100 MHz graphics engine can outperform a 2 GHz CPU, because the GPU is specialized for graphics and dedicates all of its hardware specifically to that. The pipelining in the GPU provides a degree of parallelism impossible with one or even four CPUs.
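A toy model of why the raw clock numbers are misleading. The pipe count and cycles-per-pixel figures below are illustrative assumptions, not real chip data:

```python
# Specialized, pipelined GPU vs general-purpose CPU on raw pixel throughput.
gpu_clock_hz = 100e6       # 100 MHz graphics engine
gpu_pipes = 8              # assumed number of parallel pixel pipelines
gpu_pixels_per_clock = 1   # each pipe retires one pixel per clock when full

cpu_clock_hz = 2e9         # 2 GHz CPU
cpu_cycles_per_pixel = 50  # assumed cost of texturing/blending in software

gpu_rate = gpu_clock_hz * gpu_pipes * gpu_pixels_per_clock  # 800 Mpixels/s
cpu_rate = cpu_clock_hz / cpu_cycles_per_pixel              # 40 Mpixels/s
print(f"GPU advantage: {gpu_rate / cpu_rate:.0f}x")
```

The GPU wins despite a 20x slower clock, because deep pipelining keeps every stage busy every cycle, while the CPU spends many cycles per pixel doing everything in sequence.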
Thanks, I was also interested in the CELL processor but not sure if that would ever become mainstream.
The cell processor is great, but it’s optimized for supercomputing applications. That’s closer to graphics, but still not quite optimal.
OK, I meant the one being used by the PS3 (or is it not that great for graphics?). I think it’s open source, non-commercially (not GPL like you), now. The PS3 would be nice if it were an open system. Maybe GPL 3 will help change that. I just want a regular computer.
http://www.eetimes.com/news/latest/showArticle.jhtml?articleID=1631…
Cell specs and tutorials:
http://www.linuxgames.com/news/feedback.php?identiferID=8432&action…
Thanks very much for your project. I will get a card when prices come down or as soon as I can. Right now I have an x86, of course.
I wanted to get into hardware too but I find doing software helps me to at least understand and offer suggestions.
I don’t have detailed knowledge of the Cell processor, but based on what I know, I can guess that if it were used for graphics, it would be much better than a regular CPU. But if you want to compete on power consumption or performance, a specialized GPU is still going to beat it.