The docs are done DEC Alpha style! They are well organized and not just rip-offs of the Intel manuals.
They are written in easy-to-understand English and try to get you going as fast as possible. A _must_ read even for IA-32 developers!!
Thanks for the app note reminder!
Although I am quite happy with just a 1GHz TBird, I am not so happy with the noise or heat from even such an old chip.
Now, like everyone else, I lust after 1 or 2 or more Hammers for a massive HT-connected CPU. I do not know what it will be used for just now, but someone will build it & I will want it. But the heat & noise would make it deadly.
I am already keeping my TBird in an adjacent room with cables running under the door, but I had this idea that would basically solve the heat & noise issue in one fell swoop. If I could spec my own motherboard, I would take the Athlon + memory off & place it on another small board connected to the main board with the longest possible HyperTransport link I could get away with. The system board with the PCI & AGP cards would then be fairly cool, could go into a compact, attractive case with a fairly small PSU, and would contain the CD, HD etc. The CPU module producing all the heat & noise could now easily be 20 ft away next door (HT permitting). This is basically the same idea as separating an audio preamp & power amp: the preamp is small, compact & attractive, the power amp is butt ugly & hidden away. One really nice feature is that the two components can be upgraded or replaced separately, the CPU module every 6 months, the system module much less often.
just an idea!
Slightly off topic
I just came across this, and it seems to be the real thing: a vacuum tube audio section on a P4 mobo!!!
http://www.aopen.com/products/mb/ax4b-533tube.htm
http://www.aopen.com/products/mb/Pax4b-533tube1.htm
yikes
oh that's actually pretty old news … you should read slashdot.org more often …
Yes, it is a crazy idea. The reason is that the 20 ft cable will introduce all sorts of engineering problems, some of which can't be solved. For example, that cable is going to be quite large, even at HyperTransport pin counts. Also, your system bus is suddenly going to be subject to environmental conditions that aren't present inside a computer case (what if there's a ceiling fan on the floor below your cable?), not to mention that you're now flooding the surrounding area with EMI from the cable, which will likely interfere with all manner of things (TVs, radio, cell phones, cordless phones).
But I think the real killer here is latency. "The longest possible HyperTransport link (you) could get away with" is not very long. Certainly not 20 ft. Why? Because by the time one set of signals is, just a guess here, only a quarter of the way down the pipe (5 ft), you're already sending the next set. If there is an error, your CPU module won't know about it until (based on my guess) _8_ cycles later. That's NOT OK.
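To put some very rough numbers on that guess, here is a back-of-envelope sketch in Python; the 800 MT/s link rate and the ~2e8 m/s propagation speed in the cable are my own assumptions, not spec values:

    # How many HyperTransport transfers are "in flight" on a 20 ft cable?
    # Assumed numbers, not from any spec: 800 MT/s signalling, and signal
    # propagation at roughly 2/3 the speed of light in a copper cable.
    FEET_TO_M = 0.3048
    cable_ft = 20
    velocity_m_s = 2.0e8          # rough propagation speed in the cable
    transfer_rate = 800e6         # assumed transfers per second

    one_way_s = cable_ft * FEET_TO_M / velocity_m_s
    in_flight = one_way_s * transfer_rate

    print("one-way delay      : %.0f ns" % (one_way_s * 1e9))   # ~30 ns
    print("transfers in flight: %.0f" % in_flight)               # ~24
    print("round trip before an error report: %.0f transfers" % (2 * in_flight))

Whatever the exact link speed turns out to be, the point stands: dozens of transfers are already committed to the wire before the far end can even complain.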
This latency problem spreads quickly to every other part of the system. Every time an interrupt fires, you have a MASSIVE delay before it gets serviced by the host OR the card. It will destroy CPU<->AGP bandwidth, and it will effectively put an end to PCI bus mastering as well as memory mapping of system peripherals (the latter would mean a total rethinking of how current computer architectures work).
I'm not saying that what you propose is impossible, or undesirable, just that it would mean a completely new and totally different kind of computer system. One which I've spent a lot of time considering, as you can perhaps tell.
The documentation is pretty good, I must admit. That said, I really don't mind Intel's docs that much. They're stale, but fairly clear. Their architecture manual (volume 3, I think) is pretty good, better than their other hardware docs. However, if you haven't read the Voodoo3 specification, you should. It's wonderfully clear. It reads like a HOWTO. It should be called "3D Hardware Programming for Dummies." It has pictures and diagrams and descriptions of the conceptual structures used by the hardware, instead of just a list of registers and their uses.
There is an easier way: run your whole computer remotely. Have the video, keyboard and mouse where you work and the computer elsewhere. Or you could do it the way we used to in the good old days, when a computer took up a whole floor, and just connect using a small terminal & a serial cable. What about 100BaseT networking?
I already have 12 ft & a wall between the CPU & 2 monitors (1600×1200), using a KVM as a buffer & a USB1 hub, so 20 ft was just a nice round number that would likely be smaller depending on bus specifics. The KVM does not noticeably degrade the video signal. So I have video+audio+USB cables under the door.
If I can do this myself in a half-assed way, I am very sure the industry can do a much better job, even if some of the engineering is difficult. Gee, if they can put vacuum tubes on a mobo for some users (& I never would have predicted that), then who knows. As for EMI etc., there is fiber optic, & plastic fiber may be fast enough & cheap enough too in volume. Also, all these LVDS serial schemes have to deal with common-mode noise anyway, some better than others. As for latency, sure there could be quite a few ns (say 20-50), so that would mean slower PCI cycles, but the PC architecture is already weird & needs some cleaning up anyway. A clean-sheet design would be nice.
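Just to ballpark what those extra nanoseconds would do to a single PCI read, here is a toy Python calculation; the 33 MHz PCI clock, the 4-clock read, and the 30 ns one-way cable delay are all my own assumed numbers:

    # What does a ~30 ns one-way cable delay do to one short PCI read?
    # Assumed numbers: 33 MHz PCI clock, a 4-clock read transaction,
    # and one request/completion round trip over the cable.
    pci_clk_ns     = 1000.0 / 33.0     # ~30 ns per PCI clock
    base_read_ns   = 4 * pci_clk_ns    # a short local read, ~120 ns
    cable_one_way  = 30.0              # assumed one-way delay over ~20 ft
    remote_read_ns = base_read_ns + 2 * cable_one_way

    print("local read : %.0f ns" % base_read_ns)
    print("remote read: %.0f ns (%.1fx slower)"
          % (remote_read_ns, remote_read_ns / base_read_ns))

So each individual PCI transaction gets maybe 50% slower, which is ugly but survivable for the low-bandwidth devices I care about.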
As for using 100BaseT networking, I would do that if it were possible, but only X Windows, as far as I know, makes that the norm. Most of the devices I want in my face, so to speak, are actually quite low bandwidth except video, & I could give up AGP graphics; slow PCI is all I need.
Anyway, I believe Intel's (paper?) alternative to HT is external, and I have seen HW that dumps low-res video+KB+mouse onto Cat5 wire. I have also heard that API actually used HT as some sort of internal network backbone.
Anyway I still want a couple of those Opterons, and I will deal with the heat & noise when it comes.