Researchers with the University of California at Santa Barbara, working in conjunction with Intel, announced Monday the next step in their joint plans to produce an entirely solid-state photonic processor assembly – a chip which processes data as light waves, without the need for microscopic, yet movable, parts.
It's about time!
This sort of thing has significance for fiber optics and communications, but not really for the microcomputer world. An optical microprocessor in this form wouldn't be faster than an electronic one. The modulation speeds currently being reported aren't that impressive when compared with, say, the cutoff frequency of a typical MOSFET or HEMT.
In any case, it’s really strange to me that they are pumping all this research into indirect bandgap semiconductors. I know Intel is a silicon company, but it seems odd for them to go out of their way to avoid the obvious choices for optoelectronics just for the chance of being able to manufacture optical devices with process technology similar to what they already have. I guess someone somewhere has costed this out and figured it might be a good investment… Maybe…
It’s too bad that the article doesn’t even comment on how they are modulating the light with silicon… That’s the more interesting, if strange, aspect to this concept.
Edit: Ah, okay, the press releases were a little more enlightening than the tech reporter’s spin. This isn’t a laser “processor”, it’s a hybrid laser that uses InP for the emission source and silicon for the waveguide; basically allowing lasers to be easily integrated with a silicon process. This is an interesting development, but it’s still a long way from silicon based optoelectronics.
This link has some good info and some nice press photos:
http://www.intel.com/research/platform/sp/hybridlaser.htm
Though, this statement is telling and somewhat humorous:
“For example, silicon is […] well understood by the semiconductor industry.”
Heh… Read: Intel wants to stay a silicon company, and isn't going to let silly things like better optoelectronic materials get in their way. So they'd rather jam optics into a silicon mold than mess much with non-Si-or-SiGe devices. I guess it's those wacky economics again.
Edited 2006-09-20 00:08
A project like this was already started by some scientists, who called their machine a quantum computer; its processor used fiber optics to send signals to the CPU. That is really fantastic because of the tremendous speed. Maybe Intel is afraid of being left far behind by that earlier project, and maybe that is why they are now directing their research and plans toward using light waves, or photonic activity, in their processors. But I'll applaud Intel if they successfully bring this to us.
I was under the impression that this is a totally different concept from quantum computing. Surely this is merely a way of speeding up a CPU by using light waves for signal carriage, whereas quantum computing uses the individual indeterminate state of a photon to carry out computations with the various outcomes superimposed until read and interpreted, enabling many blind-alley computations to be rejected without ever evaluating them.
Yes, I am terrible at explaining this, but in the one case we have a lot of light beams in a box, and in the other we have Schrödinger's cat in a box, I think.
Yeah, it's about carrying signals between chips. Nothing to do with quantum computing (yet).
It can also be used to carry signals inside chips, although it's more speculative whether that will ever break through, as the electronics people keep on developing new things (or old new things) like processors with multiple asynchronous parts, multicore processors, …
What about power consumption and heat dissipation? What does a photonic CPU give you there?
Kochise
Well, power consumption is exactly one of the drivers for doing this. For instance, lasers (and detectors) integrated on-chip can be used for chip-to-chip or board-to-board communication. Driving signals off-chip takes a tremendous amount of power if you do it electronically at high speed (>Gbps). Of course, the electronics people are advancing and can push more and more data further and further off-chip, but still, at some point optical communication becomes easier and less power-hungry.
Nevertheless, the heat dissipation problem is probably THE bottleneck, as the lasers are integrated on a silicon circuit.
Thanks for the answer; I guessed power/heat would be both a concern and a goal. I don't think they'll use lasers later on; let me explain. Currently, power is measured in two different units: watts (W) for electronics and candela (cd) for photonics. In electronics you have frequency; in photonics you have wavelength. In electronics you have amplitude modulation, and in photonics nothing quite 'similar' (if you may call it light modulation).
By focusing on a single fixed wavelength (a laser), Intel might lose the ability to create a holographic CPU; I mean modulating the wavelength against the light's power. Imagine that, against a reference wavelength, you switch your source on and off: you get a binary signal. But if you then also change the wavelength (IR to UV), you can add another kind of information.
Then, a final note: I think they won't use a laser or laser diode (power consumption, miniaturisation, wavelength limitation, …); they'll surely use a white LED and modulate its wavelength, or its wavelength 'print' (remove some colors to encode other information). If you sample the range from IR to UV into 256 (or 65536) slices, then instead of a 32-bit CPU you have a 256-bit or 65536-bit CPU from pure-color light. You can then compose a color from these pure colors the way you compose a value from bits in an unsigned int register, where IR is 0x80000000, UV is 0x00000001 and white is 0xFFFFFFFF, but with 256 or 65536 'bits'. You can also modulate each color's intensity, so things are not binary anymore (say, 256 shade levels for each pure color); then you might reach 256×256=65536 or 65536×256=16777216 possible values for each color print. Sample this at 100 TBps and you'll get the idea…
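Here's a little Python sketch of the composition I mean (purely a toy model; the names and the 32-slice setup are mine, not anything Intel has described):

# Toy model of the 'color print' idea: one intensity level per
# wavelength slice, packed like digits. With 2 levels per slice and
# 32 slices this reproduces the unsigned-int analogy exactly.
SLICES = 32   # wavelength slices from IR down to UV
LEVELS = 2    # intensity levels per slice (2 = plain on/off)

def compose(levels_per_slice):
    # Pack one level per slice into a single integer, IR slice first.
    value = 0
    for level in levels_per_slice:
        value = value * LEVELS + level
    return value

ir_only = compose([1] + [0] * 31)   # only the IR slice lit
uv_only = compose([0] * 31 + [1])   # only the UV slice lit
white   = compose([1] * 32)         # every slice lit
print(hex(ir_only), hex(uv_only), hex(white))
# -> 0x80000000 0x1 0xffffffff

Bump LEVELS to 256 and SLICES to 256 (or 65536) and the same packing gives the bigger numbers below.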
Add to this the ability to store information in wavelength format (holographic storage), and you can multiply storage capacity a thousandfold.
Kochise
EDIT : …then you might reach 256(levels)^256(slices) = 3.231700607131100730071487668867e+616 (aka 2^2048, equivalent to a 2048-bit -2 Kbit- value) or 256(levels)^65536(slices) = 'too much' (aka 2^524288, equivalent to a 524288-bit -512 Kbit- value) possible values for each color print…
256^256 @ 100 Gsamples/s => 2048 bits (256 B) per sample => 25.6 TB/s => 64 HDs of 400 GB each second
256^65536 @ 100 Gsamples/s => 524288 bits (64 KB) per sample => 6.55 PB/s => 16384 HDs of 400 GB each second
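If you want to redo that arithmetic yourself, here's a throwaway Python script (the 100 Gsample/s rate is just the figure I assumed above):

import math

HD = 400e9     # bytes in one 400 GB hard disk
RATE = 100e9   # samples per second (my assumption)

for slices in (256, 65536):
    bits = slices * math.log2(256)   # 256 levels/slice -> 8 bits each
    bytes_per_s = RATE * bits / 8
    print(f"{slices} slices: {bits:.0f} bits/sample, "
          f"{bytes_per_s / 1e12:.2f} TB/s, {bytes_per_s / HD:.0f} disks/s")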
On a >> SINGLE << fiber!
Kochise
Electrons are fermions (think of it as "they don't like each other"), meaning that they easily influence each other.
This is both a blessing and a curse in ordinary electronics.
The smaller the circuit gets, the harder it is to prevent unrelated currents from interfering with each other.
On the other hand, in photonics you can (potentially) have millions of light beams that cross each other without being influenced. BUT it is a lot more difficult to make photons, which are bosons (read: "they like each other, and don't mind being all in the same state"), interact.
Talking of speed, photons obviously give you maximum velocity (c, that is).
Still, I wouldn't put the different parts metres apart, as the article suggested would be possible:
Say we have 10 GHz; that means the distance light travels in one clock cycle would be 3E8(m/s)/1E10(1/s) = 3×10^-2 m = 3 cm, so over 1 m you would lose about 30 clock cycles.
EDIT: 60 actually, since the distance must be travelled twice (from the chip and back to the chip)
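For the pedantic, the same estimate in a few lines of Python (the numbers above are just rounded):

c = 3e8             # speed of light, m/s
f = 10e9            # 10 GHz clock
per_cycle = c / f   # distance light covers in one clock cycle
print(per_cycle)               # 0.03 m = 3 cm
print(2 * 1.0 / per_cycle)     # ~67 cycles for 1 m there and back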
Edited 2006-09-20 09:28
Well, there is no difference in losing clock cycles between electronic and optical links. The speed of the EM waves in an electronic link is approximately the same as in an optical link. The fact is that electronic chip-to-chip interconnects at multigigabit speeds exist today, and optical ones are in development (envisaged to work at >1 Tbps aggregate).
However, the optical link CAN, at the same aggregate bandwidth as an electronic equivalent, be much, much longer.
Note that in order not to lose clock cycles, you can use higher parallelisation: just use 8, 16, 32, 64, … lanes at 10 Gbps, for instance. With optics this will be possible, while the scalability is much lower in electronics.
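As a quick illustration of the lane math (toy numbers of my own, not from any spec):

lane_rate = 10e9   # 10 Gbps per lane
for lanes in (8, 16, 32, 64, 128):
    print(lanes, "lanes ->", lanes * lane_rate / 1e9, "Gbps aggregate")
# 128 lanes at 10 Gbps already pass the >1 Tbps aggregate mentioned above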
Hmm, not sure if I understand what you’re trying to tell me…
I mean parallelisation can give you bandwidth but it cannot buy you less latency (at least not if the limiting factor is the speed of transmission).
But, of course, you are right that EM waves travel at the speed of light (light is, after all, an EM wave itself). Nevertheless, electronic parts do have a propagation delay (bigger than their dimension/c) because the electrons have to move in order to establish a new state, and they don't always move as fast as light because
a) they have mass and
b) there are things like resistors in their way
Also note that, talking of bandwidth, the skin effect can get in your way: the current flows only near the surface of the conductor, which increases resistance.
On the other hand, typical light already has a wavelength of 0.5 µm, and therefore a frequency of about 6×10^14 Hz, which is pretty high.
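A quick f = c/lambda check in Python:

c = 3e8                 # speed of light, m/s
wavelength = 0.5e-6     # 0.5 µm
print(c / wavelength)   # 6e14 Hz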
But, as I said before, I’m not sure I got your point…
Edited 2006-09-20 12:38
Yes, you're right of course that more parallelisation doesn't give you less latency. I was wrong there.
However, you can play the same tricks as in electronics: burst transfers etc. Still, as you say, the speed of the optical link defines the end-to-end latency.
Well, you sort of miss the point if you compare photons directly with electrons when weighing optical signals against electrical ones. In an electronic system, signals are usually conveyed as potential differences, and are therefore not directly limited to the speed at which electrons themselves can travel (i.e. their mobility in a semiconductor). Where "signal" is interpreted as a potential difference, an electrical signal propagates through a conductor very fast, approaching the speed of light. Of course, as you pointed out, there are circuit delays introduced by various real factors (taking the CMOS example, parasitic capacitances and the finite drain current available to charge the next MOSFET's gate enough to drive it into strong inversion).
Still, the same delays will be present in any device that utilizes optics in the way Intel is developing, thus my earlier comment. The issue is that current optical modulators are still relatively "slow" compared to what is possible with modern loaded FETs. Maybe this will change, but I doubt these solid-state optical modulators will ever be a whole lot faster than electronic transistors. In other words, I don't foresee this particular development path leading to an optical CPU. So we're back to my original assertion that the significance of lasers integrated with a high-volume silicon process is largely for communications and perhaps, as some have mentioned, high-speed interconnects. This latter option is pretty interesting for the massively parallel computing regime, since optical interconnect buses wouldn't be subject to the crosstalk problems inherent in electrical signals in wires.
Yes, you are right.
It's definitely most interesting for communications.
Integrating some router IN the fibre optic itself would be really cool.
However, keep in mind that the technology is still quite young and might get faster over time.
Or is there any reason why it cannot?
Of course I'm talking propagation delay of components here, not speed of transmission – c is still rather limited :)
I'm really curious whether it could allow for more 3D in computer design – it's just a shame to hardly use one dimension. I think it might, because wires are not such an issue, and if cooling does not get in our way it would allow much smaller devices (potentially with higher clock rates) to be built.
That being said, I still find molecular computing way cooler, if we can ensure it does not get too hot.
Self assembling nano-structures would rock!
By the way: Intel is making a lot of fuss, but we're putting InP lasers on silicon that are so small one could put thousands on a square cm, while Intel can fit more than an order of magnitude fewer on the same die size. Nevertheless, they have a nice result.
Damn, I wrote this idea in my journal 4 years ago!
I should have patented it and made some money. :)
Haha. Believe me, we already started working on similar stuff 4 years ago :)
Interesting and informative dialogue going on here, but seriously, an article about processors with "frickin' lasers" just begs for a Dr. Evil punchline; I'm simply not witty enough to think of one myself.
Ok, ok, so maybe Austin Powers references are a little dated and have no place in an intelligent technical discussion, but come on, somebody throw me a frickin' bone here.
“The hottest chips on the market? But they have frikkin’ lasers on top? Now evidently my cycloptic colleague informs me that that cannot be done. Ah, would you remind me what I pay you people for, honestly? Throw me a bone here! What do we have?”
Sorry, I tried
Sorry for the bad joke, but it would be a good headline. I don't think anyone wants to see a world without Windows.
This sounds like a take-off on the AT&T optical processor from many years ago. I hope these advancements will produce a real machine before we run out of room between the connections.