“Intel Corporation announced an important advance in the quest to use light beams to replace the use of electrons to carry data in and around computers. The company has developed a research prototype representing the world’s first silicon-based optical data connection with integrated lasers. The link can move data over longer distances and many times faster than today’s copper technology; up to 50 gigabits of data per second. This is the equivalent of an entire HD movie being transmitted each second.”
It's cool and all, but this was announced a long time ago.
http://www.engadget.com/2010/07/27/intels-50gbps-silicon-photonics-…
If I remember correctly, it was also mentioned on OSNews at the time.
From the article, so I guess I was wrong:
“This research is separate from Intel’s Light Peak technology, though both are components of Intel’s overall I/O strategy. Light Peak is an effort to bring a multi-protocol 10Gbps optical connection to Intel client platforms for nearer-term applications. Silicon Photonics research aims to use silicon integration to bring dramatic cost reductions, reach tera-scale data rates, and bring optical communications to an even broader set of high-volume applications. Today’s achievement brings Intel a significant step closer to that goal.”
I'm not impressed. Cisco had OC-768c/STM-256c (that's 40 Gbps to non-networking geeks) introduced and already installed back in 2007. (http://www.usatoday.com/tech/webguide/internetlife/2007-07-19-swedi…) This was published just before Intel announced their 40 Gbps stuff. Not only that, but dense wavelength division multiplexing has been around at 10 Gbps per channel for even longer than that, and you could multiplex up to 32 channels of 10 Gbps into one fiber pair. That's 320 Gbps, people. The only thing new here is that it's being done all on one chip, with integrated lasers and multiplexing at 12.5 Gbps per channel. The video on that site also implies that other modules could be linked with the first. Sounds like more multiplexing to me. Like I said… not impressed.
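A rough back-of-the-envelope sketch of that comparison (channel counts and per-channel rates are the ones quoted above, with 50 ÷ 12.5 = 4 channels inferred for the Intel link; Python just for the arithmetic):

```python
# Aggregate bandwidth = channels x per-channel rate (figures from the comment above).
dwdm_channels, dwdm_gbps = 32, 10        # classic DWDM over one fiber pair
intel_channels, intel_gbps = 4, 12.5     # inferred: 50 Gbps total / 12.5 Gbps per channel

print(dwdm_channels * dwdm_gbps)         # 320 Gbps aggregate
print(intel_channels * intel_gbps)       # 50.0 Gbps aggregate
```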
That’s a huge difference. It’s a completely different application with very different requirements (most importantly power consumption and very short range).
As far as networking goes, people are now trying to use 100 Gb/s long-haul connections, and probably even faster links at shorter distances. But these solutions (because of the optics and power dissipation) are not suitable for integration on a single chip.
OTOH, Intel's chip has to compete with traditional wire-line transmission, which can now achieve similar performance (10 Gb/s is standard, ~30 Gb/s is in development) and doesn't require special process and packaging solutions. Electrical solutions are typically limited to several tens of I/O channels per chip (each channel needs several "pads" to build a transmission line), and this (plus the longer reach) is where an optical solution could potentially have an advantage.
Exactly. Integrating optical data transfer on a chip + using its wavelength multiplexing capabilities = much, much faster buses. And it opens the way for all-optical data processing in the future, which means little to no heat generation (which in turn means no more stupid fans, and cubic or spherical chip designs instead of those boring plastic pancakes, if you want them), extreme parallelism, and no more costly energy conversions in optical data transmission…
Integrating 50 Gbps optical transmission on a chip is very exciting. We already know how to make long-distance transfers at much faster speeds; it's the word "integrated" that matters here.
Optical data transfer is a reality, but please don't mix it up with optical computing (which, as it stands today, is a hoax). Yes, you can make some basic nonlinear optical-only blocks (e.g. mixers), but there is little chance they will scale down to sizes comparable to single transistors anytime soon.
Even integrating (hybrid) lasers on a chip was problematic. If I understand Intel's presentation correctly, they had to modify the manufacturing process to do this. These cells are also not exactly what you would call "small" or "low power". High-speed I/O cells can easily occupy an area 10,000x larger than that of a single CMOS NAND gate, and I'm pretty sure Intel's optical I/O isn't very different in that respect.
No, sure, it's not for tomorrow. And it will never be as powerful as current high-end processors because of diffraction limits, unless some of the current research on sub-wavelength light confinement proves successful.
But there are many areas where we don't need the power of current CPUs. Most offices, for example, would be just fine with PIII equivalents for everyday work. And using light, there are things like the Fourier transform which can be done much, much faster than with current electronic components…
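For what it's worth, that Fourier claim is the standard textbook result from Fourier optics (not something from Intel's announcement): a simple lens computes a 2D Fourier transform of the field in its front focal plane in a single pass of light, where λ is the wavelength and f the focal length:

```latex
% Thin-lens Fourier-transform property: the field in the back focal plane
% is, up to a constant factor, the 2D Fourier transform of the input field.
\[
  U_f(u, v) = \frac{1}{i\lambda f}
    \iint U(x, y)\, e^{-i\frac{2\pi}{\lambda f}(x u + y v)}\, dx\, dy
\]
```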
I think that all-optical computing has its place in a long-term future. But only time will tell.
This kind of counter-announcement is standard in the industry. AMD just announced Bulldozer, so Intel has to announce something too – even if they already announced it.
An HD movie at 50 Gb wouldn't be that HD.
It would be if you're talking about a Blu-ray version. 50 gigs is the typical amount a BD disc can hold. Of course, if you're talking about a completely lossless studio master, you'd be correct.
A dual-layer Blu-ray disc holds 50 GB, not 50 Gb. A lot of Blu-ray movies are well above the 30 gigabyte mark.
Wrong type of ‘gigs’. Storage is in gigabytes, but network performance is in gigabits per second – hence the comment in the parent post about “every 8 seconds”.
But even then, most people figure ~10 bits per byte for communication over network media, due to the overhead incurred by flow control, error correction, packet headers, etc.
In this case, though, I suppose 8 bits per byte is suitable, since it's a chip's raw throughput.
Well, yes, there's always overhead. But the article claims that 50 Gbps is enough to transmit an HD movie every second, which appears to be based on either a relatively small movie or a mix-up between bits and bytes.
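A quick sanity check of that claim, assuming the "HD movie" is a 50 GB dual-layer Blu-ray as discussed above (Python just for the arithmetic):

```python
# "HD movie every second" sanity check; movie size assumed, not from the article.
link_gbps = 50           # link speed in gigaBITS per second
movie_gbytes = 50        # dual-layer Blu-ray capacity in gigaBYTES

print(movie_gbytes * 8 / link_gbps)  # 8.0 -> one full disc every ~8 s, not 1 s
print(link_gbps / 8)                 # 6.25 -> only ~6.25 GB actually moves per second

# With the ~10 bits/byte rule of thumb for protocol overhead, it's even less:
print(link_gbps / 10)                # 5.0 GB/s of payload
```

Which matches the "every 8 seconds" figure mentioned upthread.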
Also, why use HD movies when we have standard units?
I want to know how many Libraries of Congress/s that is, dammit.
…the virtual router…
…and you have a snappy GMail!
Verra snappy, lad!
I'm in the ISP business and I don't expect much from the virtual router stuff. It's more of a neat trick at this point.
But they did mention in the previous press release that they think this optical project will deliver cheap, fast transmission and will replace USB, FireWire, and I think (e)SATA.