Linked by fran on Tue 24th Aug 2010 22:09 UTC
Intel "Intel Corporation announced an important advance in the quest to use light beams to replace the use of electrons to carry data in and around computers. The company has developed a research prototype representing the world's first silicon-based optical data connection with integrated lasers. The link can move data over longer distances and many times faster than today's copper technology; up to 50 gigabits of data per second. This is the equivalent of an entire HD movie being transmitted each second."
....
by poundsmack on Tue 24th Aug 2010 22:20 UTC
poundsmack
Member since:
2005-07-13

it's cool and all, but this was announced a long time ago.

http://www.engadget.com/2010/07/27/intels-50gbps-silicon-photonics-...

Edited 2010-08-24 22:21 UTC

Reply Score: 2

RE: ....
by Lennie on Tue 24th Aug 2010 23:28 UTC in reply to "...."
Lennie Member since:
2007-09-22

If I remember correctly, it was also mentioned on OSNews at the time.

Reply Score: 2

RE[2]: ....
by Lennie on Wed 25th Aug 2010 19:19 UTC in reply to "RE: ...."
Lennie Member since:
2007-09-22

From the article, so I guess I was wrong:

"This research is separate from Intel's Light Peak technology, though both are components of Intel's overall I/O strategy. Light Peak is an effort to bring a multi-protocol 10Gbps optical connection to Intel client platforms for nearer-term applications. Silicon Photonics research aims to use silicon integration to bring dramatic cost reductions, reach tera-scale data rates, and bring optical communications to an even broader set of high-volume applications. Today's achievement brings Intel a significant step closer to that goal."

Reply Score: 2

Not impressed
by Brightglaive on Wed 25th Aug 2010 03:55 UTC in reply to "...."
Brightglaive Member since:
2010-08-25

I'm not impressed. Cisco had OC-768c/STM-256c (that's 40 Gbps to non-networking geeks) introduced and already installed back in 2007 (http://www.usatoday.com/tech/webguide/internetlife/2007-07-19-swedi...), published just before Intel announced their 40 Gbps stuff. Not only that, but dense wavelength-division multiplexing at 10 Gbps has been around even longer, and you could multiplex up to 32 channels of 10 Gbps onto one fiber pair. That's 320 Gbps, people. The only thing new here is that it's being done all on one chip, with integrated lasers and multiplexing at 12.5 Gbps per channel. The video on that site also implies that other modules could be linked with the first. Sounds like more multiplexing to me. Like I said.... Not impressed.
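As a quick sanity check on that arithmetic (a minimal sketch; the channel counts and per-channel rates are the ones quoted above, not independently verified):

# Back-of-the-envelope multiplexing arithmetic from this comment.
def aggregate_gbps(channels: int, gbps_per_channel: float) -> float:
    """Total throughput of a wavelength-multiplexed link, in Gbps."""
    return channels * gbps_per_channel

print(aggregate_gbps(32, 10.0))   # classic DWDM: 32 x 10 Gbps = 320 Gbps per fiber pair
print(aggregate_gbps(4, 12.5))    # Intel's prototype: 4 x 12.5 Gbps = 50 Gbps, on one chip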

Reply Score: 2

RE: Not impressed
by ndrw on Wed 25th Aug 2010 05:53 UTC in reply to "Not impressed"
ndrw Member since:
2009-06-30

The only thing new here is that it's being done all on one chip with integrated lasers and multiplexing at 12.5 Gbps per channel.


That's a huge difference. It's a completely different application with very different requirements (most importantly power consumption and very short range).

As far as networking goes, people are now trying to use 100 Gb/s long-haul connections, and probably even faster links at shorter distances. But these solutions (because of their optics and power dissipation) are not suitable for integration on a single chip.

OTOH, Intel's chip has to compete with traditional wire-line transmission, which can now achieve similar performance (10 Gb/s is standard, ~30 Gb/s is in development) and doesn't require special process and packaging solutions. Electrical solutions are typically limited to several tens of IO channels per chip (each channel requires several "pads" to build a transmission line), and this (plus longer reach) is where an optical solution could potentially have an advantage.
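A rough sketch of that pad-count limit, with assumed values (the pads-per-channel and pad budget are illustrative, not from this comment):

# Pad-limited aggregate bandwidth of electrical IO, assumed numbers.
PADS_PER_CHANNEL = 6     # assumed: differential TX/RX pairs plus ground/shield pads
PAD_BUDGET = 400         # assumed pads available for high-speed IO on a package
RATE_GBPS = 10           # standard electrical per-channel rate mentioned above

channels = PAD_BUDGET // PADS_PER_CHANNEL   # ~66 channels, pad-limited
print(channels * RATE_GBPS)                 # ~660 Gbps aggregate for the whole chip
# An optical link needs only one fiber attach point per direction; adding
# wavelengths (4 x 12.5 Gbps = 50 Gbps here) scales bandwidth without more pads.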

Reply Score: 2

RE[2]: Not impressed
by Neolander on Wed 25th Aug 2010 14:39 UTC in reply to "RE: Not impressed"
Neolander Member since:
2010-03-08

Exactly. Integrating optical data transfer on a chip + using wavelength-multiplexing capabilities = much, much faster buses. It also opens the way for all-optical data processing in the future, which means little to no heat generation (which in turn means no more stupid fans, and cubic or spherical chip designs instead of those boring plastic pancakes if you want them), extreme parallelism, no more costly energy conversions in optical data transmission...

Integrating 50 Gbps optical transmission on a chip is very exciting ;) It's already known to be possible to make long-distance transfers at much faster speeds, but it's the word "integrated" that matters here.

Edited 2010-08-25 14:42 UTC

Reply Score: 2

RE[3]: Not impressed
by ndrw on Wed 25th Aug 2010 15:31 UTC in reply to "RE[2]: Not impressed"
ndrw Member since:
2009-06-30

Optical data transfer is a reality, but please don't mix it up with optical computing (which, as it stands today, is a hoax). Yes, you can make some basic nonlinear optical-only blocks (e.g. mixers), but there is little chance they will scale down to sizes comparable to single transistors anytime soon.

Even integrating (hybrid) lasers on chip was problematic. If I understand Intel's presentation correctly, they had to modify the manufacturing process in order to do it. These cells are also not exactly what you would call "small" or "low power": high-speed IO cells can easily occupy an area 10,000x larger than that of a single CMOS NAND gate, and I'm pretty sure Intel's optical IO isn't much different in that respect.

Reply Score: 2

RE[4]: Not impressed
by Neolander on Wed 25th Aug 2010 21:15 UTC in reply to "RE[3]: Not impressed"
Neolander Member since:
2010-03-08

No, sure, it's not for tomorrow. And it will never be as powerful as current high-end processors due to diffraction issues, unless current research on sub-wavelength light confinement proves successful.

But there are many areas where we don't need the power of current CPUs. Most offices, for example, would be just fine with PIII equivalents for everyday work. And using light, there are things like the Fourier transform which can be done much, much faster than with current electronic components...
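For a rough sense of scale (a sketch with assumed numbers; the 10 cm focal length and 1e9 ops/s rate are illustrative, not from this thread): a simple lens performs a 2D Fourier transform of a wavefront in one pass of light, while a digital FFT scales with image size.

import math

C = 3.0e8     # speed of light, m/s
F = 0.1       # assumed 10 cm focal length; a "2-f" lens setup spans 2*f

# A lens Fourier-transforms the input wavefront in one pass of light,
# regardless of the image resolution:
t_optical = 2 * F / C                       # ~0.67 ns

# A digital FFT of an N x N image costs on the order of N^2 * log2(N) ops:
N = 1024
t_digital = (N * N * math.log2(N)) / 1e9    # ~10 ms at an assumed 1e9 ops/s

print(f"optical 2-f pass: {t_optical * 1e9:.2f} ns")
print(f"digital FFT estimate: {t_digital * 1e3:.1f} ms")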

I think that all-optical computing has its place in a long-term future. But only time will tell.

Reply Score: 2

RE: ....
by CaptainN- on Wed 25th Aug 2010 16:00 UTC in reply to "...."
CaptainN- Member since:
2005-07-07

This kind of counter-announcement is standard in the industry. AMD just unveiled Bulldozer, so Intel has to announce something too - even if they already announced it.

Reply Score: 1

don't you mean every 8 seconds?
by nabil2199 on Tue 24th Aug 2010 22:42 UTC
nabil2199
Member since:
2010-03-31

an HD movie at 50Gb wouldn't be that HD

Reply Score: 1

darknexus Member since:
2008-07-15

an HD movie at 50Gb wouldn't be that HD


It would if you're talking about a Blu-ray version. 50 gigs is the typical amount a BD disc can hold. Of course, if you're talking about a completely lossless studio master, you'd be correct.

Reply Score: 2

nabil2199 Member since:
2010-03-31

"an HD movie at 50Gb wouldn't be that HD


It would if you're talking about a Blu-Ray version. 50 gigs is the typical amount a bd disk can hold. Of course if you're talking a completely lossless studio master you'd be correct.
"
a dual layer bluray disc holds 50GB not 50Gb. a lot of bluray movies are well above the 30 gigabyte mark

Reply Score: 1

Delgarde Member since:
2008-08-19

It would if you're talking about a Blu-ray version. 50 gigs is the typical amount a BD disc can hold.


Wrong type of 'gigs'. Storage is in gigabytes, but network performance is in gigabits per second - hence the comment in the parent post about "every 8 seconds".

Edited 2010-08-24 23:33 UTC

Reply Score: 2

umccullough Member since:
2006-01-26

"It would if you're talking about a Blu-Ray version. 50 gigs is the typical amount a bd disk can hold.


Wrong type of 'gigs'. Storage is in gigabytes, but network performance is in gigabits per second - hence the comment in the parent post about "every 8 seconds".
"

But even then, most people figure ~10 bits per byte for communication over network media, due to the overhead incurred by flow control, error correction, packet headers, etc.

In this case, though, I suppose 8 bits per byte is suitable since it's a chip's throughput ;)
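The arithmetic behind the "every 8 seconds" title, sketched under both byte-size conventions from this thread:

# Time to push one movie through the link, under two conventions.
MOVIE_GBYTES = 50    # dual-layer Blu-ray capacity
LINK_GBPS = 50       # Intel's quoted link rate, gigabits per second

print(MOVIE_GBYTES * 8 / LINK_GBPS)    # 8.0 s at a raw 8 bits per byte
print(MOVIE_GBYTES * 10 / LINK_GBPS)   # 10.0 s at ~10 line bits per byte (protocol/coding overhead)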

Reply Score: 2

Delgarde Member since:
2008-08-19

But even then, most people figure ~10 bits per byte for communication over network media, due to the overhead incurred by flow control, error correction, packet headers, etc.


Well yes, there's always overhead. But the article claims that 50 Gbps is enough to transmit an HD movie every second, which appears to be based on either a relatively short movie, or a misunderstanding over bits vs bytes.

Reply Score: 2

Stratoukos Member since:
2009-02-11

Also, why use HD movies when we have standard units?

I want to know how many Libraries of Congress/s that is dammit.

Reply Score: 4


Now combine that with...
by Tuishimi on Tue 24th Aug 2010 23:52 UTC
Tuishimi
Member since:
2005-07-06

...the virtual router...

Reply Score: 2

RE: Now combine that with...
by fretinator on Wed 25th Aug 2010 02:35 UTC in reply to "Now combine that with..."
fretinator Member since:
2005-07-06

...and you have a snappy GMail!

Reply Score: 2

RE[2]: Now combine that with...
by Tuishimi on Wed 25th Aug 2010 02:36 UTC in reply to "RE: Now combine that with..."
Tuishimi Member since:
2005-07-06

Verra snappy, lad!

Reply Score: 2

RE: Now combine that with...
by Lennie on Wed 25th Aug 2010 08:17 UTC in reply to "Now combine that with..."
Lennie Member since:
2007-09-22

I'm in the ISP business and I don't expect much from the virtual router stuff. It's more of a neat trick at this point.

But they did mention in the previous press release that they think this optical project will deliver cheap and fast transmission, and will replace USB, FireWire, and I think (e)SATA.

Reply Score: 2