Linked by Thom Holwerda on Tue 25th Mar 2008 16:33 UTC, submitted by irbis
Oracle and SUN As computer chips continue to decrease in fabrication size, manufacturers such as AMD and Intel are researching new ways to overcome physical barriers. Die size, performance, operating frequency, and heat are all major obstacles in the semiconductor industry. Sun Microsystems announced that, in partnership with Luxtera, Kotura, and Stanford University, it is working on an ambitious project to move data transmission from electrical signals over copper wires to pulses of light using lasers.
It's only natural
by Punktyras on Tue 25th Mar 2008 18:34 UTC
Punktyras
Member since:
2006-01-07

Who else but Sun could produce something valuable from photons? ;)

Reply Score: 8

Rather Intel than Sun...?
by usr0 on Tue 25th Mar 2008 19:09 UTC
usr0
Member since:
2006-10-27

I would expect such risky and expensive research campaigns from Intel rather than from Sun. OK, Sun has also built its own chips for the server market, but was not really successful, as you can see from the upcoming Xeon series from Intel.

Reply Score: 1

RE: Rather Intel than Sun...?
by ahmetaa on Tue 25th Mar 2008 19:45 UTC in reply to "Rather Intel than Sun...?"
ahmetaa Member since:
2005-07-06

At least they have innovative ideas like Rock and Niagara instead of beating the old x86 donkey for years.

Reply Score: 5

Detlef Niehof Member since:
2006-05-02

At least they have innovative ideas like Rock and Niagara instead of beating the old x86 donkey for years.

Would there be any technical advantage in dumping the x86 legacy instead of ongoing compatibility? If so, could you provide some more explanation and/or a link to an informative web page?

Thanks heaps,
Detlef

Reply Score: 2

trembovetski Member since:
2006-09-30

Why would you have to dump the compatibility? Sun didn't when it introduced the new chips - the apps which worked on UltraSPARC work just fine on Niagara (and will on Rock).

Dmitri

Reply Score: 3

RE[4]: Rather Intel than Sun...?
by andrewg on Tue 25th Mar 2008 22:15 UTC in reply to "RE[3]: Rather Intel than Sun...?"
andrewg Member since:
2005-07-06

Which is sort of his point: Sun did not have to drop SPARC, so why would Intel have to drop x86?

Reply Score: 3

RE[5]: Rather Intel than Sun...?
by Treza on Wed 26th Mar 2008 10:31 UTC in reply to "RE[4]: Rather Intel than Sun...?"
Treza Member since:
2006-01-11

Sometimes it is good to try something different; even Intel tried several times to dump the x86 - i432, i860, Itanium, ... - with mixed results.

SPARC is an interesting RISC design, with an original register windowing system.

Reply Score: 1

RE[3]: Rather Intel than Sun...?
by xiaokj on Wed 26th Mar 2008 13:16 UTC in reply to "RE[2]: Rather Intel than Sun...?"
xiaokj Member since:
2005-06-30

Would there be any technical advantage in dumping the x86 legacy instead of ongoing compatibility?

Well, x86 may be fine for most applications, but virtualisation has always been one of x86's weak points. Although the introduction of hardware support helps a lot, it will never be perfect, since the architecture was not designed with the Popek and Goldberg virtualisation requirements in mind.

http://en.wikipedia.org/wiki/Popek_and_Goldberg_virtualization_requ...
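The Popek and Goldberg condition can be sketched in a few lines of Python. The instruction sets below are illustrative and incomplete (drawn from the well-known analysis of classic 32-bit x86, where POPF, SGDT, SIDT and SMSW touch or expose privileged state without trapping in user mode):

```python
# Popek & Goldberg (1974): an ISA is classically virtualizable only if
# every *sensitive* instruction (one that reads or alters machine state)
# is also *privileged* (i.e. traps when executed in user mode).

def is_virtualizable(sensitive, privileged):
    """True iff the sensitive instructions are a subset of the privileged ones."""
    return sensitive <= privileged

# Illustrative, incomplete classic 32-bit x86 sets:
x86_sensitive  = {"POPF", "SGDT", "SIDT", "SMSW", "MOV_CR0", "LGDT"}
x86_privileged = {"MOV_CR0", "LGDT"}  # these do trap in user mode

print(is_virtualizable(x86_sensitive, x86_privileged))  # False
print(sorted(x86_sensitive - x86_privileged))  # the problematic instructions
```

The non-trapping sensitive instructions are exactly why pre-VT x86 hypervisors had to resort to binary translation or paravirtualisation.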

Another fact is that the modern x86 is really a RISC core emulating a CISC chip. Why not just expose the RISC design directly, instead of wasting CPU cycles on translating instructions?

Moreover, the sheer size of x86's mindshare is distorting the market. A few years ago, I heard of peripheral devices using MIPS as the intermediate processor between the raw device/data and the system; the reasons cited for that decision were cost and ease of development. Personally, having tried a bit of MIPS and x86 assembly, I find the MIPS architecture easier to work with. If more resources had been diverted away from x86 towards MIPS/SPARC/POWER, Apple would not have had to move away from PPC.

Last but not least, there is the trouble of legacy. Although backwards compatibility is touted as a feature, it can be a horrible curse. Much old code contains bugs that are next to impossible to eradicate. It also encourages reliance on binary blobs - one reason the 64-bit versions of Windows sell poorly is their dependence on 32-bit binary blobs, which means far fewer drivers. While Linux made the transition to 64-bit with relative ease, nVidia drivers posed a problem; in fact, the lack of nVidia drivers for PPC hindered Mac-to-Linux OS migrations in the past.

ps: been editing but simply cannot make links that don't show the full url, only the title of document

Edited 2008-03-26 13:24 UTC

Reply Score: 1

KenJackson Member since:
2005-07-18

Although backwards compatibility is touted as a feature, it can be a horrible curse.

Good point.

I haven't checked, but I bet current Pentiums still implement the SAHF and LAHF instructions. I believe these were implemented in the Intel 8088 to make it easier to port existing 8080 and Z80 software 30 years ago. If they haven't already been dropped, they should be; any OS inclined to support them could implement an exception handler.
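For context, LAHF copies the low byte of the FLAGS register into AH using the same bit layout as the 8080's flag byte - exactly the porting shim described above. A small Python sketch of that packing (bit positions per Intel's manuals; the function name here is made up for illustration):

```python
def lahf_byte(sf, zf, af, pf, cf):
    """Pack status flags into the 8080-compatible low FLAGS byte:
    bit 7 = SF, bit 6 = ZF, bit 4 = AF, bit 2 = PF, bit 0 = CF;
    bit 1 is always 1, bits 3 and 5 are always 0."""
    return (sf << 7) | (zf << 6) | (af << 4) | (pf << 2) | (1 << 1) | cf

print(hex(lahf_byte(1, 1, 1, 1, 1)))  # 0xd7: all flags set
print(hex(lahf_byte(0, 0, 0, 0, 0)))  # 0x2: only the constant bit 1 remains
```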

Reply Score: 2

RE[4]: Rather Intel than Sun...?
by Doc Pain on Wed 26th Mar 2008 18:40 UTC in reply to "RE[3]: Rather Intel than Sun...?"
Doc Pain Member since:
2006-10-08

As far as I know, x86-based processors were beaten by MIPS processors (usually built into fine SGI machines) in terms of iteration speed, which is important for scientific applications. In most cases when the x86 industry came up with something "new", "fast" or "revolutionary", I could only laugh: "Hmm, well, we have had this in the non-x86 world for years already." So I welcome everything that beats x86 in any regard. :-)

Reply Score: 2

Sharks?
by jjmckay on Tue 25th Mar 2008 23:51 UTC
jjmckay
Member since:
2005-11-11

The true application is for shark tank exhibits. These chips will be implanted into their foreheads. This was my suggestion for the next true killer application.

Reply Score: 4

Throw out 128-bit encryption
by KenJackson on Wed 26th Mar 2008 12:42 UTC
KenJackson
Member since:
2005-07-18

According to Sun, processors that are thousands of times faster than ones we have today will be possible if the project is successful.

Originally, the US government limited HTTPS secure web access to 40-bit encryption so it could eavesdrop by brute-force attack. The current 128-bit keys are supposed to be strong enough to ward off brute-force attacks.

But if computers become readily available that are thousands of times faster than current computers, we may need longer keys to keep access secure.
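Against pure brute force, though, the adjustment needed is surprisingly small, since search cost grows exponentially in key length - each extra bit doubles the work. A rough back-of-envelope check in Python:

```python
import math

def extra_bits_needed(speedup):
    """Extra key bits needed to cancel out a raw speed increase against
    exhaustive key search (each added bit doubles the search space)."""
    return math.ceil(math.log2(speedup))

# A machine 1000x faster is offset by only ~10 more key bits:
print(extra_bits_needed(1000))        # 10
print(128 + extra_bits_needed(1000))  # a 138-bit key restores the old margin
```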

Reply Score: 2

RE: Throw out 128-bit encryption
by WereCatf on Wed 26th Mar 2008 12:54 UTC in reply to "Throw out 128-bit encryption"
WereCatf Member since:
2006-02-15

But if computers become readily available that are thousands of times faster than current computers, we may need longer keys to keep access secure.

It has always been like that: as technology advances, so must the security methods used to protect it. Sooner or later we'll be using several thousand bits just for the encryption keys, and even that will probably be too little once quantum computing (or any similarly fast, highly parallel technology) becomes the norm. I remember reading somewhere that some people are even trying to create a protection scheme that would hold up at least somewhat better against such hugely parallel attempts to break the encryption, but I just can't remember where. :/
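One concrete example of such a hugely parallel threat is Grover's quantum search algorithm, which finds a key among 2^n candidates in roughly 2^(n/2) steps, effectively halving the strength of a symmetric key - so doubling the key length restores the margin. A quick sketch:

```python
def grover_effective_bits(key_bits):
    """Effective security of an n-bit symmetric key against Grover's
    algorithm, which searches 2**n keys in about 2**(n/2) steps."""
    return key_bits // 2

print(grover_effective_bits(128))  # 64: a 128-bit key drops to ~64-bit strength
print(grover_effective_bits(256))  # 128: doubling the key restores the margin
```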

Reply Score: 2