Linked by Thom Holwerda on Thu 19th Aug 2010 10:32 UTC
Hardware, Embedded Systems "A computer chip that performs calculations using probabilities, instead of binary logic, could accelerate everything from online banking systems to the flash memory in smart phones and other gadgets. Rewriting some fundamental features of computer chips, Lyric Semiconductor has unveiled its first "probability processor," a silicon chip that computes with electrical signals that represent chances, not digital 1s and 0s."
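As a rough way to picture what "computing with probabilities" means, here is a minimal software sketch (not Lyric's actual design, and the numbers are invented) of fusing two independent probability estimates for the same binary hypothesis -- the sort of Bayesian operation such a chip is said to accelerate:

```python
# Minimal sketch (not Lyric's design; values invented) of combining two
# independent probability estimates for the same binary hypothesis.

def fuse(p1: float, p2: float) -> float:
    """Combine P(H|e1) and P(H|e2) under a uniform prior and independent
    evidence, via a normalised product (Bayes' rule)."""
    num = p1 * p2
    den = num + (1.0 - p1) * (1.0 - p2)
    return num / den

# Two weak hints that a stored bit is a '1' reinforce each other:
print(fuse(0.7, 0.8))   # ~0.903
```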
AI
by Laurence on Thu 19th Aug 2010 11:22 UTC
Laurence
Member since:
2007-03-26

A field they've not touched on, but one where I could see this chip being utilised, is developing "human-like" artificial intelligence.

Reply Score: 5

Yeah..right
by Nagilum on Thu 19th Aug 2010 12:04 UTC
Nagilum
Member since:
2009-07-01

We haven't even figured out a way to get programs to behave the intended way with a CPU that does exactly what you tell it to, and they want us to write code for a CPU with probabilistic behavior?
The only use I can currently think of is generating random numbers, and only in the form of a few special instructions in an otherwise ordinary CPU.

Reply Score: 0

Finally
by anda_skoa on Thu 19th Aug 2010 13:09 UTC
anda_skoa
Member since:
2005-07-07

First step towards the Infinite Improbability Drive ;)

Reply Score: 6

Instead of 1's and 0's
by fretinator on Thu 19th Aug 2010 13:58 UTC
fretinator
Member since:
2005-07-06

Usually 1's and 0's are just different voltages on a line. I think that is a waste. Instead, we should be able to store thousands/millions/billions of voltages, and thus allow a tremendous boost in storage capacity - and totally new kinds of circuits/logic design. At the infinite level (as n -> infinity), we get back to analog computers!
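To put rough numbers on that idea (a hypothetical sketch, not anything from the article): each extra voltage level only buys information logarithmically, so even "billions of voltages" works out to only about 30 bits per cell.

```python
import math

# A cell that can hold `levels` distinguishable voltages stores
# log2(levels) bits of information; the level counts below are arbitrary.
for levels in (2, 4, 16, 256, 1_000_000_000):
    print(f"{levels:>13} levels -> {math.log2(levels):6.2f} bits per cell")
```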

Reply Score: 3

RE: Instead of 1's and 0's
by theosib on Thu 19th Aug 2010 16:06 UTC in reply to "Instead of 1's and 0's"
theosib Member since:
2006-03-02

The more voltage steps there are, the harder it is to distinguish them robustly. I remember reading something about 5V TTL where someone calculated how many voltage levels could be distinguished reliably, and it came out to between 2 and 3.

An example of the difficulty of using more than two voltage levels is MLC flash memory. SLC stores one bit per cell as two levels, so to store data you just fully drain or fully charge the cell, and ECC takes care of any bits that don't hold their charge properly. With MLC, you have to have four distinct voltage levels, which requires that the charge be tuned carefully. To write a cell, you drain it and then pump it up gradually until it reaches the charge level that corresponds to the two bits being stored. The storage is much less robust, requiring much more expensive (in terms of redundant bits and decode time) ECC codes. This is why MLC is so much slower to write and has so much higher read latency.
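To illustrate that write procedure, here is a toy sketch; the voltage windows and pulse size are made-up numbers, not real flash parameters:

```python
import random

# Toy model of the MLC write described above: erase the cell, then apply
# small program pulses until the cell voltage lands in the window for the
# two bits being stored. All numbers are illustrative, not real devices.
TARGET_WINDOWS = {          # hypothetical voltage windows for 2-bit MLC
    0b11: (0.0, 0.8),       # erased state
    0b10: (1.2, 2.0),
    0b01: (2.4, 3.2),
    0b00: (3.6, 4.4),
}

def program_cell(bits: int) -> float:
    lo, hi = TARGET_WINDOWS[bits]
    v = 0.0                                      # start from the erased state
    while v < lo:                                # incremental step programming
        v += 0.1 + random.uniform(-0.02, 0.02)   # each pulse is slightly noisy
    return v                                     # must land inside [lo, hi];
                                                 # narrower windows than SLC
                                                 # -> slower writes, more ECC

print(program_cell(0b01))
```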

Dial-up modems try to use 128 of the 256 dynamic levels available from digitally encoded POTS lines. When you make a land-line call, your voice gets digitally encoded (mu-law or A-law, I think) into 8-bit samples at 8 kHz, and the voice you hear from the other end has been converted back to analog. Thus, the theoretical maximum bandwidth is 64000 bits/sec. The levels aren't linearly spaced, so modem encodings use only 128 of them, hence the 56 kbps modems. Unfortunately, even that 56K is not achievable, in part because you can't rely on good line fidelity. Thus, encoding methods like trellis modulation are used, which combine the modulation with forward error correcting codes so as to get a clean transmission at a lower bit rate.
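The arithmetic behind those figures is easy to check (standard PCM telephony numbers, not taken from the article):

```python
import math

# Standard PCM telephony figures.
sample_rate = 8000        # samples per second on a digital POTS trunk
bits_per_sample = 8       # mu-law / A-law companded samples
print(sample_rate * bits_per_sample)             # 64000 bit/s ceiling

usable_levels = 128                              # only half of the 256 codes
print(sample_rate * math.log2(usable_levels))    # 56000.0 bit/s -> "56K"
```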

Also, consider what it would take in terms of circuitry to handle thousands of voltage levels in "multi-level digital logic". CMOS transistors aren't linear when operating in saturation, so you'd probably need to throw a lot of extra transistors at the circuitry. You're probably much better off just going with binary: the circuits would be smaller, faster, more reliable, and use less energy.

Reply Score: 5

Skynet!
by Cody Evans on Thu 19th Aug 2010 14:38 UTC
Cody Evans
Member since:
2009-08-14

Did anyone else think of Skynet while reading this?

Lyric has been working on its technology in stealth mode since 2006, partly with funding from the U.S. Defense Advanced Research Projects Agency.


Edited 2010-08-19 14:44 UTC

Reply Score: 1

So New It's Old
by Brendan on Thu 19th Aug 2010 15:05 UTC
Brendan
Member since:
2005-11-16

Hi,

We've been there, we've done that, we realised it sucked, we switched to digital. Of course that was about 50 years ago and suckers with money will fall for anything...

http://en.wikipedia.org/wiki/Analog_computer

- Brendan

Reply Score: 3

RE: So New It's Old
by _txf_ on Thu 19th Aug 2010 19:05 UTC in reply to "So New It's Old"
_txf_ Member since:
2008-03-17

Certainly, that was then...

But now we have much better fabrication methods and signal processing that can control noise. Either way, this chip is designed for specialized purposes, replacing use of the FPU, which requires far more power to perform the same function as this chip.

It is like saying that putting the GPU back in with the CPU is so new it's old.

Reply Score: 2

RE[2]: So New It's Old
by ndrw on Fri 20th Aug 2010 11:21 UTC in reply to "RE: So New It's Old"
ndrw Member since:
2009-06-30

Yes, manufacturing processes are improving. But signal/(noise+variability) is decreasing even faster. That's one of the main reasons why supply voltages no longer scale down proportionally with the process node.

The result is that the gap between analog and digital circuits keeps widening. Analog circuits sometimes need only a few transistors to implement a function, but their size and power consumption are often larger than those of a digital block implementing an equivalent algorithm.

Analog circuits are actually very common in modern ICs, but they are used either when they have to be (e.g. for interfacing to high-speed analog signals, or in A/D and D/A converters) or as a very specific optimization technique (high-speed logic, memory readout, etc.) that sacrifices other performance figures, usability, process scalability and so on.

I'm a bit skeptical about the chip in question. Not that they can't do it; there's just not enough information about their technique to evaluate its performance characteristics, robustness, yield, scalability and so on.

Besides, if there really is such a demand for statistical processors, where are all the dedicated digital ASICs fulfilling this niche? Why wait several years for a 10000x speed-up from a potentially unreliable analog processor if you could get a 1000x faster dedicated digital chip today for a fraction of the cost?

Reply Score: 1

oh ho
by ARUmar on Thu 19th Aug 2010 15:44 UTC
ARUmar
Member since:
2009-10-08

So they want to build a fuzzy logic processor in hardware. The application in terms of vision systems seems interesting, though writing code for it is going to be a pain.
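For anyone unfamiliar with the term, here is what fuzzy-logic-style computation looks like in a few lines; note that fuzzy membership is not the same thing as probability, so this only illustrates the comment's reading, not Lyric's chip, and the membership functions are invented:

```python
# Classic fuzzy logic illustration (Zadeh operators): AND = min, OR = max.
# The membership functions below are arbitrary examples.

def warm(temp_c: float) -> float:
    """Membership of 'warm': ramps from 0 at 15C to 1 at 25C."""
    return min(1.0, max(0.0, (temp_c - 15.0) / 10.0))

def humid(rh: float) -> float:
    """Membership of 'humid': ramps from 0 at 40% to 1 at 80% RH."""
    return min(1.0, max(0.0, (rh - 40.0) / 40.0))

temp, rh = 22.0, 70.0
muggy = min(warm(temp), humid(rh))        # fuzzy AND of the two memberships
print(f"degree to which it is 'muggy': {muggy:.2f}")   # min(0.70, 0.75) = 0.70
```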

Reply Score: 1

Folding @ Home?
by Mike.K. on Thu 19th Aug 2010 16:47 UTC
Mike.K.
Member since:
2010-01-04

While finances have stopped most of my crunching for now, my first thought was how well this would work for distributed computing, particularly something like protein folding simulation. Folding@Home has been one of the better projects for supporting multiple platforms and looking into new hardware (PS3 and GPU worked well). From my (admittedly limited) understanding, it seems protein folding and energy mapping would benefit, helping projects like F@H, POEM, Docking, and some of the World Community Grid projects. Even if this is only a niche-market processor, some niches provide a lot of scientific value.

Reply Score: 1

We should just ...
by Tuishimi on Thu 19th Aug 2010 16:54 UTC
Tuishimi
Member since:
2005-07-06

...We should just clone brain cells and create bio-chips...

Reply Score: 1

RE: We should just ...
by neticspace on Fri 20th Aug 2010 07:29 UTC in reply to "We should just ..."
neticspace Member since:
2009-06-09

Or a Mentat from Dune.

Reply Score: 2

Looks like a very, VERY old idea
by Neolander on Thu 19th Aug 2010 18:08 UTC
Neolander
Member since:
2010-03-08

If I remember correctly, I heard about something like this ages ago. It was called fuzzy logic or something like that...

Reply Score: 5

RE: Looks like a very, VERY old idea
by talaf on Fri 20th Aug 2010 11:17 UTC in reply to "Looks like a very, VERY old idea"
talaf Member since:
2008-11-19

It is indeed called that, and it's extensively used in expert systems, all kinds of decision systems, in AI, etc. Moving it to hardware may be a good idea, although, not unlike quantum computing, you'll have to rethink every algorithm in a "fuzzy" way, and it won't be usable or interesting for every task. Probably as a coprocessor of some kind.

Reply Score: 1

Comment by bnolsen
by bnolsen on Thu 19th Aug 2010 19:02 UTC
bnolsen
Member since:
2006-01-06

Part of the logic here is that silicon-based transistors can't shrink much further, one of the problems being more and more errors in the system. So the conclusion they drew is that if a deterministic system can't be guaranteed, you might as well assume the entire system can't be trusted and design from there.
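One classic way to "design from there" is redundancy with voting. A quick sketch (illustrative error rates only) of how triplicating a gate and taking a majority vote beats down a per-gate error probability p:

```python
# If one gate flips its output with probability p, three copies plus a
# majority vote fail only when at least two copies fail:
# 3*p^2*(1-p) + p^3 = 3p^2 - 2p^3. The p values below are illustrative.
def majority_error(p: float) -> float:
    return 3 * p**2 - 2 * p**3

for p in (0.1, 0.01, 0.001):
    print(f"single gate: {p:.3%}  ->  triple + vote: {majority_error(p):.5%}")
```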

Reply Score: 3

RE: Comment by bnolsen
by ndrw on Fri 20th Aug 2010 11:38 UTC in reply to "Comment by bnolsen"
ndrw Member since:
2009-06-30

That's an active field of research, and I agree that variability and plain reliability are a big problem. But it doesn't look like that's what the company is doing. They advertise a kind of specialized analog(?) statistical coprocessor (currently just a single cell) for use in statistical computing.

Reply Score: 1

Based on pulse stream arithmetic?
by Ruahine on Fri 20th Aug 2010 08:56 UTC
Ruahine
Member since:
2005-07-07

It doesn't really say how they're doing this. Some commenters seem to assume that it is based on analog systems, but I wonder whether it may be based on pulse stream arithmetic.
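For reference, pulse stream (stochastic) arithmetic encodes a value in [0, 1] as the density of 1s in a random bit stream, which makes multiplication a single AND gate. A toy sketch of the technique (just an illustration, not a claim about Lyric's implementation):

```python
import random

# Stochastic ("pulse stream") arithmetic: a probability is the fraction of
# 1s in a random bit stream; multiplying two values is a bitwise AND.
def bitstream(p: float, n: int = 100_000):
    return [random.random() < p for _ in range(n)]

a = bitstream(0.6)
b = bitstream(0.5)
product = [x and y for x, y in zip(a, b)]
print(sum(product) / len(product))   # ~0.30, i.e. 0.6 * 0.5
```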

Reply Score: 1

RE: Based on pulse stream arithmetic?
ndrw Member since:
2009-06-30

The company says that the cell function is derived from the characteristics of single devices. That implies analog.

If you look inside a modern CPU or memory chip, you'll find they often use non-rail-to-rail or "pulsed" logic signals. High-performance ALUs are generally implemented using dynamic circuits, which operate in a sort of "pulsed" way; memories use a reduced voltage swing to increase readout speed and lower power, etc. That's also a form of analog circuitry, although it's used in otherwise digital ICs. These techniques don't scale well, so general-purpose digital functions are standard-cell based.

Reply Score: 1

Money Sink
by xiaokj on Fri 20th Aug 2010 14:35 UTC
xiaokj
Member since:
2005-06-30

This must be some sort of scam, because we have been hearing similar concepts all the time, to no avail. What does the new attempt bring to the table that was not available before?

If you were wondering what other statistical approach I was thinking of, I am talking about quantum computing. It is essentially the same concept, just implemented differently.

Analogue computing went out for a reason -- precision and discreteness are important.

Reply Score: 1