“Computer engineers at North Carolina State University have developed hardware that allows programs to operate more efficiently by significantly boosting the speed at which the ‘cores’ on a computer chip communicate with each other.”
I don’t understand how it works, and I don’t understand why you would write an article about this.
As if they would explain it in detail… and then go tyres-burning to the patent office? I don’t think so.
Sounds like they added dual-port buffers as a mailbox between the cores.
Take one concept from 50 years of computer history, mix with another concept, publish.
Then go back to dreaming of Transputers.
I haven’t read the conference paper, so I’m guessing here. Huge grain of salt.
Take as a baseline a massive multicore processor with a network on chip (NoC), where the NoC is used exclusively for cache-coherence traffic. I won’t go into what that is.
Anyhow, they seem to be saying that they’ve added specialised processor instructions that use that NoC to send explicit messages.
So, instead of writing a message-passing API in a software library on top of the memory hierarchy (store the message in memory, then use locking, signals, etc. to notify the receiver), you use the same network (the wires and logic already on the chip) to send your message (probably in a very restrictive format) directly to the other core.
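To make the contrast concrete, here is a minimal sketch in C of the two paths as I’m imagining them. Everything about the hardware side is an assumption: msg.send / hw_send() stands in for whatever specialised instruction the paper may add and is not a real intrinsic; the software mailbox is just the usual shared-memory baseline.

/* Illustrative only: hw_send() is a made-up stand-in for a
 * hypothetical message-send instruction. */
#include <stdatomic.h>
#include <stdint.h>

/* Software baseline: a one-slot mailbox in shared memory.
 * The message travels through the cache hierarchy, and the
 * coherence protocol does the actual core-to-core transfer. */
struct mailbox {
    _Atomic int full;     /* 0 = empty, 1 = message waiting */
    uint64_t    payload;  /* the message itself             */
};

static void sw_send(struct mailbox *mb, uint64_t msg)
{
    while (atomic_load_explicit(&mb->full, memory_order_acquire))
        ;                 /* spin until the receiver drains the slot */
    mb->payload = msg;
    atomic_store_explicit(&mb->full, 1, memory_order_release);
}

static uint64_t sw_recv(struct mailbox *mb)
{
    while (!atomic_load_explicit(&mb->full, memory_order_acquire))
        ;                 /* spin until a message arrives */
    uint64_t msg = mb->payload;
    atomic_store_explicit(&mb->full, 0, memory_order_release);
    return msg;
}

/* Hypothetical hardware path: one instruction pushes a small,
 * fixed-size message over the existing NoC straight into another
 * core's receive buffer, bypassing the memory hierarchy. */
static inline void hw_send(int dest_core, uint64_t msg)
{
    /* e.g. asm volatile("msg.send %0, %1" :: "r"(dest_core), "r"(msg)); */
    (void)dest_core; (void)msg;  /* placeholder only */
}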
This, of course, has the drawback of requiring that the recipient process be active at the time the message is sent. If there are more threads than cores and the recipient is not currently scheduled, you’ll have to fall back to another mechanism. I don’t know if/how they’re accounting for that; a sketch of one possible fallback follows.
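Purely as a guess at how that fallback could look (reusing sw_send() and hw_send() from the sketch above): the sender asks the OS or runtime whether the recipient thread is currently running on a core and, if not, drops back to the shared-memory mailbox. is_scheduled_on_core() is entirely made up.

/* Hypothetical fallback; is_scheduled_on_core() is an assumed
 * OS/runtime query returning the core id, or -1 if not running. */
extern int is_scheduled_on_core(int thread_id);

static void send_msg(int dest_thread, struct mailbox *mb, uint64_t msg)
{
    int core = is_scheduled_on_core(dest_thread);
    if (core >= 0)
        hw_send(core, msg);   /* fast path: direct NoC message        */
    else
        sw_send(mb, msg);     /* slow path: memory-hierarchy mailbox  */
}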