Linked by Flatland_Spider on Wed 8th Oct 2008 12:41 UTC
AMD finally fleshed out the "Asset Smart" strategy it has been talking about since, at least, last December. The result: AMD is now fabless.
Thread beginning with comment 332983

Savain is a well-known internet troll who routinely antagonizes the Erlang people and spams links to his blog everywhere. He also attacks well-established physics theories based on nothing more than the Bible, personal incredulity, and other such nonsense. To top it all off, he routinely calls Babbage and Turing idiots (not to mention everyone else). The real kicker is that he will register multiple accounts to reply to himself in praise.

His COSA idea is just a finite state machine. I don't see anything revolutionary about it. The problem with FSMs is that they don't scale well. The number of states and transitions between states quickly grows out of control to the point that nobody can understand it. The reason why software is modeled the way it is now is because PEOPLE can understand it.
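A toy sketch of the scaling problem (my own illustration, nothing from COSA itself): if a system is made of independent two-state components, a single flat FSM covering all of them must enumerate every combination, so the state count doubles with each component added.

```python
from itertools import product

# Each component is a trivial two-state machine: 'off' <-> 'on'.
component_states = ('off', 'on')

def flat_states(n_components):
    """Every state a single flat FSM needs to cover n independent components."""
    return list(product(component_states, repeat=n_components))

print(len(flat_states(3)))   # 8 states for just 3 components
print(len(flat_states(10)))  # 1024 -- doubling with every component added
```

The transition count grows even faster, which is why a flat FSM for any real-sized system quickly stops being something a person can read.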


transputer_guy:

I don't condone trolls either (especially if religion, strange physics, or rudeness is in play). It's a shame that anyone would call Babbage, Turing, or anyone else an idiot; people make the best of what is available to them, and very few can predict what can be done with technology that doesn't yet exist. It is also a shame when the technology is right there and goes unused because the current market is almost set in concrete.

I looked briefly at his blog and some of the comments there. As I said before, the cycle simulation is exactly the way we have designed simpler synchronous digital chips for decades. We often start with Fortran, C, or Matlab code, usually for DSP problems, expressed sequentially; we parallelize it and end up with hardware that is essentially an enormous group of FSMs. Can software folks do this? I don't see why not, but it is only suitable for problems that look like they could be built in hardware. We have more tools today that do a lot of the grunt work, so it's more about specifying what we want the chip to do and letting the tool figure out the high- and low-level architecture, plus the layout. Many of these tools are graphical-entry too. I don't see why some of them couldn't be reengineered to be useful to those working with parallel processes, placement, scheduling, and so on.
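A minimal sketch of what I mean by cycle simulation (the names and blocks are made up for illustration, not from any real tool): every block computes its next state from a snapshot of the current cycle, and then all blocks commit together, mimicking a clock edge in synchronous hardware.

```python
# Two-phase synchronous update: evaluate every block against the
# current-cycle snapshot, then commit all next states at once.
# This mimics a clock edge in synchronous hardware.

def clock_cycle(state, blocks):
    # All blocks read the same frozen snapshot, so evaluation
    # order doesn't matter -- they update "in parallel".
    return {name: fn(state) for name, fn in blocks.items()}

# A 3-stage shift register built from three tiny FSM-like blocks.
blocks = {
    'input': lambda st: st['input'],  # hold the input constant
    's0': lambda st: st['input'],
    's1': lambda st: st['s0'],
    's2': lambda st: st['s1'],
}

state = {'input': 1, 's0': 0, 's1': 0, 's2': 0}
for cycle in range(3):
    state = clock_cycle(state, blocks)
print(state['s2'])  # after 3 cycles the 1 has propagated to s2
```

The key property is the two-phase update: because each block sees only the previous cycle's values, the simulation behaves the same regardless of evaluation order, which is exactly what lets hardware blocks run concurrently.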

Now if you consider FPGAs, you have the possibility of moving parallel software code more fluidly into suitable HDLs and timesharing synthesized blocks in FPGA fabric. Combine that with the Opteron HT bus and you have some acceleration possibilities for deeper pockets; it's been done a few times. Some of the Occam people ended up in this space trying to convince software people that they could design hardware with Handel-C and the like, but it is a struggle.

On your point about software modeling: when we understand how software works, we usually do so with an idealized, simplified processor model, ignoring how the processor really does all its work. The same is true of hardware designs: we understand them at various abstraction levels, and we have many transformational verification tools that mostly work when we constrain the design style. We usually have a software model whose inputs, outputs, and internals we compare against those of the hardware simulation, often one to one, bit by bit. Clearly, then, some software could be written as concurrent processes much like larger hardware blocks; it's been done before and will be done again, with or without hardware support. The FSM level of abstraction is really only useful for the hardware guys, though.
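A sketch of the bit-by-bit comparison I mean (the two functions here are hypothetical stand-ins, not any real flow): the same stimulus goes through the golden software model and the "hardware" version, and any per-cycle, per-bit mismatch is flagged immediately.

```python
# Golden-model checking: both models consume the same stimulus,
# and every output is diffed bit for bit, cycle by cycle.

def golden_model(x):
    return (x * 3) & 0xFF         # reference behaviour, 8-bit wrap

def hardware_sim(x):
    return ((x << 1) + x) & 0xFF  # "hardware" version: shift-and-add

for cycle, stimulus in enumerate([0, 1, 42, 200, 255]):
    expected = golden_model(stimulus)
    actual = hardware_sim(stimulus)
    assert expected == actual, (
        f"cycle {cycle}: mismatch {expected:08b} vs {actual:08b}")
print("all cycles match, bit for bit")
```

In a real flow the "hardware" side would be an HDL simulation trace rather than a Python function, but the discipline is the same: one reference model, one implementation, and an exhaustive or directed diff between them.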
