Linked by Flatland_Spider on Wed 8th Oct 2008 12:41 UTC
AMD finally fleshed out the "Asset Smart" strategy it has been talking about since at least last December. The result: AMD is now fabless.
Good Move But Not Good Enough
by Mapou on Wed 8th Oct 2008 17:19 UTC
RE: Good Move But Not Good Enough
by rajj on Wed 8th Oct 2008 18:22 UTC in reply to "Good Move But Not Good Enough"
rajj Member since:
2005-07-06

Savain. Seriously. Stop spamming every thread/CPU related article on the net with your BS.

Reply Score: 3

TaterSalad Member since:
2005-07-06

I can't stop you but I can mod you down so others aren't subjected to your posts.

Reply Score: 4

B. Janssen Member since:
2006-10-11

Dude, clean up your vocabulary. Censorship is state-driven (as in nation-state) suppression of some matter or other. What you experience on sites like Slashdot or this one is called "peer review".

Your post can still be read, nobody is going to jail for reading your stuff, and you are not going to jail (or worse) for posting it. People are just tired of your broken-record routine, or of your forum spamming, or simply disagree with you. Marketplace of ideas, you know, freedom of speech et al.

In this day and age, when many governments invade our privacy and try to regulate and suppress our legitimate demands for information transparency, people like YOU, who cry censorship as soon as someone disagrees with them and exercises personal judgement, are one of the many obstacles we face in fending off actual censorship. You devalue the whole concept by extending it to petty personal disagreements. Shame on you.

Reply Score: 2

Vanger Member since:
2007-11-28

Nobody suppresses you.
You're not that important.
Modding down just means we don't like you.

Reply Score: 2

B. Janssen Member since:
2006-10-11

It's your privilege to disagree with me and I don't even feel suppressed or "censored" by you. All your cursing will not change that, you are just damaging your cause. But that's your privilege, too.

Reply Score: 2

StephenBeDoper Member since:
2005-07-06

Translation: "Don't tase me, bro!"

Reply Score: 2

rajj Member since:
2005-07-06

Do I look like a legislator? How and in what way am I encroaching upon your liberty? Screaming "free speech" at the top of your lungs doesn't save you from public ridicule.

Further, I didn't mod you down either.

Reply Score: 1

evangs Member since:
2005-07-07

I've said this before, and I'll say this again. Even though you fancy yourself some sort of 21st century Galileo Galilei, in reality you're a lot closer to Frank Chu.

So stop it, please.

Reply Score: 2

google_ninja Member since:
2006-02-05

Does it ever bother you that you have actually managed to develop an international reputation for being a crackpot?

Reply Score: 2

transputer_guy Member since:
2005-07-08

Okay, I was curious to see what it is that Savain is babbling on about that makes you and others call it BS.

Well, as a veteran hardware/software engineer, I would say most of the comments against him are pathetic. No wonder OSNews has gotten so utterly boring recently; nobody knows anything about the past and what will have to be reinvented again. On his blog he appears to have reinvented what we in the hardware (VLSI) industry have been doing for 4+ decades: expressing concurrency in a more hardware-like fashion using event-driven cycle simulation. I still have to fathom whether there is anything else there beyond what I am familiar with. If everyone started modding me off to <0, I'd probably get pissed too and walk away.

Two decades ago there was a parallel processor that was easy to write concurrent programs for and map onto any number of available communicating processors. Typically 1-1000 cores were used in various apps with little change to the code, but those were 4KB/chip days and the apps were very much DSP-like. The hardware is long gone, but the parallel model still works in CSP-based languages that can run on x86; I'm not sure how well it exploits multiple x86 cores, though. The model is very similar to hardware design languages: model processes as communicating hardware objects. I could say APL, Occam, Ada, various CSP languages, Verilog, VHDL, Matlab, even Haskell, etc. all fall into a hardware-software continuum. All of those have been used to express hardware designs, which are inherently parallel.
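For readers who haven't met the CSP style described above: here is a minimal sketch in Python rather than Occam (an illustration only; queues stand in for channels, and the producer/consumer names are invented for the example):

```python
import threading
import queue

def producer(out_ch):
    # Send a stream of values down the channel, then a sentinel.
    for i in range(5):
        out_ch.put(i)
    out_ch.put(None)

def consumer(in_ch, results):
    # Receive until the sentinel. The two processes run concurrently
    # and synchronize ONLY through the channel, as in Occam/CSP.
    while True:
        v = in_ch.get()
        if v is None:
            break
        results.append(v * v)

channel = queue.Queue()
results = []
t1 = threading.Thread(target=producer, args=(channel,))
t2 = threading.Thread(target=consumer, args=(channel, results))
t1.start(); t2.start()
t1.join(); t2.join()
print(results)  # [0, 1, 4, 9, 16]
```

The point of the model is that neither process touches the other's state; all interaction flows through the channel, which is also why the same program maps onto one core or many.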

He is right about one thing, though: today's x86 really does suck in so many ways. It is very fast at some things like video codecs (extreme cache locality), but in practice it is many orders of magnitude slower when dealing with highly unordered memory requests. It all boils down to that pesky Memory Wall: truly random accesses are now thousands of times slower than the aggregate datapaths. Going to a 64-bit address space and having many cores only worsens this. What is the point of having 4 or more cores when even the first one is underutilized due to this wall?

One can remedy this somewhat by starting with a memory system using a low-latency RLDRAM from Micron that is much closer in performance to the processor cycle speed. The penalty is that at least 40 threads must be used per processor to hide the remaining memory latency, in effect trading a giant Memory Wall for a modest Thread Wall, and the memory costs more too. On this kind of processor, I would have no trouble partitioning the graphics compute-intensive parts of my apps onto large numbers of threads. I really have no idea how to exploit today's hardware anymore. Even moving data around in memory is highly unpredictable; longer blocks take far more cycles per word. In the MVC type of app, only the View part is usually compute-intensive, but it is also easy to tile.

So what thesis do you propose to make parallel computing run well on the next AMD/Intel masterpiece? Enlighten me please.

As for AMD going fabless, that is sad, the end of an era. As Jerry Sanders used to say so often, real men have fabs. I did work on one of AMD's parts a long time ago.

Reply Score: 2

rajj Member since:
2005-07-06

Savain is a well-known internet troll who routinely antagonizes the Erlang people and spams links to his blog everywhere. He also attacks well-established physics theories based purely upon evidence from the bible, personal inability to believe, and other such nonsense. To top it all off, he basically calls Babbage and Turing idiots routinely (not to mention everyone else). The real kicker is that he will register multiple accounts to reply to himself in praise.

His COSA idea is just a finite state machine. I don't see anything revolutionary about it. The problem with FSMs is that they don't scale well. The number of states and transitions between states quickly grows out of control to the point that nobody can understand it. The reason why software is modeled the way it is now is because PEOPLE can understand it.
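To make the "just a finite state machine" point concrete, here is a minimal table-driven FSM sketch in Python (the states and events are invented for illustration; this is not COSA itself):

```python
# Transition table: (current state, event) -> next state.
# Every behavior must be enumerated here explicitly.
transitions = {
    ("idle",    "start"): "running",
    ("running", "pause"): "paused",
    ("paused",  "start"): "running",
    ("running", "stop"):  "idle",
    ("paused",  "stop"):  "idle",
}

def run_fsm(events, state="idle"):
    # Feed a sequence of events through the table and
    # return the final state.
    for ev in events:
        state = transitions[(state, ev)]
    return state

print(run_fsm(["start", "pause", "start", "stop"]))  # idle
```

The scaling problem shows up when machines compose: two independent machines with n and m states form a product machine with n*m states, so the table above grows multiplicatively, which is exactly why large systems modeled this way become impossible for people to follow.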

Reply Score: 5

transputer_guy Member since:
2005-07-08

I don't condone trolls either (especially if religion, strange physics, or rudeness is in play). It's a shame that anyone would call Babbage, Turing, or anyone else an idiot; people make the best of what is available to them, and very few can predict what can be done with technology that doesn't yet exist. It is also a shame when the technology is right there and goes unused because the current market is almost set in concrete.

I looked briefly at his blog and some of the comments there. As I said before, the cycle simulation is exactly the way we have designed simpler synchronous digital chips for decades. We often start with Fortran, C, or Matlab code, usually for DSP problems, that is expressed sequentially; we parallelize it and end up with hardware that is essentially an enormous group of FSMs. Can software folks do this? I don't see why not for some problems, but it is only suitable for problems that look like they could be made in hardware. We have more tools today that do a lot of the grunt work, so it's more about specifying what we want the chip to do, and the tools figure out the high- and low-level architecture, plus the layout. Many of these tools are graphical-entry too. I don't see why some of these tools couldn't be re-engineered to be useful to those working with parallel processes, placement, scheduling, and so on.

Now if you consider FPGAs, you have the possibility of moving parallel software code more fluidly into suitable HDLs and timesharing synthesized blocks in FPGA fabric. Combine that with the Opteron HT bus and you have maybe some acceleration possibilities for deeper pockets; it's been done a few times. Some of the Occam people ended up in this space trying to convince software people that they could design hardware with HandelC etc., but it is a struggle.

On your point about software modeling: when we understand how software works, we usually do so in an idealized, simplified processor model, ignoring how the processor really does all its work. The same is true of hardware designs; we understand them at various abstraction levels, and we have many transformational verification tools that mostly work when we constrain the design style. We usually have a software model to compare inputs, outputs, and internals against those of the hardware simulation, often one to one, bit by bit. Clearly then, some software could be written as concurrent processes much like larger hardware blocks; it's been done before and will be done again, with or without hardware support. The FSM level of abstraction is really only useful for the hardware guys, though.

Reply Score: 2

Sophotect Member since:
2006-04-26

Sort of a Transputer farm on a chip:

http://www.intellasys.net

Though it has other targets than the usual computing.

Btw, I have a SATA->ATAPI->USB controller in an external disk which is somehow related to one of their former chips. It has never let me down so far, FORTH chugging along at its best :-)

Reply Score: 1

transputer_guy Member since:
2005-07-08

Yes, there are at least a dozen of these sorts of chips out there that typically include at least 8 and up to several hundred simple cores, often using a barrel-wheel MTA design. I've lost track of most of them, but they all superficially look like transputer farms. They aren't, of course, because they don't include support for communicating processes or the process scheduler in hardware; they generally use some other scheme for mapping processes onto sites. Many ex-Transputer people have gone into these projects, as well as Intel/AMD, so you see familiar ideas in new clothes. Many of these chips get used in networking gear. Atiq Raza of RMI, formerly an Athlon architect, has a MIPS-based multicore which does use RLDRAM.

If you take the comparison to an extreme, some of the jumbo FPGA chips can look like a multicore chip too, each core centered around some BlockRAM and a local DSP datapath cut into silicon. If you hook it up to DDR DRAM you end up right back at a memory wall again: not enough I/O pins to feed all the cores.

Reply Score: 2

RE: Good Move But Not Good Enough
by helf on Wed 8th Oct 2008 18:24 UTC in reply to "Good Move But Not Good Enough"
helf Member since:
2005-07-06

YES! AMD should quickly come out with BREAKTHROUGH technology from their ASS to LEAP FROG over their enemies!

They should figure out how to efficiently spread out programs over multiple cores that aren't designed to efficiently scale to multiple cores! ITS SO EASY!

Edited 2008-10-08 18:25 UTC

Reply Score: 6

sbergman27 Member since:
2005-07-24

YES! AMD should quickly come out with BREAKTHROUGH technology from their ASS to LEAP FROG over their enemies!

Well... he does bring up a good point. Both Intel and AMD have hit a barrier to giving us *faster* chips, and have turned to multicore and heavy PR and marketing to make people feel they should continue upgrading to newer, faster computers, while dumping the problem on programmers in a move one might go so far as to call a cop-out. I don't think the reality of this state of affairs gets enough press. And I am very skeptical that desktops will ever be parallelized to the extent that most users will benefit from the massive number of cores soon to come our way. In fact, I suspect we will suffer, as multithreaded code is going to be buggier and come with additional, hard-to-track-down, race-condition-related issues. I'll stop short of saying that Intel and AMD are leading us straight to hell; "down the garden path" may be a more appropriate phrase. It is critical to both companies that demand for new processors continues, whether the consumer actually benefits from it... or not.
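The race conditions worried about here are the classic unsynchronized read-modify-write. A minimal Python sketch of the hazard and the usual fix (the worker/counter names are purely illustrative):

```python
import threading

counter = 0
lock = threading.Lock()

def worker(n):
    global counter
    for _ in range(n):
        # counter += 1 is really three steps: read, add, write back.
        # Without the lock, two threads can read the same old value
        # and one increment is silently lost -- the classic race,
        # which only bites intermittently and is hard to track down.
        with lock:
            counter += 1

threads = [threading.Thread(target=worker, args=(10_000,)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(counter)  # 40000 -- correct only because of the lock
```

Delete the `with lock:` line and the result can come up short on some runs but not others, which is what makes these bugs so expensive.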

Update: BOINC projects will likely be big winners, though. ;-)

Edited 2008-10-08 19:30 UTC

Reply Score: 4

wanker90210 Member since:
2007-10-26

They are not standing completely still. I personally have high hopes for Intel Threading Building Blocks, which seems to be a sober move in the right direction.

This situation is a bit like when everybody complained about going from 8-bit encodings to Unicode some 10 years ago. The bulk of the mess went into the VMs of languages like C# and Java, and now it's not really a problem (although having wrapper macros/functions and conversions to wchar_t internally still makes it painful in C/C++; I think this is one of the main reasons C++ has lost to Java/C#).

I think that as languages mature around threading, and third-party tools like TBB get stable and accepted, threading will come somewhat naturally. The big complication with threading, I think, will arrive with NUMA, which I guess is the next logical step in 5-10 years. I hope TBB and the like will be able to evolve naturally to cope with NUMA instead of requiring a rewrite from scratch.
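TBB itself is a C++ library, but the task-parallel style it encourages can be sketched with Python's standard thread pool (an analogy only, not TBB's API; the work function and chunk size are arbitrary choices for the example):

```python
from concurrent.futures import ThreadPoolExecutor

def work(chunk):
    # A self-contained task: no shared mutable state, so chunks
    # can run on however many workers the pool provides.
    return sum(x * x for x in chunk)

data = list(range(1000))
# Split the input into independent chunks; the runtime, not the
# programmer, decides which worker executes which chunk.
chunks = [data[i:i + 250] for i in range(0, 1000, 250)]

with ThreadPoolExecutor(max_workers=4) as pool:
    partials = list(pool.map(work, chunks))

total = sum(partials)
print(total)  # 332833500, same answer as the sequential version
```

The design point is that the code expresses *tasks* rather than threads: the same program runs unchanged on 1 or 16 workers, which is the kind of maturity the post hopes will carry over to NUMA.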

Reply Score: 1

turrini Member since:
2006-10-31

"C++ have lost to C#/Java."

Sorry, but this is *totally* FUD.

Reply Score: 2

wanker90210 Member since:
2007-10-26

My hat's off to your splendid reasoning and motivation behind your argument.

Apart from legacy stuff and low-level things that need to be written in C/C++, how many use C/C++ nowadays? I'm not a big fan of massive VMs making my dual-core, 2GB RAM computer really slow, but my dislike for them does not make them go away.

Reply Score: 2

turrini Member since:
2006-10-31

"Linux, Windows, Mac OS X and all operating systems are legacy and nobody uses them because they are made in C/C++."

"All software done today in C/C++ is legacy"

Yeah, right. Nice Try.

A lot of software today is done using C/C++. I do C/C++ and it's far from legacy.

I've seen many attempts throughout computing history by many languages that ultimately failed.

The main reason that Java/C# are still on the market is because they are endorsed by big companies.

Take Sun and the Microsoft .NET team out of the game... who will survive? C/C++.

Edited 2008-10-09 11:44 UTC

Reply Score: 2

evangs Member since:
2005-07-07

Apart from legacy stuff and low-level things that need to be written in C/C++, how many use C/C++ nowadays?


Let's see, on my machine the software that I'm currently running or frequently run include :

1) Browser
2) IM client
3) Mail client
4) Office Suite
5) Text editor
6) Photo editor
7) RAW editor
8) Media player
9) Photo manager
10) Bittorrent client
11) Loads and loads of games
12) R/Matlab
13) IDE
14) Misc OS tools

None of them is written in Java or .NET. This is on Windows and on Mac OS X, which I use predominantly. While a lot of code is undoubtedly written in .NET and Java, it rarely leaves the company door (i.e., in-house apps). Nobody who writes client-side apps where the client *pays* for the software writes them in anything other than C++. Thus, if C++ is dead, when I die I wanna die like C++!

Reply Score: 3

helf Member since:
2005-07-06

Good point ;) At the moment, though, I'm perfectly happy letting my main programs fight over one or two cores and letting some distributed computing projects use up the idle cores.

Reply Score: 2

PlatformAgnostic Member since:
2006-01-02

That's probably a waste of power. It might be more efficient if those researchers purchased computing time on a proper supercomputer and did their calculations somewhere thought has been put into calculations per joule, or at least into efficient ways of getting the energy needed to do the computations.

Reply Score: 2

sbergman27 Member since:
2005-07-24

That's probably a waste of power. It might be more efficient if those researchers purchased...

Purchasing such compute time is far easier said than done (most BOINC projects could not even remotely afford it). Plus, very arguably, some projects hold the promise of benefits that outweigh any mflops/sec/watt differences; Climateprediction.net and hydrogen.net come to mind. It requires a certain amount of power simply to light up the machine, so really only the difference between the power consumption of an idle core and a fully loaded one "counts" during those times of day the machine would be on anyway.

Quad cores like the Q6600, and to an even greater degree the 45nm Q9xxx series, running on 80 Plus certified power supplies and not loaded down with power-hungry graphics cards, etc., actually do quite well on a mflops/sec/watt basis compared to "Top 500" supercomputers.

http://www.top500.org/lists/2008/06/highlights/power

In my experience, something on the order of 120 mips/watt or so is possible on these machines.

I use a Q6600 and the very affordable Earthwatts 380, an 80 Plus rated PSU, and get about 80 mflops/sec/watt absolute, running Climateprediction.net, measured at the wall socket. I'm in front of the machine using it almost all day. The difference between idle and fully loaded is only 60 watts, so I am effectively getting 185 mflops/sec/watt during that time.
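For anyone who wants to check the figures: a back-of-envelope calculation taking the quoted numbers at face value (the total wall power is inferred from them, not stated in the post):

```python
# Figures quoted in the post above; everything else is derived.
absolute_rate = 80    # mflops/sec per watt of total wall power
marginal_rate = 185   # mflops/sec per watt of *extra* (idle -> loaded) power
extra_watts = 60      # measured idle vs. fully loaded difference

# Total compute rate implied by the marginal figure.
total_mflops = marginal_rate * extra_watts        # 11100 mflops/sec

# Wall power implied by the absolute figure.
implied_wall_watts = total_mflops / absolute_rate
print(implied_wall_watts)  # 138.75
```

An implied ~139 W at the wall for a loaded Q6600 box on an efficient PSU is plausible, so the two ratings are at least mutually consistent.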

During the winter, the power consumption can help offset heating costs, though my heat pump can do the job more efficiently. During the summer, the situation is reversed, of course.

Edited 2008-10-09 02:33 UTC

Reply Score: 3

PlatformAgnostic Member since:
2006-01-02

The hope is that after a point the extra cores will allow us to run algorithms that simply aren't feasible on a few cores. Games will probably be an early beneficiary, since they might use the multiple cores for better AIs or more complex scenes while still keeping up realtime performance.

Reply Score: 3

transputer_guy Member since:
2005-07-08

I highly doubt that. From what I know of AI apps, those are the very last thing that would run well on lots of cores with limited memory access.

AI needs access to very large data or knowledge sets that are far beyond the cache's window. On the other hand, DSP-like apps such as ripping, encoding, decoding, crypto, neural nets, speech, image processing, or any major math problem should run like a charm.

Reply Score: 2

Jimbo Member since:
2005-07-22

transputer_guy, you don't think that Nehalem will do much to defeat the memory wall?

Reply Score: 1

transputer_guy Member since:
2005-07-08

I will take a look at Nehalem, I also need to see where C++ is going in parallel support.

As long as the system uses conventional multiplexed-address DDRn DRAM, you can't solve the Memory Wall. The fundamental problem is the DRAM latency of >60ns plus memory management overhead, as well as its poor bank management. Latency only gets relatively longer unless you take a hard axe to it and then hide what's left over. The trusty old DRAM is almost 30 years old in its basic architecture; it goes way back to the 4027 4K chip, whose address bus was multiplexed to save pins when pins were expensive. From 1984 to 2004, the worst-case RAS cycle only halved, though the bit I/O rate increased greatly. It also went to synchronous design and to CMOS, but it is still recognizably the same old beast, only 250K times denser.

RLDRAM can reduce the 60ns+ down to 15ns or so, and it allows all 8 banks to fly concurrently, giving a sustained fully random in-bank issue rate of 2ns in an SRAM-like package. That's for 512Mb chips with L3-type performance. With that you could relegate many GBs of DRAM to disk caching or swap space and have the RLDRAM for main memory. Either 4- or 8-way instruction threading will hide the 8-clock latency, and multiple cores can use up the 8-way bank issue rate. One 1000-threaded MIPS is more valuable to me than several thousand bogus MIPS, and very predictable too. Of course, most of the time most of the threads will be idle, as in the classic single-thread design, but memory accesses for load and store can effectively be 2 opcode slots.
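The latency-hiding arithmetic can be sketched with a toy round-robin model (a deliberate simplification with assumed single-cycle issue and no other stalls, not a model of any real chip):

```python
def utilization(threads, latency_cycles):
    # Toy model: each hardware thread issues one memory access and
    # then waits latency_cycles for it to return. With round-robin
    # issue, latency_cycles threads are enough to keep one access
    # completing every cycle; fewer threads leave the pipe idle.
    return min(1.0, threads / latency_cycles)

print(utilization(1, 8))  # 0.125 -- a single thread is mostly stalled
print(utilization(8, 8))  # 1.0   -- 8-way threading hides the 8-clock latency
```

The same toy model also suggests why a conventional >60ns DRAM needs the "at least 40 threads" mentioned earlier: the longer the latency, the more threads it takes to paper over it.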

RLDRAM is currently used in networking gear for name translation tables.

Reply Score: 3

Sad...
by sergio on Wed 8th Oct 2008 17:47 UTC
sergio
Member since:
2005-07-06

IMHO it's the beginning of the end. Now AMD doesn't have complete control over their chip prices... they depend on third parties. It sounds pretty dangerous to me. ;)

Reply Score: 4

RE: Sad...
by kaiwai on Wed 8th Oct 2008 20:08 UTC in reply to "Sad..."
kaiwai Member since:
2005-07-06

IMHO it's the beginning of the end. Now AMD doesn't have complete control over their chip prices... they depend on third parties. It sounds pretty dangerous to me. ;)


The cost of owning a fab is very high, and if it's underutilised it's even more inefficient. The question AMD have to ask themselves is whether the perceived 'benefits' of owning the fabs really result in a competitive edge. I'm sure what the AMD people have done is ask themselves whether the same thing could be achieved by outsourcing production and, at the same time, not only save money but remain competitive.

Let's also remember that owning a fab is more than just 'owning' one; there is a lot of capital and investment tied up in it. If you outsource it, it should be cheaper for both parties, given that AMD would be freed from the huge capital outlays for re-investment and the fab company would benefit from focusing on getting as much business outside AMD as possible (and thus economies of scale kick in).

Reply Score: 0

good news for AMD
by JrezIN on Wed 8th Oct 2008 17:49 UTC
JrezIN
Member since:
2005-06-29

I guess it's good news for AMD.

AMD currently has good momentum against nVidia. Their good chipsets (mainly the 780G) are also responsible for gaining back some lost market in the chipset/CPU business, but the release of really new architectures will be the main force behind AMD's future.

Currently it looks like they're targeting a smooth transition to the new AM3 socket with DDR3 support, but people are probably more anxious about their Fusion technology... even more so after the good RV770 release.
It looks like the first Fusions will be based on the current Phenom architecture instead of a new one, but let's see how they'll do against the new Intel ones...

Personally, I hope they do well and the market gets nice, healthy competition. Everyone, including the customer, wins then. =]

Reply Score: 3

Hey AMD listen....
by Ralf. on Wed 8th Oct 2008 19:23 UTC
Ralf.
Member since:
2005-08-13

I have got a great idea! Split the company into two pieces: one piece a CPU maker, keeping the name AMD, and the other piece a graphics card manufacturer called ATI. Wouldn't that be great?

We're living in strange times: every company wants to rule the world, and to achieve this they buy up other companies where they can. And when they struggle, they split into pieces again. How stupid can mankind be....? ;)

Reply Score: 1

RE: Hey AMD listen....
by kaiwai on Wed 8th Oct 2008 20:11 UTC in reply to "Hey AMD listen...."
kaiwai Member since:
2005-07-06

I have got a great idea! Split the company into two pieces: one piece a CPU maker, keeping the name AMD, and the other piece a graphics card manufacturer called ATI. Wouldn't that be great?

We're living in strange times: every company wants to rule the world, and to achieve this they buy up other companies where they can. And when they struggle, they split into pieces again. How stupid can mankind be....? ;)


More difficult than it sounds. AMD need the graphics and chipset business to make themselves competitive with Intel. On the other hand, however, I do think they lack sound and network/wireless. If they had the money, IMHO they should buy out a small wireless/network chip company and make a complete end-to-end competitor to Centrino, and better yet, expand that standard kit out to desktops as well. A stable platform is what OEMs look for, and if AMD can provide it, it'll give them a competitive edge.

Reply Score: 1

Bad Bad News
by Anon on Thu 9th Oct 2008 07:59 UTC
Anon
Member since:
2006-01-02

Not only do they now lose all control over the fab production process, they're at the whim of third parties to deliver the volume/product when AMD needs it. Watch this space; things are going to go downhill fast.

This is a HUGE win for Intel and a massive problem for AMD.

They shouldn't have gone near ATI; then they wouldn't be in such a bad mess.

Reply Score: 3

RE: Bad Bad News
by Soulbender on Thu 9th Oct 2008 12:10 UTC in reply to "Bad Bad News"
Soulbender Member since:
2005-08-18

I'm sure you are speaking from your vast experience of running a chip maker and operating a fab...

Reply Score: 3