This is a good move by Ruiz but it is not good enough, in my opinion. AMD is buying precious time so as to delay the inevitable disaster that is waiting around the corner. Instead of taking advantage of the unprecedented opportunity afforded by the parallel programming crisis to leapfrog over its main competitor with breakthrough technology, AMD chose to play the me-too fiddle. That is sad. But it is not too late for Hector Ruiz and Dirk Meyer to turn the company around and become the leader of the processor industry for decades to come. And then they can buy their old fab back, if they feel manly.
http://rebelscience.blogspot.com/2008/07/how-to-solve-parallel-prog...
Savain. Seriously. Stop spamming every CPU-related thread and article on the net with your BS.
What are you, Mussolini? I am a free man in a free country. You can't stop me, man. If you don't like what I write, don't read it. After all, nobody is twisting your arm, right?
I can't stop you but I can mod you down so others aren't subjected to your posts.
Wow! I'm trembling like a leaf. You can censor my stuff until shit comes out your ears on OSnews or Slashdot or what have you, but these are not the only venues on the internet. Not every web site allows censorship, you know. Censorship is a chicken shit tactic used by bird brains and, in the end, it always loses. LOL.
Dude, clean up your vocabulary. Censorship is state-driven (as in nation-state) suppression of some matter or other. What you experience on sites like Slashdot or this one is called "peer review".
Your posts can still be read, nobody is going to jail for reading your stuff, and you are not going to jail (or worse) for posting it. People are just tired of your broken-record routine and forum spamming, or they simply disagree with you. Marketplace of ideas, you know, freedom of speech et al.
In this day and age, when many governments invade our privacy and try to regulate and suppress our legitimate demands for information transparency, people like YOU, who cry censorship as soon as someone disagrees with them and exercises personal judgement, are one of the many obstacles we face in fending off actual censorship. You devalue the whole concept by stretching it to cover petty personal disagreements. Shame on you.
Janssen, eat shit. Your demand that I clean up my vocabulary is an attempt at censorship. Any attempt by a group or an individual to suppress the free expression of another's opinion is censorship. And yes, peer review is censorship. It's primarily a way to suppress dissenting opinion and it is used to do just that. Shame on you and OSNews and Slashdot and Digg and Wikipedia and all the other social sites for encouraging a practice that is nothing but censorship.
Nobody suppresses you.
You're not that important.
Modding down just means we don't like you.
It's your privilege to disagree with me and I don't even feel suppressed or "censored" by you. All your cursing will not change that, you are just damaging your cause. But that's your privilege, too.
Translation: "Don't tase me, bro!"
Do I look like a legislator? How and in what way am I encroaching upon your liberty? Screaming "free speech" at the top of your lungs doesn't save you from public ridicule.
Further, I didn't mod you down either.
You wish you were a legislator, asshole. I am not losing any sleep over my stuff being ridiculed on OSNews by a bunch of cretins. IOW, you can kiss my ass, rajj. And same to your buddies. How about that? LOL.
I've said this before, and I'll say this again. Even though you fancy yourself some sort of 21st century Galileo Galilei, in reality you're a lot closer to Frank Chu.
So stop it, please.
Evangs, f--k you, whoever you are. I am not a pompous ass like some of the crackpot assholes in the scientific community whose asses you kiss. LOL. I just want to see some progress done in certain fields before I die. That's all. If you don't like what I write, don't read it. As simple as that. And f--k you one more time. And the mule you sleep with. How about that for free speech, eh? LOL.
Does it ever bother you that you have actually managed to develop an international reputation for being a crackpot?
Not at all. The only thing that bothers me is that my detractors are too chicken shit and gutless to identify themselves. I won't get the satisfaction of seeing them eat a mountain of crow when I'm vindicated. You're a prime example of an anonymous ass kisser. ahahaha... I would rather be a crackpot any day than a spineless ass kisser. ahahaha... AHAHAHA...
Okay, I was curious to see what it is that Savain is babbling on about that you and others call BS.
Well, as a veteran hardware/software engineer, I would say most of the comments against him are pathetic. No wonder OSNews has gotten so utterly boring recently; nobody knows anything about the past and what will have to be reinvented again. On his blog he appears to have reinvented what we in the hardware (VLSI) industry have been doing for 4+ decades: expressing concurrency in a more hardware-like fashion using event-driven cycle simulation. I still have to fathom whether there is anything there beyond what I am familiar with. If everyone started modding me off to <0, I'd probably get pissed too and walk away.
Two decades ago there was a parallel processor that was easy to write concurrent programs for and map onto any number of available communicating processors. Typically 1-1000 cores were used in various apps with little change to the code, but those were 4KB/chip days and the apps were very much DSP-like. The hardware is long gone, but the parallel model still works in CSP-based languages that can run on x86, though I'm not sure how well it exploits multiple x86 cores. The model is very similar to hardware design languages: model processes as communicating hardware objects. I could say APL, Occam, Ada, various CSP languages, Verilog, VHDL, Matlab, even Haskell, etc. all fall onto a hardware-software continuum. All of those have been used to express hardware designs, which are inherently parallel.
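To make the communicating-processes idea concrete, here's a minimal sketch in modern C++ (C++17): a three-stage pipeline where each stage runs as its own process and talks only through blocking channels. The Channel type here is my own toy illustration, not an API from any of the languages above.

```cpp
// Toy CSP-style pipeline: processes communicate only through channels.
// Channel<T> is a hypothetical helper for illustration, not a standard type.
#include <condition_variable>
#include <iostream>
#include <mutex>
#include <optional>
#include <queue>
#include <thread>
#include <utility>

template <typename T>
class Channel {                       // unbounded blocking channel
public:
    void send(T v) {
        { std::lock_guard<std::mutex> lk(m_); q_.push(std::move(v)); }
        cv_.notify_one();
    }
    void close() {
        { std::lock_guard<std::mutex> lk(m_); closed_ = true; }
        cv_.notify_all();
    }
    std::optional<T> recv() {         // empty optional means "channel closed"
        std::unique_lock<std::mutex> lk(m_);
        cv_.wait(lk, [&] { return !q_.empty() || closed_; });
        if (q_.empty()) return std::nullopt;
        T v = std::move(q_.front()); q_.pop();
        return v;
    }
private:
    std::mutex m_;
    std::condition_variable cv_;
    std::queue<T> q_;
    bool closed_ = false;
};

int main() {
    Channel<int> raw, squared;
    // Producer process: emits values.
    std::thread producer([&] {
        for (int i = 1; i <= 5; ++i) raw.send(i);
        raw.close();
    });
    // Transformer process: consumes raw values, emits squares.
    std::thread transformer([&] {
        while (auto v = raw.recv()) squared.send(*v * *v);
        squared.close();
    });
    // Consumer process: prints results in the main thread.
    while (auto v = squared.recv()) std::cout << *v << '\n';
    producer.join();
    transformer.join();
}
```

Each stage is sequential internally and shares no state; all coupling lives in the channels, which is exactly how Occam processes or Verilog blocks wired by signals are composed.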
He is right about one thing though: today's x86 really does suck in so many ways. It is very fast at some things, like video codecs (extreme cache locality), but in practice it is orders of magnitude slower when dealing with highly unordered memory requests. It all boils down to that pesky Memory Wall: truly random accesses are now thousands of times slower than the aggregate datapaths. Going to a 64-bit address space and having many cores only worsens this. What is the point of having 4 or more cores when even the 1st one is underutilized because of this wall?
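You can see the wall for yourself with a pointer-chasing microbenchmark. A rough sketch (array size and seed are arbitrary): the dependent random chase pays close to full DRAM latency on nearly every step once the working set exceeds the caches, while the sequential pass streams through the prefetcher.

```cpp
// Sketch: sequential streaming vs. dependent random pointer-chasing.
#include <algorithm>
#include <chrono>
#include <iostream>
#include <numeric>
#include <random>
#include <vector>

int main() {
    const std::size_t n = 1 << 24;           // ~16M entries, far beyond cache
    std::vector<std::size_t> next(n), perm(n);

    // Build one long random cycle so each load depends on the previous one.
    std::iota(perm.begin(), perm.end(), 0);
    std::shuffle(perm.begin(), perm.end(), std::mt19937_64{42});
    for (std::size_t i = 0; i < n; ++i)
        next[perm[i]] = perm[(i + 1) % n];

    auto time = [](auto f) {
        auto t0 = std::chrono::steady_clock::now();
        f();
        auto t1 = std::chrono::steady_clock::now();
        return std::chrono::duration<double>(t1 - t0).count();
    };

    std::size_t sink = 0;
    double seq = time([&] {                   // sequential: prefetch-friendly
        for (std::size_t i = 0; i < n; ++i) sink += next[i];
    });
    std::size_t p = 0;
    double chase = time([&] {                 // dependent random: latency-bound
        for (std::size_t i = 0; i < n; ++i) p = next[p];
    });
    std::cout << "sequential " << seq << "s, chase " << chase
              << "s (sink " << (sink + p) << ")\n";
}
```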
One can remedy this somewhat by starting with a memory system using a low-latency RLDRAM from Micron that is much closer in performance to the processor cycle speed. The penalty is that at least 40 threads must be used per processor to hide the remaining memory latency, in effect trading a giant Memory Wall for a modest Thread Wall, and the memory costs more too. On that kind of processor I would have no trouble partitioning the graphics compute-intensive parts of my apps onto large numbers of threads. I really have no idea how to exploit today's hardware anymore. Even moving data around in memory is highly unpredictable; longer blocks take far more cycles per word. In an MVC type of app, only the View part is usually compute intensive, but it is also easy to tile.
So what thesis do you propose to make parallel computing run well on the next AMD/Intel masterpiece? Enlighten me please.
As for AMD going fabless, that is sad, the end of an era; as Jerry Sanders used to say so often, real men have fabs. I did work on one of AMD's parts a long time ago.
If everyone started modding me off to <0, I'd probably get pissed too and walk away.
Censorship is censorship, especially when it is based on popularity. I see the same crap happen on Digg, Slashdot and all those other so-called "social" sites. If OSNews cannot find a way to keep a small cretinous gang of regulars from censoring people they disagree with, then f--k OSNews and the mule it rode in on. I don't need this shit.
Savain is a well-known internet troll who routinely antagonizes the Erlang people and spams links to his blog everywhere. He also attacks well-established physics theories based purely upon evidence from the Bible, a personal inability to believe, and other such nonsense. To top it all off, he routinely calls Babbage and Turing idiots (not to mention everyone else). The real kicker is that he will register multiple accounts to reply to himself in praise.
His COSA idea is just a finite state machine. I don't see anything revolutionary about it. The problem with FSMs is that they don't scale well. The number of states and transitions between states quickly grows out of control to the point that nobody can understand it. The reason why software is modeled the way it is now is because PEOPLE can understand it.
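For readers who haven't met one, here is what an explicit FSM looks like in code — a minimal, hypothetical C++ sketch (the states and events are made up). Even this toy needs a branch per (state, event) pair, which is the scaling problem in a nutshell.

```cpp
// Minimal explicit FSM, the style a COSA-like model boils down to.
// Real controllers with dozens of states and events grow a case per
// (state, event) pair and quickly become unreadable.
#include <iostream>

enum class State { Idle, Running, Paused };
enum class Event { Start, Pause, Resume, Stop };

State step(State s, Event e) {
    switch (s) {
        case State::Idle:    return e == Event::Start  ? State::Running : s;
        case State::Running: return e == Event::Pause  ? State::Paused
                                  : e == Event::Stop   ? State::Idle    : s;
        case State::Paused:  return e == Event::Resume ? State::Running
                                  : e == Event::Stop   ? State::Idle    : s;
    }
    return s;
}

int main() {
    State s = State::Idle;
    for (Event e : {Event::Start, Event::Pause, Event::Resume, Event::Stop})
        s = step(s, e);
    std::cout << "back to Idle: " << (s == State::Idle) << '\n';   // prints 1
}
```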
Yo, monkey. It's one thing to try to censor others but it's another to cowardly hide behind your anonymity to spread blatant lies. You're a gutless piece of shit, and you know it. Everybody knows who I am. Identify yourself, you spineless moron.
I don't condone trolls either (especially if religion, strange physics, or rudeness is in play). It's a shame that anyone would call Babbage, Turing or anyone else an idiot; people make the best of what is available to them, and very few can predict what can be done with technology that doesn't yet exist. It is also a shame when the technology is right there and goes unused because the current market is almost set in concrete.
I looked briefly at his blog and some of the comments there. As I said before, the cycle simulation is exactly the way we have designed simpler synchronous digital chips for decades. We often start with Fortran, C or Matlab code, usually for DSP problems, expressed sequentially; we parallelize it and end up with hardware that is essentially an enormous group of FSMs. Can software folks do this? I don't see why not for some problems, but it is only suitable for problems that look like they could be made in hardware. We have more tools today that do a lot of the grunt work, so it's more about specifying what we want the chip to do, and the tooling figures out the high- and low-level architecture, plus the layout. Many of these tools are graphical-entry too. I don't see why some of them couldn't be re-engineered to be useful to those working with parallel processes, placement, scheduling and so on.
Now if you consider FPGAs, you have the possibility of moving parallel software code more fluidly into suitable HDLs and timesharing synthesized blocks in FPGA fabric. Combine that with the Opteron HT bus and you have maybe some acceleration possibilities for deeper pockets; it's been done a few times. Some of the Occam people ended up in this space trying to convince software people that they could design hardware with Handel-C etc., but it is a struggle.
On your point about software modeling: when we understand how software works, we usually do so in an idealized, simplified processor model, ignoring how the processor really does all its work. The same is true of hardware designs; we understand them at various abstraction levels, we have many transformational verification tools, and they mostly work when we constrain the design style. We usually have a software model to compare inputs, outputs and internals against those of the hardware simulation, often one to one, bit by bit. Clearly then, some software could be written as concurrent processes much like larger hardware blocks; it's been done before and will be done again, with or without hardware support. The FSM level of abstraction is really only useful for the hardware guys, though.
Sort of a Transputerfarm on a Chip:
Though it has other targets than the usual computing.
Btw, I have a SATA->ATAPI->USB controller in an external disk which is somehow related to one of their former chips. It has never let me down so far, FORTH chugging along at its best :-)
Yes, there are at least a dozen of these sorts of chips out there; they typically include at least 8 and up to several hundred simple cores, often using a barrel-wheel MTA design. I've lost track of most of them, but they all superficially look like Transputer farms. They aren't, of course, because they don't include hardware support for communicating processes or the process scheduler; they generally use some other scheme for mapping processes onto sites. Many ex-Transputer people have gone into these projects, as well as Intel/AMD people, so you see familiar ideas in new clothes. Many of these chips get used in networking gear. Atiq Raza of RMI, formerly an Athlon architect, has a MIPS-based multicore which does use RLDRAM.
If you take the comparison to an extreme, some of the jumbo FPGA chips can look like a multicore chip too, each core centered around some BlockRAM and a local DSP datapath cut into silicon. If you hook it up to DDR DRAM you end up right back at a memory wall again: not enough I/O pins to feed all the cores.
YES! AMD should quickly come out with BREAKTHROUGH technology from their ASS to LEAP FROG over their enemies!
They should figure out how to efficiently spread out programs over multiple cores that aren't designed to efficiently scale to multiple cores! IT'S SO EASY!
They are not standing completely passive. I personally have good hopes for Intel Threading Building Blocks, which seems to be a sober move in the right direction.
This situation is a bit like when everybody was complaining about going from some 8-bit encoding to Unicode some 10 years ago. The bulk of the mess went into the VMs of languages like C# and Java, and now it's not really a problem (although wrapper macros/functions and internal conversions to wchar_t still make it painful in C/C++; I think this is one of the main reasons C++ has lost to Java/C#).
I think that as languages mature around threading, and third-party tools like TBB get stable and accepted, threading will come somewhat naturally. The big complication with threading, I think, will arrive with NUMA, which I guess is the next logical step in 5-10 years. I hope TBB and the like will be able to evolve naturally to cope with NUMA instead of requiring a rewrite from scratch.
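For what it's worth, this is roughly what incremental TBB adoption looks like — a minimal parallel_for sketch (assuming a C++11 compiler and a reasonably recent TBB; the data and loop body are made up):

```cpp
// Minimal Intel TBB sketch: parallel_for splits the range into chunks and
// schedules them across cores; the loop body only sees its own sub-range.
#include <tbb/blocked_range.h>
#include <tbb/parallel_for.h>
#include <cstddef>
#include <iostream>
#include <vector>

int main() {
    std::vector<float> data(1u << 20, 1.0f);
    tbb::parallel_for(
        tbb::blocked_range<std::size_t>(0, data.size()),
        [&](const tbb::blocked_range<std::size_t>& r) {
            for (std::size_t i = r.begin(); i != r.end(); ++i)
                data[i] = data[i] * 2.0f + 1.0f;  // any independent per-element work
        });
    std::cout << data[0] << '\n';                 // prints 3
}
```

The appeal is exactly the one above: you parallelize one hot loop at a time without restructuring the whole program, and the library's work-stealing scheduler worries about core counts.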
"C++ have lost to C#/Java."
Sorry, but this is *totally* FUD.
My hat's off to your splendid reasoning and motivation behind your argument.
Apart from legacy stuff and low-level things that need to be written in C/C++, how many use C/C++ nowadays? I'm not a big fan of massive VMs making my dual-core, 2GB-RAM computer really slow, but my dislike for them does not make them go away.
"Linux, Windows, Mac OS X and all operating systems are legacy and nobody uses them because they are made in C/C++."
"All software done today in C/C++ is legacy"
Yeah, right. Nice Try.
A lot of software today is done using C/C++. I do C/C++ and it's far from legacy.
I've seen many languages fail over the course of computing history.
The main reason Java/C# are still on the market is that they are endorsed by big companies.
Take Sun and the Microsoft .NET team out of the game... who will survive? C/C++.
Good point. At the moment, though, I'm perfectly happy letting my main programs fight over one or two cores and letting some distributed computing projects use up the idle cores.
That's probably a waste of power. It might be more efficient if those researchers purchased computing time on a proper supercomputer and did their calculations somewhere thought has been put into calculations per joule, or at least into efficient ways of getting the energy needed to do the computations.
The hope is that after a point the extra cores will allow us to run algorithms that simply aren't feasible on few cores. Games will probably be an early beneficiary, since they might use the multiple cores for better AIs or more complex scenes while still keeping up realtime performance.
I highly doubt that. From what I know of AI apps, those are the very last thing that would run well on lots of cores with limited memory access.
AI needs access to very large data or knowledge sets that are far beyond the cache's window. On the other hand, DSP-like apps such as ripping, encoding, decoding, crypto, neural nets, speech, image processing, or any major math problem should run like a charm.
transputer_guy, you don't think that Nehalem will do much to defeat the memory wall?
I will take a look at Nehalem, I also need to see where C++ is going in parallel support.
As long as the system uses conventional multiplexed-address DDRn DRAM, you can't solve the Memory Wall. The fundamental problem is the DRAM latency of >60ns plus memory-management overhead, as well as its poor bank management. Latency only gets relatively longer unless you take a hard axe to it and then hide what's left over. The trusty old DRAM is almost 30 years old in its basic architecture; it goes way back to the 4027 4K chip. Its address bus was multiplexed to save pins when pins were expensive. From 1984 to 2004 the worst-case RAS cycle time only halved, though the bit I/O rate increased greatly. It also went to synchronous design and to CMOS, but it is still recognizably the same old beast, only 250K times denser.
RLDRAM can reduce that 60ns+ down to 15ns or so, and it allows all 8 banks to fly concurrently, giving a sustained, fully random in-bank issue rate of 2ns in an SRAM-like package. That's for 512Mb chips with L3-type performance. With that you could relegate many GBs of DRAM to disk caching or swap space and keep the RLDRAM for main memory. Either 4- or 8-way instruction threading will hide the 8-clock latency, and multiple cores can use up the 8-way bank issue rate. One 1000-threaded MIPS is more valuable to me than several thousand bogus MIPS, and very predictable too. Of course, most of the time most of the threads will be idle, as in the classic single-thread design, but memory accesses for load and store can effectively be 2 opcode slots.
RLDRAM is currently used in networking gear for name translation tables.
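You can get a taste of the same latency-hiding trick in software on today's hardware by keeping several independent pointer chases in flight at once — the software analogue of hardware threading. A rough sketch (chain count, sizes and seed are arbitrary; it reuses the random-cycle construction from the earlier benchmark):

```cpp
// Sketch: hide DRAM latency by interleaving 8 independent pointer chases,
// the same idea as hardware multithreading per core: while one access
// waits, the others keep the memory system busy.
#include <algorithm>
#include <array>
#include <cstddef>
#include <iostream>
#include <numeric>
#include <random>
#include <vector>

// The chains share one big random cycle but start far apart, so each
// chain's loads are independent of the other chains' loads.
std::size_t chase_interleaved(const std::vector<std::size_t>& next,
                              const std::array<std::size_t, 8>& starts,
                              std::size_t steps) {
    std::array<std::size_t, 8> p = starts;
    for (std::size_t i = 0; i < steps; ++i)
        for (std::size_t k = 0; k < p.size(); ++k)
            p[k] = next[p[k]];        // 8 independent loads in flight per round
    std::size_t sink = 0;
    for (auto v : p) sink += v;       // keep the results live
    return sink;
}

int main() {
    const std::size_t n = 1 << 24;
    std::vector<std::size_t> next(n), perm(n);
    std::iota(perm.begin(), perm.end(), 0);
    std::shuffle(perm.begin(), perm.end(), std::mt19937_64{7});
    for (std::size_t i = 0; i < n; ++i)
        next[perm[i]] = perm[(i + 1) % n];
    std::array<std::size_t, 8> starts;
    for (std::size_t k = 0; k < starts.size(); ++k)
        starts[k] = perm[k * (n / 8)];        // entry points spaced along the cycle
    std::cout << chase_interleaved(next, starts, n / 8) << '\n';
}
```

Out-of-order cores can typically overlap a handful of outstanding misses this way, which is a pale software imitation of the 40-thread RLDRAM scheme but shows the principle.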
IMHO it's the beginning of the end. Now AMD doesn't have complete control over their chip prices... they depend on third parties. It sounds pretty dangerous to me.
I guess it's good news for AMD.
AMD currently has good momentum against nVidia. Their good chipsets (mainly the 780G) are also responsible for winning back some lost market share in the chipset/CPU business, but the release of really new architectures will be the main force behind AMD's future.
Currently it looks like they're targeting a smooth transition to the new AM3 socket with DDR3 support, but people are probably more anxious about their Fusion technology... even more so after the good RV770 release.
It looks like the first Fusions will be based on the current Phenom architecture instead of a new one, but let's see how they'll do against the new Intel ones...
Personally, I hope they do well and the market gets some nice, healthy competition. Everyone, including the customer, wins then. =]
I have got a great idea! Split the company in two pieces - one piece a CPU maker - use the name AMD - and the other piece a graphics card manufacturer, and call it ATI. Wouldn't that be great?
We're living in strange times - every company wants to rule the world, and to achieve this they buy up other companies as fast as they can. And when they struggle, they split into pieces again. How stupid can mankind be...?
Not only do they now lose all their control over the Fab production process, they're at the whim of third parties to pull the volume/product when AMD needs it. Watch this space, things are going to go downhill fast.
This is a HUGE win for Intel - a massive problem for AMD.
They shouldn't have gone near ATI; then they wouldn't have been in such a bad mess.
I'm sure you are speaking from your vast experience of running a chip maker and operating a fab...
I used Slackware on my Pentium II computer 5 years ago. At first it was too hard to understand and to administer the system, but day by day I came to like it: Slackware is a simple, light, "configure it yourself / do it yourself" distribution.
I learned so many things with Slackware, from the installation (BSD style) to editing the configuration files, plus so many interesting command-line tools and text-based applications.
To this day, Slackware is my choice.