Phil Hester, AMD’s chief technology officer, stopped by the Hot Chips conference here at Stanford University on Tuesday to talk a little more about Fusion, AMD’s plan to integrate a graphics processor and a PC processor onto the same chip. By the time the chip is ready, around 2009, Hester expects the explosion of video and 3D graphics on PCs to demand an affordable chip that still delivers strong graphics performance.
…and Intel’s decision to integrate a floating-point unit into it. The idea of integrating a math co-processor into a plain general-purpose CPU is nothing like integrating a graphics chip into a modern multipurpose processor.
Otherwise, it’s an interesting idea, especially for general, non-gaming systems that don’t need high-end graphics.
Having built a few of these non-gaming systems for people who only want to surf, email, and use office productivity suites, I can say that motherboards with integrated graphics chips are, without a doubt, the biggest single cost savings. Having the same option at the processor level should drive that cost down even further.
Pardon my ignorance, but wouldn’t a GPU integrated on the same die as the CPU, communicating with it at full bus speed, be faster than any PCI GPU?
That’s a good point, but you seem to be missing something. Gamers usually upgrade their GPU several times before they upgrade the CPU, and even though having the two integrated on the same die would result in faster data transfer between them, it would be very inflexible, and upgrades would end up costing much more in the long run.
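To put the raw-speed question in perspective, here are some very rough 2007-era bandwidth figures (ballpark numbers from memory, purely for illustration):

- PCIe 1.x x16 link to a discrete card: 16 lanes × 250 MB/s ≈ 4 GB/s in each direction.
- Dual-channel DDR2-800 system memory, which an on-die GPU would have to share with the CPU: 2 × 6.4 GB/s ≈ 12.8 GB/s total.
- Dedicated GDDR3 on a top-end discrete card (e.g. a GeForce 8800 GTX): on the order of 86 GB/s.

So yes, an on-die GPU cuts latency and avoids the PCIe hop, but a discrete card’s local memory bandwidth still dwarfs shared system memory, which is one more reason to expect Fusion to target the mainstream rather than the high end.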
If you look at the advances in game technology, the latest 3D accelerator is a must-have for big-budget games. Just look at when Doom 3 and Quake 4 came out. I remember one article jokingly claiming that a Cray was needed to run both games, and that was just for the general system specs, not the graphics chip specs.
On the other hand, integrating a physics processor (like the AGEIA PhysX chip) into either the CPU or the GPU die would be an instant sell for any gamer I know.
CPU+GPU integration in the same processor should go hand in hand with Compiz Fusion on GNOME/KDE, accelerating 3D desktop adoption.
Especially if AMD/ATI decide to get off their butts and give us decent drivers in a timely and efficient manner before or shortly after the products are released.
Or, better yet, release the specs, since
A.) they don’t seem to be shooting for the fastest GPU with this product, and
B.) this is DIFFERENT, so maybe it’ll lack a lot of the patent-laden old code that they cite as a reason for holding on to the specs so very tightly….
We can hope, right? When I started on my ancient Compaq with Win95, I NEVER saw computers becoming what they are now!
Unless, of course, they require proprietary drivers.
Intel has never required a special proprietary driver for the 80x87 coprocessor or the SSE extensions; the entire x87 instruction set is publicly documented.
Shouldn’t AMD do the same for its own processors?
A GPU can be used for other types of computation, as a vector coprocessor: not only for executing OpenGL calls, but also for scientific and engineering applications.
Look at
http://www.cs.sunysb.edu/~vislab/papers/GPUcluster_SC2004.pdf
http://www.eetasia.com/ART_8800367818_480100_e03ee92e.HTM
or
http://people.scs.fsu.edu/~blanco/gpusc/gpusc_project.htm
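For anyone who hasn’t seen GPGPU code, here is a minimal sketch of the idea in NVIDIA’s CUDA C, purely as an illustration of using the GPU as a vector coprocessor (the earlier papers above rely on programmable shaders instead, and an AMD/ATI part would presumably use its own stream-computing toolchain):

// Minimal GPGPU sketch: the GPU as a vector coprocessor.
// Illustration only -- assumes an NVIDIA card and the CUDA toolkit.
#include <stdio.h>
#include <stdlib.h>
#include <cuda_runtime.h>

// Each GPU thread adds one pair of elements -- a trivial vector-coprocessor job.
__global__ void vecAdd(const float *a, const float *b, float *c, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n)
        c[i] = a[i] + b[i];
}

int main(void)
{
    const int n = 1 << 20;              // one million elements
    size_t bytes = n * sizeof(float);

    float *h_a = (float *)malloc(bytes);
    float *h_b = (float *)malloc(bytes);
    float *h_c = (float *)malloc(bytes);
    for (int i = 0; i < n; ++i) { h_a[i] = (float)i; h_b[i] = 2.0f * i; }

    float *d_a, *d_b, *d_c;
    cudaMalloc((void **)&d_a, bytes);
    cudaMalloc((void **)&d_b, bytes);
    cudaMalloc((void **)&d_c, bytes);

    // On a discrete card these copies cross the PCIe bus.
    cudaMemcpy(d_a, h_a, bytes, cudaMemcpyHostToDevice);
    cudaMemcpy(d_b, h_b, bytes, cudaMemcpyHostToDevice);

    // Launch enough 256-thread blocks to cover all n elements.
    vecAdd<<<(n + 255) / 256, 256>>>(d_a, d_b, d_c, n);

    cudaMemcpy(h_c, d_c, bytes, cudaMemcpyDeviceToHost);
    printf("c[12345] = %f (expected %f)\n", h_c[12345], 3.0f * 12345);

    cudaFree(d_a); cudaFree(d_b); cudaFree(d_c);
    free(h_a); free(h_b); free(h_c);
    return 0;
}

The interesting part for Fusion is the cudaMemcpy step: on today’s discrete cards that copy crosses the PCIe bus, whereas a GPU sharing the CPU’s die and memory controller could, in principle, skip it entirely.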
OK, so I’m pulling that statistic out of my rear (as most statistics seem to be harvested from someone’s rear), but most people aren’t that concerned with squeezing the highest graphics speed out of today’s computers. As long as the drivers are stable, and they aren’t into heavy 3D games or CAD-type work, most users don’t push even the most mediocre video accelerator to the max with whatever they’re doing.
While it seems logical that such integration could produce faster graphics processing, since there is minimal signal delay between the main CPU and the GPU, physics conspires in a couple of ways to keep the top-performing solutions as discrete chips. Why? First, there’s the die space: top-end modern GPUs are often larger than the general-purpose CPUs available at the same time. Second, there’s the heat issue: both modern general-purpose CPUs and modern top-end GPUs generate a huge amount of waste heat in a small amount of space, so the thermal envelope is another major factor in what’s feasible to put in the same package.
“First, there’s the die space: top-end modern GPUs are often larger than the general-purpose CPUs available at the same time.”
That’s partly down to the manufacturing process: CPUs are usually built on a more advanced process and therefore take up less space. But of course you’re also right that nowadays GPUs tend to have many more transistors than CPUs.
“a huge amount of waste heat in a small amount of space, so the thermal envelope is another major factor in what’s feasible to put in the same package.”
You’re again forgetting the manufacturing process: a more advanced process helps produce less heat. And a single cooling solution for both GPU and CPU is generally a good idea, too, since it lets you use one more advanced and efficient cooler (e.g. an expensive heatpipe design) instead of two simpler, cheaper ones that add up to about the same cost.
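To put rough numbers on the thermal point (ballpark 2007 figures, purely illustrative): a high-end desktop CPU is rated at roughly 90 W and a top-end discrete GPU at roughly 150 W, so naively fusing the two gives about 90 W + 150 W ≈ 240 W in a single package, around double what a typical desktop socket and heatsink are designed to dissipate (somewhere in the 95 to 130 W range). That arithmetic alone suggests the first CPU+GPU parts will pair the CPU with a mainstream-class GPU rather than a flagship one.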
Wonder what this’ll mean for people who tend to go through two or three video cards per CPU upgrade?
It’ll be interesting to see the results nonetheless.
It’ll mean nothing to them. It means a lot to the guy in the office who may save $100 on each PC, or who wants to build a lower-power HTPC, etc., but who maybe needs the features or performance of a low-end card right now (or the guy whose kid wants to get games from the dollar store to hose his Windows install… ). Imagine not that AMD can replace a $300 video card with this move, but that it can replace a $100 card.
I was just thinking: AMD will have to decide between GDDR and plain old DDR(2?) RAM for the integrated part, and I’ve long thought it would be great to be able to add extra RAM to a video card. It would sure make my old 6800 last a little longer. But then people would upgrade their video cards less often, I guess (cynic alert).
So now we can take a product that worked great with Linux (AMD’s processors) and pair it with something that worked terribly with Linux (ATI’s graphics drivers). Just another reason to switch to Intel, for me.
chipsets — the gpu is only half the battle!
dear amd, please integrate the entire chipset, not just a gpu. hopefully you already know this, and have some idea how useless chipsets are.
the conventional role of the chipset has been to connect memory and i/o devices to the cpu. amd has already moved the memory controller onto the cpu die, and this news item says the highest-bandwidth output device in the computer is going to be integrated too. the remaining roles for a chipset are things like the usb controller, sound, ethernet, the pcie bus, or any other i/o commonly found in PC systems. chipsets also hook up the bios bootcode for the system.
now that the two highest-performance subsystems (memory and graphics) have been integrated onto the cpu, there is no need to keep offering a separate i/o chip: just integrate the remaining peripherals on die. x86 remains the only market where chipsets are still used, and it’s a heapload of complexity that serves no advantage, particularly for anyone in embedded space (where complexity comes at a tangible cost in board space).
so, put the sound controller, ethernet controller, pcie host, all those little things, into the cpu. it’s 2007, and x86 is the only architecture still hampered by being a processor that is completely useless without a high-bandwidth connection to a chipset whose only apparent purpose is to boot the cpu and host its peripherals. get rid of this relic, put everything on die, and bring x86 into embedded space. every other embedded cpu has a fairly complete i/o complement built in, while x86 is stuck in the past with massively over-engineered system complexity.
(i’ve also heard rumors that fusion is not going to be x86, but no matter the architecture, get rid of the chipset)
rektide