Linked by Thom Holwerda on Tue 1st Apr 2014 21:50 UTC
AMD

AMD claims that the microarchitectural improvements in Jaguar will yield 15% higher IPC and 10% higher clock frequency versus the previous-generation Bobcat. Given the comprehensive changes to the microarchitecture, shown in Figure 7, and the additional pipeline stages, this is a plausible claim. Jaguar is a much better fit than Bobcat for SoCs, given the shift to AVX compatibility, wider data paths, and the inclusive L2 cache, which simplifies system architecture considerably.

Some impressive in-depth reporting on the architecture that powers, among other things, the current generation of game consoles.
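
Those two headline numbers compound, by the way: assuming the IPC and frequency gains are independent and everything else is equal, the combined single-thread speedup over Bobcat works out to roughly 1.27x. A quick back-of-the-envelope check:

    # Back-of-the-envelope: AMD's claimed gains multiply, they don't add.
    ipc_gain = 1.15    # 15% higher IPC
    clock_gain = 1.10  # 10% higher clock frequency
    print(f"combined speedup: {ipc_gain * clock_gain:.3f}x")  # -> 1.265x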

RE[2]: Wow
by Ultimatebadass on Wed 2nd Apr 2014 10:02 UTC in reply to "RE: Wow"
Member since: 2006-01-08

"we are getting to a point of diminishing returns of just pushing more polygons anyway"

Yes, if all you're rendering is a single character, then you're right. But who says the extra polygons have to be used on a single object?

Reply Parent Score: 3

RE[3]: Wow
by Alfman on Wed 2nd Apr 2014 14:43 UTC in reply to "RE[2]: Wow"
Member since: 2011-01-28

Ultimatebadass,

"Yes, if all you're rendering is a single character, then you're right. But who says the extra polygons have to be used on a single object?"


Regardless of the number of objects, you'd still get diminishing returns. Increasing the polygon count results in smaller and smaller polygons that become ever less visually significant. To what end should this go? At the extreme, one polygon per pixel would be pointless. Also, polygons get relatively more expensive as their fixed per-polygon overhead is spread across fewer and fewer pixels.
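
To put rough numbers on it, here's a toy sketch of my own (assuming a 1080p frame where every triangle lands on screen and none overlap):

    # As triangle count grows at a fixed resolution, each triangle covers
    # fewer pixels, so each one contributes less and less visible detail
    # while its per-triangle setup cost stays the same.
    PIXELS = 1920 * 1080  # ~2.07M pixels at 1080p

    for tri_count in (10_000, 100_000, 1_000_000, 2_000_000):
        print(f"{tri_count:>9,} triangles -> "
              f"{PIXELS / tri_count:8.1f} px/triangle")

    # Output:
    #    10,000 triangles ->    207.4 px/triangle
    #   100,000 triangles ->     20.7 px/triangle
    # 1,000,000 triangles ->      2.1 px/triangle
    # 2,000,000 triangles ->      1.0 px/triangle  (one polygon per pixel)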


I believe we are reaching the point where we might as well be ray tracing individual pixels instead of worrying about polygons at all. The problem with this is that *everything* in our existing toolchain is based on polygons, from the hardware to the software to the editors that produce the content. Nevertheless, I think the futility of pushing ever more polygons will help provide momentum towards raytracing.
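
To illustrate the "one ray per pixel" idea, here's a minimal self-contained sketch (all the names are mine, not from any real engine): the scene is a single analytically described sphere, with no polygons anywhere in sight:

    import math

    def hit_sphere(origin, direction, center, radius):
        """Return distance t along the ray to the sphere, or None on a miss."""
        oc = [o - c for o, c in zip(origin, center)]
        a = sum(d * d for d in direction)
        b = 2.0 * sum(o * d for o, d in zip(oc, direction))
        c = sum(o * o for o in oc) - radius * radius
        disc = b * b - 4 * a * c
        if disc < 0:
            return None
        t = (-b - math.sqrt(disc)) / (2 * a)
        return t if t > 0 else None

    WIDTH, HEIGHT = 40, 20            # "pixels", printed as ASCII
    SPHERE = ((0.0, 0.0, -3.0), 1.0)  # center, radius

    for j in range(HEIGHT):
        row = ""
        for i in range(WIDTH):
            # Shoot a ray from the eye through this pixel's spot on an
            # image plane at z = -1 and shade by hit/miss.
            x = (i + 0.5) / WIDTH * 2 - 1
            y = 1 - (j + 0.5) / HEIGHT * 2
            row += "#" if hit_sphere((0, 0, 0), (x, y, -1.0), *SPHERE) else "."
        print(row)

A real raytracer adds shading, shadows and reflections by recursing from the hit point, but the per-pixel structure stays exactly this simple.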

Interesting blog post on this topic: apparently the Amiga had a ray tracing demo in 1986 (the animation was prerendered rather than realtime):
http://blog.codinghorror.com/real-time-raytracing/
http://home.comcast.net/~erniew/juggler.html

Edited 2014-04-02 15:00 UTC

Reply Parent Score: 3

RE[4]: Wow
by przemo_li on Wed 2nd Apr 2014 15:56 UTC in reply to "RE[3]: Wow"
Member since: 2010-06-01

Well, polygon count was the '90s way of working wonders.

Now you need compute capabilities, minimal CPU involvement (the CPU is still needed for work that is inherently sequential), multitasking, virtualization, and so on.

In other words: yes, getting more polygons is not so important anymore.

Doing more stuff with them, though, is.

Reply Parent Score: 2

RE[4]: Wow
by Ultimatebadass on Wed 2nd Apr 2014 17:02 UTC in reply to "RE[3]: Wow"
Member since: 2006-01-08

"(...) At the extreme, one polygon per pixel would be pointless. Also, polygons get relatively more expensive as their fixed per-polygon overhead is spread across fewer and fewer pixels."


At the extreme end of things, yes, but I think we're still quite far from that point. I'm not saying polycount is the be-all and end-all of graphics, just that the more detail you can push into your scene, the better. Take any current video game: you can easily spot objects where the artists had to limit their poly count drastically to get acceptable performance. Granted, the "up front" stuff will usually look great (like a gun in an FPS game), but other non-interactive props will generally be simplified. We have a long way to go before we reach that 1 pixel = 1 poly limit.

"I believe we are reaching the point where we might as well be ray tracing individual pixels instead of worrying about polygons at all. The problem with this is that *everything* in our existing toolchain is based on polygons, from the hardware to the software to the editors that produce the content. Nevertheless, I think the futility of pushing ever more polygons will help provide momentum towards raytracing."


I do agree that realtime raytracing is the way graphics engines should progress (related: https://www.youtube.com/watch?v=BpT6MkCeP7Y shows some pretty impressive real-time raytracing). However, even if we replace the current lighting/rendering trickery with realtime raytracing engines, surely you will still need some way of describing the objects in your scene? Handling more polys can go hand in hand with those awesome lighting techniques.
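
Exactly, and the two aren't mutually exclusive: a raytracer can consume the same triangle soup a rasterizer does. The classic Möller-Trumbore test is all it takes to intersect a ray with one triangle. A minimal sketch, with hand-rolled vector helpers to keep it self-contained:

    def sub(a, b): return (a[0] - b[0], a[1] - b[1], a[2] - b[2])
    def dot(a, b): return a[0] * b[0] + a[1] * b[1] + a[2] * b[2]
    def cross(a, b):
        return (a[1] * b[2] - a[2] * b[1],
                a[2] * b[0] - a[0] * b[2],
                a[0] * b[1] - a[1] * b[0])

    def ray_triangle(origin, direction, v0, v1, v2, eps=1e-9):
        """Moller-Trumbore: distance t to the triangle, or None on a miss."""
        e1, e2 = sub(v1, v0), sub(v2, v0)
        pvec = cross(direction, e2)
        det = dot(e1, pvec)
        if abs(det) < eps:               # ray parallel to the triangle
            return None
        inv_det = 1.0 / det
        tvec = sub(origin, v0)
        u = dot(tvec, pvec) * inv_det    # first barycentric coordinate
        if u < 0 or u > 1:
            return None
        qvec = cross(tvec, e1)
        v = dot(direction, qvec) * inv_det  # second barycentric coordinate
        if v < 0 or u + v > 1:
            return None
        t = dot(e2, qvec) * inv_det
        return t if t > eps else None

    # A ray straight down -z hits a triangle lying in the z = -2 plane:
    print(ray_triangle((0, 0, 0), (0, 0, -1),
                       (-1, -1, -2), (1, -1, -2), (0, 1, -2)))  # -> 2.0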

PS. I'm aware of NURBS-based objects, but from my Maya days I remember them being a royal PITA to use ("patch-modeling" *shudder*), and on current hardware they still get converted to triangles internally anyway.
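
That conversion is easy to picture: uniformly sample any parametric surface f(u, v) on a grid and emit two triangles per cell. A toy sketch (the wavy patch below is an arbitrary stand-in, not a real NURBS evaluator, and real tessellators sample adaptively rather than uniformly):

    import math

    def surface(u, v):
        """An arbitrary parametric patch: a wavy sheet over the unit square."""
        return (u, v, 0.2 * math.sin(math.pi * u) * math.sin(math.pi * v))

    def tessellate(f, n):
        """Approximate f with triangles by sampling an n x n grid."""
        verts = [[f(i / n, j / n) for j in range(n + 1)] for i in range(n + 1)]
        tris = []
        for i in range(n):
            for j in range(n):
                a, b = verts[i][j], verts[i + 1][j]
                c, d = verts[i + 1][j + 1], verts[i][j + 1]
                tris.append((a, b, c))  # each grid cell becomes
                tris.append((a, c, d))  # two triangles
        return tris

    print(len(tessellate(surface, 8)), "triangles")  # -> 128 triangles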

OT: In 1986 I was still shitting in diapers, but I remember playing with some version of Real3D on my Amiga in the 1990s. The iconic 3D artist's "hello world" (shiny spheres on a checkerboard) took hours and hours to render just a single frame ;)

Reply Parent Score: 3