Linked by Thom Holwerda on Tue 1st Apr 2014 21:50 UTC

AMD claims that the microarchitectural improvements in Jaguar will yield 15% higher IPC and 10% higher clock frequency, versus the previous generation Bobcat. Given the comprehensive changes to the microarchitecture, shown in Figure 7, and the additional pipeline stages, this is a plausible claim. Jaguar is a much better fit than Bobcat for SoCs, given the shift to AVX compatibility, wider data paths, and the inclusive L2 cache, which simplifies system architecture considerably.

Some impressive in-depth reporting on the architecture which powers, among other things, the current generation of game consoles.

RE[4]: Wow
by Ultimatebadass on Wed 2nd Apr 2014 17:02 UTC in reply to "RE[3]: Wow"

(...) At the extreme, one polygon per pixel would be pointless. Also, polygons become more expensive as their computational overhead becomes divided across fewer pixels.

At the extreme end of things, yes, but I think we're still quite far from that point. I'm not saying polycount is the be-all and end-all of graphics, just that the more detail you can push into your scene, the better. Take any current video game: you can easily spot objects where the artists had to limit their poly count drastically to get acceptable performance. Granted, the "up front" stuff will usually look great (like a gun in an FPS game), but other non-interactive props will generally be simplified. We have a long way to go before we reach that 1 pixel = 1 poly limit.
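A quick back-of-the-envelope sketch of that limit (the resolutions below are my own illustrative numbers, not from the discussion): at one polygon per pixel, the frame buffer itself caps how many polygons can each contribute a visible pixel.

```python
# Back-of-the-envelope: the visible-polygon ceiling at one polygon per pixel.
# Resolutions here are illustrative assumptions, not from the comment.
resolutions = {"720p": (1280, 720), "1080p": (1920, 1080), "4K": (3840, 2160)}

for name, (w, h) in resolutions.items():
    pixels = w * h
    # Beyond this count, extra polygons can no longer each cover a pixel.
    print(f"{name}: {pixels:,} pixels -> at most {pixels:,} useful visible polygons")
```

At 1080p that ceiling is already about 2 million polygons per frame, which is why sub-pixel triangles stop paying for themselves.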

I believe we are reaching the point where we might as well be ray tracing individual pixels instead of worrying about polygons at all. The problem with this is that *everything* in our existing toolchain is based on polygons, from the hardware to the software to the editors that produce the content. Nevertheless, I think the futility of pushing ever more polygons will help provide momentum towards raytracing.

I do agree that realtime raytracing is the way graphics engines should progress (related: pretty impressive real-time raytracing). However, even if we replace the current lighting/rendering trickery with realtime raytracing engines, surely you will still need some way of describing the objects in your scene? Handling more polys can go hand in hand with those awesome lighting techniques.
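To make the "ray per pixel" idea concrete, here's a minimal toy sketch of my own (not any engine's code): one ray per pixel, tested against a single analytic sphere. Note the scene description is just a centre and a radius, with no triangles anywhere; the camera position, sphere placement, and 4x4 image size are all arbitrary assumptions for illustration.

```python
import math

def ray_sphere_t(origin, direction, center, radius):
    """Nearest positive hit distance along the ray, or None.
    Solves |o + t*d - c|^2 = r^2 for t (direction assumed normalized)."""
    ox, oy, oz = (origin[i] - center[i] for i in range(3))
    dx, dy, dz = direction
    b = 2.0 * (ox * dx + oy * dy + oz * dz)
    c = ox * ox + oy * oy + oz * oz - radius * radius
    disc = b * b - 4.0 * c          # discriminant of the quadratic in t
    if disc < 0:
        return None                  # ray misses the sphere entirely
    t = (-b - math.sqrt(disc)) / 2.0
    return t if t > 0 else None

# One ray per pixel of a tiny 4x4 "image": sphere of radius 1.5 at z = -3,
# camera at the origin looking down -z. '#' marks a hit, '.' a miss.
width = height = 4
image = []
for py in range(height):
    row = ""
    for px in range(width):
        x = (px + 0.5) / width * 2 - 1   # map pixel centre to [-1, 1]
        y = (py + 0.5) / height * 2 - 1
        norm = math.sqrt(x * x + y * y + 1.0)
        d = (x / norm, y / norm, -1.0 / norm)
        row += "#" if ray_sphere_t((0, 0, 0), d, (0, 0, -3), 1.5) else "."
    image.append(row)
print("\n".join(image))
```

This is exactly the "describing objects" question: here the surface is analytic, so the renderer never needed a mesh, but real content still has to come from some modeller.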

PS. I'm aware of NURBS-based objects, but from my Maya days I remember them being a royal PITA to use ("patch modeling" *shudder*), and on current hardware they still get converted to triangles internally anyway.

OT: In 1986 I was still shitting in diapers, but I remember playing with some version of Real3D on my Amiga in the 1990s. The iconic 3D artist's "hello world", shiny spheres on a checkerboard, took hours and hours to render just a single frame ;)
