3D Archive

The story of the Rendition Vérité 1000

Regrettably, there is little to read about the hardware invented around 1996 to improve 3D rendering, and in particular to accelerate id Software's ground-breaking Quake. Within the architecture and design of these pieces of silicon lies the story of a technological duel between Rendition's V1000 and 3dfx Interactive's Voodoo. With the release of vQuake in early December 1996, Rendition seemed to have taken the advantage. The V1000 was the first card able to run Quake with hardware acceleration, claiming a 25 Mpixel/s fill rate. Just in time for Christmas, the marketing coup allowed players to run the game at a higher resolution with a higher framerate and 16-bit colors. But as history would have it, a flaw in the design of the Vérité 1000 would prove deadly for the innovative company.

I had never heard of Rendition or its V1000, and this story illustrates why. An absolutely fascinating and detailed read, and be sure to also read the follow-up article, which dives into the 3dfx Voodoo 1 and Quake.

The AMD Radeon VII review: an unexpected shot at the high-end

AnandTech has published its review of AMD's surprise new high-end Radeon VII graphics card, and the results should be cause for some cautious optimism among PC builders.

Overall then, the Radeon VII puts its best foot forward when it offers itself as a high-VRAM prosumer card for gaming content creators. And at its $699 price point, that's not a bad place to occupy. However, for pure gamers, it's a little too difficult to suggest this card instead of NVIDIA's better-performing GeForce RTX 2080. So where does this leave AMD? Fortunately for the Radeon rebels, their situation is improved even if the overall competitive landscape hasn't been significantly changed. It's not a win for AMD, but being able to compete with NVIDIA at this level means just that: AMD is still competitive. They can compete on performance, and thanks to Vega 20 they have a new slew of compute features to work with. It's going to win AMD business today, and it's going to help prepare AMD for tomorrow, for the next phase that is Navi. It's still an uphill battle, but with Radeon VII and Vega 20, AMD is now one more step up that hill.

While not a slam-dunk, the Radeon VII definitely shows AMD can get at least close to NVIDIA's RTX cards, and that should make all of us quite happy – NVIDIA has had this market to itself for far too long, and it's showing in the arrogant pricing the company maintains. While neither the RTX cards nor this new Radeon VII make me want to replace my GTX 1070 – and its custom watercooling parts – it at least makes me hopeful that the coming years will be more competitive.

Nvidia CEO warns of “extraordinary, unusually turbulent, disappointing” Q4

Ars Technica writes: On Monday, Nvidia took the unusual step of offering a revised Q4 2019 financial estimate ahead of its scheduled disclosure on February 14. The reason: Nvidia had already predicted low revenue numbers, and the hardware producer is now confident that even its low estimate was too high. The original quarterly revenue estimate of $2.7 billion has since dropped to $2.2 billion, a change of roughly 19 percent. A few new data points factor into that revision. The biggest consumer-facing issue, according to Nvidia, is "lower than expected" sales of its RTX line of new graphics cards. This series, full of proprietary technologies like a dedicated raytracing processor, kicked off in September 2018 with the $1,199 RTX 2080 Ti and the $799 RTX 2080.

The RTX launch was bungled, and the cryptocurrency hype is way past its prime. It's not a surprise Nvidia is going to experience a rough year.

Nvidia adds FreeSync support to its GPUs, but not for all monitors

FreeSync support is coming to Nvidia GPUs; at its CES event today, Nvidia announced the G-Sync Compatible program, wherein it says it will test monitors that support the VESA DisplayPort Adaptive-Sync protocol to ascertain whether they deliver a "baseline experience" comparable to a G-Sync monitor. Not coincidentally, AMD's FreeSync is built on that same VESA-developed protocol, meaning that several FreeSync-certified monitors will now be compatible with Nvidia's 10- and 20-series GPUs.

This is great news, since G-Sync support requires additional hardware, which increases prices; you'll find that the G-Sync version of a display is always significantly more expensive than the FreeSync version.

Announcing PhysX SDK 4.0, an open-source physics engine

NVIDIA is proud to announce PhysX SDK 4.0, available on December 20, 2018. The engine has been upgraded to provide industrial-grade simulation quality at game simulation performance. In addition, the PhysX SDK has gone open source, starting today with version 3.4! It is available under the simple 3-Clause BSD license. With access to the source code, developers can debug, customize, and extend the PhysX SDK as they see fit.
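With the source now public, it's easier than ever to see what building against the SDK looks like. Below is a minimal sketch of creating and stepping an empty PhysX scene, based on the public PhysX 3.4/4.0 C++ API (PxCreateFoundation, PxCreatePhysics, and friends); consider it an illustration of the API's shape rather than a complete program, since a real integration would add actors, materials, and error handling.

    #include <PxPhysicsAPI.h>

    using namespace physx;

    static PxDefaultAllocator gAllocator;
    static PxDefaultErrorCallback gErrorCallback;

    int main() {
        // Every PhysX program starts with a foundation object.
        PxFoundation* foundation =
            PxCreateFoundation(PX_PHYSICS_VERSION, gAllocator, gErrorCallback);

        // The top-level SDK object; the tolerances scale assumes meters/kilograms.
        PxPhysics* physics =
            PxCreatePhysics(PX_PHYSICS_VERSION, *foundation, PxTolerancesScale());

        // Describe a scene: gravity, a CPU worker-thread dispatcher, collision filtering.
        PxSceneDesc sceneDesc(physics->getTolerancesScale());
        sceneDesc.gravity = PxVec3(0.0f, -9.81f, 0.0f);
        PxDefaultCpuDispatcher* dispatcher = PxDefaultCpuDispatcherCreate(2);
        sceneDesc.cpuDispatcher = dispatcher;
        sceneDesc.filterShader = PxDefaultSimulationFilterShader;
        PxScene* scene = physics->createScene(sceneDesc);

        // Step the (empty) simulation at 60 Hz for two seconds.
        for (int i = 0; i < 120; ++i) {
            scene->simulate(1.0f / 60.0f);
            scene->fetchResults(true);  // block until the step completes
        }

        // Tear down in reverse order of creation.
        scene->release();
        dispatcher->release();
        physics->release();
        foundation->release();
        return 0;
    }

The allocator and error-callback hooks passed in at the very top are exactly the kind of integration points that become much easier to reason about now that the source is available.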

I'm not well-versed enough in this area to gauge how big of a deal this news is, but regardless, it seems like a good contribution to the open source community.

Nvidia created the first game demo using AI-generated graphics

The recent boom in artificial intelligence has produced impressive results in a somewhat surprising realm: the world of image and video generation. The latest example comes from chip designer Nvidia, which today published research showing how AI-generated visuals can be combined with a traditional video game engine. The result is a hybrid graphics system that could one day be used in video games, movies, and virtual reality.

Impressive technology. I can see how this will eventually make it a lot easier to generate graphics for 'realistic' games.

NVIDIA Turing GPU architecture deep dive: prelude to RTX

It's been roughly a month since NVIDIA's Turing architecture was revealed, and if the GeForce RTX 20-series announcement a few weeks ago has clued us in on anything, it's that real-time raytracing was important enough for NVIDIA to drop "GeForce GTX" for "GeForce RTX" and completely change the tenor of how they talk about gaming video cards. Since then, it's become clear that Turing and the GeForce RTX 20-series have a lot of moving parts: RT Cores, real-time raytracing, Tensor Cores, AI features (i.e. DLSS), and raytracing APIs. All of it comes together to set a future direction for both game development and GeForce cards.

In a significant departure from past launches, NVIDIA has broken up the embargoes around the unveiling of their latest cards into two parts: architecture and performance. For the first part, today NVIDIA has finally lifted the veil on much of the Turing architecture's details, and there are many - so many that some interesting aspects have yet to be explained, and some that we'll need to dig into alongside objective data. But it also gives us an opportunity to pick apart the namesake of GeForce RTX: raytracing.
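For context on what those RT Cores actually do: Turing's fixed-function raytracing hardware accelerates BVH traversal and ray/triangle intersection testing, and the intersection test at the bottom of that pipeline is a small, well-known routine. Here is a minimal CPU-side sketch of the classic Möller-Trumbore ray/triangle test, purely as an illustration of the operation being moved into silicon - this is not NVIDIA's implementation.

    #include <cmath>
    #include <cstdio>
    #include <optional>

    struct Vec3 { float x, y, z; };

    static Vec3 sub(Vec3 a, Vec3 b) { return {a.x - b.x, a.y - b.y, a.z - b.z}; }
    static float dot(Vec3 a, Vec3 b) { return a.x * b.x + a.y * b.y + a.z * b.z; }
    static Vec3 cross(Vec3 a, Vec3 b) {
        return {a.y * b.z - a.z * b.y, a.z * b.x - a.x * b.z, a.x * b.y - a.y * b.x};
    }

    // Möller-Trumbore ray/triangle intersection: returns the distance t along
    // the ray on a hit, or nothing on a miss. RT Cores run the equivalent of
    // this test (plus BVH traversal) in dedicated hardware.
    std::optional<float> intersect(Vec3 orig, Vec3 dir, Vec3 v0, Vec3 v1, Vec3 v2) {
        const float kEps = 1e-7f;
        Vec3 e1 = sub(v1, v0), e2 = sub(v2, v0);
        Vec3 p = cross(dir, e2);
        float det = dot(e1, p);
        if (std::fabs(det) < kEps) return std::nullopt;  // ray parallel to triangle
        float invDet = 1.0f / det;
        Vec3 s = sub(orig, v0);
        float u = dot(s, p) * invDet;                    // first barycentric coordinate
        if (u < 0.0f || u > 1.0f) return std::nullopt;
        Vec3 q = cross(s, e1);
        float v = dot(dir, q) * invDet;                  // second barycentric coordinate
        if (v < 0.0f || u + v > 1.0f) return std::nullopt;
        float t = dot(e2, q) * invDet;
        return t > kEps ? std::optional<float>(t) : std::nullopt;
    }

    int main() {
        // A ray shot down +Z from the origin into a triangle in the z=5 plane.
        auto hit = intersect({0, 0, 0}, {0, 0, 1},
                             {-1, -1, 5}, {1, -1, 5}, {0, 1, 5});
        if (hit) std::printf("hit at t = %f\n", *hit);   // prints t = 5
    }

A software raytracer runs millions of these tests per frame, which is exactly why dedicating silicon to them pays off.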

AnandTech's deep dive into NVIDIA's new Turing architecture - the only one you really need.

NVIDIA RTX 2080 Ti, 2080, 2070 officially announced

NVIDIA announced its new Turing video cards for gaming today, including the RTX 2080 Ti, RTX 2080, and RTX 2070. The cards move forward with an upgraded-but-familiar Volta architecture, with some changes to the SMs and memory. The new RTX 2080 and 2080 Ti ship with reference cards first, and partner cards about 1-3 months after that, depending on which partner it is. The board partners did not receive pricing or even card naming until around the same time as media, so expect delays in custom solutions.

A major upgrade, and pricing - starting at $599 for the 2070 - is entirely reasonable for a new generation. Might finally be time to upgrade my 1070 once EK Water Blocks releases blocks for these new cards.

NVIDIA reveals next-gen Turing GPU architecture

Moments ago at NVIDIA's SIGGRAPH 2018 keynote presentation, company CEO Jensen Huang formally unveiled the company's much-awaited (and much-rumored) Turing GPU architecture. The next generation of NVIDIA's GPU designs, Turing will incorporate a number of new features and is rolling out this year. While the focus of today's announcements is on the professional visualization (ProViz) side of matters, we expect to see this architecture used in other upcoming NVIDIA products as well. And by the same token, today's reveal should not be considered an exhaustive listing of all of Turing's features.

If you've been holding off on upgrading a 10x0 or earlier card, you're about to be rewarded - at Gamescom next week, NVIDIA is expected to unveil the consumer cards based on the Turing architecture.

AMD embraces open source to take on Nvidia’s GameWorks

AMD's position in the graphics market continues to be a tricky one. Although the company has important design wins in the console space - both the PlayStation 4 and Xbox One are built around AMD CPUs with integrated AMD GPUs - its position in the PC space is a little more precarious. Nvidia currently has the outright performance lead, and perhaps more problematically, many games are to a greater or lesser extent optimized for Nvidia GPUs. One of the chief culprits here is Nvidia's GameWorks software, a proprietary library of useful tools for game development - things like realistic hair and shadows, and physics processing for destructible environments - that is optimized for Nvidia's cards. When GameWorks games are played on AMD systems, they often run with reduced performance or graphical quality.

To combat this, AMD is today announcing GPUOpen, a set of tools comparable to GameWorks. As the name would suggest, however, there's a key difference between GPUOpen and GameWorks: GPUOpen will, when it is published in January, be open source. AMD will use the permissive MIT license, allowing GPUOpen code to be used without any practical restriction in both open and closed source applications, and will publish all code on GitHub.

Great move by AMD, and definitely a step up from Nvidia's questionable closed tactics that only seem to harm users. HotHardware has more information on AMD's extensive plans.

Microsoft unveils DirectX 12

DirectX 12 introduces the next version of Direct3D, the graphics API at the heart of DirectX. Direct3D is one of the most critical pieces of a game or game engine, and we've redesigned it to be faster and more efficient than ever before. Direct3D 12 enables richer scenes, more objects, and full utilization of modern GPU hardware. And it isn’t just for high-end gaming PCs either - Direct3D 12 works across all the Microsoft devices you care about. From phones and tablets, to laptops and desktops, and, of course, Xbox One, Direct3D 12 is the API you've been waiting for.
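To give a flavor of what "faster and more efficient" means in practice, Direct3D 12 trades the old driver-managed model for explicit application control: the program itself creates the device, records command lists, and submits them to a queue. Here is a minimal sketch of that flow using standard D3D12 calls, with error handling omitted for brevity:

    #include <d3d12.h>
    #include <wrl/client.h>

    using Microsoft::WRL::ComPtr;

    int main() {
        // Create a device on the default adapter.
        ComPtr<ID3D12Device> device;
        D3D12CreateDevice(nullptr, D3D_FEATURE_LEVEL_11_0, IID_PPV_ARGS(&device));

        // The app owns the command queue explicitly; nothing is submitted behind its back.
        D3D12_COMMAND_QUEUE_DESC queueDesc = {};
        queueDesc.Type = D3D12_COMMAND_LIST_TYPE_DIRECT;
        ComPtr<ID3D12CommandQueue> queue;
        device->CreateCommandQueue(&queueDesc, IID_PPV_ARGS(&queue));

        // GPU work is recorded into command lists backed by an allocator...
        ComPtr<ID3D12CommandAllocator> allocator;
        device->CreateCommandAllocator(D3D12_COMMAND_LIST_TYPE_DIRECT,
                                       IID_PPV_ARGS(&allocator));
        ComPtr<ID3D12GraphicsCommandList> cmdList;
        device->CreateCommandList(0, D3D12_COMMAND_LIST_TYPE_DIRECT,
                                  allocator.Get(), nullptr, IID_PPV_ARGS(&cmdList));

        // ...then closed and handed to the queue in one explicit submission.
        cmdList->Close();
        ID3D12CommandList* lists[] = { cmdList.Get() };
        queue->ExecuteCommandLists(1, lists);
        return 0;
    }

Because the application records command lists itself rather than relying on the driver to schedule work implicitly, it can spread that recording across CPU cores - one of the main efficiency gains Microsoft is alluding to.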

It's great that DirectX works across "phones and tablets, to laptops and desktops, and, of course, Xbox One", but an important adjective is missing here: Windows. With Microsoft playing little to no role in smartphones and tablets, and the desktop/laptop market stagnating, how much of a selling point is DirectX on phones and tablets, really? Doesn't Windows Phone's and Windows 8 Metro's reliance on it only make it harder for game developers and studios to port their iOS and Android games over?

Nvidia seeks peace with Linux

Few companies have been the target of as much criticism in the Linux community as Nvidia. Linus Torvalds himself last year called Nvidia the "single worst company" Linux developers have ever worked with, giving the company his middle finger in a public talk.

Nvidia is now trying to get on Linux developers' good side. Yesterday, Nvidia's Andy Ritger e-mailed developers of Nouveau, an open source driver for Nvidia cards that is built by reverse engineering Nvidia's proprietary drivers. Ritger wrote that "NVIDIA is releasing public documentation on certain aspects of our GPUs, with the intent to address areas that impact the out-of-the-box usability of NVIDIA GPUs with Nouveau. We intend to provide more documentation over time, and guidance in additional areas as we are able."

It wouldn't surprise me if this is related to the SteamOS announcement.

Mozilla Rejects Microsoft’s WebGL Criticism

"Mozilla's VP of Technical Strategy, Mike Shaver has rejected Microsoft's criticism of WebGL in which it said it would not implement the 3D graphics standard because of security issues in the design. Shaver says that "there is no question that the web needs 3D capabilities" to enable developers to create "advanced visualisations, games or new user interfaces" and points at Molehill (Adobe's 3D for Flash) and Microsoft's Silverlight 3D which are offering just those capabilities." One discussion of Microsofts WebGL criticism can be found here.

Microsoft’s 3-D Strategy

Microsoft has joined the wave of companies betting that 3-D is the next big thing for computing. At a recent talk at MIT, chief research and strategy officer Craig Mundie said he sees the technology as an innovation that "will get people out of treating a computer as a tool" and into treating the device as a natural extension of how they interact with the world around them. Microsoft plans to introduce consumers to the change through its gaming products, but Mundie outlined a vision that would eventually have people shopping and searching in 3-D as well.

New Implementation of Direct3D 11 COM API for Gallium

"Luca Barbieri made a rather significant commit today that adds a state tracker dubbed 'd3d1x', which implements the Direct3D 10/11 COM API in Gallium3D. Luca says this is just the initial version, but it's already working and can run a few DirectX 10/11 texturing demos on Linux at the moment. This is not a matter of simply translating the Direct3D calls and converting them to OpenGL like how Wine currently handles it, but is natively implemented within Gallium3D and TGSI to speak directly to the underlying graphics driver and hardware. Thanks to Gallium3D's architecture, this Direct3D support essentially becomes 'free' to all Linux drivers with little to no work required."