3D Archive
Announcing PhysX SDK 4.0, an open-source physics engine
NVIDIA is proud to announce PhysX SDK 4.0, available on December 20, 2018. The engine has been upgraded to provide industrial grade simulation quality at game simulation performance. In addition, PhysX SDK has gone open source, starting today with version 3.4! It is available under the simple 3-Clause BSD license. With access to the source code, developers can debug, customize and extend the PhysX SDK as they see fit.
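To give a flavor of what source access means in practice, here's a minimal sketch of booting the SDK and stepping an empty scene, written against the public PhysX 3.x/4.x C++ API. This is a hypothetical standalone example, not code from NVIDIA's announcement, and error handling is omitted for brevity.

```cpp
// Minimal PhysX bring-up: create the core SDK objects and step an empty scene.
// Assumes the PhysX headers are on the include path and the PhysX +
// PhysXExtensions libraries are linked.
#include <PxPhysicsAPI.h>

using namespace physx;

static PxDefaultAllocator gAllocator;         // default heap allocator
static PxDefaultErrorCallback gErrorCallback; // logs SDK errors to stderr

int main() {
    // The foundation must exist before any other PhysX object.
    PxFoundation* foundation =
        PxCreateFoundation(PX_PHYSICS_VERSION, gAllocator, gErrorCallback);
    PxPhysics* physics =
        PxCreatePhysics(PX_PHYSICS_VERSION, *foundation, PxTolerancesScale());

    // A scene owns the simulated actors; gravity pulls along -y.
    PxSceneDesc sceneDesc(physics->getTolerancesScale());
    sceneDesc.gravity = PxVec3(0.0f, -9.81f, 0.0f);
    sceneDesc.cpuDispatcher = PxDefaultCpuDispatcherCreate(2); // two worker threads
    sceneDesc.filterShader = PxDefaultSimulationFilterShader;
    PxScene* scene = physics->createScene(sceneDesc);

    // Advance the simulation by one 60 Hz step and block until results are ready.
    scene->simulate(1.0f / 60.0f);
    scene->fetchResults(true);

    scene->release();
    physics->release();
    foundation->release();
    return 0;
}
```

With the source now on GitHub under the BSD license, any of the internals behind these calls can be stepped through in a debugger rather than treated as a black box.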
I'm not well-versed enough in this area to gauge how big of a deal this news is, but regardless, it seems like a good contribution to the open source community.
Nvidia created the first game demo using AI-generated graphics
The recent boom in artificial intelligence has produced impressive results in a somewhat surprising realm: the world of image and video generation. The latest example comes from chip designer Nvidia, which today published research showing how AI-generated visuals can be combined with a traditional video game engine. The result is a hybrid graphics system that could one day be used in video games, movies, and virtual reality.
Impressive technology. I can see how this will eventually make it a lot easier to generate graphics for 'realistic' games.
NVIDIA Turing GPU architecture deep dive: prelude to RTX
It's been roughly a month since NVIDIA's Turing architecture was revealed, and if the GeForce RTX 20-series announcement a few weeks ago has clued us in on anything, it's that real time raytracing was important enough for NVIDIA to drop "GeForce GTX" for "GeForce RTX" and completely change the tenor of how they talk about gaming video cards. Since then, it's become clear that Turing and the GeForce RTX 20-series have a lot of moving parts: RT Cores, real time raytracing, Tensor Cores, AI features (i.e. DLSS), raytracing APIs. All of it comes together to set a future direction for both game development and GeForce cards.
In a significant departure from past launches, NVIDIA has broken up the embargoes around the unveiling of their latest cards into two parts: architecture and performance. For the first part, today NVIDIA has finally lifted the veil on much of the Turing architecture details, and there are many - so many that some interesting aspects have yet to be explained, and some will need to be dug into alongside objective data. But it also gives us an opportunity to pick apart the namesake of GeForce RTX: raytracing.
AnandTech's deep dive into NVIDIA's new Turing architecture - the only one you really need.
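As an aside, the "raytracing APIs" mentioned above primarily means Microsoft's DXR extension to Direct3D 12. A hedged sketch of how an application can probe whether the installed driver exposes it - this is the standard D3D12 feature-check pattern, not code from the article, and it assumes a Windows 10 SDK recent enough to declare the OPTIONS5 structure:

```cpp
// Probe for DXR (DirectX Raytracing) support on the default adapter.
// Link against d3d12.lib.
#include <d3d12.h>
#include <wrl/client.h>
#include <cstdio>

using Microsoft::WRL::ComPtr;

int main() {
    ComPtr<ID3D12Device> device;
    if (FAILED(D3D12CreateDevice(nullptr, D3D_FEATURE_LEVEL_11_0,
                                 IID_PPV_ARGS(&device)))) {
        std::fprintf(stderr, "No Direct3D 12 capable adapter found\n");
        return 1;
    }

    // OPTIONS5 reports the raytracing tier; TIER_1_0 or better means the
    // driver exposes DXR, whether backed by dedicated hardware or not.
    D3D12_FEATURE_DATA_D3D12_OPTIONS5 options5{};
    device->CheckFeatureSupport(D3D12_FEATURE_D3D12_OPTIONS5,
                                &options5, sizeof(options5));
    if (options5.RaytracingTier >= D3D12_RAYTRACING_TIER_1_0)
        std::puts("DXR raytracing supported");
    else
        std::puts("DXR raytracing not supported");
    return 0;
}
```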
NVIDIA RTX 2080 Ti, 2080, 2070 officially released
NVIDIA announced its new Turing video cards for gaming today, including the RTX 2080 Ti, RTX 2080, and RTX 2070. The cards move forward with an upgraded-but-familiar Volta architecture, with some changes to the SMs and memory. The new RTX 2080 and 2080 Ti ship with reference cards first, and partner cards about 1-3 months after that, depending on which partner it is. The board partners did not receive pricing or even card naming until around the same time as media, so expect delays in custom solutions.
A major upgrade, and the pricing - starting at $599 for the 2070 - is entirely reasonable for a new generation. Might finally be time to upgrade my 1070 once EK Water Blocks releases waterblocks for these new cards.
NVIDIA reveals next-gen Turing GPU architecture
Moments ago at NVIDIA's SIGGRAPH 2018 keynote presentation, company CEO Jensen Huang formally unveiled the company's much awaited (and much rumored) Turing GPU architecture. The next generation of NVIDIA's GPU designs, Turing will be incorporating a number of new features and is rolling out this year. While the focus of today's announcements is on the professional visualization (ProViz) side of matters, we expect to see this used in other upcoming NVIDIA products as well. And by the same token, today's reveal should not be considered an exhaustive listing of all of Turing's features.
If you've been holding off on upgrading a 10x0 or earlier card, you're about to be rewarded - at Gamescom next week, NVIDIA is expected to unveil the consumer cards based on the Turing architecture.
Vulkan 1.0 released
Khronos launched the Vulkan 1.0 specification on February 16th, 2016, and Khronos members released Vulkan drivers and SDKs on the same day. Below you will find everything you need to come up to speed on Vulkan and to forge ahead and explore whether Vulkan is right for your engine or application.
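For anyone wanting to kick the tires: standing up a Vulkan instance against the 1.0 spec takes only a couple of calls. Here's a minimal sketch, assuming the Vulkan headers and a 1.0-capable loader and driver are installed - generic sample code, not taken from Khronos's announcement:

```cpp
// Create a Vulkan 1.0 instance and count the GPUs the driver exposes.
// Link against the Vulkan loader (vulkan-1 on Windows, libvulkan elsewhere).
#include <vulkan/vulkan.h>
#include <cstdio>

int main() {
    VkApplicationInfo appInfo{};
    appInfo.sType = VK_STRUCTURE_TYPE_APPLICATION_INFO;
    appInfo.pApplicationName = "vulkan-hello";
    appInfo.apiVersion = VK_API_VERSION_1_0; // target the 1.0 specification

    VkInstanceCreateInfo createInfo{};
    createInfo.sType = VK_STRUCTURE_TYPE_INSTANCE_CREATE_INFO;
    createInfo.pApplicationInfo = &appInfo;

    VkInstance instance = VK_NULL_HANDLE;
    if (vkCreateInstance(&createInfo, nullptr, &instance) != VK_SUCCESS) {
        std::fprintf(stderr, "vkCreateInstance failed\n");
        return 1;
    }

    // Enumerate physical devices to see which GPUs are available.
    uint32_t count = 0;
    vkEnumeratePhysicalDevices(instance, &count, nullptr);
    std::printf("Vulkan-capable devices: %u\n", count);

    vkDestroyInstance(instance, nullptr);
    return 0;
}
```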
AMD embraces open source to take on Nvidia's GameWorks
AMD's position in the graphics market continues to be a tricky one. Although the company has important design wins in the console space - both the PlayStation 4 and Xbox One are built around AMD CPUs with integrated AMD GPUs - its position in the PC space is a little more precarious. Nvidia currently has the outright performance lead, and perhaps more problematically, many games are to a greater or lesser extent optimized for Nvidia GPUs. One of the chief culprits here is Nvidia's GameWorks software, a proprietary library of useful tools for game development - things like realistic hair and shadows, and physics processing for destructible environments - that is optimized for Nvidia's cards. When GameWorks games are played on AMD systems, they often suffer reduced performance or graphical quality.
To combat this, AMD is today announcing GPUOpen, a comparable set of tools to GameWorks. As the name would suggest, however, there's a key difference between GPUOpen and GameWorks: GPUOpen will, when it is published in January, be open source. AMD will use the permissive MIT license, allowing GPUOpen code to be used without any practical restriction in both open and closed source applications, and will publish all code on GitHub.
Great move by AMD, and definitely a step up from Nvidia's questionable closed tactics that only seem to harm users. HotHardware has more information on AMD's extensive plans.
Microsoft unveils DirectX 12
DirectX 12 introduces the next version of Direct3D, the graphics API at the heart of DirectX. Direct3D is one of the most critical pieces of a game or game engine, and we've redesigned it to be faster and more efficient than ever before. Direct3D 12 enables richer scenes, more objects, and full utilization of modern GPU hardware. And it isn't just for high-end gaming PCs either - Direct3D 12 works across all the Microsoft devices you care about. From phones and tablets, to laptops and desktops, and, of course, Xbox One, Direct3D 12 is the API you've been waiting for.
It's great that DirectX works across "phones and tablets, to laptops and desktops, and, of course, Xbox One", but an important adjective is missing here: Windows. With Microsoft playing little to no role in smartphones and tablets, and the desktop/laptop market stagnating, how much of a plus is DirectX on phones and tablets, really? Doesn't Windows Phone's and Windows 8 Metro's reliance on it only make it harder for game developers and studios to port their iOS and Android games over?
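Platform sniping aside, the API redesign itself is concrete: Direct3D 12 replaces the implicit device context of Direct3D 11 with explicit command queues that the application drives itself. A minimal sketch of standing up a device and queue, assuming the Windows 10 SDK headers - hypothetical sample code, not from Microsoft's post:

```cpp
// Create a Direct3D 12 device and a direct command queue.
// Link against d3d12.lib.
#include <d3d12.h>
#include <wrl/client.h>
#include <cstdio>

using Microsoft::WRL::ComPtr;

int main() {
    // Create a device on the default adapter at feature level 11_0,
    // the minimum Direct3D 12 supports.
    ComPtr<ID3D12Device> device;
    if (FAILED(D3D12CreateDevice(nullptr, D3D_FEATURE_LEVEL_11_0,
                                 IID_PPV_ARGS(&device)))) {
        std::fprintf(stderr, "No Direct3D 12 capable adapter found\n");
        return 1;
    }

    // Work is submitted through explicit queues; the application, not the
    // driver, decides how GPU work is recorded and scheduled.
    D3D12_COMMAND_QUEUE_DESC queueDesc{};
    queueDesc.Type = D3D12_COMMAND_LIST_TYPE_DIRECT;
    ComPtr<ID3D12CommandQueue> queue;
    device->CreateCommandQueue(&queueDesc, IID_PPV_ARGS(&queue));
    std::puts("Direct3D 12 device and command queue created");
    return 0;
}
```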
Nvidia seeks peace with Linux
Few companies have been the target of as much criticism in the Linux community as Nvidia. Linus Torvalds himself last year called Nvidia the "single worst company" Linux developers have ever worked with, giving the company his middle finger in a public talk.
Nvidia is now trying to get on Linux developers' good side. Yesterday, Nvidia's Andy Ritger e-mailed developers of Nouveau, an open source driver for Nvidia cards that is built by reverse engineering Nvidia's proprietary drivers. Ritger wrote that "NVIDIA is releasing public documentation on certain aspects of our GPUs, with the intent to address areas that impact the out-of-the-box usability of NVIDIA GPUs with Nouveau. We intend to provide more documentation over time, and guidance in additional areas as we are able."
It wouldn't surprise me if this is related to the SteamOS announcement.