3D Archive

Nvidia just made $6 billion in pure profit over the AI boom

The company raked in $13.5 billion in revenue since May, it revealed in its Q2 2024 earnings, with the unprecedented demand for its generative AI chips blowing past any difficulty it might have had selling desktop and laptop GPUs into a shrinking PC industry. Data center accounted for a record $10.32 billion of that revenue, more than doubling in just one quarter, and Nvidia made $6.188 billion in profit as a result — up 843 percent year over year. And while gaming is more than a billion dollars short of pandemic highs, it was actually up 22 percent year over year to $2.48 billion in revenue, too. I don’t really post about financial results anymore – the amounts of money “earned” by tech companies are obscene and utterly destructive – but I do want to highlight NVIDIA here, if only to be able to link back to this a few years from now after the “AI” bubble has popped.

NVIDIA BIOS signature lock broken, vBIOS modding and crossflash enabled

You can now play with NVIDIA GeForce graphics card BIOS like it’s 2013! Over the last decade, NVIDIA had effectively killed video BIOS modding by introducing BIOS signature checks. With GeForce 900-series “Maxwell,” the company added an on-die security processor on all its GPUs, codenamed “Falcon,” which, among other things, prevents the GPU from booting with unauthorized firmware. OMGVflash by Veii and NVflashk by Kefinator (forum names) are two independently developed tools that let you flash almost any video BIOS onto almost any NVIDIA GeForce graphics card, bypassing “unbreakable” barriers NVIDIA put in place, such as BIOS signature checks and vendor/device checks (cross-flashing). The vBIOS signature check bypass works up to RTX 20-series “Turing” based GPUs, letting you modify the BIOS the way you want, while cross-flashing (sub-vendor ID check bypass) works even on the latest RTX 4090 “Ada.” No security is unbreakable. This will hopefully enable a lot of unlocking and safe performance boosts for artificially stunted cards.

Intel graphics drivers now collect telemetry by default

The latest version of Intel Arc GPU Graphics Software introduced an interesting change that isn’t reflected in the Release Notes. The installer of the 101.4578 beta drivers adds a “Compute Improvement Program” (CIP) component as part of the “typical” setup option that is enabled by default. Under the “custom” installer option that you have to activate manually, you get to select which components to install. The Compute Improvement Program can be unchecked here, to ensure data collection is disabled. The benignly named CIP is a data collection component that tracks your PC usage and performance in the background (not just that of the GPU), so Intel can use the data to improve its future products. Intel created a dedicated webpage that spells out what CIP is and what its scope of data collection is, where it says that CIP “does not collect your name, email address, phone number, sensitive personal information, or physical location (except for country).” NVIDIA’s and AMD’s drivers also contain telemetry collection software, and only AMD tries to be as transparent as possible about it by offering a check box during installation, whereas Intel and NVIDIA hide it behind the “custom” option. Needless to say, Linux users don’t have to worry about this.

Blender gets Wayland support

Recently we have been working on native Wayland support on Linux. Wayland is now enabled for daily builds and if all goes well, it will be enabled for Blender 3.4 release too. One of the major productivity applications adding Wayland support – especially one such as Blender – is a big deal.

Things aren’t “back to normal” yet, but GPU prices are steadily falling

Graphics card prices remain hugely inflated compared to a few years ago, but the good news is that things finally seem to be getting consistently better and not worse. This is good news. I don’t think I’ve ever experienced something like this before in my life, and I can’t wait for prices to truly reach sane levels again, as both my fiancée and I are due for an upgrade.

NVIDIA Hopper GPU architecture and H100 accelerator announced: working smarter and harder

Taking NVIDIA into the next generation of server GPUs is the Hopper architecture. Named after computer science pioneer Grace Hopper, the Hopper architecture is a very significant, but also very NVIDIA update to the company’s ongoing family of GPU architectures. With the company’s efforts now solidly bifurcated into server and consumer GPU configurations, Hopper is NVIDIA doubling down on everything the company does well, and then building it even bigger than ever before. The kinds of toys we mere mortals rarely get to play with.

Nvidia hacks its own GeForce RTX 3060 anti-mining lock

The Ampere graphics card was also supposed to be less attractive to miners, but it appears that the chipmaker shot itself in the foot and inadvertently posted a driver that unlocks mining performance on the RTX 3060. Meaning, anyone can unlock full mining performance with a minimum of effort. Well, that was short-lived.

To address GPU shortage, NVIDIA cripples RTX 3060 for cryptominers

You may have noticed that it’s kind of hard to find any new graphics card as of late, since supplies are limited for a whole variety of reasons. For the launch of its upcoming RTX 3060 GPU, which might prove to be a relatively affordable and capable upgrade for many, NVIDIA is going to try and do something about the shortage – by crippling the card’s suitability for cryptominers. RTX 3060 software drivers are designed to detect specific attributes of the Ethereum cryptocurrency mining algorithm, and limit the hash rate, or cryptocurrency mining efficiency, by around 50 percent. To address the specific needs of Ethereum mining, we’re announcing the NVIDIA CMP, or, Cryptocurrency Mining Processor, product line for professional mining. CMP products — which don’t do graphics — are sold through authorized partners and optimized for the best mining performance and efficiency. They don’t meet the specifications required of a GeForce GPU and, thus, don’t impact the availability of GeForce GPUs to gamers. It’s a good first step, I guess, but I feel the market is so starved at the moment this will be a drop in the ocean.
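
To make the mechanism a bit more concrete: NVIDIA’s actual limiter lives in the driver and firmware and is proprietary, so the sketch below is purely illustrative, with made-up names and thresholds. It only shows the general idea of watching for an Ethash-like workload signature (sustained, random, memory-bandwidth-heavy reads) and halving throughput when one is detected.

```cpp
// Purely illustrative sketch of a driver-side hash rate limiter.
// NOT NVIDIA's implementation; the structure and thresholds are invented for clarity.

struct WorkloadSample {
    double memoryBandwidthUtil; // fraction of peak DRAM bandwidth in use
    double randomAccessRatio;   // fraction of accesses that look DAG-like (random, read-heavy)
    double sustainedSeconds;    // how long this pattern has persisted
};

// Hypothetical heuristic: Ethash hammers memory with random reads for long stretches.
bool looksLikeEthashMining(const WorkloadSample& s) {
    return s.memoryBandwidthUtil > 0.8 &&
           s.randomAccessRatio   > 0.9 &&
           s.sustainedSeconds    > 30.0;
}

// Scale factor the driver would apply to the workload (1.0 = no limit).
double throughputScale(const WorkloadSample& s) {
    return looksLikeEthashMining(s) ? 0.5 : 1.0; // the roughly 50 percent cap described above
}
```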

NVIDIA announces the GeForce RTX 30 series

With much anticipation and more than a few leaks, NVIDIA this morning is announcing the next generation of video cards, the GeForce RTX 30 series. Based upon the gaming and graphics variant of NVIDIA’s Ampere architecture and built on an optimized version of Samsung’s 8nm process, NVIDIA is touting the new cards as delivering some of their greatest gains ever in gaming performance. All the while, the latest generation of GeForce will also be coming with some new features to further set the cards apart from and ahead of NVIDIA’s Turing-based RTX 20 series. The first card out the door will be the GeForce RTX 3080. With NVIDIA touting upwards of 2x the performance of the RTX 2080, this card will go on sale on September 17th for $700. That will be followed up a week later by the even more powerful GeForce RTX 3090, which hits the shelves September 24th for $1500. Finally, the RTX 3070, which is being positioned as more of a traditional sweet spot card, will arrive next month at $499. My GTX 1070 is still going strong, and I found the RTX 20xx range far too overpriced for the performance increase it delivered. At $499, though, the RTX 3070 looks like a pretty good deal, but it wouldn’t be the first time supplies end up low and prices skyrocket as a result.

NVIDIA Ampere unleashed: NVIDIA announces new GPU architecture, A100 GPU, and accelerator

While NVIDIA’s usual presentation efforts for the year were dashed by the current coronavirus outbreak, the company’s march towards developing and releasing newer products has continued unabated. To that end, at today’s now digital GPU Technology Conference 2020 keynote, the company and its CEO Jensen Huang are taking to the virtual stage to announce NVIDIA’s next-generation GPU architecture, Ampere, and the first products that will be using it. Don’t let the term GPU here fool you – this is for the extreme high-end, and the first product with this new GPU architecture will set you back a cool $199,000. Any consumer-oriented GPU with this new architecture is at the very least a year away.

Blender 2.80 released

Blender, the open source 3D computer graphics software package, has released a major new version, Blender 2.80. Among other things, it sports a brand new user interface designed from the ground up, a new physically based real-time renderer, and much, much more. The 2.80 release is dedicated to everyone who has contributed to Blender. To the tirelessly devoted developers. To the artists inspiring them with demos. To the documentation writers. To the Blender Cloud subscribers. To the bug reporters. To the designers. To the Code Quest supporters. To the donators and to the members of the Development Fund. Blender is made by you. Thanks! I remember, way back in the early 2000s, when people would adamantly state that professional software for fields such as image manipulation and 3D graphics would never be something the open source community could create or maintain. And here we are, almost two decades later, and Blender is a household name in its field, used for all kinds of big, megabudget projects, such as Marvel movies, Ubisoft games, work at NASA, and countless others. Blender is a stunning success story.

The story of the Rendition Vérité 1000

Regrettably, there is little to read about the hardware invented around 1996 to improve 3D rendering and in particular id Software’s ground-breaking title. Within the architecture and design of these pieces of silicon lies the story of a technological duel between Rendition’s V1000 and 3dfx Interactive’s Voodoo. With the release of vQuake in early December 1996, Rendition seemed to have taken the advantage. The V1000 was the first card able to run Quake with hardware acceleration, claiming a 25 Mpixel/s fill-rate. Just in time for Christmas, the marketing coup allowed players to run the game at a higher resolution with a higher framerate and 16-bit colors. But as history would have it, a flaw in the design of the Vérité 1000 was to be deadly for the innovative company. I had never heard of Rendition or its V1000, and this story illustrates why. An absolutely fascinating and detailed read, and be sure to also read the follow-up article, which dives into the 3Dfx Voodoo 1 and Quake.
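
For a sense of scale, here’s my own back-of-the-envelope arithmetic on what a 25 Mpixel/s fill rate buys; the numbers are mine, not the article’s, and assume every visible pixel is written exactly once per frame.

```cpp
// Rough fill-rate ceilings at period-typical resolutions, assuming a depth
// complexity of 1 (no overdraw). Real scenes overdraw pixels and Quake was
// often CPU- and geometry-bound, so actual framerates sat well below these.
#include <cstdio>

int main() {
    const double fillRate = 25.0e6; // claimed V1000 fill rate, in pixels per second
    const int resolutions[][2] = {{320, 200}, {512, 384}, {640, 480}};

    for (const auto& r : resolutions) {
        const double pixelsPerFrame = static_cast<double>(r[0]) * r[1];
        std::printf("%dx%d -> ~%.0f fps ceiling\n", r[0], r[1], fillRate / pixelsPerFrame);
    }
    // Prints roughly: 320x200 -> ~391 fps, 512x384 -> ~127 fps, 640x480 -> ~81 fps.
    return 0;
}
```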

The AMD Radeon VII review: an unexpected shot at the high-end

AnandTech has published its review of AMD’s surprise new high-end Radeon VII graphics card, and the results should be cause for some cautious optimism among PC builders. Overall then, the Radeon VII puts its best foot forward when it offers itself as a high-VRAM prosumer card for gaming content creators. And at its $699 price point, that’s not a bad place to occupy. However for pure gamers, it’s a little too difficult to suggest this card instead of NVIDIA’s better performing GeForce RTX 2080. So where does this leave AMD? Fortunately for the Radeon rebels, their situation is improved even if the overall competitive landscape hasn’t been significantly changed. It’s not a win for AMD, but being able to compete with NVIDIA at this level means just that: AMD is still competitive. They can compete on performance, and thanks to Vega 20 they have a new slew of compute features to work with. It’s going to win AMD business today, and it’s going to help prepare AMD for tomorrow and the next phase that is Navi. It’s still an uphill battle, but with Radeon VII and Vega 20, AMD is now one more step up that hill. While not a slam-dunk, the Radeon VII definitely shows AMD can get at least close to NVIDIA’s RTX cards, and that should make all of us quite happy – NVIDIA has had this market to itself for far too long, and it’s showing in the arrogant pricing the company maintains. While neither RTX cards nor this new Radeon VII make me want to replace my GTX 1070 – and its custom watercooling parts – it at least makes me hopeful that the coming years will be more competitive.

Nvidia CEO warns of “extraordinary, unusually turbulent, disappointing” Q4

Ars Technica writes: On Monday, Nvidia took the unusual step of offering a revised Q4 2019 financial estimate ahead of its scheduled disclosure on February 14. The reason: Nvidia had already predicted low revenue numbers, and the hardware producer is already confident that its low estimate was still too high. The original quarterly revenue estimate of $2.7 billion has since dropped to $2.2 billion, a change of roughly 19 percent. A few new data points factor into that revision. The biggest consumer-facing issue, according to Nvidia, is “lower than expected” sales of its RTX line of new graphics cards. This series, full of proprietary technologies like a dedicated raytracing processor, kicked off in September 2018 with the $1,199 RTX 2080 Ti and the $799 RTX 2080. The RTX launch was bungled, and the cryptocurrency hype is way past its prime. It’s not a surprise Nvidia is going to experience a rough year.

Nvidia adds FreeSync support to its GPUs, but not for all monitors

FreeSync support is coming to Nvidia; at its CES event today, Nvidia announced the GSync-Compatible program, wherein it says it will test monitors that support the VESA DisplayPort Adaptive-Sync protocol to ascertain whether they deliver a “baseline experience” comparable to a GSync monitor. Coincidentally, AMD’s FreeSync utilizes the same VESA-developed implementation, meaning that several FreeSync-certified monitors will now be compatible with Nvidia’s 10- and 20-series GPUs. This is great news, since GSync support requires additional hardware and this increases prices; you’ll find that the GSync version of a display is always significantly more expensive than the FreeSync version.

Announcing PhysX SDK 4.0, an open-source physics engine

NVIDIA is proud to announce PhysX SDK 4.0, available on December 20, 2018. The engine has been upgraded to provide industrial grade simulation quality at game simulation performance. In addition, PhysX SDK has gone open source, starting today with version 3.4! It is available under the simple 3-Clause BSD license. With access to the source code, developers can debug, customize and extend the PhysX SDK as they see fit.

I'm not well-versed enough in this area to gauge how big of a deal this news is, but regardless, it seems like a good contribution to the open source community.
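
For readers who, like me, want a feel for what the SDK actually looks like, here's a minimal sketch of bringing up a PhysX scene and stepping it, written against the 3.x/4.x C++ API as I understand it. Error handling is omitted and I haven't built this against the newly opened sources, so treat it as illustrative rather than canonical.

```cpp
// Minimal PhysX sketch: create the SDK, a scene with gravity, a ground plane,
// and a dynamic box, then step the simulation at 60 Hz.
#include <PxPhysicsAPI.h>

using namespace physx;

static PxDefaultAllocator     gAllocator;
static PxDefaultErrorCallback gErrorCallback;

int main() {
    // Boot the SDK: foundation first, then the physics object itself.
    PxFoundation* foundation = PxCreateFoundation(PX_PHYSICS_VERSION, gAllocator, gErrorCallback);
    PxPhysics*    physics    = PxCreatePhysics(PX_PHYSICS_VERSION, *foundation, PxTolerancesScale());

    // A scene with gravity and a small CPU worker pool.
    PxSceneDesc sceneDesc(physics->getTolerancesScale());
    sceneDesc.gravity       = PxVec3(0.0f, -9.81f, 0.0f);
    sceneDesc.cpuDispatcher = PxDefaultCpuDispatcherCreate(2);
    sceneDesc.filterShader  = PxDefaultSimulationFilterShader;
    PxScene* scene = physics->createScene(sceneDesc);

    // A static ground plane and a dynamic box dropped onto it.
    PxMaterial* material = physics->createMaterial(0.5f, 0.5f, 0.6f); // static friction, dynamic friction, restitution
    scene->addActor(*PxCreatePlane(*physics, PxPlane(0, 1, 0, 0), *material));
    PxRigidDynamic* box = PxCreateDynamic(*physics, PxTransform(PxVec3(0, 10, 0)),
                                          PxBoxGeometry(1, 1, 1), *material, 1.0f);
    scene->addActor(*box);

    // Step the simulation for five seconds of simulated time.
    for (int i = 0; i < 300; ++i) {
        scene->simulate(1.0f / 60.0f);
        scene->fetchResults(true);
    }

    scene->release();
    physics->release();
    foundation->release();
    return 0;
}
```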

Nvidia created the first game demo using AI-generated graphics

The recent boom in artificial intelligence has produced impressive results in a somewhat surprising realm: the world of image and video generation. The latest example comes from chip designer Nvidia, which today published research showing how AI-generated visuals can be combined with a traditional video game engine. The result is a hybrid graphics system that could one day be used in video games, movies, and virtual reality.

Impressive technology. I can see how this will eventually make it a lot easier to generate graphics for 'realistic' games.

NVIDIA Turing GPU architecture deep dive: prelude to RTX

It's been roughly a month since NVIDIA's Turing architecture was revealed, and if the GeForce RTX 20-series announcement a few weeks ago has clued us in on anything, it's that real time raytracing was important enough for NVIDIA to drop "GeForce GTX" for "GeForce RTX" and completely change the tenor of how they talk about gaming video cards. Since then, it's become clear that Turing and the GeForce RTX 20-series have a lot of moving parts: RT Cores, real time raytracing, Tensor Cores, AI features (i.e. DLSS), raytracing APIs. All of it coming together for a future direction of both game development and GeForce cards.

In a significant departure from past launches, NVIDIA has broken up the embargos around the unveiling of their latest cards into two parts: architecture and performance. For the first part, today NVIDIA has finally lifted the veil on much of the Turing architecture details, and there are many. So many that there are some interesting aspects that have yet to be explained, and some that we'll need to dig into alongside objective data. But it also gives us an opportunity to pick apart the namesake of GeForce RTX: raytracing.

AnandTech's deep dive into NVIDIA's new Turing architecture - the only one you really need.
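
As a rough illustration of what those RT Cores actually accelerate: BVH traversal boils down to huge numbers of ray/box and ray/triangle intersection tests, which pre-Turing hardware has to grind through as ordinary shader math. Below is the textbook slab test for a ray against an axis-aligned bounding box, in plain C++ (my example, not NVIDIA's code); Turing's claim is that this class of work moves into dedicated fixed-function units.

```cpp
// Textbook "slab" test: does a ray hit an axis-aligned bounding box?
// A BVH traversal performs this test millions of times per frame.
#include <algorithm>
#include <utility>

struct Vec3 { float v[3]; };

struct Ray {
    Vec3 origin;
    Vec3 invDir; // 1 / direction, precomputed so the test is all multiplies
};

struct AABB { Vec3 min, max; };

// Returns true if the ray hits the box somewhere between tMin and tMax.
bool intersect(const Ray& r, const AABB& b, float tMin, float tMax) {
    for (int axis = 0; axis < 3; ++axis) {
        float t0 = (b.min.v[axis] - r.origin.v[axis]) * r.invDir.v[axis];
        float t1 = (b.max.v[axis] - r.origin.v[axis]) * r.invDir.v[axis];
        if (t0 > t1) std::swap(t0, t1);
        tMin = std::max(tMin, t0);
        tMax = std::min(tMax, t1);
        if (tMax < tMin) return false; // the slabs no longer overlap: miss
    }
    return true;
}
```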

NVIDIA RTX 2080 Ti, 2080, 2070 officially released

NVIDIA announced its new Turing video cards for gaming today, including the RTX 2080 Ti, RTX 2080, and RTX 2070. The cards move forward with an upgraded-but-familiar Volta architecture, with some changes to the SMs and memory. The new RTX 2080 and 2080 Ti ship with reference cards first, and partner cards about 1-3 months after that, depending on which partner it is. The board partners did not receive pricing or even card naming until around the same time as media, so expect delays in custom solutions.

A major upgrade, and pricing - starting at $599 for the 2070 - is entirely reasonable for a new generation. Might finally be time to upgrade my 1070 once EK Waterblocks releases waterblocks for these new cards.

NVIDIA reveals next-gen Turing GPU architecture

Moments ago at NVIDIA's SIGGRAPH 2018 keynote presentation, company CEO Jensen Huang formally unveiled the company's much awaited (and much rumored) Turing GPU architecture. The next generation of NVIDIA's GPU designs, Turing will be incorporating a number of new features and is rolling out this year. While the focus of today's announcements is on the professional visualization (ProViz) side of matters, we expect to see this used in other upcoming NVIDIA products as well. And by the same token, today's reveal should not be considered an exhaustive listing of all of Turing's features.

If you've been holding off on upgrading a 10x0 or earlier card, you're about to be rewarded - at Gamescom next week, NVIDIA is expected to unveil the consumer cards based on the Turing architecture.