The new year isn’t even a day old, and Haiku developer X512 dropped something major in Haiku users’ laps: the first alpha version of an accelerated NVIDIA graphics driver for Haiku. Supporting at least NVIDIA Turing and Ampere GPUs, it’s very much in an alpha state, but it does allow for proper GPU acceleration, with the code surely making its way to Haiku builds in the near future.
Don’t expect a flawless experience – this is alpha software – but even then, this is a major milestone for Haiku.

This is excellent news, and probably the only thing that would get me to buy a card from NVIDIA. It will hopefully also work with laptops that have NVIDIA GPUs onboard.
With AMD having (most of) their specs open, I guess AMD GPUs are already supported.
My understanding is that the AMD and Intel drivers for Haiku are modesetting only, they don’t currently offer hardware 2D or 3D acceleration. In my experience, the AMD driver is good enough on the surface, but can’t quite reach the limits of my monitor. It can do 2560×1440 at 170Hz over DisplayPort, but the Haiku driver caps at 120Hz.
And you can notice the difference between 170 and 120?
kwanbis,
I wouldn’t think I could detect these ordinarily, but bad tearing, or missing the frame render window (such that frames get duplicated), might make it noticeable. Unaccelerated drivers on a loaded CPU experiencing spikes are probably the worst-case scenario for making these happen.
Alas, I don’t have such a high end monitor so I can’t really test it, haha.
In most operating systems, not really, though I can absolutely tell the difference between 60Hz and any higher refresh rate. There’s a certain smoothness to dragging windows that isn’t present at 60Hz; that refresh rate looks choppy and disorienting to my eyes, and on the rare occasion the computer reboots into 60Hz instead of the 170Hz I’m used to, I notice it right away.
Now, in Haiku, 120Hz is not nearly as smooth as it is on other OSes; it’s nearly as choppy as 60Hz, so hardware acceleration would definitely help there.
“There’s a certain smoothness to dragging windows that isn’t present at 60Hz”
I… I think I can live with that.
I survived an Atari Falcon030 with windows refreshing at 2 Hz when you scrolled, barely better when you moved them around.
So it’s much more of a first world problem than a usability problem.
How much more power is drawn at 170Hz instead of 60Hz?
kwanbis,
If you are gaming, you can.
Because the frame rate is almost directly tied to response time, many, especially in competitive gaming, will invest in better monitors (and keyboard/mouse) to reduce those pesky delays between “brain says hit” and “the game does something”.
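To put rough numbers on it: at 60Hz a new frame goes out every 1000/60 ≈ 16.7ms, while at 170Hz it’s 1000/170 ≈ 5.9ms, so just waiting for the next refresh can cost a 60Hz panel up to ~11ms of extra input-to-photon delay.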
Nothing could get me to buy a card from NVidia.
0brad0,
I get that. For games I couldn’t really care less about nvidia, and for many consumers that’s the only reason to consider a GPU. But for GPGPU and ray-traced Blender modeling, nvidia does have a substantial lead in performance and software compatibility. Honestly the linux drivers work well for me, but it does irk me that they’re not open source. It’s not good to be dependent on a single supplier, but for better or worse CUDA has become the industry standard, and a non-nvidia card that doesn’t support it can be a disadvantage on the software side.
I really wish the FOSS community could build an open GPU from the ground up. I don’t think designing such a GPU is so far fetched: the FOSS community can deliver competitive software, but actually getting open hardware fabricated on a cutting edge fab is a pipe dream. We could conceivably build one out of FPGAs, which would be neat, but it would be neither competitive nor accessible. The capital expenditure needed to fabricate high performance GPUs means only extremely wealthy companies can play at this game, and my understanding is that even those wealthy companies are fighting for fab time.
Well I wouldn’t be buying it new from them; it would be a used card, as the supported cards are all several generations behind. They wouldn’t see any money from me, which is the only other reason I’d consider a used NVIDIA card.
I am the same way with Apple products; I despise the company but until very recently I used an old MacBook Air I bought as not working and repaired, just for my music hobby, as my instruments and equipment work best with macOS. I wouldn’t buy a new Mac if you gave me the money to do so.
There is much more to a driver than supporting some APIs, but this is still a great step.
They have ported the “Mesa” Vulkan drivers. That means, if things go right, they should have at least the same capabilities as the open source Linux one. (More or less, I’m pretty sure DRM and a few others will not be supported).
Will it run games?
Nope, unless you settle for 15 year old Quake 3 ports.
Will it have noticeable impact?
Yes. The desktop is “3d composited” these days. Along with any multimedia frameworks. (And Haiku’s original goal was being the top multimedia platform replacing BeOS, and possibly Amiga before that)
sukru,
It’s so regrettable that the term “DRM” can refer both to the linux graphics subsystem and to digital rights management. Far more people understand DRM to mean the latter, but in linux graphics contexts it becomes ambiguous, and I wish linux used a different, less confusing name for the subsystem.
https://en.wikipedia.org/wiki/Direct_Rendering_Manager
I assume you mean playing digitally locked content.
I asked this question not too long ago, but on linux I didn’t know of a way to see whether a playing stream was using hardware acceleration or not – especially when both the CPU and GPU loads increase during playback. “top” only gives circumstantial evidence, and I don’t even know whether any of the content I play gets accelerated on the GPU at all.
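For players launched from a terminal there’s at least mpv, which – assuming a build with hwdec support – reports what it ends up doing:

mpv --hwdec=auto somevideo.mp4   # “somevideo.mp4” is just a placeholder

At least in the versions I’ve used, it prints a line like “Using hardware decoding (nvdec)” when acceleration kicks in, and no such line means decoding stayed on the CPU. That doesn’t help with browser playback, though.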
Since I do not follow the workings of Linux, I take DRM to also mean “Digital Radio Mondiale”, a pretty neat digital radio standard that is better than DAB, HD Radio etc, as it also works in shortwave bands and thus can be received over vast distances.
Squizzler,
Meanwhile, here in the US, I’m only familiar with AM/FM radio; I think only satellite radio is digital. I’ve heard that teslas don’t even let owners receive AM signals; due to its lower fidelity, AM is generally used for news/emergencies/most sports broadcasts.
Alfman,
AM radio is removed because the Tesla electric motor is basically an AM transmitter, and causes interference.
FM is also being removed as far as I know, but SiriusXM was recently added (not sure whether it is Internet-based only).
Basically after Spotify/YouTubeMusic/whatnot… I almost never listen to radio unless I’m travelling overseas with limited internet.
sukru,
I hadn’t heard this justification. If it’s true that they cause interference on the AM bands, I’m surprised FCC regulations allow that because electronics have FCC warnings and aren’t allowed to be used if they cause interference. Do you know if they got a legal exception to emit radio frequency interference?
Don’t all satellite services require a subscription? Terrestrial AM/FM bands are public and free. Entertainment isn’t mission critical, but IMHO losing access to public emergency/traffic/weather alerts is more problematic. Not everybody has a high cap mobile internet plan for streaming, and there are still plenty of internet dead spots, especially in rural areas. In areas hit by hurricanes it’s not hypothetical that normal internet service is often disrupted.
Congress is not known for successfully passing many bills, but there are motions to recognize the importance of terrestrial radio and to keep subscription services from fully taking over.
https://bilirakis.house.gov/media/press-releases/bilirakis-and-pallone-re-introduce-am-radio-every-vehicle-act
From what I’ve seen out there, DRM reception is worse than AM.
If you’ve got an NVIDIA GPU, nvidia-smi does show you your graphics card’s usage, so:
watch -n 1 nvidia-smi
Run it before starting the stream, then start playback: it will either show more usage, or not.
sao,
I know that nvidia-smi outputs a GPU version of top, but it goes up when the display starts outputting anything at all – I believe because the composition pipeline uses it. But it doesn’t tell me whether the codec is accelerated. Top and nvidia-smi provide evidence of load, but they don’t clearly reveal what components are causing it.
When I play a video in FF, CPU usage goes up to 7% of 8C/16T and the GPU goes up 11%. And the nvidia-smi process list does not show any of the processes accounting for this, so whatever is generating the load is doing so at a lower level than nvidia-smi’s process accounting reveals. It’s plausible that nvidia drivers use a sidechannel mechanism that doesn’t show up in the process list. I want to guess FF rendering is NOT accelerated on linux because historically that’s been the case. I could experiment some more, but the linux tools I am aware of are too blunt to provide specific insight into what’s going on without indirect guessing. Between TCP/HTTPS/DRM/codec/compositor/API overhead, both the CPU and GPU loads increase.
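One thing I should probably try is nvidia-smi’s process-monitor mode, which, assuming the driver supports it, breaks utilization out per PID with separate encoder/decoder columns:

nvidia-smi pmon -s u   # -s u selects utilization sampling: sm/mem/enc/dec per process

If FF’s decode were really going through NVDEC, I’d expect its PID to show a non-zero dec value there.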
Alfman,
Yes, but thinking back, the other DRM (Direct Rendering Manager) could also be useful in supporting more GPU drivers. However, it is basically “we give up, and we are re-implementing the linux kernel bit by bit”.
The ffmpeg or mplayer command line will tell you, of course.
But for generic one, I do not know.
There seems to be a tool called nvtop (no longer nvidia-specific) that shows which hardware features are used. But I have never tested it.
sukru,
It tells you the API it’s using, but even the ffmpeg userspace tool doesn’t actually say whether hardware acceleration is in use.
“Using auto hwaccel type vdpau with new default device.”
The VDPAU API is one of a few APIs that can use acceleration, but it’s just a front-end wrapper for multiple backends, and using it doesn’t imply that hardware acceleration is actually in use. Video playback will work with or without acceleration, and I don’t know of a way to get standard tools to report the actual configuration used. If you know of a way, I’d like to learn how.
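The closest I’ve found is vdpauinfo (from the package of the same name), which at least names the backend that actually got loaded and the decoder profiles it claims to support:

vdpauinfo   # look at the “Information string” line for the backend name

But knowing the backend exists still doesn’t prove a given playback session went through it.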
I tried to install this on debian, but the package had dependencies that literally tried to uninstall the nvidia driver, and it would not complete installation without doing so. Obviously I don’t want to let it do that. However the screenshots do reveal something I had not seen before…
https://github.com/Syllo/nvtop/blob/master/screenshot/Nvtop-config.png
“GPU encoder rate” “GPU decoder rate”!
These aren’t displayed by nvidia-smi’s top interface, but looking at this “nvidia-smi cheatsheet” shows there are command-line options to output them.
(links broken to avoid auto-moderation)
gist.github.com/zhum/0cc1c3447bcf61fe5c36416056ac7c4a
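Presumably the device-monitor mode is what’s meant; assuming the cheatsheet is accurate, something like:

nvidia-smi dmon -s u   # -s u samples utilization, including enc and dec columns

should print encoder/decoder utilization once per second.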
Alas, I can’t get nvidia-smi to output a non-zero value for the decoder. I guess that means GPU-based codec acceleration is not in use.
This nvidia documentation covers building a custom version of ffmpeg to support “cuda”. I tested my local version and it does not support the cuda param.
docs.nvidia.com/video-technologies/video-codec-sdk/13.0/nvdec-video-decoder-api-prog-guide/index.html
docs.nvidia.com/video-technologies/video-codec-sdk/12.0/ffmpeg-with-nvidia-gpu/index.html
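For anyone else wanting to check their own build: ffmpeg can list the hardware accelerators it was compiled with:

ffmpeg -hwaccels

If cuda doesn’t show up in that list (it doesn’t in mine), the build wasn’t configured with NVIDIA’s codec SDK.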
So at least on debian linux NVDEC hardware acceleration does not seem to be used out of the box by FF/chrome/ffmpeg.