KDE’s Nate Graham talks about Wayland, summing up its history, current status, and future.
Wayland. It comes up a lot: “Bug X fixed in the Plasma Wayland session.” “The Plasma Wayland session has now gained support for feature Y.” And it’s in the news quite a bit lately with the announcement that Fedora KDE is proposing to drop the Plasma X11 session for version 40 and only ship the Plasma Wayland session. I’ve read a lot of nervousness and fear about it lately.
So today, let’s talk about it!
Wayland is a needlessly divisive topic, mostly because the people who want to stick to X.org are not the same people with the skills required to actually maintain, let alone improve, X.org. Wayland should not be a divisive topic because there’s really nowhere else to go – it’s the current and future of the Linux desktop, and as time goes on, the cracks in X.org will start to grow wider and longer.
In essence, Xorg became too large, too complicated, and too fragile to touch without risking breaking the entire Linux ecosystem. It’s stable today because it’s been essentially frozen for years. But that stability has come hand-in-hand with stagnation. As we all know in the tech world, projects that can’t adapt die. Projects that depend on them then die as well.
My biggest – and basically only – issue with Wayland is that it’s very Linux-focused for now, leaving especially the various BSDs in a bit of a rough situation. There’s work being done on Wayland for the BSDs, but I fear it’s going to take them quite a bit of time to catch up, and in the meantime, they might suffer from a lack of development and bug fixing in their graphics stack.
This is a very honest post that admits that “mistakes were made” with Wayland’s design, and that is hugely necessary when you’re trying to get people on board. Polemical editorials are NOT how you do that – acknowledging that people’s issues are real and that they are now being worked on seriously IS how you do that. Lots of things are changing with Wayland – at least, from the KDE side, so much so that I think when Plasma 6 comes out it’s probably where I’ll be headed.
One thing I find common about a lot of the recent “how come you’re not using Wayland” articles is that they assume I am deliberately avoiding it because I dislike change. This is untrue. What I am more interested in are the applications I use working under Wayland. I do not use GNOME or KDE and that leaves my window environment options limited right now. I have also not found a suitable terminal replacement in the Wayland world. The answer I keep getting is “well just keep using everything you want or need under XWayland”. That may actually be what I have to end up doing. The problem I have with that is it feels like just adding a layer for the sake of adding a layer. What I really want to do is remove the hundreds of X utilities and libraries of my system, but I can’t really do that until all of the applications I use are working in Wayland.
I wish these articles would instead focus on porting guides and tutorials rather than trying to convince a user to get onboard the Wayland bandwagon. I’m there, you guys just forgot to get the application developers on board.
The article tries to make the argument that Wayland isn’t a toy… but admits it still can’t do a lot of the essentials, even in the most advanced implementation, with wlroots lagging well behind. Translation: Wayland is making progress, but for many it is still a toy, and even if you use KDE you’ll have to accept missing features for the time being.
Until they rectify that, it’s still a sidegrade and not a path forward.
A wlroots-based KDE with all essential features available… is pretty much the requirement for dropping X. Want to move on from X? FINISH YOUR PRODUCT. Then we are hit with the insane statement that graphics drivers need to be reinvented to adapt to Wayland – which was literally written to the capabilities of the drivers to begin with.
All the while, Windows has had fairly stable XDDM and WDDM drivers for decades… and a clean upgrade path for applications.
dcantrell,
+1
Most of us aren’t resisting change, we’re resisting the breakages!!!
That’s going to take a while. XWayland is probably going to be a long-term solution for large swaths of X applications. Abstractions are very common in software; while adding another is not ideal, I’m ok with it if it helps us cross the bridge – as long as it works, obviously. It’s almost done except for nagging corner cases. If I were in charge of Wayland development, I would address those corner cases so that everyone could embrace Wayland without feeling they lost something in the process.
Best summary on the situation.
I for my part will stick with X because everything I do need works. I would only switch if everything I need worked in Wayland (I need especially Java/Swing support and “online meetings”) or when things start to break in X.
Adding XWayland will achieve for me what exactly?
From my experience, Wayland is really great when it works, but when you hit a wall, everything becomes much more difficult.
Simple things like “ssh -Y” for application forwarding require one to read long threads with conflicting information, and finally, as you said, to fall back to XWayland or switch back to X.Org entirely:
https://www.reddit.com/r/swaywm/comments/qyfeok/ssh_x_is_there_an_equivalent_under_wayland/
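For comparison, here is the X11 workflow next to the closest Wayland analogue I’m aware of, waypipe (a separate tool that must be installed on both ends; the hostnames and applications below are placeholders, not commands from the thread):

```shell
# X11: trusted forwarding is built into ssh; the remote app
# renders locally through the forwarded DISPLAY.
ssh -Y user@remote-host
firefox &

# Wayland: no built-in equivalent. waypipe wraps ssh and proxies
# the Wayland protocol itself (install waypipe on both machines).
waypipe ssh user@remote-host foot
```

The contrast illustrates the commenter’s point: the X11 path is one well-known flag, while the Wayland path requires discovering and installing an extra tool.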
The design philosophy is “simple things should be simple” (setting up a basic windowing system), and “hard things should be possible” (remote access to applications). Wayland seems to break that. Much more than systemd I would say.
For me, the problem is that Wayland is not fully functional. It worked on my Zenbook, but it died. It’s not working on my Dells. Until it’s a 100% replacement for X, I have to use either X, Windows, or Mac.
Many of my bad experiences with Linux have descended from its graphics stack, predominantly X11. Granted, I haven’t daily-driven Linux in years (decades), but when you have multiple distros crap out with the same issues, you get a bit disillusioned about the whole platform.
On the other hand, Haiku has never crapped out on me like Linux has. It just works, or it doesn’t. It never lives in that kind of Schrödinger’s OS state, where you never know whether a reboot will make something new crap out.
That’s not entirely true… the reason it can appear this way is the much slower pace of development, so far fewer changes happen to the graphics stack – but they do happen, and I have been bitten by things like previously using Vesa, only for my card to gain support in the accelerated driver (which was actually broken), etc… If Linux just assumed Vesa like Haiku does, graphics would work out of the box on everything from the mid-90s forward.
Then there is the whole… Haiku still doesn’t support acceleration, so the bar is way lower to begin with. Acceleration support is still at a 1995 level… granted, there are efforts to bring it up to about a 2020 level, but development there is sporadic and hasn’t been mainlined.
If you do not mind me prying, are you using Haiku as a daily driver? I could not use it for work but adopting Haiku as my “secondary” system is something I have considered trying for a while. Is it there yet?
Honestly, we don’t have a choice here. X is what it is, and Wayland is not X. While Wayland combined with compositor implementations supports many features that were not “great” in X – and, of course, without taking down the whole system when they go awry – the Wayland ecosystem is nothing like X, and therefore anything “cool” that might have depended on the idea of “display server” and “client” just doesn’t make direct sense. Fortunately, there is at least XWayland, but the X11 network protocol paradigm isn’t there.
Might have been interesting to have a wire protocol that was “better”, but for now, we have what we have and that’s Wayland and compositors (the latter arguably being the most interesting piece). So, no generic wire protocol. Unless full frame blasting is your thing… which maybe it should be? It sort of flies in the face of “the network is the computer”, and people (we’ll call them gamers for now) have little to no interest in things outside of Windows centric paradigms (which, are quite primitive). So, again, here we are.
I think we’ll see “things” (custom, one-off, specific) come to “compositor land” that help fill the voids of not having a “network paradigm” as the base. It will be different, and again, most people couldn’t care less. But many of us will remember… “I remember when even the window manager didn’t have to run locally….”
Accessibility (for the blind) has been getting worse on Linux for a while now, and the inevitable forced migration to Wayland will probably be the final KO to any viable desktop functionality for the disabled.
I think there are like two people working on Linux accessibility at this point; Oracle bought Sun and promptly terminated the Sun employees who were formerly doing that. It’s impossible for two people working in their spare time to keep up and unbreak all the breakage from the folks who need to redesign things every ten minutes.
As for people saying that X can’t be modernized or updated anymore… I’m pretty sure that if someone can add ray-tracing to Doom, a game from 1993, then X could very much still be updated and enhanced.
Wayland is still a dumb framebuffer, even though vector display servers existed 37 years ago.
jgfenix,
You’re right: as a network protocol, it was nice to be able to tell the X server to “draw this text at this location”. But these days most applications and frameworks render their own graphics buffers in-process even when running on X. I feel that any sufficiently advanced framework is going to make heavy use of pixel blit operations and not much else, because they want to maximize control over rendering.
I’m ok with this, it helps keep things simple by culling the unnecessary baggage, but where I feel Wayland made mistakes was in neglecting to plan appropriate replacements for lost functionality up front. XWayland should have been able to hit the ground running with full functionality on day 1, rather than passing the buck and letting everyone else solve Wayland’s limitations after the fact. Now we’re dealing with those consequences. There’s no technical reason it had to be this way, but they didn’t listen to the community, and unfortunately that’s why we’re here. I have strong criticism for projects that are managed this way. It takes time to earn back trust.
I am not talking about network transparency or X11. I am talking about the PostScript (and now PDF) imaging model. I am talking about Sun’s NeWS, NeXTSTEP, Apple’s Quartz, and the (now deceased) Berlin/Fresco. I am talking about Wayland only aiming to be the least common denominator – a lack of ambition instead of trying to build a modern system.
jgfenix,
Ok, thanks for explaining it further!
All of those things are implemented on top of dumb framebuffers though, right? (If not, how else are you proposing to do it?)
Obviously we need more complex abstractions too, but those can be built on top of low-level framebuffers rather than making the low-level framebuffers themselves more complicated. Since many developers are going to create their own high-level framework abstractions anyway, there’s something to be said for having the lowest-level graphics abstraction be a basic framebuffer, and not trying to complicate it with functionality that isn’t going to be used or could be done just as effectively in-process.
Obviously, in the end you need a framebuffer, but it’s different if your display server is raster-based or vector-based. For example, you get resolution independence for free. The PostScript/PDF imaging model also makes color profiles easier, as well as guaranteeing that your screen and your printed document are identical.
jgfenix,
I’d say that scaling and resolution are best handled by the code which is generating the interface, wherever that happens to be. The low-level graphics stack is normally very naive about the needs of the application, and it would be really hard to come up with something that works well generically for every application. Given that the toolkit is rendering controls anyway, why not have it render the widgets at whatever size it needs? It would just need a basic API to query DPI metrics.
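The “query DPI metrics, render at the right size” flow described above can be sketched like this (a toy illustration; the base DPI, function names, and numbers are hypothetical, not any toolkit’s real API):

```python
# Hypothetical sketch: the toolkit queries the DPI the display server
# reports and scales its own widgets, instead of the low-level graphics
# stack scaling finished pixels after the fact.

BASE_DPI = 96  # the traditional "1x" desktop DPI (assumption for this sketch)

def scale_factor(reported_dpi: int) -> float:
    """Derive a UI scale factor from the DPI the server reports."""
    return reported_dpi / BASE_DPI

def widget_size(logical_px: int, reported_dpi: int) -> int:
    """Device-pixel size for a widget designed in logical pixels."""
    return round(logical_px * scale_factor(reported_dpi))

# A 24-logical-pixel button on a 192 DPI ("2x" HiDPI) panel:
print(widget_size(24, 192))  # 48 device pixels
```

The point of the sketch is that the only thing the toolkit needs from below is one number; everything else stays in-process, which is exactly where the commenter argues it belongs.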
I plead ignorance here, but isn’t that typically handled at the application level? On the one hand doing it globally sounds good, but on the other the problem I see is the underlying graphics stack has no idea about the color profiles in use by an application. For example, browser software might simultaneously render multiple jpegs or videos each with a different color profile. In order to match them to the display, it seems inevitable that the application and/or toolkit rendering resources would need to apply color corrections before sending pixels to the framebuffer. Or conversely a much more elaborate API would need to negotiate color profiles for every single resource output to the framebuffer. Even if it could be done at the framebuffer level, I would imagine that this is much less efficient than color correcting within the resource decoder where HSL is already being converted to RGB in the first place. Also the browser might keep a cache of the color corrected image data so it can be re-rendered over and over again without any further color correction, but using a framebuffer API the pixels might need to be color corrected every time they are painted.
I do see the value in an API providing a query interface so the application knows which color profile to use, but actually having the framebuffer API perform color correction seems like a lot of effort and complexity, whereas an in-process library might do it better.
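The in-decoder correction being argued for here can be sketched as a tiny pipeline: linearize the source’s transfer curve, apply a 3×3 profile-conversion matrix, and re-encode for the display. This is a hedged toy example, not a real color-management implementation; the identity matrix stands in for the matrix a real pipeline would derive from the two ICC profiles, and values are assumed to be in [0, 1].

```python
# sRGB transfer functions (per IEC 61966-2-1).
def srgb_to_linear(c: float) -> float:
    return c / 12.92 if c <= 0.04045 else ((c + 0.055) / 1.055) ** 2.4

def linear_to_srgb(c: float) -> float:
    return c * 12.92 if c <= 0.0031308 else 1.055 * c ** (1 / 2.4) - 0.055

# Placeholder conversion matrix ("source and display profiles already match").
IDENTITY = [[1, 0, 0], [0, 1, 0], [0, 0, 1]]

def correct_pixel(rgb, matrix=IDENTITY):
    """Decode-time color correction: linearize, convert, re-encode."""
    lin = [srgb_to_linear(c) for c in rgb]
    out = [sum(matrix[i][j] * lin[j] for j in range(3)) for i in range(3)]
    return tuple(linear_to_srgb(c) for c in out)

# With the identity matrix a pixel round-trips unchanged (within float error),
# and a decoder could cache the corrected result and blit it repeatedly.
print(correct_pixel((0.5, 0.25, 1.0)))
```

Doing this once in the decoder, and caching the result, is the efficiency argument above: a framebuffer-level API would have to re-apply the correction on every paint.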
Hopefully my rebuttals aren’t coming across as annoying, you are bringing up some really insightful points and I do respect your point of view 🙂