I’ve got two fantastic posts about Linux today, from the same author – Chris Siebenmann. First, the history behind kernel mode setting in Linux.
In the older days of Linux, the kernel didn’t know very much about graphics (at least on PCs). Instead, setting up and handling graphics hardware was the domain of the X server; the kernel gave it access to PCI (or AGP) resources, and the X server directly stored values and read things out. Part of what the X server did was set the graphics mode (i.e., the resolution, depth, and scan frequencies), initially from explicit modelines and then over time from EDID information and other things you didn’t have to configure (which was great). This was user space mode setting. There were a variety of reasons to do this at the time (cf) but it had various drawbacks, including requiring the X server to have significant privileges (cf Fedora removing them).
You can see where this is going.
I would argue that this transfer of authority away from the X server has gone a bit too far.
On my recent Ubuntu setup, SSH’ing into my machine remotely and trying to start an X app causes it to be drawn on that machine’s own screen, not on mine. In other words, I can no longer remotely run X apps on my terminal; whenever I run them, they show up on the remote screen, which rather defeats the point.
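For what it’s worth, where an X client draws is decided by the DISPLAY environment variable, which sshd sets to a forwarded display (something like “localhost:10.0”) when X11 forwarding is on — e.g. “ssh -X” on the client plus “X11Forwarding yes” in the remote sshd_config, assuming OpenSSH. A minimal sketch of the distinction (the helper function name is mine, purely for illustration):

```shell
# Sketch: the value of DISPLAY decides where an X app ends up.
where_does_it_draw() {
  case "${1:-}" in
    localhost:*|127.0.0.1:*) echo "your local screen (SSH-forwarded)" ;;
    :*)                      echo "the machine's own console" ;;
    "")                      echo "nowhere: DISPLAY is unset" ;;
    *)                       echo "a remote X server at $1" ;;
  esac
}

where_does_it_draw "localhost:10.0"   # what sshd sets with forwarding on
where_does_it_draw ":0"               # the console display, my symptom above
```

When forwarding silently isn’t negotiated, sshd leaves DISPLAY at (or clients fall back to) the console’s “:0”, which matches the behaviour I’m seeing.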
That is one step further than the previous “too far” step, in which startx would ignore my local config changes and instead try to “guess” my hardware configuration. With multiple different types of graphics cards, experimenting with screen setups meant restarting the graphical login for every attempt, instead of the previously possible “X -config test.conf”.
(I am sure there are ways to do these things.)
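One such way, I believe, is still available: Xorg accepts a -config option, so a candidate config can be tried on a second display (say :1) without touching the graphical login on :0. A rough sketch, with a hypothetical wrapper of my own naming; it assumes the Xorg binary, root access, and a free virtual terminal, and Xorg wants an absolute path for -config when run as root:

```shell
# Sketch: try a candidate xorg.conf on a throwaway server at display :1.
test_xconfig() {
  conf="$1"
  [ -f "$conf" ] || { echo "no such config: $conf"; return 1; }
  # -retro gives the classic stipple background so you can see it started.
  Xorg :1 -config "$conf" -retro &
}

# Usage: test_xconfig /root/xorg-test.conf
# then point a client at it:  DISPLAY=:1 xterm
```

Whether a modern, KMS-era driver stack actually honours everything in such a config is, of course, a separate question.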
Anyway, every year I see Linux becoming more like “other operating systems”, where customization is only an afterthought and we users are supposed to be content with the defaults we are given.