Linked by Thom Holwerda on Mon 10th Nov 2008 22:56 UTC
Amiga & AROS Saturday November 8, I received an email from someone, inquiring if I would be interested in "doing a first interview/introduction into a new operating system". We get these emails and news submissions all the time, and most of the time, "new operating system" means Ubuntu-with-a-black-theme, so we don't bother. I figured this time things wouldn't be different, but after a bit of digging around, there's a little more to it this time.
Thread beginning with comment 336986

And unfortunately the Unix/Linux software model has many limitations: BeOS really showed this - its application responsiveness has not been reproduced on Linux even on much more powerful hardware, and I don't expect this to change anytime soon...
:-( :-(

This is my observation as well.

I wonder what exactly in the Linux stack makes responsiveness so poor?

The kernel?

Every OS that makes applications more responsive than Linux has my support.

Reply Parent Score: 2

WereCatf Member since:

I wonder what exactly in the Linux stack makes responsiveness so poor?

The kernel?

I'd probably say the biggest issue is X itself. It is getting rather dated and carries a lot of outdated baggage with it. X does, of course, have features not available to Windows users, for example, but nowadays there are many things lurking in it that could be done more efficiently with modern hardware and software. For example, practically no graphics hardware sold today lacks acceleration for video output or common drawing operations, and hardware many generations old can do those things as well.

I don't have the skills to build anything even remotely as good as X is now, but someone who has the skills and the knowledge should perhaps start working on something new and optimize it for modern hardware.

Reply Parent Score: 2

silix Member since:

MacOSX is actually such a thing; at the lower levels, it runs a forked FreeBSD (Darwin).

XNU evolved from a merge of Mach 3 (the old NeXTSTEP kernel) with the upper halves of the network and VFS stacks from 4.4BSD, plus a new (object-oriented) hardware subsystem and API (IOKit, based on and written in Embedded C++).

Apple took code from FreeBSD, but not enough of it to consider the kernel a fork - OTOH the whole platform is one, because of the inclusion of the BSD userland.

I wonder what exactly in the Linux stack makes responsiveness so poor?

The kernel?

As has often been said, one of the main factors in BeOS's responsiveness was pervasive multithreading: since every BeOS program was multithreaded, programmers always had to develop with concurrency in mind, but it also meant the kernel was designed to handle large numbers of lightweight threads (so, say, every new Tracker window could run its event loop in a different thread).
Linux, instead, wasn't born with threading as a major feature (I actually recall Torvalds being very vocal against kernel mechanisms for thread support for a long time). Threads were implemented (as an afterthought) by mapping them onto normal processes, and the kernel-level API for them (as opposed to the userland library) was based on a (costly) syscall, clone(), which creates a new task that can share its parent's address space.
With time, the internal mechanisms have been refined (for instance, all threads of a single process now carry the same PID - interestingly, it was not so in the beginning - and locks have been optimized, with locking inside the kernel reduced, to yield better preemptibility and lower latencies).
But the overall structure and low-level kernel interfaces seem not to have changed much, to retain compatibility - so apparently threads still carry a higher inherent overhead than on other systems, and their use in applications still comes with the "use sparingly" warning.

On the other hand, BeOS windows were managed by a single server process, which also handled the input loop and focus mechanism, avoiding the kernel-X-kernel-WM-kernel-X round trip.
The IPC mechanism itself was (as in other microkernel or desktop-optimized OSs) more modern and efficient than what was (and still is) available from the classic Unix kernel, due to BeOS initially being microkernel-based.
Also interesting: message-based IPC and hardware events on Linux are further afterthoughts - DBUS does in a daemon what other systems do with native kernel facilities (message ports) inside the kernel, requiring an extra round trip (thus overhead) for every message exchange.

To reply to the original question, I'd say both: each single component, and the rather conservative overall structure of a server-derived system.
I'd probably say the biggest issue is X itself. It is getting rather dated and carries with itself a lot of outdated stuff.

I suggest reading Mark Kilgard's (former SGI, now NVIDIA employee) paper about D11.
Dated 1995, it pretty much sums up X11's inefficiencies (which are the same ones we try to tackle today - not much has changed) and redesigns the X11 code path, optimizing for the local case - basically bypassing everything:
bypassing server-side rendering (rendering on client-private surfaces);
bypassing protocol en/decoding, shared memory, sockets, and IPC altogether (instead relying on an innovative (for Unix) protected procedure call mechanism to share data between the client and the serv.. pardon, graphics kernel).
The drawback was that applications needed to adopt the protected procedure call model and the XClientPrivateSurface API - but as a matter of fact, Kilgard also thought about legacy compatibility, with the option of running a ported X11 server on top of the D11 kernel.

but someone who has the skills and the knowledge should perhaps start working on something new and optimize it for more modern hardware.

Those with the skills have already started on, or actually delivered, such new solutions.

The problem is, the community at large is actually ignorant - meaning people often *ignore* the very existence of whatever is born outside of *nix, or outside of FOSS. OTOH, X has become too entrenched in the current state of the community for its replacement being accepted anytime soon to be a realistic scenario...

Edited 2008-11-12 00:17 UTC

Reply Parent Score: 4

rob_mx Member since:

I wonder what exactly in the Linux stack makes responsiveness so poor?

The kernel?

I think it is the combination of the three. In a Linux system the three components are designed independently of each other; they are not built with the sole purpose of working well together. The implementation of the X protocol is multiplatform, so its design accounts for that. The same goes for GTK and Qt: they are meant to make developing applications on multiple platforms easier. Working across platforms has its disadvantages - you can't take for granted that a certain feature is available at all times, nor that the programming model is the same on every platform (like differences in threading models). As for the kernel, the Linux kernel only provides basic services for the desktop, and as a generic kernel its objective is not only to improve desktop performance but also to serve the server use case.

In systems like BeOS and others, where all components are designed under the same roof and aim at the same objective, assumptions can be made that make the system more responsive or efficient. They have the flexibility to design their display server, the display driver API, and the user API together, without worrying about affecting other platforms. This enables a cleaner, more cohesive, more consistent and, in some cases, more efficient design.

Reply Parent Score: 2

renox Member since:

My answer would be 'none of the above': the traditional way to write an application on Unix/Linux is the 'event loop'. You have a main loop, and when the user clicks on something it executes the corresponding action and then resumes; of course, while it executes an action the UI isn't responsive, as the main loop isn't processing events.

So to avoid being too unresponsive, the application delegates some of the very long actions to another process/thread, but developers do this as little as possible, as it increases complexity and overhead (on a single CPU, any additional thread reduces overall performance).

BeOS, by contrast, was initially designed for a dual-CPU computer, so they used threads everywhere to use both CPUs efficiently, and this had the very nice side effect of producing a responsive OS even on a single CPU.

If I'm right, then it means we'll get responsive applications on Linux only when the userspace applications and frameworks (X) are recoded to use parallelism (threads/processes). This will probably happen, as someday everyone is going to have a quad-core under their desk, so the incentive will be there - but it'll take a long time...

Reply Parent Score: 2