Linked by Norman Feske on Thu 15th Aug 2013 22:47 UTC
The just released version 13.08 of Genode marks the 5th anniversary of the framework for building component-based OSes with an emphasis on microkernel-based systems. The new version makes Qt version 5.1 available across all supported kernels, adds tracing capabilities, and vastly improves multi-processor support.
RE[3]: Comment by jayrulez
by nfeske on Sat 17th Aug 2013 11:57 UTC in reply to "RE[2]: Comment by jayrulez"

For the reasons you stated, Genode's interfaces for I/O (the session interfaces for networking, block access, and file systems) are designed to work asynchronously. This way, it is possible to issue multiple file operations at once and to receive a signal upon their completion.
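To make the pattern concrete, here is a minimal sketch of such an asynchronous request/completion interface. It is not the actual Genode session API; all names and types below are hypothetical:

  /* Minimal sketch of an asynchronous request/completion interface,
   * loosely following the idea described above. All names are
   * hypothetical, not the actual Genode API. */

  #include <cstdio>
  #include <cstdint>
  #include <cstddef>
  #include <functional>
  #include <queue>

  struct Block_request { uint64_t block; std::size_t count; };

  class Async_block_session
  {
      std::queue<Block_request> _pending; /* submitted requests        */
      std::function<void()>     _sigh;    /* completion signal handler */

    public:

      /* register a handler to be signalled on completion */
      void sigh(std::function<void()> h) { _sigh = std::move(h); }

      /* enqueue a request, return immediately without blocking */
      void submit(Block_request const &r) { _pending.push(r); }

      /* driver side: process all pending requests, then signal once */
      void process()
      {
          while (!_pending.empty()) {
              Block_request r = _pending.front();
              _pending.pop();
              printf("completed request for block %llu\n",
                     (unsigned long long)r.block);
          }
          if (_sigh) _sigh(); /* one signal covers a batch of completions */
      }
  };

  int main()
  {
      Async_block_session session;
      session.sigh([] { printf("signal: requests completed\n"); });

      /* issue multiple operations at once ... */
      session.submit({ 0, 8 });
      session.submit({ 64, 16 });

      /* ... and learn about their completion via a single signal */
      session.process();
  }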

Also, in general, Genode tries to move away from blocking RPC calls completely. For example, the original timer-session interface had a blocking 'sleep' call. On the server side (in the timer driver), this required either complicated out-of-order dispatching of RPC requests or one thread per client. By turning it into an asynchronous interface some months ago, we could greatly simplify the timer and reduce its resource usage (by avoiding those threads). Another example is the NIC bridge component, which we reworked for the current release. Here, modelling the component as a mere state machine measurably improved performance.
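The timer rework can be sketched roughly as follows. Instead of blocking each client inside a 'sleep' RPC (at the cost of one server thread per client), the driver keeps a queue of deadlines and delivers signals as they expire. Again, this is only an illustration with made-up names, not the actual driver code:

  /* Hypothetical sketch of a timer driver modelled as a state
   * machine: one deadline queue instead of one blocked thread
   * per client. */

  #include <cstdio>
  #include <cstdint>
  #include <functional>
  #include <map>

  class Timer_driver
  {
      /* deadlines mapped to the respective clients' signal handlers */
      std::multimap<uint64_t, std::function<void()>> _deadlines;

    public:

      /* a client registers a timeout, the call returns immediately */
      void schedule(uint64_t deadline, std::function<void()> sigh)
      {
          _deadlines.emplace(deadline, std::move(sigh));
          /* a real driver would now (re)program the hardware timer
           * for the earliest pending deadline */
      }

      /* invoked on a hardware-timer interrupt with the current time */
      void handle_interrupt(uint64_t now)
      {
          /* signal every client whose deadline has passed */
          auto it = _deadlines.begin();
          while (it != _deadlines.end() && it->first <= now) {
              it->second();             /* deliver wakeup signal */
              it = _deadlines.erase(it);
          }
      }
  };

  int main()
  {
      Timer_driver timer;
      timer.schedule(10, [] { printf("client A woke up\n"); });
      timer.schedule(20, [] { printf("client B woke up\n"); });
      timer.handle_interrupt(15); /* wakes only client A */
      timer.handle_interrupt(25); /* wakes client B */
  }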

There still exist a few blocking interfaces from the early days, but those will eventually be changed to operate asynchronously too.

However, even though the interfaces are well prepared for a fully asynchronous software stack, not all server implementations operate this way yet. For example, the part_blk partition manager dispatches one measly block request at a time. This needs to be fixed.
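The difference can be illustrated with a hypothetical server loop (all names are made up): a serial server waits for each request to complete before forwarding the next one, whereas a pipelined server keeps the underlying driver saturated:

  /* Hypothetical sketch contrasting serial dispatch with keeping
   * multiple block requests in flight. */

  #include <queue>

  struct Request { };

  std::queue<Request> incoming;                /* requests from the client */
  void forward_to_driver(Request const &) { }  /* async, non-blocking stub */
  void wait_for_any_completion() { }           /* blocking stub            */

  /* serial dispatch: at most one request in flight at a time */
  void serve_serially()
  {
      while (!incoming.empty()) {
          forward_to_driver(incoming.front());
          incoming.pop();
          wait_for_any_completion(); /* idles while the driver could work */
      }
  }

  /* pipelined dispatch: hand all pending requests to the driver at once */
  void serve_pipelined()
  {
      while (!incoming.empty()) {
          forward_to_driver(incoming.front()); /* no intermediate waits */
          incoming.pop();
      }
      /* completions are handled later, as their signals arrive */
  }

  int main()
  {
      for (int i = 0; i < 3; i++) incoming.push({ });
      serve_pipelined(); /* all three requests go out back-to-back */
  }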

Your comment about SMP and I/O-bound workloads is spot-on!

Managing the affinities dynamically at runtime is certainly an interesting project in its own right. In the current release, we have laid the groundwork to pursue such ideas. What's missing are good measurement instruments for the threads' behaviour. It would be useful to gather per-thread statistics about how much CPU time was actually consumed, how many attempts were made to perform (costly) IPC to a remote CPU, or how often lock contention occurred, maybe even somehow capturing the profile of local vs. remote memory accesses. This information could then be fed into an optimization algorithm that tries to minimize a cost model, as sketched below. These are tempting topics for further research.
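As a purely illustrative sketch (the statistics, weights, and names are assumptions, not existing Genode interfaces), such a cost model might combine the gathered per-thread numbers linearly and let an optimizer compare candidate placements by the resulting sum:

  /* Hypothetical per-thread statistics feeding a cost model for
   * CPU-affinity placement. Weights and names are made up. */

  #include <cstdio>
  #include <cstdint>
  #include <vector>

  struct Thread_stats
  {
      uint64_t cpu_time_us;      /* CPU time actually consumed           */
      unsigned remote_ipc_count; /* IPCs that crossed CPU boundaries     */
      unsigned lock_contentions; /* times a contended lock was hit       */
      unsigned remote_mem_pct;   /* percentage of remote-memory accesses */
  };

  /* linear cost model: the weights express the relative penalty of
   * each effect; an optimizer would vary the placement of threads
   * to minimize the total cost */
  double placement_cost(Thread_stats const &s)
  {
      double const W_IPC = 50.0, W_LOCK = 20.0, W_MEM = 1.0;
      return W_IPC  * s.remote_ipc_count
           + W_LOCK * s.lock_contentions
           + W_MEM  * s.remote_mem_pct * (s.cpu_time_us / 1000.0);
  }

  double total_cost(std::vector<Thread_stats> const &threads)
  {
      double sum = 0;
      for (Thread_stats const &s : threads)
          sum += placement_cost(s);
      return sum; /* compare candidate affinity assignments by this value */
  }

  int main()
  {
      std::vector<Thread_stats> stats { { 5000, 120, 3, 40 },
                                        {  800,   2, 0,  5 } };
      printf("total placement cost: %.1f\n", total_cost(stats));
  }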
