Linked by Hadrien Grasland on Sun 29th May 2011 09:42 UTC
OSNews, Generic OSes It's funny how trying to have a consistent system design makes you constantly jump from one area of the designed OS to another. I initially just tried to implement interrupt handling, and now I'm cleaning up the design of an RPC-based daemon model, which will be used to implement interrupt handlers, along with most other system services. Anyway, now that I get to something I'm personally satisfied with, I wanted to ask everyone who's interested to check that design and tell me if anything in it sounds like a bad idea to them in the short or long run. That's because this is a core part of this OS' design, and I'm really not interested in core design mistakes emerging in a few years if I can fix them now. Many thanks in advance.
Thread beginning with comment 475649
RE[8]: RPC considered harmful
by Neolander on Thu 2nd Jun 2011 10:26 UTC in reply to "RE[7]: RPC considered harmful"

There's something else here too though. For most OSs, typically only a thread within a process can create a thread for that process; which means that at the start of thread creation the CPU/kernel is using the correct process' address space, so it's easier to set up the new thread's stack and thread local storage. For your IPC this isn't the case (the sending process' address space would be in use at the time you begin creating a thread for the receiving process), so you might need to switch address spaces during thread creation (and blow away TLB entries, etc) if you don't do it in a "lazy" way (postpone parts of thread creation until the thread first starts running).

Not necessarily, it depends on where the IPC primitives are managed. If RPC is done through system calls, then you can create a thread while you're in kernel mode and have no extra address space switching overhead.
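To make this concrete, here's a rough sketch of the idea, in C. All names (`tcb`, `rpc_spawn_thread`, and so on) are made up for illustration, not taken from any real kernel. The point is that kernel mappings are shared across address spaces, so the thread control block and kernel stack can be allocated from the sender's context; only the user-mode stack lives in the receiver's address space, so its setup is deferred until the thread first runs there (the "lazy" part):

```c
#include <stdbool.h>
#include <stddef.h>
#include <stdlib.h>

/* Hypothetical thread control block for the sketch. */
typedef struct tcb {
    int owner_pid;            /* process the thread belongs to          */
    void *kernel_stack;       /* allocated eagerly, in kernel memory    */
    void *user_stack;         /* allocated lazily, in receiver's space  */
    bool user_stack_mapped;
} tcb;

/* Called from the RPC send path, still in the *sender's* address space.
 * Everything touched here is kernel memory, so no address space switch
 * (and no TLB flush) is needed. */
tcb *rpc_spawn_thread(int receiver_pid) {
    tcb *t = malloc(sizeof *t);
    t->owner_pid = receiver_pid;
    t->kernel_stack = malloc(4096);   /* stands in for a kernel stack */
    t->user_stack = NULL;             /* deferred */
    t->user_stack_mapped = false;
    return t;
}

/* Called the first time the thread is scheduled, once the kernel has
 * switched to the receiver's address space anyway. */
void thread_first_run(tcb *t) {
    if (!t->user_stack_mapped) {
        t->user_stack = malloc(4096); /* stands in for mapping user pages */
        t->user_stack_mapped = true;
    }
}
```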

Hehe. Let's optimise the implementation of this!

You could speed it up by having a per-process "thread cache". Rather than actually destroying a thread, you could pretend to destroy it and put it into a "disused thread pool" instead, and then recycle these existing/disused threads when a new thread needs to be created. To maximise the efficiency of your "disused thread pool" (so you don't have more "disused threads" than necessary), you could create (or pretend to create) the new thread when IPC is being delivered to the receiver and not when IPC is actually sent. To do that you'd need a queue of "pending IPC". That way, for asynchronous operating mode you'd only have a maximum of one thread (per process), where you pretend to destroy it, then recycle it to create a "new" thread, and get the data needed for the "new" thread from the queue of "pending IPC".
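The recycling scheme described above could be sketched roughly like this, as a single-threaded C simulation. The names (`deliver_ipc`, `terminate_thread`, `disused_pool`) are illustrative, and `malloc` stands in for the real cost of thread creation; the allocation counter just makes the saving observable:

```c
#include <stdlib.h>
#include <string.h>

/* One "thread" handling one IPC payload. */
typedef struct worker {
    struct worker *next;
    char msg[64];
} worker;

static worker *disused_pool = NULL;   /* per-process cache of dead threads */
static int real_allocations = 0;

/* "Create" a thread to deliver an IPC: recycle from the pool if possible,
 * so delivery usually costs no allocation at all. */
worker *deliver_ipc(const char *msg) {
    worker *w = disused_pool;
    if (w) {
        disused_pool = w->next;       /* recycled: no allocation */
    } else {
        w = malloc(sizeof *w);        /* genuinely new thread */
        real_allocations++;
    }
    strncpy(w->msg, msg, sizeof w->msg - 1);
    w->msg[sizeof w->msg - 1] = '\0';
    return w;
}

/* "Destroy" a thread: pretend to, and park it in the pool instead. */
void terminate_thread(worker *w) {
    w->next = disused_pool;
    disused_pool = w;
}
```

In the asynchronous case described above, the same `worker` gets parked and recycled over and over, so the pool never holds more than one entry per process.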

Now that it's been optimised, it looks very similar to my "FIFO queue of messages". Instead of calling "getmessage()" and blocking until a message is received, you'd be calling "terminate_thread()" and being put into a "disused thread pool" until IPC is received. The main difference (other than terminology) is that you'd still be implicitly creating one initial thread.

Yeah, I had thought about something like this for thread stacks (keeping a cache of orphaned stacks to remove the need to allocate them). But you're right that it can totally be done for full threads, with even better performance.
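For comparison, the "FIFO queue of messages" model that the recycled-pool design converges to could be sketched like this. Again the names (`post_message`, `getmessage`) are illustrative; a real kernel would block the dispatcher thread when the queue is empty (the analogue of parking it in the disused pool), which this single-threaded simulation approximates by returning NULL:

```c
#include <stddef.h>

#define QUEUE_CAP 16

/* Per-process FIFO of pending IPC, as a circular buffer. */
static const char *queue[QUEUE_CAP];
static size_t q_head = 0, q_tail = 0;

/* Sender side: enqueue a pending IPC. */
int post_message(const char *msg) {
    if ((q_tail + 1) % QUEUE_CAP == q_head) return -1;  /* queue full */
    queue[q_tail] = msg;
    q_tail = (q_tail + 1) % QUEUE_CAP;
    return 0;
}

/* Receiver side: pop the oldest pending IPC; where a real kernel would
 * block the caller, return NULL instead. */
const char *getmessage(void) {
    if (q_head == q_tail) return NULL;                  /* would block */
    const char *msg = queue[q_head];
    q_head = (q_head + 1) % QUEUE_CAP;
    return msg;
}
```

The receiver's loop of `getmessage()` calls plays exactly the role of the recycled dispatcher thread above, which is why the two designs end up equivalent apart from the one implicitly created initial thread.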
