Linked by Elad Lahav on Thu 18th Feb 2016 19:27 UTC
QNX

A mutex is a common type of lock used to serialize concurrent access by multiple threads to shared resources. While support for POSIX mutexes in the QNX Neutrino Realtime OS dates back to the early days of the system, this area of the code has seen considerable changes in the last couple of years.

RE[6]: Mutex behavior
by Alfman on Fri 19th Feb 2016 19:33 UTC in reply to "RE[5]: Mutex behavior"

elahav,

It may be undefined in the spec, but it is certainly defined if you are using error-checking mutexes, which is the default on QNX and optional on Linux.


It's true that error-checking extensions can help catch code that triggers undefined mutex behavior. One could implement the same optional error checking with semaphores too, but I think it is unfortunate that useful scenarios were left undefined in the first place. Now we're forced to "reinvent the wheel" just to work around the cases for which a mutex is not properly defined. Alas, it is what it is.


Read the article ;-) With priority inheritance (again, default on QNX, optional on Linux), the low priority thread will be boosted to the priority of the highest waiter until it relinquishes the mutex.


I did read it, thank you for the article by the way.

This is a good point: you can't really boost the priority of the right thread when the kernel doesn't know which thread will release the lock. Maybe there should be a way to tell it.


RE[7]: Mutex behavior
by dpJudas on Fri 19th Feb 2016 19:56 in reply to "RE[6]: Mutex behavior"

Now we're forced to "reinvent the wheel" just to work around the cases for which a mutex is not properly defined. Alas, it is what it is.

It is not "properly defined" because leaving such cases undefined increases the chance that more optimal implementations are possible, without having to support weird usages such as "lock a mutex in one thread and unlock it in another".

Exactly why you had to lock it in one thread and release it in another is still not clear (to me), but I'm guessing there is a good chance that some of the other pthread primitives available would have been a better fit for the problem at hand.


RE[8]: Mutex behavior
by Alfman on Fri 19th Feb 2016 23:39 in reply to "RE[7]: Mutex behavior"

dpJudas,

It is not "properly defined" because leaving such cases undefined increases the chance that more optimal implementations are possible, without having to support weird usages such as "lock a mutex in one thread and unlock it in another".

Exactly why you had to lock it in one thread and release it in another is still not clear (to me), but I'm guessing there is a good chance that some of the other pthread primitives available would have been a better fit for the problem at hand.


It's not really as weird as you might think. Your thread is holding a lock and wants to finish the request in a new thread without blocking the main thread. Therefore the easiest and most natural place to release the lock is in the new thread. This pattern comes up a lot, as elahav himself said.


Using a mutex is very desirable because it's the conventional choice for protecting resources from concurrent access; the only problem is that pthread's implementation doesn't allow it here. Since a semaphore closely resembles a mutex's normal semantics, with the added benefit that releasing the lock from a new thread is well defined, it seemed like the best alternative. However, elahav was very observant and pointed out that thread priority boosting would be lost with semaphores.


Hypothetically, we could have locked the mutex in thread A, spawned thread B to do the work, and then had B signal back to A to release the mutex. While this would have worked using only defined pthread mutex operations, the kernel would still have gotten the priority boosting wrong, because it doesn't know that thread B needs to finish its work before thread A can release the mutex. I can't think of an easy way to represent this complex thread inter-dependency to the kernel, and any way of doing so would probably be so ugly that nobody would want to use it.


I think the best solution would be if a locked mutex could be explicitly transferred to a new thread using a new pthread function. Not only would this resolve the issue of undefined behavior across threads, it would also resolve the issue of giving the correct thread a priority boost - something a semaphore can't do.
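Such a transfer function might look roughly like this. To be clear, this is a purely hypothetical declaration that does not exist in POSIX or, to my knowledge, in any pthread implementation:

```c
#include <pthread.h>

/* HYPOTHETICAL API (not part of POSIX): transfer ownership of a mutex
   currently locked by the calling thread to `new_owner`. Afterwards,
   priority inheritance boosts `new_owner`, and a later unlock by
   `new_owner` is well defined rather than an error. */
int pthread_mutex_transfer_np(pthread_mutex_t *mutex, pthread_t new_owner);
```

With something like this, thread A could lock, spawn B, transfer ownership to B, and B's eventual unlock would be both legal and correctly tracked by the kernel.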

If I could go back in time to demonstrate that such a use case is useful, I probably could have convinced the original developers that it should be defined and it would not have been more computationally expensive to implement with atomic CPU operations. Alas, that didn't happen...

We could come up with a big list of things that could have been improved, in hindsight ;)

