Linked by Elad Lahav on Thu 18th Feb 2016 19:27 UTC
QNX

A mutex is a common type of lock used to serialize concurrent access by multiple threads to shared resources. While support for POSIX mutexes in the QNX Neutrino Realtime OS dates back to the early days of the system, this area of the code has seen considerable changes in the last couple of years.

RE[7]: Mutex behavior
by dpJudas on Fri 19th Feb 2016 19:56 UTC in reply to "RE[6]: Mutex behavior"

Now we're forced to "reinvent the wheel" just to work around the cases for which a mutex is not properly defined. Alas, it is what it is.

It is not "properly defined" in order to increase the chance that more optimal implementations are possible, without having to support weird usages such as "lock a mutex in one thread and unlock it in another".

Exactly why you had to lock it in one thread and release it in another is still not clear (to me), but I'm guessing there is a good chance that some of the other pthread primitives available would have been a better fit for the problem at hand.


RE[8]: Mutex behavior
by Alfman on Fri 19th Feb 2016 23:39 in reply to "RE[7]: Mutex behavior"

dpJudas,

It is not "properly defined" in order to increase the chance that more optimal implementations are possible, without having to support weird usages such as "lock a mutex in one thread and unlock it in another".

Exactly why you had to lock it in one thread and release it in another is still not clear (to me), but I'm guessing there is a good chance that some of the other pthread primitives available would have been a better fit for the problem at hand.


It's not really as weird as you might think. Your thread is holding a lock and wishes to finish the request in a new thread without blocking the main thread. Therefore, the easiest and most natural place to release the lock is in the new thread. This pattern comes up a lot, as elahav himself said.
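Roughly, the pattern looks like this (just a sketch, all names made up; the unlock at the end is exactly the part POSIX leaves undefined for a default mutex):

```c
#include <pthread.h>
#include <stddef.h>

/* Hypothetical lock guarding some shared resource. */
static pthread_mutex_t resource_lock = PTHREAD_MUTEX_INITIALIZER;

static void *finish_request(void *arg)
{
    /* ... finish processing while the resource stays locked ... */

    /* Release the lock taken by the spawning thread. For a default
     * POSIX mutex this unlock-from-another-thread is the undefined
     * part, but it is where the unlock most naturally lives. */
    pthread_mutex_unlock(&resource_lock);
    return NULL;
}

void handle_request(void)
{
    pthread_t worker;

    pthread_mutex_lock(&resource_lock);
    /* ... start processing; the rest must not block this thread ... */
    pthread_create(&worker, NULL, finish_request, NULL);
    /* Return immediately; the worker unlocks when it is done. */
}
```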


Using a mutex is very desirable because it's the conventional choice for protecting resources from concurrent access; the only problem is that the pthread implementation doesn't allow it here. Since a semaphore closely resembles a mutex's normal semantics, with the added benefit that releasing the lock from a new thread is well defined, it seemed like the best alternative. However, elahav was very observant and pointed out that thread priority bumping would be lost with semaphores.
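As a sketch of what I mean (made-up names again): a binary semaphore gives you a well-defined cross-thread release, at the cost of the priority bumping elahav mentioned:

```c
#include <pthread.h>
#include <semaphore.h>

/* Binary semaphore standing in for the mutex: count 1 means unlocked. */
static sem_t resource_sem;

static void *finish_request(void *arg)
{
    /* ... finish processing ... */

    /* Unlike pthread_mutex_unlock(), sem_post() from a thread other
     * than the one that did sem_wait() is perfectly well defined. */
    sem_post(&resource_sem);
    return NULL;
}

void handle_request(void)
{
    pthread_t worker;

    sem_wait(&resource_sem);            /* "lock" */
    pthread_create(&worker, NULL, finish_request, NULL);
}

int main(void)
{
    sem_init(&resource_sem, 0, 1);      /* binary, initially unlocked */
    handle_request();
    pthread_exit(NULL);                 /* let the worker finish */
}
```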


Hypothetically we could have used a mutex from thread A, spawned thread B to do the work needed, and then used signaling back to A to release the mutex. While this would have worked using only defined pthread mutex operations, the kernel still would have gotten the thread priority bumping wrong, because it doesn't know that thread B needs to finish its work before thread A can release the mutex. I can't think of an easy way to represent this complex thread inter-dependency to the kernel; any scheme I can imagine would be so ugly that nobody would really want to use it.
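Something along these lines (a sketch, names made up; here A simply blocks waiting, while in a real server A might instead get the notification through its event loop):

```c
#include <pthread.h>
#include <stdbool.h>

static pthread_mutex_t resource_lock = PTHREAD_MUTEX_INITIALIZER;

/* Separate mutex + condition variable, used only to signal completion. */
static pthread_mutex_t done_lock = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t  done_cond = PTHREAD_COND_INITIALIZER;
static bool done = false;

static void *thread_b(void *arg)
{
    /* ... the work that must finish before the lock can be dropped ... */

    pthread_mutex_lock(&done_lock);
    done = true;
    pthread_cond_signal(&done_cond);
    pthread_mutex_unlock(&done_lock);
    return NULL;
}

void thread_a(void)
{
    pthread_t b;

    pthread_mutex_lock(&resource_lock);   /* A owns the mutex */
    pthread_create(&b, NULL, thread_b, NULL);

    /* Wait for B's signal, then unlock from A: all defined behavior.
     * But the kernel only sees A blocked on a condvar; it has no idea
     * that B must run before resource_lock can be released, so B gets
     * no priority boost from the mutex's waiters. */
    pthread_mutex_lock(&done_lock);
    while (!done)
        pthread_cond_wait(&done_cond, &done_lock);
    pthread_mutex_unlock(&done_lock);

    pthread_mutex_unlock(&resource_lock); /* the owner unlocks */
}
```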


I think the best solution would be if a locked mutex could be explicitly transferred to a new thread using a new pthread function. Not only would this resolve the issue of undefined behavior across threads, it would also resolve the issue of giving the correct thread a priority boost, something a semaphore can't do.
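Purely as a thought experiment, nothing like this exists in POSIX today, but the function I'm imagining might look like:

```c
#include <pthread.h>

static pthread_mutex_t resource_lock = PTHREAD_MUTEX_INITIALIZER;

/* HYPOTHETICAL: no such function exists in POSIX. It would atomically
 * make new_owner the owner of an already-locked mutex, so that the
 * hand-off is defined and any priority boost from waiters lands on
 * the thread that actually has to finish the work. */
int pthread_mutex_transfer(pthread_mutex_t *mutex, pthread_t new_owner);

static void *finish_request(void *arg)
{
    /* ... finish the work, now as the legitimate owner ... */
    pthread_mutex_unlock(&resource_lock);  /* defined: we own it */
    return NULL;
}

void handle_request(void)
{
    pthread_t worker;

    pthread_mutex_lock(&resource_lock);
    pthread_create(&worker, NULL, finish_request, NULL);
    /* A real design would have to pin down the ordering between
     * creation, transfer, and the worker's unlock; glossed over here. */
    pthread_mutex_transfer(&resource_lock, worker);
}
```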

If I could go back in time to demonstrate that such a use case is useful, I probably could have convinced the original developers that it should be defined, and that it would not have been any more computationally expensive to implement with atomic CPU operations. Alas, that didn't happen...

We could come up with a big list of things that could have been improved, in hindsight ;)

Edited 2016-02-19 23:47 UTC


RE[9]: Mutex behavior
by dpJudas on Sat 20th Feb 2016 06:47 in reply to "RE[8]: Mutex behavior"

It's not really as weird as you might think. Your thread is holding a lock and wishes to finish the request in a new thread without blocking the main thread. Therefore, the easiest and most natural place to release the lock is in the new thread.

Locking something for this long a duration is really poor behavior and kind of ruins the point of multi-threading in the first place. As a result, mutexes aren't designed for this.

Using a mutex is very desirable because it's the conventional choice for protecting resources from concurrent access; the only problem is that the pthread implementation doesn't allow it here.

A mutex is designed to mutually exclude access to a resource for a short duration of time. It is intended for situations where your data won't fit into atomic operations.
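In other words, the intended pattern is a critical section a few instructions long, locked and unlocked by the same thread. A made-up example:

```c
#include <pthread.h>

/* Two counters that must stay consistent: too big for one atomic op. */
static pthread_mutex_t stats_lock = PTHREAD_MUTEX_INITIALIZER;
static long total_count;
static long total_bytes;

void record(long nbytes)
{
    /* Intended mutex usage: a critical section a few instructions
     * long, locked and unlocked by the same thread. */
    pthread_mutex_lock(&stats_lock);
    total_count += 1;
    total_bytes += nbytes;
    pthread_mutex_unlock(&stats_lock);
}
```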

What you are trying to do with the mutex IMO doesn't fit the intended usage pattern of that construct, and the standard shouldn't even try to support it. It is like complaining that a std::vector doesn't scale for very large lists, when you should have used std::list or some other container type instead.

Since a semaphore closely resembles a mutex's normal semantics, with the added benefit that releasing the lock from a new thread is well defined, it seemed like the best alternative. However, elahav was very observant and pointed out that thread priority bumping would be lost with semaphores.

Not knowing the code and problem at hand, I can't say for sure exactly what you should have used instead of a mutex. Maybe it should have been semaphores, maybe condition variables, maybe pulling the working set out of the mutex-protected data and putting it back in when done. In any case, if POSIX mutexes had been forced to support what you're asking for, they might have taken a performance hit for their intended usage.
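By that last option I mean something like this sketch (made-up names): grab the pending work under one short lock, do the long-running part with no lock held at all, then merge the results back under another short lock:

```c
#include <pthread.h>
#include <stddef.h>

struct item { struct item *next; /* ... payload ... */ };

static pthread_mutex_t list_lock = PTHREAD_MUTEX_INITIALIZER;
static struct item *pending;   /* shared, lock-protected */
static struct item *finished;  /* shared, lock-protected */

static void process(struct item *it) { (void)it; /* long-running work */ }

void *worker(void *arg)
{
    /* Pull the whole working set out under one short lock... */
    pthread_mutex_lock(&list_lock);
    struct item *batch = pending;
    pending = NULL;
    pthread_mutex_unlock(&list_lock);

    /* ...do the long-running work with no lock held... */
    for (struct item *it = batch; it != NULL; it = it->next)
        process(it);

    /* ...and put the results back under another short lock. */
    if (batch != NULL) {
        struct item *tail = batch;
        while (tail->next != NULL)
            tail = tail->next;

        pthread_mutex_lock(&list_lock);
        tail->next = finished;
        finished = batch;
        pthread_mutex_unlock(&list_lock);
    }
    return NULL;
}
```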

Hypothetically we could have used a mutex from thread A, spawned thread B to do the work needed, and then used signaling back to A to release the mutex. While this would have worked using only defined pthread mutex operations, the kernel still would have gotten the thread priority bumping wrong, because it doesn't know that thread B needs to finish its work before thread A can release the mutex. I can't think of an easy way to represent this complex thread inter-dependency to the kernel; any scheme I can imagine would be so ugly that nobody would really want to use it.

Typically, when I end up in situations requiring solutions as creative as the ones you just described, it is almost always a sign that I somehow ended up over-complicating things, and that there's a much simpler problem to solve if I just change my approach. In this case, the key question to ask is probably something along the lines of: why do I need to hold such a long lock in the first place? Is there a way to avoid it?

If I could go back in time to demonstrate that such a use case is useful, I probably could have convinced the original developers that it should be defined, and that it would not have been any more computationally expensive to implement with atomic CPU operations. Alas, that didn't happen...

Keep in mind that when POSIX was originally defined, CPU architectures looked quite different from what they do today. When writing a standard like pthreads there is a fine balance between defining something so exactly that it gets locked into current architectures and algorithms, and so loosely that the standard is effectively useless. Even if today (on Intel/AMD hardware) you could make your cross-thread mutex lock/unlock free of cost, that might not translate to future architectures.

We could come up with a big list of things that could have been improved, in hindsight ;)

No doubt the computing world would look very different. ;)
