Linked by Hadrien Grasland on Thu 19th May 2011 21:31 UTC
Hardware, Embedded Systems
Having read the feedback resulting from my previous post on interrupts (itself resulting from an earlier OSnews Asks item on the subject), I've had a look at the way interrupts work on PowerPC v2.02, SPARC v9, Alpha, and IA-64 (Itanium), and contribute this back to anyone who's interested (or willing to report any blatant flaw found in my posts). I've also tried to rework my interrupt handling model a bit, to make it significantly clearer and have it read more like a design doc and less like a code draft.
Permalink for comment 474025
Locking benchmarks
by Alfman on Fri 20th May 2011 20:34 UTC

This site has some interesting benchmarks for a couple of low-level synchronization primitives.

The obvious locking mechanism on x86 is a "lock bts" instruction. According to the benchmarks above, this consumes 530 cycles on a modern dual-core processor, and 1130 cycles on a 4-core multi-processor system.
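To make that concrete, here is a minimal sketch of the kind of test-and-set spinlock being benchmarked, written with C11 atomics rather than raw assembly; on x86, GCC and Clang typically lower the acquire loop to a lock-prefixed instruction such as `xchg` or `lock bts`. This is an illustration of the general technique, not the benchmark site's exact code.

```c
#include <stdatomic.h>

/* Minimal test-and-set spinlock. On x86 the acquire loop typically
 * compiles down to a lock-prefixed read-modify-write instruction. */
typedef struct { atomic_flag held; } spinlock_t;

#define SPINLOCK_INIT { ATOMIC_FLAG_INIT }

static void spin_lock(spinlock_t *l) {
    /* Each failed attempt bounces the cache line between cores,
     * which is where cycle counts like those quoted above come from. */
    while (atomic_flag_test_and_set_explicit(&l->held, memory_order_acquire))
        ;  /* spin */
}

static void spin_unlock(spinlock_t *l) {
    atomic_flag_clear_explicit(&l->held, memory_order_release);
}
```

The cost quoted in the benchmark is dominated by the cache-coherency traffic of the locked read-modify-write, not by instruction count.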

They've proposed another algorithm, which is incidentally the same algorithm sometimes used to synchronize multiple clients over an SMB file system.

For example, Quicken Pro maintains its database on a file share without actually running a "server".
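The comment doesn't spell out how that serverless coordination works; one common way to mutually exclude clients on a shared filesystem is an exclusive-create lock file, since `open()` with `O_CREAT | O_EXCL` either atomically creates the file or fails if it already exists. The sketch below illustrates that general technique (the file name is hypothetical, and whether Quicken uses exactly this is an assumption):

```c
#include <fcntl.h>
#include <unistd.h>
#include <errno.h>

/* Try to take a cross-client lock by atomically creating a lock file
 * on the share. Returns the fd on success, -1 if another client holds
 * the lock. Real software would also write the owner's identity into
 * the file so stale locks can be detected and broken. */
static int acquire_file_lock(const char *path) {
    int fd = open(path, O_CREAT | O_EXCL | O_WRONLY, 0644);
    if (fd < 0 && errno == EEXIST)
        return -1;           /* someone else holds the lock */
    return fd;
}

static void release_file_lock(const char *path, int fd) {
    close(fd);
    unlink(path);            /* removing the file releases the lock */
}
```

Note that the atomicity of `O_EXCL` depends on the network filesystem honoring it; that caveat is part of why this scheme is fragile compared to a real server.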

Anyway, I am surprised they were able to achieve performance gains this way: 220 cycles on a dual-core and 1130 cycles on a 4-core multi-processor system.

It relies on the cache coherency characteristics of x86, and I have no idea how portable it is across architectures.

You may want to study the approach; however, I don't think it will be useful, because the algorithm depends on a fixed number of threads (and presumably it gets worse as that fixed number is increased).

Anyway, I still argue that a lock-free asynchronous model without any popup threads will easily outperform the "popup thread" model, particularly for simple tasks which consume 1-2K cycles.
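As a sketch of what such a model can look like (my own illustration, with hypothetical names and sizes, not code from the discussion): a single-producer/single-consumer ring buffer lets the interrupt side hand small events to one dispatcher that runs handlers inline, with no locks and no per-event thread creation, so a 1-2K cycle task doesn't pay more for scheduling than for the work itself.

```c
#include <stdatomic.h>
#include <stddef.h>

#define QSIZE 256   /* power-of-two capacity, hypothetical */

typedef struct {
    void (*handler)(int);
    int arg;
} event_t;

/* Lock-free SPSC ring: one producer bumps tail, one consumer bumps
 * head. Acquire/release pairs make the slot contents visible without
 * any locked read-modify-write instruction. */
typedef struct {
    event_t slot[QSIZE];
    atomic_size_t head, tail;
} evqueue_t;

static int ev_push(evqueue_t *q, event_t e) {
    size_t t = atomic_load_explicit(&q->tail, memory_order_relaxed);
    size_t h = atomic_load_explicit(&q->head, memory_order_acquire);
    if (t - h == QSIZE) return 0;        /* full: caller drops or retries */
    q->slot[t % QSIZE] = e;
    atomic_store_explicit(&q->tail, t + 1, memory_order_release);
    return 1;
}

static int ev_pop(evqueue_t *q, event_t *out) {
    size_t h = atomic_load_explicit(&q->head, memory_order_relaxed);
    size_t t = atomic_load_explicit(&q->tail, memory_order_acquire);
    if (h == t) return 0;                /* empty */
    *out = q->slot[h % QSIZE];
    atomic_store_explicit(&q->head, h + 1, memory_order_release);
    return 1;
}
```

The dispatcher is then just a loop calling `ev_pop` and invoking each handler in place; the cost per event is a couple of cache-line transfers rather than a thread spawn.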

I don't care which way you go, it probably won't matter either way, but please stop clinging to the view that threads are more scalable.

Reply Score: 1