In terms of time and space, shared memory is probably the most efficient inter-process communication channel provided by all modern OSes. This article explains how to use shared objects on Linux.
Nice little guide, but it misses a very common element of using shared memory: synchronization. Having many processes communicate through shared memory mostly requires them to cooperate, so one doesn’t step on the others’ toes, creating corrupted data, race conditions, and worse.
Linux, like other Unix-like kernels, offers kernel semaphores. These are quite handy for addressing exactly these problems.
BTW, I prefer named pipes for inter process communication.
Carsten
Shared memory is efficient when you have to exchange large data between tasks. But for small messages (<4 KB on x86), pipes are way more powerful because of the cost of synchronization on shared memory access. Meanwhile, on platforms with fast mutexes, the gap may shrink.
Just curious: is there a document showing the most efficient way of exchanging small data between applications on Win32 platforms? All my tests (pipes, lo, …) showed a huge drop in performance (20%-50%) on Win2k/XP compared to Linux on the same box.
I did some benchmarks a little while ago, and posted them into the OSNews forum (look in “older messages”). Here they are again…
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
I did some digging, and most sources seem to suggest that for small messages ( < 512 bytes ) the fastest method to transfer data to another process is Named Pipes, then Message Queues, and the slowest being Unix domain sockets (obviously there are even slower methods, but these seemed to be the top 3 for local IPC).
I did some benchmarks of my own to see:
—————————————————–
Benchmark based on sending 100,000 30 byte messages back and forth between a client and server process. Each method was run 3 times with the lowest time used.
Message Queues: 0.029 ms/msg [round trip]
Pipes: 0.041 ms/msg [round trip]
Sockets (datagram): 0.056 ms/msg [round trip]
Sockets (stream): 0.069 ms/msg [round trip]
—————————————————–
Message Queues seem to be the fastest here on my Linux box.
So, as I already have a robust Message Queue implementation, I think I will stick with it. I am not willing to double the time for each message just to make network transparency easier (why slow down the 99% case for the 1% case).
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
:o)
> Message Queues: 0.029 ms/msg [round trip]
> Pipes: 0.041 ms/msg [round trip]
> Sockets (datagram): 0.056 ms/msg [round trip]
> Sockets (stream): 0.069 ms/msg [round trip]
Thanks,
so all methods have roughly the same performance. Sockets are slightly slower, but offer the possibility of connecting processes running on different machines.
Carsten
>> Message Queues: 0.029 ms/msg [round trip]
>> Pipes: 0.041 ms/msg [round trip]
>> Sockets (datagram): 0.056 ms/msg [round trip]
>> Sockets (stream): 0.069 ms/msg [round trip]
>Thanks,
>so all methods have roughly the same performance. Sockets are
>slightly slower, but offer the possibility of connecting
>processes running on different machines.
Looking at the numbers given, sockets are almost twice as slow as message queues. Most of the time, that probably is a non-issue but if you’re doing a lot of IPC that could slow things down.
From what I gather, Message Queues are basically a kernel-managed mechanism for passing data via shared memory. So the kernel’s Message Queue implementation manages the semaphores etc. required (addressing the problem the first poster mentioned).
Some people think that “raw” shared memory will be faster, and for some applications that may be true, but for applications passing small amounts of data, let the kernel manage the shared memory for you via the Message Queue interface. It’s plenty fast and has a robust, portable API.
I have a “very important question”.
Is the standard operating procedure just to use a whole helluva lot of shared memory objects (when the need for many objects arises)?
Is there a significant advantage to designing and using a virtual memory subsystem with a single (or small number of) shared memory segment containers? That is: design your own stack and a class to allocate space within the stack for new objects?
I’d have tested it out already, but the VM subsystem has proven to be more than the weekend-long job I’d hoped it would be.
Myren
How do pipes and sockets significantly differ? Besides the performance difference mentioned above, they both seem like generic ways of sending data, but lack any mechanism for parsing. They’re both “packet”-based systems, kind of, that you have to parse yourself.
Which actually is another point for shared memory, particularly as this article demonstrates it. You don’t have to design and build a language for your pipe to speak. You don’t have to parse your incoming packet text. You “just” have to make sure access stays synchronized.
Myren
The Amiga’s flat, unprotected memory offered raw speed. Is it possible to have a block of memory with no VM mapping that would let processes exchange data at the maximum speed possible?