Comparing Unix/Linux IPC

Unix/Linux offers many IPC mechanisms: pipes, sockets, shared memory, D-Bus, message queues...

What are the most suitable applications for each, and how do they perform?


Unix IPC

Here are the big seven:

  1. Pipe

    Useful only between related processes, typically parent and child. Call pipe(2) and fork(2); a minimal sketch follows this list. Unidirectional.

  2. FIFO, or named pipe

    Unlike a plain pipe, a FIFO can be used by two unrelated processes. Call mkfifo(3). Unidirectional.

  3. Socket and Unix Domain Socket

    Bidirectional. Meant for network communication, but can be used locally too. Can carry different protocols; TCP provides no message boundaries. Call socket(2).

  4. Message Queue

    The OS maintains discrete messages. See sys/msg.h; a sketch also follows this list.

  5. Signal

    A signal sends an integer (the signal number) to another process. Doesn't mesh well with multithreaded programs. Call kill(2).

  6. Semaphore

    A synchronization mechanism for multiple processes or threads, similar to a queue of people waiting for the bathroom. See sys/sem.h.

  7. Shared memory

    Do your own concurrency control. Call shmget(2).
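
To make item 1 concrete, here is a minimal sketch of the pipe(2)/fork(2) pattern - error handling is trimmed and the message text is just an illustration:

    #include <stdio.h>
    #include <string.h>
    #include <sys/wait.h>
    #include <unistd.h>

    int main(void)
    {
        int fd[2];                      /* fd[0] = read end, fd[1] = write end */
        if (pipe(fd) == -1) { perror("pipe"); return 1; }

        pid_t pid = fork();
        if (pid == 0) {                 /* child: reads from the pipe */
            close(fd[1]);
            char buf[64];
            ssize_t n = read(fd[0], buf, sizeof buf - 1);
            if (n > 0) { buf[n] = '\0'; printf("child got: %s\n", buf); }
            close(fd[0]);
            return 0;
        }

        close(fd[0]);                   /* parent: writes into the pipe */
        const char *msg = "hello from parent";   /* illustrative message */
        write(fd[1], msg, strlen(msg));
        close(fd[1]);
        wait(NULL);
        return 0;
    }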
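
And a rough sketch of item 4 using the System V message queue calls from sys/msg.h. The key 0x1234 and the message text are made-up values for illustration; ftok(3) is the usual way to derive a real key.

    #include <stdio.h>
    #include <string.h>
    #include <sys/ipc.h>
    #include <sys/msg.h>

    struct msgbuf_s {
        long mtype;                     /* message type, must be > 0 */
        char mtext[64];
    };

    int main(void)
    {
        int qid = msgget((key_t)0x1234, IPC_CREAT | 0600);
        if (qid == -1) { perror("msgget"); return 1; }

        struct msgbuf_s out = { .mtype = 1 };
        strcpy(out.mtext, "a discrete message");
        msgsnd(qid, &out, sizeof out.mtext, 0);

        struct msgbuf_s in;
        /* each msgrcv() returns exactly one message -- the boundary is kept */
        msgrcv(qid, &in, sizeof in.mtext, 1, 0);
        printf("received: %s\n", in.mtext);

        msgctl(qid, IPC_RMID, NULL);    /* remove the queue */
        return 0;
    }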

Message boundary issue

One determining factor when choosing one method over another is the message boundary issue. You may expect "messages" to be discrete from each other, but byte streams such as TCP or a pipe do not preserve those boundaries.

Consider an echo client and server. The client sends a string; the server receives it and sends it right back. Suppose the client sends "Hello", "Hello", and "How about an answer?".

With byte stream protocols, the server may receive "Hell", "oHelloHow", and " about an answer?"; or, more realistically, "HelloHelloHow about an answer?". The server has no clue where the message boundaries are.

An age-old trick is to limit the message length to CHAR_MAX or UINT_MAX and agree to send the message length first as a char or uint. On the receiving side, you read the length first and then exactly that many bytes, as in the sketch below. This also implies that only one thread should be reading messages at a time.
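
A rough sketch of that framing over any connected byte-stream descriptor, assuming a 4-byte length prefix in network byte order (the helper names read_full, send_msg, and recv_msg are mine, and short writes are glossed over):

    #include <arpa/inet.h>              /* htonl, ntohl */
    #include <stdint.h>
    #include <unistd.h>

    /* Read exactly len bytes, looping over short reads; returns 0 on success. */
    static int read_full(int fd, void *buf, size_t len)
    {
        char *p = buf;
        while (len > 0) {
            ssize_t n = read(fd, p, len);
            if (n <= 0) return -1;      /* EOF or error */
            p += n;
            len -= (size_t)n;
        }
        return 0;
    }

    /* Send one framed message: 4-byte big-endian length, then the payload. */
    static int send_msg(int fd, const void *payload, uint32_t len)
    {
        uint32_t hdr = htonl(len);
        if (write(fd, &hdr, sizeof hdr) != sizeof hdr) return -1;
        return write(fd, payload, len) == (ssize_t)len ? 0 : -1;
    }

    /* Receive one framed message into buf (of size cap); returns its length. */
    static ssize_t recv_msg(int fd, void *buf, size_t cap)
    {
        uint32_t hdr;
        if (read_full(fd, &hdr, sizeof hdr) != 0) return -1;
        uint32_t len = ntohl(hdr);
        if (len > cap) return -1;       /* message too large for the buffer */
        if (read_full(fd, buf, len) != 0) return -1;
        return (ssize_t)len;
    }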

With discrete protocols like UDP or message queues, you don't have to worry about this issue, but byte streams are programmatically easier to deal with because they behave like files and stdin/stdout.

Shared memory can be the most efficient since you build your own communication scheme on top of it, but it requires a lot of care and synchronization. Solutions are available for distributing shared memory to other machines too.

Sockets are the most portable these days, but require more overhead than pipes. The ability to transparently use sockets locally or over a network is a great bonus.
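
For local use, a connected pair of Unix domain sockets takes the least ceremony. The sketch below uses socketpair(2), which (like a pipe) only works between related processes; binding an AF_UNIX socket to a filesystem path is what you would do for unrelated ones. The "ping" payload is just illustrative.

    #include <stdio.h>
    #include <string.h>
    #include <sys/socket.h>
    #include <sys/wait.h>
    #include <unistd.h>

    int main(void)
    {
        int sv[2];                      /* a connected pair of Unix domain sockets */
        if (socketpair(AF_UNIX, SOCK_STREAM, 0, sv) == -1) {
            perror("socketpair");
            return 1;
        }

        if (fork() == 0) {              /* child: echoes whatever it reads */
            close(sv[0]);
            char buf[64];
            ssize_t n = read(sv[1], buf, sizeof buf);
            if (n > 0) write(sv[1], buf, (size_t)n);
            close(sv[1]);
            return 0;
        }

        close(sv[1]);                   /* parent: bidirectional, unlike a pipe */
        const char *msg = "ping";       /* illustrative payload */
        write(sv[0], msg, strlen(msg));
        char reply[64];
        ssize_t n = read(sv[0], reply, sizeof reply - 1);
        if (n > 0) { reply[n] = '\0'; printf("echoed back: %s\n", reply); }
        close(sv[0]);
        wait(NULL);
        return 0;
    }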

Message queues and signals can be great for hard real-time applications, but they are not as flexible.

These methods were naturally created for communication between processes, and using multiple threads within a process can complicate things -- especially with signals.

It's worth noting that lots of libraries implement one type of thing on top of another.

Shared memory doesn't need to use the horrible SysV shared memory functions - it's much more elegant to use mmap() (mmap a file on a tmpfs such as /dev/shm if you want it named; mmap /dev/zero, or an anonymous mapping, if you want forked but not exec'd processes to inherit it). Having said that, it still leaves your processes needing some synchronisation to avoid problems - typically by using one of the other IPC mechanisms to synchronise access to the shared memory area.
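
Here is a small sketch of that approach: an anonymous shared mapping (the modern equivalent of mmap'ing /dev/zero) inherited across fork(), with a process-shared POSIX semaphore providing the synchronisation. The counter is just an illustration, and on Linux you may need to link with -pthread.

    #define _DEFAULT_SOURCE             /* for MAP_ANONYMOUS on some toolchains */
    #include <semaphore.h>
    #include <stdio.h>
    #include <sys/mman.h>
    #include <sys/wait.h>
    #include <unistd.h>

    struct shared {
        sem_t lock;                     /* process-shared semaphore */
        int   counter;
    };

    int main(void)
    {
        /* Anonymous shared mapping: inherited by forked (not exec'd) children. */
        struct shared *shm = mmap(NULL, sizeof *shm, PROT_READ | PROT_WRITE,
                                  MAP_SHARED | MAP_ANONYMOUS, -1, 0);
        if (shm == MAP_FAILED) { perror("mmap"); return 1; }

        sem_init(&shm->lock, 1 /* shared between processes */, 1);
        shm->counter = 0;

        pid_t pid = fork();
        for (int i = 0; i < 1000; i++) {
            sem_wait(&shm->lock);       /* serialise access to the shared counter */
            shm->counter++;
            sem_post(&shm->lock);
        }
        if (pid == 0) return 0;         /* child is done */

        wait(NULL);
        printf("counter = %d\n", shm->counter);   /* both processes ran: 2000 */
        sem_destroy(&shm->lock);
        munmap(shm, sizeof *shm);
        return 0;
    }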

Here is a webpage with a simple benchmark: https://sites.google.com/site/rikkus/sysv-ipc-vs-unix-pipes-vs-unix-sockets

As far as I can tell, each has their advantages:

  • Pipe I/O is the fastest but needs a parent/child relationship to work.
  • Sysv IPC has a defined message boundary and can connect disparate processes locally.
  • UNIX sockets can connect disparate processes locally and have higher bandwidth, but no inherent message boundaries.
  • TCP/IP sockets can connect any processes, even over the network, but have higher overhead and no inherent message boundaries.