Which Linux IPC technique should I use?

We are still in the design phase of the project, but we are considering having three separate processes on an embedded Linux kernel. One of the processes will be a communications module which handles all communication to and from the device through various media.

The other two processes need to be able to send/receive messages through the communications process. I am trying to evaluate the IPC techniques that Linux provides; the messages the other processes will send vary in size, from debug logs to streaming media at rates of around 5 Mbit/s. Also, media could be streaming in and out simultaneously.

Which IPC technique would you suggest for this application? http://en.wikipedia.org/wiki/Inter-process_communication

The processor runs at around 400-500 MHz, if that changes anything. It does not need to be cross-platform; Linux only is fine. Implementation in C or C++ is required.


I would go for Unix Domain Sockets: less overhead than IP sockets (i.e. no inter-machine comms) but same convenience otherwise.
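
As a minimal sketch of what the client side of that looks like (the socket path /tmp/comm.sock and the message contents are made up for the example):

```c
/* Minimal sketch: a process sending a message to the communication
 * process over a Unix domain stream socket. The path /tmp/comm.sock
 * is a placeholder; the server side would bind()/listen() on it. */
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <sys/un.h>
#include <unistd.h>

int main(void) {
    int fd = socket(AF_UNIX, SOCK_STREAM, 0);
    if (fd < 0) { perror("socket"); return 1; }

    struct sockaddr_un addr;
    memset(&addr, 0, sizeof(addr));
    addr.sun_family = AF_UNIX;
    strncpy(addr.sun_path, "/tmp/comm.sock", sizeof(addr.sun_path) - 1);

    if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) < 0) {
        perror("connect");
        return 1;
    }

    const char msg[] = "debug: hello from client";
    if (write(fd, msg, sizeof(msg)) < 0)
        perror("write");

    close(fd);
    return 0;
}
```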

When selecting your IPC mechanism, you should consider the causes of performance differences, including transfer buffer sizes, data transfer mechanisms, memory allocation schemes, locking mechanism implementations, and even code complexity.

Of the available IPC mechanisms, the choice for performance often comes down to Unix domain sockets or named pipes (FIFOs). I read a paper on Performance Analysis of Various Mechanisms for Inter-process Communication that indicates Unix domain sockets for IPC may provide the best performance. I have seen conflicting results elsewhere which indicate pipes may be better.

When sending small amounts of data, I prefer named pipes (FIFOs) for their simplicity. Bi-directional communication requires a pair of named pipes. Unix domain sockets take a bit more overhead to set up (socket creation, initialization, and connection), but are more flexible and may offer better performance (higher throughput).
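
A minimal sketch of the FIFO-pair pattern (the paths /tmp/to_comm and /tmp/from_comm are placeholders; the peer process must open the two FIFOs in the complementary modes, and in the opposite order, or both sides can block in open()):

```c
/* Minimal sketch: bi-directional messaging over a pair of named pipes.
 * The peer opens /tmp/to_comm for reading first, then /tmp/from_comm
 * for writing, so the blocking open() calls pair up. */
#include <fcntl.h>
#include <stdio.h>
#include <sys/stat.h>
#include <unistd.h>

int main(void) {
    /* Create both FIFOs; failure with EEXIST just means the peer
     * (or a previous run) already created them. */
    mkfifo("/tmp/to_comm", 0666);
    mkfifo("/tmp/from_comm", 0666);

    int tx = open("/tmp/to_comm", O_WRONLY);   /* blocks until a reader opens */
    int rx = open("/tmp/from_comm", O_RDONLY); /* blocks until a writer opens */
    if (tx < 0 || rx < 0) { perror("open"); return 1; }

    const char msg[] = "log: subsystem up";
    write(tx, msg, sizeof(msg));

    char buf[128];
    ssize_t n = read(rx, buf, sizeof(buf));
    if (n > 0)
        printf("reply: %.*s\n", (int)n, buf);

    close(tx);
    close(rx);
    return 0;
}
```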

You may need to run some benchmarks for your specific application/environment to determine what will work best for you. From the description provided, it sounds like Unix domain sockets may be the best fit.


Beej's Guide to Unix IPC is good for getting started with Linux/Unix IPC.

If performance really becomes a problem you can use shared memory - but it's a lot more complicated than the other methods: you'll need a signalling mechanism to indicate that data is ready (a semaphore, etc.) as well as locks to prevent concurrent access to structures while they're being modified.

The upside is that you can transfer a lot of data without having to copy it in memory, which will definitely improve performance in some cases.

Perhaps there are usable libraries which provide higher level primitives via shared memory.

Shared memory is generally obtained by mmap()ing the same file using MAP_SHARED (which can be on a tmpfs if you don't want it persisted); a lot of apps also use System V shared memory (IMHO for stupid historical reasons; it's a much less nice interface to the same thing).
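
A minimal sketch of that approach using POSIX shared memory (which is tmpfs-backed) together with a process-shared semaphore as the "data ready" signal mentioned above. The name /comm_shm, the 4 KiB payload size, and the single-flag scheme are placeholders; on older glibc, link with -lrt and -pthread:

```c
/* Minimal sketch: mmap(MAP_SHARED) over a POSIX shared memory object,
 * with a process-shared semaphore the writer posts when data is valid. */
#include <fcntl.h>
#include <semaphore.h>
#include <stdio.h>
#include <string.h>
#include <sys/mman.h>
#include <unistd.h>

struct shared_block {
    sem_t ready;       /* posted by the writer when data is valid */
    char  data[4096];  /* placeholder payload area */
};

int main(void) {
    int fd = shm_open("/comm_shm", O_CREAT | O_RDWR, 0600);
    if (fd < 0) { perror("shm_open"); return 1; }
    if (ftruncate(fd, sizeof(struct shared_block)) < 0) {
        perror("ftruncate");
        return 1;
    }

    struct shared_block *blk = mmap(NULL, sizeof(*blk),
                                    PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
    if (blk == MAP_FAILED) { perror("mmap"); return 1; }

    /* Writer side initializes the semaphore once (pshared = 1 so it
     * works across processes), fills the buffer, and signals. */
    sem_init(&blk->ready, 1, 0);
    strcpy(blk->data, "frame 0");
    sem_post(&blk->ready);

    /* The reader process would mmap the same object and sem_wait(&blk->ready). */
    munmap(blk, sizeof(*blk));
    close(fd);
    return 0;
}
```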

Can't believe nobody has mentioned dbus.

http://www.freedesktop.org/wiki/Software/dbus

http://en.wikipedia.org/wiki/D-Bus

Might be a bit over the top if your application is architecturally simple, in which case - in a controlled embedded environment where performance is crucial - you can't beat shared memory.
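
For a feel of the API, here is a hedged sketch of emitting a one-way signal with libdbus; the object path, interface, and member names are invented for the example, and an embedded system would more likely use DBUS_BUS_SYSTEM than the session bus. Build with pkg-config --cflags --libs dbus-1:

```c
/* Minimal sketch: emit a signal on the bus with libdbus.
 * /com/example/Comm, com.example.Comm, and StatusUpdate are placeholders. */
#include <dbus/dbus.h>
#include <stdio.h>

int main(void) {
    DBusError err;
    dbus_error_init(&err);

    DBusConnection *conn = dbus_bus_get(DBUS_BUS_SESSION, &err);
    if (conn == NULL) {
        fprintf(stderr, "bus connect failed: %s\n", err.message);
        dbus_error_free(&err);
        return 1;
    }

    DBusMessage *msg = dbus_message_new_signal("/com/example/Comm",
                                               "com.example.Comm",
                                               "StatusUpdate");
    const char *text = "link up";
    dbus_message_append_args(msg, DBUS_TYPE_STRING, &text, DBUS_TYPE_INVALID);

    dbus_connection_send(conn, msg, NULL);  /* queue the message */
    dbus_connection_flush(conn);            /* push it out to the bus */

    dbus_message_unref(msg);
    return 0;
}
```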

As of this writing (November 2014), Kdbus and Binder have left the staging branch of the Linux kernel. There is no guarantee at this point that either will make it in, but the outlook is somewhat positive for both. Binder is a lightweight IPC mechanism in Android; Kdbus is a dbus-like IPC mechanism in the kernel which reduces context switches, thus greatly speeding up messaging.

There is also "Transparent Inter-Process Communication" or TIPC, which is robust, useful for clustering and multi-node set ups; http://tipc.sourceforge.net/

Unix domain sockets will address most of your IPC requirements. You don't really need a dedicated communication process in this case, since the kernel provides this IPC facility. Also, look at POSIX message queues, which in my opinion are one of the most under-utilized IPC mechanisms in Linux but come in very handy in many cases where n:1 communication is needed.
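
A minimal sketch of the n:1 pattern with a POSIX message queue; the queue name /comm_q and the attributes are placeholders, and on older glibc you link with -lrt. Any number of producers can open the same queue with O_WRONLY while the single consumer (e.g. the communication process) opens it O_RDONLY and drains it with mq_receive():

```c
/* Minimal sketch: a producer posting to a POSIX message queue that a
 * single consumer process drains. /comm_q is a placeholder name. */
#include <fcntl.h>
#include <mqueue.h>
#include <stdio.h>

int main(void) {
    /* Queue depth and max message size chosen arbitrarily for the example. */
    struct mq_attr attr = { .mq_maxmsg = 10, .mq_msgsize = 256 };

    mqd_t q = mq_open("/comm_q", O_CREAT | O_WRONLY, 0600, &attr);
    if (q == (mqd_t)-1) { perror("mq_open"); return 1; }

    const char msg[] = "debug: sensor init done";
    if (mq_send(q, msg, sizeof(msg), /* priority = */ 0) < 0)
        perror("mq_send");

    mq_close(q);
    return 0;
}
```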