What is a simple explanation of how pipes work in Bash?

I often use pipes in Bash, for example:

dmesg | less

Although I know what this outputs (it takes dmesg and lets me scroll through it with less), I don't understand what | is doing. Is it simply the opposite of >?

  • Is there a simple (or metaphorical) explanation of what | does?
  • What happens when several pipes are used on a single line?
  • Is the behavior of pipes consistent everywhere it appears in a Bash script?

The pipe operator takes the output of the first command and 'pipes' it to the second one by connecting stdin and stdout. In your example, instead of the output of the dmesg command going to stdout (and being printed on the console), it goes right into your next command.

A Unix pipe connects the STDOUT (standard output) file descriptor of the first process to the STDIN (standard input) of the second. What happens then is that when the first process writes to its STDOUT, that output can be immediately read (from STDIN) by the second process.

Using multiple pipes is no different than using a single pipe. Each pipe is independent, and simply links the STDOUT and STDIN of the adjacent processes.

Your third question is a little bit ambiguous. Yes, pipes, as such, are consistent everywhere in a Bash script. However, the pipe character | can represent different things. A double pipe (||), for example, represents the "or" operator: the command on its right runs only if the command on its left fails.
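
For instance, a minimal sketch contrasting the two operators (the file name is hypothetical):

# | feeds grep's output into wc, which counts the matching lines
grep "error" log.txt | wc -l

# || runs the right-hand command only if the left-hand one fails
grep "error" log.txt || echo "no errors found"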

  • | puts the STDOUT of the command on the left side into the STDIN of the command on the right side.

  • If you use multiple pipes, it's just a chain of pipes: the first command's output is set to the second command's input, the second command's output is set to the next command's input, and so on (see the example after this list).

  • It's available in practically all Linux- and Windows-based command interpreters.
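
For example, a three-stage chain (your dmesg output will vary):

dmesg | grep -i usb | wc -l

dmesg's output becomes grep's input, and grep's matching lines become wc's input, which prints how many lines matched.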

Every standard process in Unix has at least three file descriptors, which are sort of like interfaces:

  • Standard output, which is the place where the process prints its data (most of the time the console, that is, your screen or terminal).
  • Standard input, which is the place it gets its data from (most of the time your keyboard).
  • Standard error, which is the place where errors and sometimes other out-of-band data goes. It's not interesting right now because pipes don't normally deal with it.

The pipe connects the standard output of the process to the left to the standard input of the process of the right. You can think of it as a dedicated program that takes care of copying everything that one program prints, and feeding it to the next program (the one after the pipe symbol). It's not exactly that, but it's an adequate enough analogy.

Each pipe operates on exactly two things: the standard output coming from its left and the input stream expected at its right. Each of those could be attached to a single process or another bit of the pipeline, which is the case in a multi-pipe command line. But that's not relevant to the actual operation of the pipe; each pipe does its own.

The redirection operator (>) does something related, but simpler: by default it sends the standard output of a process directly to a file. As you can see, it's not the opposite of a pipe, but actually complementary. The opposite of > is, unsurprisingly, <, which takes the content of a file and sends it to the standard input of a process (think of it as a program that reads a file byte by byte and types it into a process for you).
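
For example, the two redirection operators side by side (the file name is hypothetical):

# > writes the command's standard output into a file
ls -l > listing.txt

# < feeds the file's content to a command's standard input
sort < listing.txt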

A pipe takes the output of a process (by output I mean the standard output, stdout on UNIX) and passes it to the standard input (stdin) of another process. It is not the opposite of the simple right redirection >, whose purpose is to redirect standard output to a file.

For example, take the echo command on Linux, which simply prints a string passed as a parameter to the standard output. If you use a simple redirect like:

echo "Hello world" > helloworld.txt

the shell will redirect the output initially intended for stdout and write it directly into the file helloworld.txt.

Now, take this example, which involves the pipe:

ls -l | grep helloworld.txt

The standard output of the ls command is fed to the input of grep. So how does this work?

Programs such as grep, when used without a file argument, simply read and wait for data to arrive on their standard input (stdin). When they catch something, like the output of the ls command, grep acts normally and finds occurrences of what you're searching for.
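
To see this directly, feed grep a few lines on stdin; a minimal sketch:

printf "alpha\nhello beta\ngamma\n" | grep hello

Only the line containing "hello" is printed, exactly as if the lines had come from a file.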

If you treat each Unix command as a standalone module, but you need them to talk to each other using text as a consistent interface, how can it be done?

cmd                       input                    output
---                       -----                    ------
echo "foobar"             string                   "foobar"
cat "somefile.txt"        file                     *string inside the file*
grep "pattern" "a.txt"    pattern, input file      *matched strings*

You can say | is a metaphor for passing the baton in a relay race.
It's even shaped like one!
cat -> echo -> less -> awk -> perl is analogous to cat | echo | less | awk | perl.

cat "somefile.txt" | echo
cat pass its output for echo to use.

What happens when a command takes more than one input?
cat "somefile.txt" | grep "pattern"
For grep there is an implicit rule that says "the piped data stands in for the input file, not the pattern": with no file argument, grep reads the text to search from stdin, while the pattern always comes from its arguments.
With experience you will develop an eye for knowing which parameter is which.
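
In other words, these two commands should behave identically (the file name is hypothetical):

grep "pattern" somefile.txt
cat somefile.txt | grep "pattern"

The second form is redundant here, but it shows how the pipe fills grep's file-input role.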

Pipes really are this simple.

You have the output of one command. You can provide this output as the input into another command using pipe. You can pipe as many commands as you want.

ls | grep my | grep files

This first lists the files in the working directory. That output is checked by the grep command for the word "my". The output of this is then piped into the second grep command, which finally searches for the word "files". That's it.

In Linux (and Unix in general) each process has three default file descriptors:

  1. fd #0 represents the standard input of the process
  2. fd #1 represents the standard output of the process
  3. fd #2 represents the standard error output of the process

Normally, when you run a simple program these file descriptors are configured by default as follows:

  1. Standard input is read from the keyboard
  2. Standard output is configured to be the monitor
  3. Standard error is configured to be the monitor as well

Bash provides several operators to change this behavior (take a look at the >, >> and < operators, for example). Thus, you can redirect the output to something other than the standard output, or read your input from a stream other than the keyboard. Especially interesting is the case where two programs collaborate in such a way that one uses the output of the other as its input. To make this collaboration easy, Bash provides the pipe operator |. Please note the use of "collaboration" instead of "chaining": I avoided that term because a pipe is in fact not sequential. A normal command line with pipes has the following aspect:

    > program_1 | program_2 | ... | program_n

The above command line is a little bit misleading: users could think that program_2 gets its input once program_1 has finished its execution, which is not correct. In fact, what Bash does is launch ALL the programs in parallel and configure the inputs and outputs so that every program gets its input from the previous one and delivers its output to the next one (in the order established on the command line).
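
One quick way to convince yourself of this parallelism (a sketch; yes prints "y" forever):

yes | head -n 3

If head had to wait for yes to finish, this line would never return. Instead, head reads three lines from the pipe while yes is still running, and the pipeline finishes immediately.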

Following is a simple example, adapted from Creating pipe in C, of creating a pipe between a parent and a child process. The important parts are the call to pipe(), how the parent closes fd[1] (the writing side), and how the child closes fd[0] (the reading side). Please note that the pipe is a unidirectional communication channel. Thus, data can only flow in one direction: from fd[1] towards fd[0]. For more information take a look at the manual page of pipe().

#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>
#include <sys/types.h>

int main(void)
{
    int     fd[2], nbytes;
    pid_t   childpid;
    char    string[] = "Hello, world!\n";
    char    readbuffer[80];

    pipe(fd);

    if ((childpid = fork()) == -1)
    {
        perror("fork");
        exit(1);
    }

    if (childpid == 0)
    {
        /* Child process closes the reading side of the pipe */
        close(fd[0]);

        /* Send "string" (including its terminating NUL) through the writing side */
        write(fd[1], string, strlen(string) + 1);
        exit(0);
    }
    else
    {
        /* Parent process closes the writing side of the pipe */
        close(fd[1]);

        /* Read the string from the pipe */
        nbytes = read(fd[0], readbuffer, sizeof(readbuffer));
        printf("Received string: %s", readbuffer);
    }

    return 0;
}
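
Assuming you save the program as pipe_example.c (the file name is hypothetical), you can compile and run it like this:

cc pipe_example.c -o pipe_example
./pipe_example
# Received string: Hello, world!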

Last but not least, when you have a command line in the form:

> program_1 | program_2 | program_3

The return code of the whole line is that of the last command, in this case program_3. If you would like to get an intermediate return code, you have to set the pipefail option or read it from the PIPESTATUS array.
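
A quick sketch of both approaches in an interactive Bash session:

false | true
echo $?                    # prints 0: only the last command's status counts

set -o pipefail
false | true
echo $?                    # prints 1: pipefail propagates the failure

set +o pipefail
false | true
echo "${PIPESTATUS[@]}"    # prints "1 0": one status per command

Note that PIPESTATUS is overwritten by every command, so it must be read immediately after the pipeline.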

All of these answers are great. Something I would just like to mention is that a pipe in Bash (which has the same concept as a Unix/Linux or Windows named pipe) is just like a pipe in real life. If you think of the program before the pipe as a source of water, the pipe as a water pipe, and the program after the pipe as something that uses the water (with the program output as the water), then you pretty much understand how pipes work. And remember that all apps in a pipeline run in parallel.

In short, as described, there are three key 'special' file descriptors to be aware of. The shell by default sends the keyboard to stdin and sends stdout and stderr to the screen:

[diagram: stdin, stdout, stderr]

A pipeline is just a shell convenience which attaches the stdout of one process directly to the stdin of the next:

[diagram: a simple pipeline]

There are a lot of subtleties to how this works, for example, the stderr stream might not be piped as you would expect, as shown below:

[diagram: stderr and redirection]
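
For instance, a sketch of that stderr subtlety (the path is hypothetical):

# stderr is NOT piped by default: the error message bypasses wc and goes to the terminal
ls /nonexistent | wc -l

# redirect stderr into stdout first, and both streams go through the pipe
ls /nonexistent 2>&1 | wc -l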

I have spent quite some time trying to write a detailed but beginner friendly explanation of pipelines in Bash. The full content is at:

https://effective-shell.com/docs/part-2-core-skills/7-thinking-in-pipelines/

Regarding the efficiency of pipes:

  • A command can access and process the data on its input before the previous command in the pipe has completed, which means better utilization of computing power when resources are available.
  • A pipe does not require saving the output of one command to a file before the next command can access it (there is no disk I/O between the two commands), which means fewer costly I/O operations and less disk space used.
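
To make the contrast concrete, a sketch of both styles (the temporary path is hypothetical):

# without a pipe: a temporary file, and grep can only start after dmesg is done
dmesg > /tmp/boot.log
grep -i usb /tmp/boot.log
rm /tmp/boot.log

# with a pipe: no temporary file, and both commands run concurrently
dmesg | grep -i usb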