Catch Ctrl+C / SIGINT and exit multiprocesses gracefully in python

How do I catch a Ctrl+C in a multiprocess python program and exit all processes gracefully? I need the solution to work on both unix and windows. I have tried the following:

import multiprocessing
import time
import signal
import sys


jobs = []


def worker():
    signal.signal(signal.SIGINT, signal_handler)
    while(True):
        time.sleep(1.1234)
        print "Working..."


def signal_handler(signal, frame):
    print 'You pressed Ctrl+C!'
    # for p in jobs:
    #     p.terminate()
    sys.exit(0)


if __name__ == "__main__":
    for i in range(50):
        p = multiprocessing.Process(target=worker)
        jobs.append(p)
        p.start()

It kind of works, but I don't think it is the right solution.


The solution is based on this link and this link, and it solved the problem; I had to move to Pool though:

import multiprocessing
import time
import signal
import sys


def init_worker():
    signal.signal(signal.SIGINT, signal.SIG_IGN)


def worker():
    while(True):
        time.sleep(1.1234)
        print "Working..."


if __name__ == "__main__":
    pool = multiprocessing.Pool(50, init_worker)
    try:
        for i in range(50):
            pool.apply_async(worker)

        time.sleep(10)
        pool.close()
        pool.join()

    except KeyboardInterrupt:
        print "Caught KeyboardInterrupt, terminating workers"
        pool.terminate()
        pool.join()

Just handle the KeyboardInterrupt and SystemExit exceptions in your worker process (here the worker takes its message queue as a parameter):

def worker(msg_queue):
    while True:
        try:
            msg = msg_queue.get()
        except (KeyboardInterrupt, SystemExit):
            print("Exiting...")
            break
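A minimal sketch of how such a worker might be wired up with a `multiprocessing.Queue`; the `None` sentinel used for normal shutdown is an assumption, not part of the original snippet:

```python
import multiprocessing


def worker(msg_queue):
    # Pull messages until a None sentinel arrives. A Ctrl+C delivered to
    # the child surfaces as KeyboardInterrupt inside the blocking get().
    while True:
        try:
            msg = msg_queue.get()
            if msg is None:  # assumed sentinel value: normal shutdown
                break
            print("Got:", msg)
        except (KeyboardInterrupt, SystemExit):
            print("Exiting...")
            break


if __name__ == "__main__":
    queue = multiprocessing.Queue()
    p = multiprocessing.Process(target=worker, args=(queue,))
    p.start()
    queue.put("hello")
    queue.put(None)  # ask the worker to stop
    p.join()
```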

The previously accepted solution has race conditions and it does not work with map and async functions.


The correct way to handle Ctrl+C/SIGINT with multiprocessing.Pool is to:

  1. Make the process ignore SIGINT before the process Pool is created. This way the created child processes inherit the SIGINT handler.
  2. Restore the original SIGINT handler in the parent process after the Pool has been created.
  3. Use map_async and apply_async instead of the blocking map and apply.
  4. Wait on the results with a timeout, because the default blocking wait ignores all signals. This is Python bug https://bugs.python.org/issue8296.

Putting it together:

#!/usr/bin/env python
from __future__ import print_function


import multiprocessing
import os
import signal
import time


def run_worker(delay):
    print("In a worker process", os.getpid())
    time.sleep(delay)


def main():
    print("Initializing 2 workers")
    original_sigint_handler = signal.signal(signal.SIGINT, signal.SIG_IGN)
    pool = multiprocessing.Pool(2)
    signal.signal(signal.SIGINT, original_sigint_handler)
    try:
        print("Starting 2 jobs of 5 seconds each")
        res = pool.map_async(run_worker, [5, 5])
        print("Waiting for results")
        res.get(60)  # Without the timeout this blocking call ignores all signals.
    except KeyboardInterrupt:
        print("Caught KeyboardInterrupt, terminating workers")
        pool.terminate()
    else:
        print("Normal termination")
        pool.close()
    pool.join()


if __name__ == "__main__":
    main()

As @YakovShklarov noted, there is a window of time between ignoring the signal and un-ignoring it in the parent process, during which the signal can be lost. Using pthread_sigmask instead to temporarily block delivery of the signal in the parent process would prevent the signal from being lost; however, it is not available in Python 2.
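In Python 3 on POSIX systems that window can be closed with `signal.pthread_sigmask`: a blocked SIGINT stays pending and is delivered once unblocked, instead of being discarded. A sketch of the same pool pattern under that assumption (job durations shortened for illustration):

```python
import multiprocessing
import signal
import time


def run_worker(delay):
    time.sleep(delay)


def main():
    # Block SIGINT before creating the pool: the child processes inherit
    # the blocked mask, so Ctrl+C is never delivered to them directly.
    signal.pthread_sigmask(signal.SIG_BLOCK, {signal.SIGINT})
    pool = multiprocessing.Pool(2)
    # Unblock SIGINT in the parent. Unlike the SIG_IGN approach, a SIGINT
    # that arrived in between is still pending and is raised now, not lost.
    signal.pthread_sigmask(signal.SIG_UNBLOCK, {signal.SIGINT})
    try:
        # Timeout for the same reason as above: a fully blocking wait
        # would ignore signals (https://bugs.python.org/issue8296).
        pool.map_async(run_worker, [1, 1]).get(60)
    except KeyboardInterrupt:
        print("Caught KeyboardInterrupt, terminating workers")
        pool.terminate()
    else:
        print("Normal termination")
        pool.close()
    pool.join()


if __name__ == "__main__":
    main()
```

Note that `pthread_sigmask` is only available on POSIX platforms, so this variant does not satisfy the original question's Windows requirement.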