Multiprocessing: How do I share a dict among multiple processes?

A program creates several processes that work on a joinable queue, Q, and may eventually manipulate a global dictionary D to store results. (So each child process can use D to store its result and also see what results the other child processes are producing.)

If I print the dictionary D in a child process, I see the modifications that have been made to it. But after the main process joins Q, if I print D, it's an empty dict!

I understand it is a synchronization/lock issue. Can someone tell me what is happening here, and how I can synchronize access to D?

Multiprocessing is not like threading. Each child process gets a copy of the main process's memory. Generally, state is shared via communication (pipes/sockets), signals, or shared memory.

Multiprocessing provides abstractions for your use case: shared state that is treated as if it were local, through the use of proxies or shared memory: http://docs.python.org/library/multiprocessing.html#sharing-state-between-processes

Relevant section of the docs: "Sharing state between processes".
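
To make the copy-on-start behaviour concrete, here is a minimal sketch (the names are made up): a child's write to a plain module-level dict never reaches the parent, while a result sent back through a Queue does.

from multiprocessing import Process, Queue

plain = {}  # ordinary dict: each child process works on its own copy


def worker(q):
    plain['child'] = 'only visible inside the child'
    q.put(('child', 'sent back through the queue'))


if __name__ == '__main__':
    q = Queue()
    p = Process(target=worker, args=(q,))
    p.start()
    print(q.get())   # ('child', 'sent back through the queue')
    p.join()
    print(plain)     # {} -- the child's change never reached the parent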

The general answer is to use a Manager object:

from multiprocessing import Process, Manager


def f(d):
    d[1] += '1'
    d['2'] += 2


if __name__ == '__main__':
    manager = Manager()

    d = manager.dict()
    d[1] = '1'
    d['2'] = 2

    p1 = Process(target=f, args=(d,))
    p2 = Process(target=f, args=(d,))
    p1.start()
    p2.start()
    p1.join()
    p2.join()

    print d

Output:

$ python mul.py
{1: '111', '2': 6}
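
Applied to the setup in the question, the same idea might look like the following rough sketch, where the workers drain a JoinableQueue and write their results into the shared manager.dict() (the worker logic and the None sentinels here are only illustrative, not part of the answer above):

from multiprocessing import Process, Manager, JoinableQueue


def worker(q, d):
    # Pull items until a None sentinel arrives, storing each result in the shared dict.
    while True:
        item = q.get()
        if item is None:
            q.task_done()
            break
        d[item] = item * item        # stand-in for real work
        q.task_done()


if __name__ == '__main__':
    manager = Manager()
    d = manager.dict()               # D: visible to every worker through the manager
    q = JoinableQueue()              # Q: the joinable work queue

    workers = [Process(target=worker, args=(q, d)) for _ in range(4)]
    for p in workers:
        p.start()

    for item in range(10):
        q.put(item)
    for _ in workers:
        q.put(None)                  # one sentinel per worker

    q.join()                         # blocks until every item has been task_done()
    for p in workers:
        p.join()

    print(dict(d))                   # all results are visible in the parent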

You could also try pyshmht (https://pypi.python.org/pypi/pyshmht), a shared-memory-based hash table extension for Python; a minimal usage sketch follows the notes below.

Notes:

1. It has not been fully tested yet; it is provided only for your reference.

2. It currently lacks lock/semaphore mechanisms for multiprocessing.
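
Based only on the HashTable calls that appear in the benchmark further down in this thread (the constructor, update and get), a usage sketch might look like this; treat it as a guess and check the project page for the actual API:

from pyshmht import HashTable

# Hypothetical sketch: a named shared-memory table with a fixed capacity.
ht = HashTable('tmp', 1024)
ht.update({'a': '1', 'b': '2'})
print(ht.get('a'))   # '1'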

I'd like to share my own work, which is faster than a Manager dict, and simpler and more stable than the pyshmht library, which uses a lot of memory and doesn't work on Mac OS. My dict only works with plain strings, though, and is currently immutable. I use a linear-probing implementation and store the key/value pairs in a separate block of memory after the table.

from mmap import mmap
import struct
from timeit import default_timer
from multiprocessing import Manager
from pyshmht import HashTable


class shared_immutable_dict:
    def __init__(self, a):
        # Power-of-two table size, larger than 3x the number of entries.
        self.hs = 1 << (len(a) * 3).bit_length()
        kvp = self.hs * 4                    # key/value data starts after the offset table
        ht = [0xffffffff] * self.hs          # 0xffffffff marks an empty slot
        kvl = []
        for k, v in a.iteritems():
            h = self.hash(k)
            while ht[h] != 0xffffffff:       # linear probing
                h = (h + 1) & (self.hs - 1)
            ht[h] = kvp
            kvp += self.kvlen(k) + self.kvlen(v)
            kvl.append(k)
            kvl.append(v)

        # Lay everything out in one anonymous mmap: the offset table first,
        # then the length-prefixed keys and values.
        self.m = mmap(-1, kvp)
        for p in ht:
            self.m.write(uint_format.pack(p))
        for x in kvl:
            if len(x) <= 0x7f:
                self.m.write_byte(chr(len(x)))
            else:
                self.m.write(uint_format.pack(0x80000000 + len(x)))
            self.m.write(x)

    def hash(self, k):
        h = hash(k)
        h = (h + (h >> 3) + (h >> 13) + (h >> 23)) * 1749375391 & (self.hs - 1)
        return h

    def get(self, k, d=None):
        h = self.hash(k)
        while True:
            x = uint_format.unpack(self.m[h * 4:h * 4 + 4])[0]
            if x == 0xffffffff:              # empty slot: key is not present
                return d
            self.m.seek(x)
            if k == self.read_kv():
                return self.read_kv()        # the value is stored right after the key
            h = (h + 1) & (self.hs - 1)

    def read_kv(self):
        # Lengths <= 0x7f take one byte; longer ones take four bytes with the high bit set.
        sz = ord(self.m.read_byte())
        if sz & 0x80:
            sz = uint_format.unpack(chr(sz) + self.m.read(3))[0] - 0x80000000
        return self.m.read(sz)

    def kvlen(self, k):
        return len(k) + (1 if len(k) <= 0x7f else 4)

    def __contains__(self, k):
        return self.get(k, None) is not None

    def close(self):
        self.m.close()


uint_format = struct.Struct('>I')


def uget(a, k, d=None):
    return to_unicode(a.get(to_str(k), d))


def uin(a, k):
    return to_str(k) in a


def to_unicode(s):
    return s.decode('utf-8') if isinstance(s, str) else s


def to_str(s):
    return s.encode('utf-8') if isinstance(s, unicode) else s


def mmap_test():
    n = 1000000
    d = shared_immutable_dict({str(i * 2): '1' for i in xrange(n)})
    start_time = default_timer()
    for i in xrange(n):
        if bool(d.get(str(i))) != (i % 2 == 0):
            raise Exception(i)
    print 'mmap speed: %d gets per sec' % (n / (default_timer() - start_time))


def manager_test():
    n = 100000
    d = Manager().dict({str(i * 2): '1' for i in xrange(n)})
    start_time = default_timer()
    for i in xrange(n):
        if bool(d.get(str(i))) != (i % 2 == 0):
            raise Exception(i)
    print 'manager speed: %d gets per sec' % (n / (default_timer() - start_time))


def shm_test():
    n = 1000000
    d = HashTable('tmp', n)
    d.update({str(i * 2): '1' for i in xrange(n)})
    start_time = default_timer()
    for i in xrange(n):
        if bool(d.get(str(i))) != (i % 2 == 0):
            raise Exception(i)
    print 'shm speed: %d gets per sec' % (n / (default_timer() - start_time))


if __name__ == '__main__':
    mmap_test()
    manager_test()
    shm_test()

On my laptop, the benchmark results are as follows:

mmap speed: 247288 gets per sec
manager speed: 33792 gets per sec
shm speed: 691332 gets per sec

A simple usage example:

ht = shared_immutable_dict({'a': '1', 'b': '2'})
print ht.get('a')
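
Assuming a fork start method (the mmap handle is not picklable, so this does not work with spawn), child processes forked after the table is built can read the same anonymous mapping. A small hypothetical check:

from multiprocessing import Process


def reader(d, key):
    # Each forked child reads directly from the inherited shared mmap.
    print(key + ' -> ' + d.get(key, 'missing'))


if __name__ == '__main__':
    ht = shared_immutable_dict({'a': '1', 'b': '2'})
    procs = [Process(target=reader, args=(ht, k)) for k in ('a', 'b', 'c')]
    for p in procs:
        p.start()
    for p in procs:
        p.join()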

In addition to @senderle's answer, some might also be wondering how to use this together with multiprocessing.Pool.

The nice thing is that the manager instance has a .Pool() method that mimics all the familiar API of the top-level multiprocessing module.

from itertools import repeat
import multiprocessing as mp
import os
import pprint


def f(d: dict) -> None:
    pid = os.getpid()
    d[pid] = "Hi, I was written by process %d" % pid


if __name__ == '__main__':
    with mp.Manager() as manager:
        d = manager.dict()
        with manager.Pool() as pool:
            pool.map(f, repeat(d, 10))
        # `d` is a DictProxy object that can be converted to dict
        pprint.pprint(dict(d))

Output:

$ python3 mul.py
{22562: 'Hi, I was written by process 22562',
22563: 'Hi, I was written by process 22563',
22564: 'Hi, I was written by process 22564',
22565: 'Hi, I was written by process 22565',
22566: 'Hi, I was written by process 22566',
22567: 'Hi, I was written by process 22567',
22568: 'Hi, I was written by process 22568',
22569: 'Hi, I was written by process 22569',
22570: 'Hi, I was written by process 22570',
22571: 'Hi, I was written by process 22571'}

This is a slightly different example in which each process simply records its process ID in the global DictProxy object d.