Lazy method for reading a big file in Python?

I have a very large 4 GB file, and when I try to read it my computer hangs. So I want to read it piece by piece, and after processing each piece, store the processed piece into another file and then read the next piece.

Is there any method to yield these pieces?

I would love to have a lazy method.


To write a lazy function, just use yield:

def read_in_chunks(file_object, chunk_size=1024):
    """Lazy function (generator) to read a file piece by piece.
    Default chunk size: 1k."""
    while True:
        data = file_object.read(chunk_size)
        if not data:
            break
        yield data




with open('really_big_file.dat') as f:
    for piece in read_in_chunks(f):
        process_data(piece)
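
Since the question also asks to store each processed piece into another file, here is a minimal sketch that combines the generator above with an output file; it assumes a hypothetical process_data() that returns the transformed chunk:

with open('really_big_file.dat') as f_in, open('processed_file.dat', 'w') as f_out:
    for piece in read_in_chunks(f_in):
        # process_data() is assumed to return the processed text for this piece
        f_out.write(process_data(piece))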

Another option is to use iter and a helper function:

f = open('really_big_file.dat')
def read1k():
    return f.read(1024)


for piece in iter(read1k, ''):
    process_data(piece)
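
Note that the '' sentinel only matches when the file is opened in text mode; in binary mode, read() returns b'' at end of file, so the sentinel must change accordingly. A small sketch of the same idiom for a binary file:

with open('really_big_file.dat', 'rb') as f:
    # iter(callable, sentinel) calls f.read(1024) until it returns b''
    for piece in iter(lambda: f.read(1024), b''):
        process_data(piece)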

If the file is line-based, the file object is already a lazy generator of lines:

for line in open('really_big_file.dat'):
    process_data(line)

If your computer, OS and Python are 64-bit, then you can use the mmap module to map the contents of the file into memory and access it with indices and slices. Here is an example from the documentation:

import mmap
with open("hello.txt", "r+b") as f:
    # memory-map the file, size 0 means whole file
    mm = mmap.mmap(f.fileno(), 0)
    # read content via standard file methods
    print(mm.readline())  # prints b"Hello Python!\n"
    # read content via slice notation
    print(mm[:5])  # prints b"Hello"
    # update content using slice notation;
    # note that new content must have same size
    mm[6:] = b" world!\n"
    # ... and read again using standard file methods
    mm.seek(0)
    print(mm.readline())  # prints b"Hello  world!\n"
    # close the map
    mm.close()

If your computer, OS or Python are 32-bit, then mmap-ing large files can reserve large parts of your address space and starve your program of memory.
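
If you are stuck on a 32-bit build but still want mmap's slicing convenience, one workaround is to map the file one window at a time instead of all at once. The sketch below is illustrative only (the helper name mapped_windows and the 64 MiB window size are made up); note that mmap offsets must be multiples of mmap.ALLOCATIONGRANULARITY:

import mmap
import os

def mapped_windows(path, window=64 * 1024 * 1024):
    """Yield the file's contents window by window via read-only memory maps."""
    size = os.path.getsize(path)
    # offsets passed to mmap must be aligned to the allocation granularity
    window -= window % mmap.ALLOCATIONGRANULARITY
    with open(path, 'rb') as f:
        offset = 0
        while offset < size:
            length = min(window, size - offset)
            with mmap.mmap(f.fileno(), length, access=mmap.ACCESS_READ,
                           offset=offset) as mm:
                yield mm[:]  # or index/slice mm directly before it is closed
            offset += length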

I was in a similar situation. It's not clear whether you know the chunk size in bytes; I usually don't, but the number of records (lines) required is known:

def get_line():
    with open('4gb_file') as file:
        for i in file:
            yield i


lines_required = 100
gen = get_line()
chunk = [i for i, j in zip(gen, range(lines_required))]

Update: Thanks, nosklo. Here's what I meant. It almost works, except that it loses a line "between" chunks.

chunk = [next(gen) for i in range(lines_required)]

does the trick without losing any lines, but it doesn't look very nice.
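
For what it's worth, itertools.islice is the usual way to take a fixed number of lines at a time from a file (or any generator) without losing lines between chunks; a short sketch reusing lines_required from above:

from itertools import islice

lines_required = 100
with open('4gb_file') as f:
    while True:
        # take up to lines_required lines; islice never skips or loses lines
        chunk = list(islice(f, lines_required))
        if not chunk:
            break
        # process the chunk of up to lines_required lines here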

I'm not allowed to comment due to my low reputation, but SilentGhost's solution should be much easier with file.readlines([sizehint])

Python file methods

Edit: SilentGhost is right, but this should be faster than:

s = ""
for i in xrange(100):
s += file.next()

file.readlines() takes an optional size argument which approximately limits how much data is read into the returned lines.

bigfile = open('bigfilename', 'r')
tmp_lines = bigfile.readlines(BUF_SIZE)
while tmp_lines:
    process([line for line in tmp_lines])
    tmp_lines = bigfile.readlines(BUF_SIZE)
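
If you prefer the generator style used in the other answers, readlines(sizehint) can be wrapped the same way; the helper name read_line_batches below is made up for illustration:

def read_line_batches(path, sizehint=65536):
    """Yield lists of complete lines, each batch roughly sizehint bytes long."""
    with open(path) as bigfile:
        while True:
            lines = bigfile.readlines(sizehint)
            if not lines:
                break
            yield lines


for batch in read_line_batches('bigfilename'):
    process(batch)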
f = ...  # file-like object, i.e. supporting read(size) function and
         # returning empty string '' when there is nothing to read


def chunked(file, chunk_size):
    return iter(lambda: file.read(chunk_size), '')


for data in chunked(f, 65536):
    process(data)  # process the data

Update: The approach is best explained in https://stackoverflow.com/a/4566523/38592

I think we can write it like this:

def read_file(path, block_size=1024):
    with open(path, 'rb') as f:
        while True:
            piece = f.read(block_size)
            if piece:
                yield piece
            else:
                return


for piece in read_file(path):
    process_piece(piece)

There are already many good answers, but if your entire file is on a single line and you still want to process "rows" (as opposed to fixed-size blocks), those answers won't help you.

99% of the time it is possible to process files line by line. Then, as suggested in the other answers, you can use the file object itself as a lazy generator:

with open('big.csv') as f:
    for line in f:
        process(line)

However, you may run into very large files where the row separator is not '\n' (a common case is '|').

For that case I created the following snippet [updated in May 2021 for Python 3.8+]:

def rows(f, chunksize=1024, sep='|'):
    """
    Read a file where the row separator is '|' lazily.

    Usage:

    >>> with open('big.csv') as f:
    >>>     for r in rows(f):
    >>>         process(r)
    """
    row = ''
    while (chunk := f.read(chunksize)) != '':   # End of file
        while (i := chunk.find(sep)) != -1:     # No separator found
            yield row + chunk[:i]
            chunk = chunk[i+1:]
            row = ''
        row += chunk
    yield row

[For older versions of Python]:

def rows(f, chunksize=1024, sep='|'):
    """
    Read a file where the row separator is '|' lazily.

    Usage:

    >>> with open('big.csv') as f:
    >>>     for r in rows(f):
    >>>         process(r)
    """
    curr_row = ''
    while True:
        chunk = f.read(chunksize)
        if chunk == '':  # End of file
            yield curr_row
            break
        while True:
            i = chunk.find(sep)
            if i == -1:
                break
            yield curr_row + chunk[:i]
            curr_row = ''
            chunk = chunk[i+1:]
        curr_row += chunk

I have been able to use it successfully to solve various problems. It has been extensively tested with various chunk sizes. Here is the test suite I'm using, for those who need to convince themselves:

import os

test_file = 'test_file'


def cleanup(func):
    def wrapper(*args, **kwargs):
        func(*args, **kwargs)
        os.unlink(test_file)
    return wrapper


@cleanup
def test_empty(chunksize=1024):
    with open(test_file, 'w') as f:
        f.write('')
    with open(test_file) as f:
        assert len(list(rows(f, chunksize=chunksize))) == 1


@cleanup
def test_1_char_2_rows(chunksize=1024):
    with open(test_file, 'w') as f:
        f.write('|')
    with open(test_file) as f:
        assert len(list(rows(f, chunksize=chunksize))) == 2


@cleanup
def test_1_char(chunksize=1024):
    with open(test_file, 'w') as f:
        f.write('a')
    with open(test_file) as f:
        assert len(list(rows(f, chunksize=chunksize))) == 1


@cleanup
def test_1025_chars_1_row(chunksize=1024):
    with open(test_file, 'w') as f:
        for i in range(1025):
            f.write('a')
    with open(test_file) as f:
        assert len(list(rows(f, chunksize=chunksize))) == 1


@cleanup
def test_1024_chars_2_rows(chunksize=1024):
    with open(test_file, 'w') as f:
        for i in range(1023):
            f.write('a')
        f.write('|')
    with open(test_file) as f:
        assert len(list(rows(f, chunksize=chunksize))) == 2


@cleanup
def test_1025_chars_1026_rows(chunksize=1024):
    with open(test_file, 'w') as f:
        for i in range(1025):
            f.write('|')
    with open(test_file) as f:
        assert len(list(rows(f, chunksize=chunksize))) == 1026


@cleanup
def test_2048_chars_2_rows(chunksize=1024):
    with open(test_file, 'w') as f:
        for i in range(1022):
            f.write('a')
        f.write('|')
        f.write('a')
        # -- end of 1st chunk --
        for i in range(1024):
            f.write('a')
        # -- end of 2nd chunk
    with open(test_file) as f:
        assert len(list(rows(f, chunksize=chunksize))) == 2


@cleanup
def test_2049_chars_2_rows(chunksize=1024):
    with open(test_file, 'w') as f:
        for i in range(1022):
            f.write('a')
        f.write('|')
        f.write('a')
        # -- end of 1st chunk --
        for i in range(1024):
            f.write('a')
        # -- end of 2nd chunk
        f.write('a')
    with open(test_file) as f:
        assert len(list(rows(f, chunksize=chunksize))) == 2


if __name__ == '__main__':
    for chunksize in [1, 2, 4, 8, 16, 32, 64, 128, 256, 512, 1024]:
        test_empty(chunksize)
        test_1_char_2_rows(chunksize)
        test_1_char(chunksize)
        test_1025_chars_1_row(chunksize)
        test_1024_chars_2_rows(chunksize)
        test_1025_chars_1026_rows(chunksize)
        test_2048_chars_2_rows(chunksize)
        test_2049_chars_2_rows(chunksize)

You can use the following code.

file_obj = open('big_file')

open() returns a file object.

Then use os.stat to get the size of the file:

import os

file_size = os.stat('big_file').st_size


for i in range(file_size // 1024 + 1):  # +1 so the final partial chunk is not dropped
    print(file_obj.read(1024))

Refer to Python's official documentation: https://docs.python.org/3/library/functions.html#iter

Maybe this method is more pythonic:

"""A file object returned by open() is a iterator with
read method which could specify current read's block size
"""
with open('mydata.db', 'r') as f_in:
block_read = partial(f_in.read, 1024 * 1024)
block_iterator = iter(block_read, '')


for index, block in enumerate(block_iterator, start=1):
block = process_block(block)  # process your block data


with open(f'{index}.txt', 'w') as f_out:
f_out.write(block)

In Python 3.8+, you can use .read() in a while loop:

with open("somefile.txt") as f:
while chunk := f.read(8192):
do_something(chunk)

Of course, you can use any chunk size you want; you don't have to use 8192 (2**13) bytes. Unless your file's size happens to be a multiple of your chunk size, the last chunk will be smaller than your chunk size.
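
If you want a non-arbitrary default, one reasonable choice is io.DEFAULT_BUFFER_SIZE (usually 8192), which matches the interpreter's own buffering; a small sketch of the same loop in binary mode:

import io

with open("somefile.txt", "rb") as f:
    # read the file in chunks matching Python's default I/O buffer size
    while chunk := f.read(io.DEFAULT_BUFFER_SIZE):
        do_something(chunk)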

(Adding on to the answers already given)

When I was reading a file in chunks (let's assume a text file named split.txt), the problem I faced was that my use case processed the data line by line, and because I was reading the text file in chunks, a chunk would sometimes end with a partial line, which ended up breaking my code (since it expected a complete line to process).

After reading around, I realized I could overcome this problem by keeping track of the last bit of each chunk. If the chunk ends with a '\n', it contains only complete lines; otherwise I store the trailing partial line in a variable so that I can prepend it to the unfinished line at the start of the next chunk. That is how I overcame the problem.

Sample code:

# in this function i am reading the file in chunks
def read_in_chunks(file_object, chunk_size=1024):
    """Lazy function (generator) to read a file piece by piece.
    Default chunk size: 1k."""
    while True:
        data = file_object.read(chunk_size)
        if not data:
            break
        yield data


# file where i am writing my final output
write_file = open('split.txt', 'w')


# variable i am using to store the last partial line from the chunk
placeholder = ''
file_count = 1


try:
    with open('/Users/rahulkumarmandal/Desktop/combined.txt') as f:
        for piece in read_in_chunks(f):
            # print('---->>>', piece, '<<<--')
            line_by_line = piece.split('\n')

            for one_line in line_by_line:
                # if placeholder exists, the last chunk had a partial line that we need to concatenate with the current one
                if placeholder:
                    # print('----->', placeholder)
                    # concatenating the previous partial line with the current one
                    one_line = placeholder + one_line
                    # then setting the placeholder empty so that next time if there's a partial line in the chunk we can place it in the variable to be concatenated further
                    placeholder = ''

                # further logic that revolves around my specific use case
                segregated_data = one_line.split('~')
                # print(len(segregated_data), type(segregated_data), one_line)
                if len(segregated_data) < 18:
                    placeholder = one_line
                    continue
                else:
                    placeholder = ''
                # print('--------', segregated_data)
                if segregated_data[2] == '2020' and segregated_data[3] == '2021':
                    # write this
                    data = str("~".join(segregated_data))
                    # print('data', data)
                    # f.write(data)
                    write_file.write(data)
                    write_file.write('\n')
                    print(write_file.tell())
                elif segregated_data[2] == '2021' and segregated_data[3] == '2022':
                    # write this
                    data = str("-".join(segregated_data))
                    write_file.write(data)
                    write_file.write('\n')
                    print(write_file.tell())
except Exception as e:
    print('error is', e)