I have a very large 4GB file, and when I try to read it my computer hangs. So I want to read it piece by piece, and after processing each piece, store the processed piece in another file and then read the next piece.
Is there any method to yield these pieces?
I would love to have a lazy method.
Current answer
To write a lazy function, just use yield:
def read_in_chunks(file_object, chunk_size=1024):
    """Lazy function (generator) to read a file piece by piece.
    Default chunk size: 1k."""
    while True:
        data = file_object.read(chunk_size)
        if not data:
            break
        yield data


with open('really_big_file.dat') as f:
    for piece in read_in_chunks(f):
        process_data(piece)
Another option is to use iter and a helper function:
f = open('really_big_file.dat')

def read1k():
    return f.read(1024)

for piece in iter(read1k, ''):
    process_data(piece)
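Note that if the file is opened in binary mode, read() returns bytes objects, so the sentinel passed to iter must be b'' rather than ''. A minimal sketch:

# For binary files, read() returns bytes, so use b'' as the sentinel
with open('really_big_file.dat', 'rb') as f:
    for piece in iter(lambda: f.read(1024), b''):
        process_data(piece)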
If the file is line-based, the file object is already a lazy generator of lines:
for line in open('really_big_file.dat'):
    process_data(line)
Other answers
In Python 3.8+, you can use .read() in a while loop:
with open("somefile.txt") as f:
    while chunk := f.read(8192):
        do_something(chunk)
Of course, you can use any chunk size you want; you don't have to use 8192 (2**13) bytes. Unless your file size happens to be a multiple of your chunk size, the last chunk will be smaller than your chunk size.
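For instance, a hypothetical 20,000-byte file read with an 8192-byte chunk size would produce chunks of 8192, 8192 and 3616 bytes. A small sketch that records the chunk sizes (opening in binary mode so lengths are in bytes):

with open("somefile.txt", "rb") as f:
    sizes = [len(chunk) for chunk in iter(lambda: f.read(8192), b"")]
print(sizes)  # e.g. [8192, 8192, 3616] for a hypothetical 20000-byte file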
Update: You can also use file_object.readlines if you want the chunks to consist of complete lines, i.e. so that no partial line ends up in the result.
For example:
def read_in_chunks(file_object, chunk_size=1024):
    """Lazy function (generator) to read a file piece by piece.
    Default chunk size: 1k."""
    while True:
        data = file_object.readlines(chunk_size)
        if not data:
            break
        yield data
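A usage sketch: this variant yields a list of complete lines per chunk rather than a single string, so iterate over each list:

with open('really_big_file.dat') as f:
    for lines in read_in_chunks(f):
        for line in lines:
            process_data(line)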
-- Adding on to the answer given above --

When I was reading a file in chunks (let's assume a text file named split.txt), the problem I faced was that I had a use case where I was processing the data line by line. Because I was reading the text file in chunks, a chunk would sometimes end with a partial line, which ended up breaking my code (since it expected complete lines to process).

After reading around, I learned I could overcome this problem by keeping track of the last bit of each chunk: if the chunk ends with '\n', it contains only complete lines; otherwise I store the partial last line in a variable so that I can prepend it to the unfinished first line of the next chunk. With that I successfully overcame the problem.

Sample code:
# in this function I am reading the file in chunks
def read_in_chunks(file_object, chunk_size=1024):
    """Lazy function (generator) to read a file piece by piece.
    Default chunk size: 1k."""
    while True:
        data = file_object.read(chunk_size)
        if not data:
            break
        yield data


# file where I am writing my final output
write_file = open('split.txt', 'w')
# variable I am using to store the last partial line of a chunk
placeholder = ''
file_count = 1

try:
    with open('/Users/rahulkumarmandal/Desktop/combined.txt') as f:
        for piece in read_in_chunks(f):
            line_by_line = piece.split('\n')
            for one_line in line_by_line:
                # if placeholder is set, the previous chunk ended with a
                # partial line that we need to concatenate with the current one
                if placeholder:
                    # concatenate the previous partial line with the current one
                    one_line = placeholder + one_line
                    # then reset the placeholder so that the next partial line
                    # in a chunk can be stored there for later concatenation
                    placeholder = ''
                # further logic that revolves around my specific use case
                segregated_data = one_line.split('~')
                if len(segregated_data) < 18:
                    placeholder = one_line
                    continue
                else:
                    placeholder = ''
                if segregated_data[2] == '2020' and segregated_data[3] == '2021':
                    # write this
                    data = str("~".join(segregated_data))
                    write_file.write(data)
                    write_file.write('\n')
                    print(write_file.tell())
                elif segregated_data[2] == '2021' and segregated_data[3] == '2022':
                    # write this
                    data = str("-".join(segregated_data))
                    write_file.write(data)
                    write_file.write('\n')
                    print(write_file.tell())
except Exception as e:
    print('error is', e)
There are already many good answers, but if your whole file is on a single line and you still want to process "rows" (as opposed to fixed-size blocks), those answers won't help you.

99% of the time it is possible to process files line by line. Then, as suggested in the answers above, you can use the file object itself as a lazy generator:
with open('big.csv') as f:
    for line in f:
        process(line)
However, you may run into very large files where the row separator is not '\n' (a common case is '|').

Converting '|' to '\n' before processing may not be an option, because it could garble fields that may legitimately contain '\n' (e.g. free-text user input). Using the csv library is also ruled out because, at least in early versions of the lib, it is hardcoded to read the input line by line.

For situations like this I created the following snippet [updated in May 2021 for Python 3.8+]:
def rows(f, chunksize=1024, sep='|'):
    """
    Read a file lazily where the row separator is '|'.

    Usage:

    >>> with open('big.csv') as f:
    >>>     for r in rows(f):
    >>>         process(r)
    """
    row = ''
    while (chunk := f.read(chunksize)) != '':  # End of file
        while (i := chunk.find(sep)) != -1:    # No separator found
            yield row + chunk[:i]
            chunk = chunk[i + 1:]
            row = ''
        row += chunk
    yield row
[For older versions of Python]:
def rows(f, chunksize=1024, sep='|'):
    """
    Read a file lazily where the row separator is '|'.

    Usage:

    >>> with open('big.csv') as f:
    >>>     for r in rows(f):
    >>>         process(r)
    """
    curr_row = ''
    while True:
        chunk = f.read(chunksize)
        if chunk == '':  # End of file
            yield curr_row
            break
        while True:
            i = chunk.find(sep)
            if i == -1:  # No separator found
                break
            yield curr_row + chunk[:i]
            curr_row = ''
            chunk = chunk[i + 1:]
        curr_row += chunk
I was able to use it successfully to solve a variety of problems. It has been extensively tested with various chunk sizes. Here is the test suite I am using, for those who need to convince themselves:
import os

test_file = 'test_file'


def cleanup(func):
    def wrapper(*args, **kwargs):
        func(*args, **kwargs)
        os.unlink(test_file)
    return wrapper


@cleanup
def test_empty(chunksize=1024):
    with open(test_file, 'w') as f:
        f.write('')
    with open(test_file) as f:
        assert len(list(rows(f, chunksize=chunksize))) == 1


@cleanup
def test_1_char_2_rows(chunksize=1024):
    with open(test_file, 'w') as f:
        f.write('|')
    with open(test_file) as f:
        assert len(list(rows(f, chunksize=chunksize))) == 2


@cleanup
def test_1_char(chunksize=1024):
    with open(test_file, 'w') as f:
        f.write('a')
    with open(test_file) as f:
        assert len(list(rows(f, chunksize=chunksize))) == 1


@cleanup
def test_1025_chars_1_row(chunksize=1024):
    with open(test_file, 'w') as f:
        for i in range(1025):
            f.write('a')
    with open(test_file) as f:
        assert len(list(rows(f, chunksize=chunksize))) == 1


@cleanup
def test_1024_chars_2_rows(chunksize=1024):
    with open(test_file, 'w') as f:
        for i in range(1023):
            f.write('a')
        f.write('|')
    with open(test_file) as f:
        assert len(list(rows(f, chunksize=chunksize))) == 2


@cleanup
def test_1025_chars_1026_rows(chunksize=1024):
    with open(test_file, 'w') as f:
        for i in range(1025):
            f.write('|')
    with open(test_file) as f:
        assert len(list(rows(f, chunksize=chunksize))) == 1026


@cleanup
def test_2048_chars_2_rows(chunksize=1024):
    with open(test_file, 'w') as f:
        for i in range(1022):
            f.write('a')
        f.write('|')
        f.write('a')
        # -- end of 1st chunk --
        for i in range(1024):
            f.write('a')
        # -- end of 2nd chunk --
    with open(test_file) as f:
        assert len(list(rows(f, chunksize=chunksize))) == 2


@cleanup
def test_2049_chars_2_rows(chunksize=1024):
    with open(test_file, 'w') as f:
        for i in range(1022):
            f.write('a')
        f.write('|')
        f.write('a')
        # -- end of 1st chunk --
        for i in range(1024):
            f.write('a')
        # -- end of 2nd chunk --
        f.write('a')
    with open(test_file) as f:
        assert len(list(rows(f, chunksize=chunksize))) == 2


if __name__ == '__main__':
    for chunksize in [1, 2, 4, 8, 16, 32, 64, 128, 256, 512, 1024]:
        test_empty(chunksize)
        test_1_char_2_rows(chunksize)
        test_1_char(chunksize)
        test_1025_chars_1_row(chunksize)
        test_1024_chars_2_rows(chunksize)
        test_1025_chars_1026_rows(chunksize)
        test_2048_chars_2_rows(chunksize)
        test_2049_chars_2_rows(chunksize)
f = ...  # file-like object, i.e. one supporting a read(size) method and
         # returning the empty string '' when there is nothing left to read


def chunked(file, chunk_size):
    return iter(lambda: file.read(chunk_size), '')


for data in chunked(f, 65536):
    process_data(data)  # process the data
Update: The approach is best explained in https://stackoverflow.com/a/4566523/38592

Due to my low reputation I am not allowed to comment, but SilentGhost's solution should be much easier with file.readlines([sizehint])

Python file methods

Edit: SilentGhost is right, but this should be better than:
s = ""
for i in xrange(100):
s += file.next()
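For comparison, a minimal sketch of the readlines(sizehint) approach suggested above (assuming a text-mode file object; readlines with a size hint returns roughly that many bytes' worth of complete lines per call):

with open('really_big_file.dat') as f:
    while True:
        lines = f.readlines(100000)  # roughly 100 kB of complete lines per batch
        if not lines:
            break
        for line in lines:
            process_data(line)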