I have a very large file (4 GB), and when I try to read it my computer hangs. So I want to read it piece by piece: after processing each piece, store the processed piece in another file, then read the next piece.

Is there any method to yield these pieces?

I would love to have a lazy method.


Current answer

f = ... # file-like object, i.e. supporting read(size) function and 
        # returning empty string '' when there is nothing to read

def chunked(file, chunk_size):
    return iter(lambda: file.read(chunk_size), '')

for data in chunked(f, 65536):
    process_data(data)  # replace with your own handling of the chunk

Update: this approach is best explained in https://stackoverflow.com/a/4566523/38592

Other answers

In Python 3.8+, you can use .read() in a while loop together with the walrus operator:

with open("somefile.txt") as f:
    while chunk := f.read(8192):
        do_something(chunk)

Of course, you can use any chunk size you want; you don't have to use 8192 (2**13) bytes. Unless your file size happens to be a multiple of the chunk size, the last chunk will be smaller than the chunk size.
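
To match the question's read-process-write workflow, here is a minimal sketch built on that same walrus-operator loop. The file names, the 64 KiB chunk size, and the process_chunk function are placeholders; both files are opened in binary mode so each chunk is a bytes object:

def process_chunk(chunk):
    # placeholder: replace with your real per-chunk processing
    return chunk

with open('really_big_file.dat', 'rb') as src, open('processed_file.dat', 'wb') as dst:
    while chunk := src.read(64 * 1024):  # read 64 KiB at a time; an empty bytes object ends the loop
        dst.write(process_chunk(chunk))  # append the processed piece to the output file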

I think we could write it like this:

def read_file(path, block_size=1024): 
    with open(path, 'rb') as f: 
        while True: 
            piece = f.read(block_size) 
            if piece: 
                yield piece 
            else: 
                return

for piece in read_file(path):
    process_piece(piece)

To write a lazy function, just use yield:

def read_in_chunks(file_object, chunk_size=1024):
    """Lazy function (generator) to read a file piece by piece.
    Default chunk size: 1k."""
    while True:
        data = file_object.read(chunk_size)
        if not data:
            break
        yield data


with open('really_big_file.dat') as f:
    for piece in read_in_chunks(f):
        process_data(piece)

Another option is to use iter and a helper function:

f = open('really_big_file.dat')
def read1k():
    return f.read(1024)

for piece in iter(read1k, ''):
    process_data(piece)

If the file is line-based, the file object is already a lazy generator of lines:

for line in open('really_big_file.dat'):
    process_data(line)

You can use the following code.

import os

file_obj = open('big_file', 'rb')

open() returns a file object.

Then use os.stat to get the size of the file in bytes:

file_size = os.stat('big_file').st_size

for i in range((file_size + 1023) // 1024):  # ceiling division so the final, smaller chunk is not dropped
    print(file_obj.read(1024))

file_obj.close()

Refer to Python's official documentation: https://docs.python.org/3/library/functions.html#iter
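
As a quick illustration of that two-argument iter(callable, sentinel) form (the file name and the 4096-byte chunk size are placeholders, and process_data is assumed to be your own function): the sentinel has to match the read mode, '' for text files and b'' for binary files:

with open('really_big_file.dat', 'rb') as f:
    # iter() keeps calling f.read(4096) and stops as soon as it returns the sentinel b''
    for chunk in iter(lambda: f.read(4096), b''):
        process_data(chunk)  # placeholder for your own processing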

Perhaps this approach is more Pythonic:

"""A file object returned by open() is a iterator with
read method which could specify current read's block size
"""
with open('mydata.db', 'r') as f_in:
    block_read = partial(f_in.read, 1024 * 1024)
    block_iterator = iter(block_read, '')

    for index, block in enumerate(block_iterator, start=1):
        block = process_block(block)  # process your block data

        with open(f'{index}.txt', 'w') as f_out:
            f_out.write(block)