I have a very large 4 GB file, and when I try to read it my computer hangs. So I want to read it piece by piece: after processing each piece, store the processed piece in another file, and then read the next piece.

Is there a way to yield these pieces?

I would love to have a lazy method.


Current answer

I think we can write it like this:

def read_file(path, block_size=1024):
    """Lazily yield the file's contents in blocks of block_size bytes."""
    with open(path, 'rb') as f:
        while True:
            piece = f.read(block_size)
            if piece:
                yield piece
            else:
                return

for piece in read_file(path):
    process_piece(piece)
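
To match the original goal of writing each processed piece to another file, a minimal sketch building on read_file() could look like the following; the file names are arbitrary and process_piece() here is only a stand-in transformation:

def process_piece(piece):
    # Stand-in processing step; replace with the real transformation.
    return piece.upper()

with open('output.bin', 'wb') as out:
    for piece in read_file('input.bin', block_size=1024 * 1024):
        out.write(process_piece(piece))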

Other answers

See the official Python documentation for iter(): https://docs.python.org/3/library/functions.html#iter

Maybe this approach is more Pythonic:

"""A file object returned by open() is a iterator with
read method which could specify current read's block size
"""
with open('mydata.db', 'r') as f_in:
    block_read = partial(f_in.read, 1024 * 1024)
    block_iterator = iter(block_read, '')

    for index, block in enumerate(block_iterator, start=1):
        block = process_block(block)  # process your block data

        with open(f'{index}.txt', 'w') as f_out:
            f_out.write(block)
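
One caveat not stated above: the sentinel passed to iter() must match the file's mode. If 'mydata.db' were opened in binary mode, the sentinel would have to be b'' instead of ''; a hedged variant of the same idea:

from functools import partial

with open('mydata.db', 'rb') as f_in:
    for block in iter(partial(f_in.read, 1024 * 1024), b''):
        pass  # process the raw bytes block here
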
f = ...  # file-like object, i.e. one supporting a read(size) method and
         # returning an empty string '' when there is nothing left to read

def chunked(file, chunk_size):
    """Yield chunk_size-sized chunks until read() returns the empty string."""
    return iter(lambda: file.read(chunk_size), '')

for data in chunked(f, 65536):
    process(data)  # process the data
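
As a quick, self-contained illustration of how chunked() behaves, io.StringIO can stand in for a real text-mode file (the sample string and chunk size are arbitrary):

import io

sample = io.StringIO('abcdefghij')
print(list(chunked(sample, 4)))  # ['abcd', 'efgh', 'ij']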

Update: this approach is best explained in https://stackoverflow.com/a/4566523/38592

file.readlines() accepts an optional sizehint argument: instead of reading to the end of the file, it reads whole lines totalling approximately that many bytes.

BUF_SIZE = 65536  # example value: roughly how many bytes of lines to read per call

bigfile = open('bigfilename', 'r')
tmp_lines = bigfile.readlines(BUF_SIZE)
while tmp_lines:
    process(tmp_lines)
    tmp_lines = bigfile.readlines(BUF_SIZE)
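
On Python 3.8+, the same loop can be written without repeating the readlines() call by using an assignment expression; a sketch assuming the same BUF_SIZE and process as above:

with open('bigfilename', 'r') as bigfile:
    while tmp_lines := bigfile.readlines(BUF_SIZE):
        process(tmp_lines)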

I'm not allowed to comment due to my low reputation, but SilentGhost's solution should be much easier with file.readlines([sizehint])

Python file methods

Edit: SilentGhost is right, but this should be faster than:

s = "" 
for i in xrange(100): 
   s += file.next()

I'm in a somewhat similar situation. It's not clear whether you know the chunk size in bytes; I usually don't, but the number of records (lines) required is known:

def get_line():
    with open('4gb_file') as file:
        for i in file:
            yield i

lines_required = 100
gen = get_line()
chunk = [i for i, j in zip(gen, range(lines_required))]

Update: Thanks nosklo. That's what I meant. It almost works, except that it loses a line "between" chunks.

chunk = [next(gen) for i in range(lines_required)]

does the trick without losing any lines, but it doesn't look very nice.
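
For what it's worth, a common way to take a fixed number of lines from a generator without that workaround is itertools.islice; a sketch reusing the gen and lines_required names from above:

from itertools import islice

chunk = list(islice(gen, lines_required))  # next lines_required lines, none skipped

Each call to islice(gen, lines_required) consumes the next block of lines, so repeating it walks through the file chunk by chunk.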