How do I get the line count of a large file in the most memory- and time-efficient way?

def file_len(filename):
    with open(filename) as f:
        for i, _ in enumerate(f):
            pass
    return i + 1

Current answer

How about this one-liner:

file_length = len(open('myfile.txt','r').read().split('\n'))

Timed on a 3,900-line file, this method takes only 0.003 seconds:

def c():
    import time
    s = time.time()
    file_length = len(open('myfile.txt','r').read().split('\n'))
    print(time.time() - s)
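Note that split('\n') yields one extra (empty) element when the file ends with a trailing newline, so this one-liner can over-count by one line. A quick sketch (the sample string is made up) shows the difference:

text = 'line 1\nline 2\n'        # a two-line file that ends with a newline

print(len(text.split('\n')))     # 3 -- the trailing newline produces an empty last element
print(text.count('\n'))          # 2 -- counting newline characters matches the line count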

Other answers

Simple methods:

1)

>>> f = len(open("myfile.txt").readlines())
>>> f
430

2)

>>> f = open("myfile.txt").read().count('\n')
>>> f
430

3)

num_lines = len(list(open('myfile.txt')))
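Both readlines() and list(open(...)) build the full list of lines in memory, which defeats the purpose for a really large file. A minimal, memory-flat variant (the helper name count_lines is just for illustration) iterates the file object lazily instead:

def count_lines(path):
    # The file object yields one line at a time, so memory use stays constant.
    with open(path) as f:
        return sum(1 for _ in f)

num_lines = count_lines('myfile.txt')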

Another possibility:

import subprocess

def num_lines_in_file(fpath):
    # Let wc do the counting; passing arguments as a list avoids shell quoting issues.
    return int(subprocess.check_output(['wc', '-l', fpath]).strip().split()[0])

Using Numba

We can use Numba to JIT (just-in-time) compile our function to machine code. def numbacountparallel(fname) below runs 2.8x faster than def file_len(fname) from the question.

Notes:

The OS had already cached the file in memory before the benchmarks were run, as I did not see much disk activity on my PC. The time would be much slower the first time the file is read, so the time advantage of using Numba would not be significant.

The JIT compilation takes extra time the first time the function is called.

This becomes useful if we want to do more than just count lines.

Cython is another option.

http://numba.pydata.org/

Conclusion

Since counting lines is IO-bound, use def file_len(fname) from the question unless you want to do more than just count lines.

import timeit

from numba import jit, prange
import numpy as np

from itertools import (takewhile,repeat)

FILE = '../data/us_confirmed.csv' # 40.6MB, 371755 line file
CR = ord('\n')


# Copied from the question above. Used as a benchmark
def file_len(fname):
    with open(fname) as f:
        for i, l in enumerate(f):
            pass
    return i + 1


# Copied from another answer. Used as a benchmark
def rawincount(filename):
    f = open(filename, 'rb')
    bufgen = takewhile(lambda x: x, (f.read(1024*1024*10) for _ in repeat(None)))
    return sum( buf.count(b'\n') for buf in bufgen )


# Single thread
@jit(nopython=True)
def numbacountsingle_chunk(bs):

    c = 0
    for i in range(len(bs)):
        if bs[i] == CR:
            c += 1

    return c


def numbacountsingle(filename):
    # Read the file in 10 MB binary chunks and let the JIT-compiled kernel count newlines.
    total = 0
    with open(filename, "rb") as f:
        while True:
            chunk = f.read(1024*1024*10)
            if not chunk:
                break
            total += numbacountsingle_chunk(chunk)

    return total


# Multi thread
@jit(nopython=True, parallel=True)
def numbacountparallel_chunk(bs):

    c = 0
    for i in prange(len(bs)):
        if bs[i] == CR:
            c += 1

    return c


def numbacountparallel(filename):
    # Same chunked read, but the chunk is viewed as a uint8 array for the parallel kernel.
    total = 0
    with open(filename, "rb") as f:
        while True:
            chunk = f.read(1024*1024*10)
            if not chunk:
                break
            total += numbacountparallel_chunk(np.frombuffer(chunk, dtype=np.uint8))

    return total

print('numbacountparallel')
print(numbacountparallel(FILE)) # This allows Numba to compile and cache the function without adding to the time.
print(timeit.Timer(lambda: numbacountparallel(FILE)).timeit(number=100))

print('\nnumbacountsingle')
print(numbacountsingle(FILE))
print(timeit.Timer(lambda: numbacountsingle(FILE)).timeit(number=100))

print('\nfile_len')
print(file_len(FILE))
print(timeit.Timer(lambda: file_len(FILE)).timeit(number=100))

print('\nrawincount')
print(rawincount(FILE))
print(timeit.Timer(lambda: rawincount(FILE)).timeit(number=100))

Time in seconds for 100 calls of each function:

numbacountparallel
371755
2.8007332000000003

numbacountsingle
371755
3.1508585999999994

file_len
371755
6.7945494

rawincount
371755
6.815438

I would use Python's file object method readlines, as follows:

with open(input_file) as foo:
    lines = len(foo.readlines())

This opens the file, creates a list of the lines in the file, counts the length of that list, saves it to a variable, and closes the file again.

There are already plenty of answers here, but unfortunately most of them are just tiny micro-optimizations of a problem that is barely optimizable...

In several projects I have worked on, line counting was a core function of the software, and processing a huge number of files as fast as possible was of paramount importance.

The main bottleneck of line counting is I/O access: you have to read every line in order to detect the line-return character, so there is simply no way around it. The second potential bottleneck is memory management: the more you load at once, the faster you can process, but this bottleneck is negligible compared to the first.
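To see how small that second bottleneck is in practice, a throwaway sketch like the one below (the file name and buffer sizes are placeholders) times the same chunked newline count with different read sizes; beyond a few hundred kilobytes the differences are usually lost in the noise:

import timeit

FILE = 'myfile.txt'  # placeholder path

def count_newlines(path, bufsize):
    # Read fixed-size binary chunks and count b'\n'; memory use stays around bufsize.
    lines = 0
    with open(path, 'rb') as f:
        while True:
            buf = f.read(bufsize)
            if not buf:
                break
            lines += buf.count(b'\n')
    return lines

for bufsize in (64 * 1024, 1024 * 1024, 16 * 1024 * 1024):
    print(bufsize, timeit.timeit(lambda: count_newlines(FILE, bufsize), number=10))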

Hence, apart from tiny optimizations such as disabling gc collection and other micro-managing tricks, there are three major ways to reduce the processing time of a line-counting function:

1) Hardware solution: the major and most obvious way is non-programmatic: buy a very fast SSD/flash hard drive. By far, this is how you can get the biggest speed boost.

2) Data preparation solution: if you generate the files you process, or can modify how they are generated, or if it is acceptable to pre-process them, first convert the line return to Unix style (\n), as this saves one character compared to Windows or MacOS styles (not a big saving, but an easy gain), and secondly, and most importantly, you can potentially write lines of fixed length. If you need variable length, you can always pad shorter lines. This way, you can calculate the number of lines instantly from the total file size, which is much faster to access (a sketch of this follows after the list). Often, the best solution to a problem is to pre-process it so that it better fits your end purpose.

3) Parallelization + hardware solution: if you can buy multiple hard disks (and if possible SSD flash disks), then you can go beyond the speed of a single disk by leveraging parallelization: store your files in a balanced way across the disks (easiest is to balance by total size) and read from all of them in parallel. You can then expect a speed boost roughly proportional to the number of disks you have. If buying multiple disks is not an option, parallelization likely won't help (except if your disk has multiple reading heads like some professional-grade disks; even then, the disk's internal cache memory and PCB circuitry will likely be a bottleneck and prevent you from fully using all heads in parallel, and you would have to write code specific to that hard drive, because you need to know the exact cluster mapping to store your files on clusters under different heads and read them with different heads afterwards). Indeed, it is commonly known that sequential reading is almost always faster than random reading, and parallelization on a single disk performs more like random reading than sequential reading (you can test your hard drive in both respects with CrystalDiskMark, for example).
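To make the fixed-length idea from the data preparation point concrete (the record length, helper name, and error handling here are illustrative assumptions, not part of the original answer), the count reduces to a single file-size lookup with no reading at all:

import os

RECORD_LEN = 80  # hypothetical fixed line length in bytes, including the '\n' terminator

def fixed_width_line_count(path, record_len=RECORD_LEN):
    # Every line is padded to record_len bytes, so the count is just size / record_len.
    size = os.path.getsize(path)
    assert size % record_len == 0, 'file is not made of fixed-length records'
    return size // record_len

Since os.path.getsize only reads file metadata, this runs in constant time regardless of how large the file is.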

If none of these is an option, then you can only rely on micro-managing tricks to improve the speed of your line-counting function, but don't expect anything really significant. Rather, you can expect that the time you spend tweaking will be disproportionate to the speed improvements you actually see.