How can I get the line count of a large file in the most memory- and time-efficient way?
def file_len(filename):
    with open(filename) as f:
        for i, _ in enumerate(f):
            pass
    return i + 1
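A quick, self-contained usage sketch of the approach above; initialising i before the loop is an extra guard for the empty-file case, where the loop body never runs and i would otherwise be undefined (the file name is just a placeholder):

def file_len(filename):
    # enumerate yields (index, line) pairs lazily, so only one line is in memory at a time
    i = -1  # stays -1 for an empty file, making the function return 0
    with open(filename) as f:
        for i, _ in enumerate(f):
            pass
    return i + 1

print(file_len('my_file.txt'))  # placeholder path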
Current answer
Create an executable script file count.py:

#!/usr/bin/python
import sys

count = 0
for line in sys.stdin:
    count += 1
print(count)

Then pipe the file's contents into the Python script: cat huge.txt | ./count.py. Piping also works in PowerShell, so either way you end up with the line count.
For me, on Linux this was about 30% faster than the straightforward in-process loop:

count = 0
with open('huge.txt') as f:
    for line in f:
        count += 1
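As a rough way to check such a claim yourself, here is a minimal timing sketch for the in-process loop, assuming a file called huge.txt exists; the piped count.py variant can be timed with the shell's time utility instead:

import time

def count_lines(path):
    # Plain in-process baseline: iterate over the file object line by line.
    count = 0
    with open(path) as f:
        for _ in f:
            count += 1
    return count

start = time.perf_counter()
n = count_lines('huge.txt')  # placeholder large file
elapsed = time.perf_counter() - start
print(f'{n} lines counted in {elapsed:.3f} s')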
Other answers
count = max(enumerate(open(file)))[0] + 1

Here enumerate pairs each line with its index, and max picks the pair with the largest index, i.e. the last line; adding 1 turns that zero-based index into the line count.
Why wouldn't the following work?

import sys

# input comes from STDIN
file = sys.stdin
data = file.readlines()

# get total number of lines in file
lines = len(data)
print(lines)

In this case, the len function simply uses the list of input lines to determine the count.
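The same readlines idea applied directly to a file on disk (the name is a placeholder); note that readlines materialises every line in a list, so memory use grows with the file size, which is exactly what the question is trying to avoid:

# readlines() builds a list of all lines, so the whole file ends up in memory
with open('my_file.txt') as f:  # placeholder file name
    data = f.readlines()
print(len(data))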
Kyle's answer

num_lines = sum(1 for line in open('my_file.txt'))

is probably your best bet; an alternative is

num_lines = len(open('my_file.txt').read().splitlines())

Here is a comparison of the performance of the two:
In [20]: timeit sum(1 for line in open('Charts.ipynb'))
100000 loops, best of 3: 9.79 µs per loop
In [21]: timeit len(open('Charts.ipynb').read().splitlines())
100000 loops, best of 3: 12 µs per loop
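If you want to reproduce this kind of comparison outside IPython, a minimal sketch with the standard timeit module looks roughly like this (the file name is taken from the example above and is only a placeholder):

import timeit

setup = "fname = 'Charts.ipynb'"  # any existing text file will do

t_sum = timeit.timeit("sum(1 for line in open(fname))", setup=setup, number=1000)
t_splitlines = timeit.timeit("len(open(fname).read().splitlines())", setup=setup, number=1000)

print(f'sum over generator : {t_sum:.4f} s for 1000 runs')
print(f'read + splitlines  : {t_splitlines:.4f} s for 1000 runs')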
def file_len(full_path):
    """Count the number of lines in a file."""
    f = open(full_path)
    nr_of_lines = sum(1 for line in f)
    f.close()
    return nr_of_lines
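A small variant of the same helper using a context manager, so the file handle is closed even if counting raises an exception; the behaviour is otherwise identical:

def file_len(full_path):
    """Count the number of lines in a file."""
    # The with-statement guarantees the file is closed when the block exits.
    with open(full_path) as f:
        return sum(1 for line in f)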
A one-liner similar to this answer's bash solution, using the modern subprocess.check_output function:

import subprocess

def line_count(filename):
    return int(subprocess.check_output(['wc', '-l', filename]).split()[0])
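A quick usage sketch, assuming a Unix-like system where the external wc tool is available and a file named huge.txt exists:

import subprocess

def line_count(filename):
    # wc -l prints "<count> <filename>"; take the first field and convert it to int
    return int(subprocess.check_output(['wc', '-l', filename]).split()[0])

print(line_count('huge.txt'))  # placeholder large file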