In general, is there an efficient way to know how many elements a Python iterator contains, without iterating through every element and counting?
Current answer
So, for those who want a summary of the discussion: the final scores for counting a 50-million-item generator expression with each of the following methods:

- len(list(gen))
- len([_ for _ in gen])
- sum(1 for _ in gen)
- ilen(gen) (from more_itertools)
- reduce(lambda c, i: c + 1, gen, 0)

ranked by execution performance (including memory consumption), may surprise you:
```
1: test_list.py:8: 0.492 KiB
   gen = (i for i in data*1000); t0 = monotonic(); len(list(gen))
   ('list, sec', 1.9684218849870376)

2: test_list_compr.py:8: 0.867 KiB
   gen = (i for i in data*1000); t0 = monotonic(); len([i for i in gen])
   ('list_compr, sec', 2.5885991149989422)

3: test_sum.py:8: 0.859 KiB
   gen = (i for i in data*1000); t0 = monotonic(); sum(1 for i in gen); t1 = monotonic()
   ('sum, sec', 3.441088170016883)

4: more_itertools/more.py:413: 1.266 KiB
       d = deque(enumerate(iterable, 1), maxlen=1)
   test_ilen.py:10: 0.875 KiB
   gen = (i for i in data*1000); t0 = monotonic(); ilen(gen)
   ('ilen, sec', 9.812256851990242)

5: test_reduce.py:8: 0.859 KiB
   gen = (i for i in data*1000); t0 = monotonic(); reduce(lambda counter, i: counter + 1, gen, 0)
   ('reduce, sec', 13.436614598002052)
```
So len(list(gen)) is the most frequently used method, and here it is also the fastest and the least memory-hungry.
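One caveat to that conclusion, shown in a minimal sketch below (data and the generator are placeholders mirroring the benchmark): len(list(gen)) materializes every element in memory at once, while sum(1 for _ in gen) counts in constant extra memory, so the right choice can depend on element size as well as speed.

```
from time import monotonic

data = list(range(50_000))           # hypothetical sample data
gen = (i for i in data * 1000)       # a 50-million-item generator, as in the benchmark

t0 = monotonic()
print(len(list(gen)), monotonic() - t0)       # fastest here, but builds the whole list

gen = (i for i in data * 1000)       # generators are single-use: recreate to count again
t0 = monotonic()
print(sum(1 for _ in gen), monotonic() - t0)  # slower, but O(1) extra memory
```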
Other answers
An iterator is just an object with a pointer to the next object, read from some kind of buffer or stream. It is like a linked list: you don't know how many elements you have until you traverse them. Iterators are efficient because all they do is tell you what comes next, by reference rather than by index (but, as you can see, you give up the ability to see how many entries remain).

As for your original question, the answer is still that in general there is no way to know the length of an iterator in Python.
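To make the linked-list analogy concrete, here is a minimal sketch (the Node and LinkedListIterator names are hypothetical) of the iterator protocol; note that nothing in it exposes a count:

```
class Node:
    """A singly linked list node: a value plus a pointer to the next node."""
    def __init__(self, value, nxt=None):
        self.value = value
        self.next = nxt


class LinkedListIterator:
    """Implements the iterator protocol: just __iter__ and __next__."""
    def __init__(self, head):
        self._current = head

    def __iter__(self):
        return self

    def __next__(self):
        if self._current is None:
            raise StopIteration       # hitting this is the only way to learn the length
        value = self._current.value
        self._current = self._current.next
        return value


it = LinkedListIterator(Node(1, Node(2, Node(3))))
# len(it) would raise TypeError: the protocol carries no notion of size.
print(sum(1 for _ in it))  # 3 -- counting requires consuming the iterator
```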
Given that your question is motivated by an application of the pysam library, I can give a more specific answer: I'm a contributor to PySAM, and the definitive answer is that SAM/BAM files do not provide an exact count of aligned reads. Nor is this information easily available from a BAM index file. The best one can do is to estimate the approximate number of alignments from the location of the file pointer after reading a number of alignments, extrapolating based on the total size of the file. This is enough to implement a progress bar, but not a method of counting alignments in constant time.
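As a rough sketch of that extrapolation idea (this is a generic, hypothetical version using only the standard library on an uncompressed, newline-delimited record file, not pysam's own API; with compressed formats such as BAM the byte ratio is only a heuristic):

```
import os

def estimate_total(path, records_read, f):
    """Extrapolate the total record count from the current byte offset.

    Assumes records are roughly uniform in size across the file, which is
    only an approximation -- good enough for a progress bar.
    """
    pos = f.tell()                      # bytes consumed so far
    if pos == 0:
        return None                     # nothing read yet, no estimate possible
    return int(records_read * os.path.getsize(path) / pos)

# Hypothetical usage: count newline-delimited records, reporting progress.
path = "alignments.txt"                 # placeholder file name
with open(path, "rb") as f:
    n = 0
    while f.readline():
        n += 1
        if n % 100_000 == 0:
            print(f"read {n} records, roughly {estimate_total(path, n, f)} in total")
```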
I decided to re-run the benchmark on a modern version of Python and found that the results were almost completely reversed.

I ran the following commands:
```
py -m timeit -n 10000000 -s "it = iter(range(1000000))" -s "from collections import deque" -s "from itertools import count" -s "def itlen(x):" -s " return len(tuple(x))" -- "itlen(it)"
py -m timeit -n 10000000 -s "it = iter(range(1000000))" -s "from collections import deque" -s "from itertools import count" -s "def itlen(x):" -s " return len(list(x))" -- "itlen(it)"
py -m timeit -n 10000000 -s "it = iter(range(1000000))" -s "from collections import deque" -s "from itertools import count" -s "def itlen(x):" -s " return sum(map(lambda i: 1, x))" -- "itlen(it)"
py -m timeit -n 10000000 -s "it = iter(range(1000000))" -s "from collections import deque" -s "from itertools import count" -s "def itlen(x):" -s " return sum(1 for _ in x)" -- "itlen(it)"
py -m timeit -n 10000000 -s "it = iter(range(1000000))" -s "from collections import deque" -s "from itertools import count" -s "def itlen(x):" -s " d = deque(enumerate(x, 1), maxlen=1)" -s " return d[0][0] if d else 0" -- "itlen(it)"
py -m timeit -n 10000000 -s "it = iter(range(1000000))" -s "from collections import deque" -s "from itertools import count" -s "def itlen(x):" -s " counter = count()" -s " deque(zip(x, counter), maxlen=0)" -s " return next(counter)" -- "itlen(it)"
```
They are equivalent to timing each of the following itlen*(it) functions:
```
it = iter(range(1000000))

from collections import deque
from itertools import count

def itlen1(x):
    return len(tuple(x))

def itlen2(x):
    return len(list(x))

def itlen3(x):
    return sum(map(lambda i: 1, x))

def itlen4(x):
    return sum(1 for _ in x)

def itlen5(x):
    d = deque(enumerate(x, 1), maxlen=1)
    return d[0][0] if d else 0

def itlen6(x):
    counter = count()
    deque(zip(x, counter), maxlen=0)
    return next(counter)
```
On a Windows 11 machine with Python 3.11, an AMD Ryzen 7 5800H, and 16 GB of RAM, I got the following output:
```
10000000 loops, best of 5: 103 nsec per loop
10000000 loops, best of 5: 107 nsec per loop
10000000 loops, best of 5: 138 nsec per loop
10000000 loops, best of 5: 164 nsec per loop
10000000 loops, best of 5: 338 nsec per loop
10000000 loops, best of 5: 425 nsec per loop
```
This suggests that len(list(x)) and len(tuple(x)) are tied for the lead, followed by sum(map(lambda i: 1, x)), then closely by sum(1 for _ in x); the other, more complex methods mentioned in other answers and/or used in the cardinality package are at least twice as slow.
This goes against the very definition of an iterator, which is a pointer to an object plus information about how to get to the next object.
An iterator does not know how many more times it will be able to iterate before terminating. That number could be infinite, so infinity may be your answer.
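For instance, itertools.count() from the standard library yields values forever, so there is no length to report:

```
from itertools import count

it = count()       # 0, 1, 2, ... with no end
print(next(it))    # 0
print(next(it))    # 1
# len(list(it)) would never return: the iterator is infinite.
```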
No. It's not possible.

Example:
```
import random

def gen(n):
    # range() here replaces the Python 2-only xrange() from the original
    for i in range(n):
        if random.randint(0, 1) == 0:
            yield i

iterator = gen(10)
```
The length of this iterator is not known until you iterate through it.
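Consuming it is the only way to find out, and the answer changes from run to run because of the random draws:

```
print(sum(1 for _ in gen(10)))  # e.g. 4
print(sum(1 for _ in gen(10)))  # e.g. 6: a fresh generator, a different length
```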