In general, is there an efficient way to know how many elements are in a Python iterator, without iterating through each one and counting?


Current answer

Suppose you want to count the number of items without traversing the iterator, so that it is not exhausted and can be used again later. This is possible with copy or deepcopy:

import copy

def get_iter_len(iterator):
    # Count the items of a shallow copy so the original iterator is left untouched.
    return sum(1 for _ in copy.copy(iterator))

###############################################

iterator = range(0, 10)
print(get_iter_len(iterator))

if len(tuple(iterator)) > 1:
    print("Finding the length did not exhaust the iterator!")
else:
    print("oh no! it's all gone")

The output is "Finding the length did not exhaust the iterator!"

Optionally (and unadvisedly), you can shadow the built-in len function as follows:

import copy

def len(obj, *, len=len):
    # The keyword-only default keeps a reference to the real built-in len.
    if hasattr(obj, "__len__"):
        return len(obj)
    if hasattr(obj, "__next__"):
        # Looks like an iterator: count a copy so the original is preserved.
        return sum(1 for _ in copy.copy(obj))
    return len(obj)
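A quick usage sketch (my example, not part of the original answer):

```python
print(len([1, 2, 3]))   # 3 -- objects with __len__ use the real built-in len
it = iter([1, 2, 3, 4, 5])
print(len(it))          # 5 -- the iterator is copied, then the copy is counted
print(list(it))         # [1, 2, 3, 4, 5] -- the original iterator is still intact
```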

Other answers

There are two ways to get the length of "something" on a computer.

The first is to store a count; this requires anything that touches the file/data to update it (or a class that only exposes interfaces, which ultimately comes down to the same thing).

The other way is to iterate over it and count how big it is.
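As a minimal sketch of the first idea (the CountingIterator wrapper below is a hypothetical helper of mine, not something from these answers or from any library): an object that maintains the count itself as items pass through it, so the number of items consumed so far is always available without a separate pass.

```python
class CountingIterator:
    """Wrap an iterable and keep a running count of the items yielded."""

    def __init__(self, iterable):
        self._it = iter(iterable)
        self.count = 0              # number of items yielded so far

    def __iter__(self):
        return self

    def __next__(self):
        value = next(self._it)      # StopIteration propagates once exhausted
        self.count += 1
        return value

it = CountingIterator(x * x for x in range(1000))
for _ in it:
    pass
print(it.count)  # 1000, but only known after the full traversal
```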

A naive benchmark of the traverse-and-count approach:

import collections
import itertools

def count_iter_items(iterable):
    # Pair every item with a counter and throw the pairs away; the counter
    # ends up holding the number of items. (itertools.izip is Python 2;
    # on Python 3 use the built-in zip.)
    counter = itertools.count()
    collections.deque(itertools.izip(iterable, counter), maxlen=0)
    return next(counter)

def count_lencheck(iterable):
    # Use len() when the object supports it, otherwise keep only the last
    # (index, item) pair produced by enumerate.
    if hasattr(iterable, '__len__'):
        return len(iterable)

    d = collections.deque(enumerate(iterable, 1), maxlen=1)
    return d[0][0] if d else 0

def count_sum(iterable):
    return sum(1 for _ in iterable)

# xrange is Python 2; on Python 3 use range.
iter = lambda y: (x for x in xrange(y))

%timeit count_iter_items(iter(1000))
%timeit count_lencheck(iter(1000))
%timeit count_sum(iter(1000))

Results:

10000 loops, best of 3: 37.2 µs per loop
10000 loops, best of 3: 47.6 µs per loop
10000 loops, best of 3: 61 µs per loop

In other words, the simple count_iter_items is the way to go.

Adjusted for Python 3:

61.9 µs ± 275 ns per loop (mean ± std. dev. of 7 runs, 10000 loops each)
74.4 µs ± 190 ns per loop (mean ± std. dev. of 7 runs, 10000 loops each)
82.6 µs ± 164 ns per loop (mean ± std. dev. of 7 runs, 10000 loops each)

As for your original question, the answer is still that there is, in general, no way to know the length of an iterator in Python.

Given that your question is motivated by an application of the pysam library, I can give a more specific answer: I'm a contributor to PySAM, and the definitive answer is that SAM/BAM files do not provide an exact count of aligned reads. Nor is this information easily available from a BAM index file. The best one can do is to estimate the approximate number of alignments by using the location of the file pointer after reading a number of alignments and extrapolating based on the total size of the file. This is enough to implement a progress bar, but not a method of counting alignments in constant time.
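A rough sketch of that estimation idea, assuming a local BAM file such as "example.bam"; the 50,000-read sample size and the `>> 16` shift (to turn pysam's BGZF virtual offset into an approximate compressed-file offset) are my assumptions, not something stated in the answer:

```python
import os
import pysam

def estimate_alignment_count(path, sample=50_000):
    # Read a fixed number of alignments, see how far the file pointer moved,
    # and extrapolate to the total file size. Only an approximation; good
    # enough for a progress bar, not for an exact count.
    total_bytes = os.path.getsize(path)
    with pysam.AlignmentFile(path, "rb") as bam:
        seen = 0
        for _ in bam:
            seen += 1
            if seen >= sample:
                break
        if seen == 0:
            return 0
        # tell() returns a BGZF virtual offset; the upper bits give the
        # offset into the compressed file (assumption: plain BAM input).
        consumed = bam.tell() >> 16
        if consumed == 0:
            return seen
        return int(seen * total_bytes / consumed)

print(estimate_alignment_count("example.bam"))  # hypothetical file
```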

No. It's not possible.

Example:

import random

def gen(n):
    # Yield roughly half of the numbers in range(n), chosen at random.
    for i in range(n):
        if random.randint(0, 1) == 0:
            yield i

iterator = gen(10)

The length of the iterator is unknown until you iterate through it.
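For example (my usage sketch), every fresh generator from gen(10) yields a different, random number of items, and the only way to find out how many is to consume it:

```python
print(sum(1 for _ in gen(10)))  # e.g. 4
print(sum(1 for _ in gen(10)))  # e.g. 6 -- a new generator, a new unknown count
```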

So, for those who want a summary of that discussion: the final top scores for counting a 50-million-item generator expression, using

- len(list(gen)),
- len([_ for _ in gen]),
- sum(1 for _ in gen),
- ilen(gen) (from more_itertools),
- reduce(lambda c, i: c + 1, gen, 0),

sorted by execution performance (including memory consumption), may surprise you:

```
1: test_list.py:8: 0.492 KiB
   gen = (i for i in data*1000); t0 = monotonic(); len(list(gen))
('list, sec', 1.9684218849870376)

2: test_list_compr.py:8: 0.867 KiB
   gen = (i for i in data*1000); t0 = monotonic(); len([i for i in gen])
('list_compr, sec', 2.5885991149989422)

3: test_sum.py:8: 0.859 KiB
   gen = (i for i in data*1000); t0 = monotonic(); sum(1 for i in gen); t1 = monotonic()
('sum, sec', 3.441088170016883)

4: more_itertools/more.py:413: 1.266 KiB
       d = deque(enumerate(iterable, 1), maxlen=1)
   test_ilen.py:10: 0.875 KiB
   gen = (i for i in data*1000); t0 = monotonic(); ilen(gen)
('ilen, sec', 9.812256851990242)

5: test_reduce.py:8: 0.859 KiB
   gen = (i for i in data*1000); t0 = monotonic(); reduce(lambda counter, i: counter + 1, gen, 0)
('reduce, sec', 13.436614598002052)
```

So, of the methods measured, len(list(gen)) is the fastest and also uses the least memory.
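As a small sketch of how one might reproduce this kind of comparison (my own setup: a 10,000-element generator and timeit rather than the original 50-million-element run; ilen from more_itertools is omitted to keep it dependency-free):

```python
import timeit
from functools import reduce

def make_gen():
    # Fresh generator for every measurement run.
    return (i for i in range(10_000))

methods = {
    "len(list(gen))":        lambda: len(list(make_gen())),
    "len([_ for _ in gen])": lambda: len([_ for _ in make_gen()]),
    "sum(1 for _ in gen)":   lambda: sum(1 for _ in make_gen()),
    "reduce(...)":           lambda: reduce(lambda c, _: c + 1, make_gen(), 0),
}

for name, fn in methods.items():
    print(f"{name:24s} {timeit.timeit(fn, number=100):.3f} s")
```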