In general, is there an efficient way to know how many elements are in an iterator in Python, without iterating through each one and counting?


Current answer

I decided to re-run the benchmark on a modern version of Python and found the results almost completely reversed.

I ran the following commands:

py -m timeit -n 10000000 -s "it = iter(range(1000000))" -s "from collections import deque" -s "from itertools import count" -s "def itlen(x):" -s "  return len(tuple(x))" -- "itlen(it)"
py -m timeit -n 10000000 -s "it = iter(range(1000000))" -s "from collections import deque" -s "from itertools import count" -s "def itlen(x):" -s "  return len(list(x))" -- "itlen(it)"
py -m timeit -n 10000000 -s "it = iter(range(1000000))" -s "from collections import deque" -s "from itertools import count" -s "def itlen(x):" -s "  return sum(map(lambda i: 1, x))" -- "itlen(it)"
py -m timeit -n 10000000 -s "it = iter(range(1000000))" -s "from collections import deque" -s "from itertools import count" -s "def itlen(x):" -s "  return sum(1 for _ in x)" -- "itlen(it)"
py -m timeit -n 10000000 -s "it = iter(range(1000000))" -s "from collections import deque" -s "from itertools import count" -s "def itlen(x):" -s "  d = deque(enumerate(x, 1), maxlen=1)" -s "  return d[0][0] if d else 0" -- "itlen(it)"
py -m timeit -n 10000000 -s "it = iter(range(1000000))" -s "from collections import deque" -s "from itertools import count" -s "def itlen(x):" -s "  counter = count()" -s "  deque(zip(x, counter), maxlen=0)" -s "  return next(counter)" -- "itlen(it)"

They are equivalent to timing each of the following itlen*(it) functions:

it = iter(range(1000000))
from collections import deque
from itertools import count

def itlen1(x):
  return len(tuple(x))
def itlen2(x):
  return len(list(x))
def itlen3(x):
  return sum(map(lambda i: 1, x))
def itlen4(x):
  return sum(1 for _ in x)
def itlen5(x):
  d = deque(enumerate(x, 1), maxlen=1)
  return d[0][0] if d else 0
def itlen6(x):
  counter = count()
  deque(zip(x, counter), maxlen=0)
  return next(counter)

On a Windows 11, Python 3.11 machine with an AMD Ryzen 7 5800H and 16 GB of RAM, I got the following output:

10000000 loops, best of 5: 103 nsec per loop
10000000 loops, best of 5: 107 nsec per loop
10000000 loops, best of 5: 138 nsec per loop
10000000 loops, best of 5: 164 nsec per loop
10000000 loops, best of 5: 338 nsec per loop
10000000 loops, best of 5: 425 nsec per loop

This shows that len(list(x)) and len(tuple(x)) are tied, followed by sum(map(lambda i: 1, x)), then closely by sum(1 for _ in x); the other, more complex methods mentioned in other answers and/or used in cardinality are at least twice as slow.

Other answers

A simple way is to use the built-in functions set() or list():

A: set(), in case there are no duplicate items in the iterator (fastest way)

iter = zip([1,2,3],['a','b','c'])
print(len(set(iter)))  # set(iter) = {(1, 'a'), (2, 'b'), (3, 'c')}
Out[45]: 3

or

iter = range(1,10)
print(len(set(iter)))  # set(iter) = {1, 2, 3, 4, 5, 6, 7, 8, 9}
Out[47]: 9

B: list(), in case there are duplicate items in the iterator

iter = (1,2,1,2,1,2,1,2)
print(len(list(iter)))  # list(iter) = [1, 2, 1, 2, 1, 2, 1, 2]
Out[49]: 8
# compare with set function
print(len(set(iter)))  # set(iter) = {1, 2}
Out[51]: 2
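
Note that building a set or list from an actual iterator also consumes it, so the count cannot be obtained and the items then iterated again afterwards. A small illustration:

it = iter([1, 2, 2, 3])
print(len(list(it)))  # 4
print(len(list(it)))  # 0 -- the iterator is already exhausted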


No, you can't (except when the type of a particular iterator implements some specific methods that make it possible).

In general, iterator items can only be counted by consuming the iterator. One of the most efficient ways:

import itertools
from collections import deque

def count_iter_items(iterable):
    """
    Consume an iterable not reading it into memory; return the number of items.
    """
    counter = itertools.count()
    deque(itertools.izip(iterable, counter), maxlen=0)  # (consume at C speed)
    return next(counter)

(For Python 3.x, replace itertools.izip with zip.)
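
For convenience, a self-contained Python 3 variant with a small usage example (the _py3 suffix and the sample generator are just for illustration):

from collections import deque
from itertools import count

def count_iter_items_py3(iterable):
    # Python 3 version of the function above: zip replaces itertools.izip.
    counter = count()
    deque(zip(iterable, counter), maxlen=0)  # (consume at C speed)
    return next(counter)

gen = (x * x for x in range(1000) if x % 3 == 0)
print(count_iter_items_py3(gen))  # 334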

This is theoretically impossible: it is, in fact, the halting problem.

Proof

Assume, for contradiction, that a function len(g) could determine the length (or infinite length) of any generator g.

For any program P, let us now convert P into a generator g(P): at every return or exit point in P, yield a value instead of returning it.

If len(g(P)) == infinity, P never halts.

This would solve the halting problem, which is known to be impossible (see Wikipedia). Contradiction.
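
As an illustration of the transformation (a hypothetical example; the Collatz iteration is used because whether it halts for every positive n is an open question):

def P(n):
    # A program whose halting behavior is the question of interest.
    while n != 1:
        n = 3 * n + 1 if n % 2 else n // 2
    return "halted"

def g_P(n):
    # The same program with its return point turned into a yield.
    while n != 1:
        n = 3 * n + 1 if n % 2 else n // 2
    yield "halted"

# A hypothetical len() over generators would then decide halting:
# len(g_P(n)) == 1 exactly when P(n) halts, and is infinite otherwise.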


Therefore, it is impossible to count the elements of a generic generator without iterating over it (== actually running the entire program).

More concretely, consider

def g():
    while True:
        yield "more?"

Its length is infinite. And there are infinitely many such generators.

As for your original question, the answer is still that, in general, there is no way to know the length of an iterator in Python.

Given that your question is motivated by an application of the pysam library, I can give a more specific answer: I'm a contributor to PySAM, and the definitive answer is that SAM/BAM files do not provide an exact count of aligned reads. Nor is this information easily available from a BAM index file. The best one can do is to estimate the approximate number of alignments by using the location of the file pointer after reading a number of alignments and extrapolating based on the total size of the file. This is enough to implement a progress bar, but not a method of counting alignments in constant time.
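
A rough sketch of that extrapolation idea, deliberately written against a plain binary file and a hypothetical parse_record callable rather than pysam's actual API:

import os

def estimate_record_count(path, parse_record, sample_size=10_000):
    # Read a sample of records, then extrapolate from the file position,
    # assuming roughly uniform record size. parse_record is a hypothetical
    # callable that reads one record from an open binary file handle and
    # returns None at end of file.
    total_bytes = os.path.getsize(path)
    with open(path, "rb") as fh:
        for n in range(1, sample_size + 1):
            if parse_record(fh) is None:  # hit EOF within the sample:
                return n - 1              # the count is exact
            bytes_read = fh.tell()
    return round(sample_size * total_bytes / bytes_read)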