I have a Python script which takes as input a list of integers, which I need to process four integers at a time. Unfortunately, I don't have control of the input, or I'd have it passed in as a list of four-element tuples. Currently, I'm iterating over it this way:

for i in range(0, len(ints), 4):
    # dummy op for example code
    foo += ints[i] * ints[i + 1] + ints[i + 2] * ints[i + 3]

It looks a lot like "C-think", though, which makes me suspect there's a more pythonic way of dealing with this situation. The list is discarded after iterating, so it needn't be preserved. Perhaps something like this would be better?

while ints:
    foo += ints[0] * ints[1] + ints[2] * ints[3]
    ints[0:4] = []

Still doesn't quite "feel" right, though. :-/

Related question: How do you split a list into evenly sized chunks in Python?


Current answer

With Python 3.8 you can use the walrus operator and itertools.islice:

from itertools import islice

list_ = list(range(10, 100))

def chunker(it, size):
    iterator = iter(it)
    while chunk := list(islice(iterator, size)):
        print(chunk)

In [2]: chunker(list_, 10)
[10, 11, 12, 13, 14, 15, 16, 17, 18, 19]
[20, 21, 22, 23, 24, 25, 26, 27, 28, 29]
[30, 31, 32, 33, 34, 35, 36, 37, 38, 39]
[40, 41, 42, 43, 44, 45, 46, 47, 48, 49]
[50, 51, 52, 53, 54, 55, 56, 57, 58, 59]
[60, 61, 62, 63, 64, 65, 66, 67, 68, 69]
[70, 71, 72, 73, 74, 75, 76, 77, 78, 79]
[80, 81, 82, 83, 84, 85, 86, 87, 88, 89]
[90, 91, 92, 93, 94, 95, 96, 97, 98, 99]
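
The chunker above prints each chunk; for the original four-at-a-time use case you would presumably yield instead. A minimal sketch of that variant (the chunks name and the sample data are just for illustration, and the unpacking assumes the list length is a multiple of 4, as the question's own code does):

from itertools import islice

def chunks(it, size):
    iterator = iter(it)
    # islice pulls at most `size` items per pass; the walrus assignment
    # ends the loop once the iterator is exhausted (empty list is falsy).
    while chunk := list(islice(iterator, size)):
        yield chunk

ints = [1, 2, 3, 4, 5, 6, 7, 8]
foo = 0
for a, b, c, d in chunks(ints, 4):
    foo += a * b + c * d
print(foo)  # 1*2 + 3*4 + 5*6 + 7*8 = 100

Unlike the slicing loop in the question, this never indexes past the end and works on any iterable, not just sequences.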

Other answers

There doesn't seem to be a pretty way to do this. Here is a page that has a number of methods, including:

def split_seq(seq, size):
    newseq = []
    splitsize = 1.0/size*len(seq)
    for i in range(size):
        newseq.append(seq[int(round(i*splitsize)):int(round((i+1)*splitsize))])
    return newseq
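
Note that split_seq divides the sequence into size pieces of roughly equal length, rather than into pieces of length size, so it answers a slightly different question than the one asked. A quick check (hypothetical sample data):

>>> split_seq([1, 2, 3, 4, 5, 6, 7, 8, 9, 10], 3)
[[1, 2, 3], [4, 5, 6, 7], [8, 9, 10]]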

In my particular case, I needed to pad the items, repeating the last element until the chunk reaches the requested size, so I changed that answer to fit my needs.

Example input/output with a chunk size of 4:

Input = [1,2,3,4,5,6,7,8]
Output= [[1,2,3,4], [5,6,7,8]]

Input = [1,2,3,4,5,6,7]
Output= [[1,2,3,4], [5,6,7,7]]

Input = [1,2,3,4,5]
Output= [[1,2,3,4], [5,5,5,5]]

def chunker(seq, size):
    res = []
    for el in seq:
        res.append(el)
        if len(res) == size:
            yield res
            res = []
    if res:
        res = res + (size - len(res)) * [res[-1]] 
        yield res
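
A quick sanity check of the padded chunker against the examples above (it returns a generator, hence the list call):

>>> list(chunker([1, 2, 3, 4, 5], 4))
[[1, 2, 3, 4], [5, 5, 5, 5]]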

It's easy to make itertools.groupby work for you, getting an iterable of iterables without creating any temporary lists:

from itertools import count, groupby

groupby(iterable, (lambda x, y: (lambda z: next(x) // y))(count(), 100))

Don't be scared off by the nested lambdas: the outer lambda runs just once, putting the count() generator and the constant 100 into the scope of the inner lambda.

I use this to send chunks of rows to MySQL:

for k, v in groupby(bigdata, (lambda x, y: (lambda z: next(x) // y))(count(), 100)):
    cursor.executemany(sql, v)
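
If the nested lambdas are hard to read, here is an equivalent sketch with a named factory function (make_chunk_key is a hypothetical name) that makes the trick explicit:

from itertools import count, groupby

def make_chunk_key(size):
    counter = count()
    # Each call to the key function advances the shared counter, so
    # items 0..size-1 get key 0, the next `size` items get key 1, etc.
    # groupby then groups consecutive items with equal keys.
    return lambda _item: next(counter) // size

for _key, chunk in groupby(range(10), make_chunk_key(3)):
    print(list(chunk))
# [0, 1, 2]
# [3, 4, 5]
# [6, 7, 8]
# [9]

Remember that each group is consumed lazily and is invalidated once you advance past it, which is exactly why this works without building temporary lists.
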
def group_by(iterable, size):
    """Group an iterable into lists that don't exceed the size given.

    >>> list(group_by([1,2,3,4,5], 2))
    [[1, 2], [3, 4], [5]]

    """
    sublist = []

    for index, item in enumerate(iterable):
        if index > 0 and index % size == 0:
            yield sublist
            sublist = []

        sublist.append(item)

    if sublist:
        yield sublist

Here is my go at it; it works on lists, iterators, and ranges... lazily:

def chunker(it, size):
    rv = []
    for i, el in enumerate(it, 1):
        rv.append(el)
        if i % size == 0:
            yield rv
            rv = []
    if rv:
        yield rv

It became almost a one-liner ;(

In [95]: list(chunker(range(9),2) )                                                                                                                                          
Out[95]: [[0, 1], [2, 3], [4, 5], [6, 7], [8]]

In [96]: list(chunker([1,2,3,4,5],2) )                                                                                                                                       
Out[96]: [[1, 2], [3, 4], [5]]

In [97]: list(chunker(iter(range(9)),2) )                                                                                                                                    
Out[97]: [[0, 1], [2, 3], [4, 5], [6, 7], [8]]

In [98]: list(chunker(range(9),25) )                                                                                                                                         
Out[98]: [[0, 1, 2, 3, 4, 5, 6, 7, 8]]

In [99]: list(chunker(range(9),1) )                                                                                                                                          
Out[99]: [[0], [1], [2], [3], [4], [5], [6], [7], [8]]

In [101]: %timeit list(chunker(range(101),2) )                                                                                                                               
11.3 µs ± 68.2 ns per loop (mean ± std. dev. of 7 runs, 100000 loops each)