Code like this often happens:

l = []
while foo:
    # baz
    l.append(bar)
    # qux

This is really slow if you're about to append thousands of elements to your list, because the list will have to be constantly resized to fit the new elements.
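(For what it's worth, you can watch CPython over-allocating in steps as a list grows, for example with sys.getsizeof:)

import sys

l = []
last = sys.getsizeof(l)
for i in range(64):
    l.append(i)
    size = sys.getsizeof(l)
    if size != last:
        # The reported size only jumps when CPython allocates a
        # new, larger block of element slots for the list.
        print(len(l), size)
        last = size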

In Java, you can create an ArrayList with an initial capacity. If you have some idea how big your list will be, this will be a lot more efficient.

I appreciate that code like this can often be refactored into a list comprehension. If the for/while loop is very complicated, though, this is unfeasible. Is there an equivalent for us Python programmers?
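To illustrate the kind of refactor I mean: a trivial loop like the one below collapses into a comprehension, but my real loop does not.

l = []
for bar in range(10):
    l.append(bar * 2)

# equivalent comprehension -- no explicit append at all
l = [bar * 2 for bar in range(10)]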


Current answer

Python lists have no built-in pre-allocation. If you really need to make a list, and need to avoid the overhead of appending (and you should verify that you do), you can do this:

l = [None] * 1000 # Make a list of 1000 None's
for i in xrange(1000):  # Python 2; use range() on Python 3
    # baz
    l[i] = bar
    # qux

Maybe you could avoid the list altogether by using a generator instead:

def my_things():
    while foo:
        #baz
        yield bar
        #qux

for thing in my_things():
    # do something with thing

This way, the list isn't stored in memory at all; it's merely generated as needed.
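If all you ultimately need is an aggregate over the items, you never have to build the list at all. A minimal sketch (the names here are just placeholders):

def squares(limit):
    n = 0
    while n < limit:
        yield n * n
        n += 1

# Only one value exists at a time; no million-element list is ever built.
total = sum(squares(1000000))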

Other answers

I ran S.Lott's code and produced the same 10% performance increase by preallocating. I tried Ned Batchelder's idea using a generator and saw the generator perform better than doAllocate. For my project the 10% improvement matters, so thanks to everyone, as this helped a bunch.

# Python 2 code; assumes a print_timing decorator like the one defined further down.
def doAppend(size=10000):
    result = []
    for i in range(size):
        message = "some unique object %d" % ( i, )
        result.append(message)
    return result

def doAllocate(size=10000):
    result = size*[None]
    for i in range(size):
        message = "some unique object %d" % ( i, )
        result[i] = message
    return result

def doGen(size=10000):
    return list("some unique object %d" % ( i, ) for i in xrange(size))

size = 1000
@print_timing
def testAppend():
    for i in xrange(size):
        doAppend()

@print_timing
def testAlloc():
    for i in xrange(size):
        doAllocate()

@print_timing
def testGen():
    for i in xrange(size):
        doGen()


testAppend()
testAlloc()
testGen()

Output

testAppend took 14440.000ms
testAlloc took 13580.000ms
testGen took 13430.000ms

Short version: use

pre_allocated_list = [None] * size

to preallocate a list (that is, to be able to address "size" elements of the list instead of gradually forming the list by appending). This operation is very fast, even on big lists. Allocating new objects that will later be assigned to list elements will take far longer and will be the bottleneck in your program, performance-wise.

Long version:

I think initialization time should be taken into account.

Since in Python everything is a reference, it doesn't matter whether you set each element to None or to some string; either way it's only a reference. It will take longer, though, if you want to create a new object for each element to reference.

For Python 3.2:

import time
import copy

def print_timing (func):
  def wrapper (*arg):
    t1 = time.time()
    res = func (*arg)
    t2 = time.time ()
    print ("{} took {} ms".format (func.__name__, (t2 - t1) * 1000.0))
    return res

  return wrapper

@print_timing
def prealloc_array (size, init = None, cp = True, cpmethod = copy.deepcopy, cpargs = (), use_num = False):
  result = [None] * size
  if init is not None:
    if cp:
      for i in range (size):
          result[i] = init
    else:
      if use_num:
        for i in range (size):
            result[i] = cpmethod (i)
      else:
        for i in range (size):
            result[i] = cpmethod (cpargs)
  return result

@print_timing
def prealloc_array_by_appending (size):
  result = []
  for i in range (size):
    result.append (None)
  return result

@print_timing
def prealloc_array_by_extending (size):
  result = []
  none_list = [None]
  for i in range (size):
    result.extend (none_list)
  return result

def main ():
  n = 1000000
  x = prealloc_array_by_appending(n)
  y = prealloc_array_by_extending(n)
  a = prealloc_array(n, None)
  b = prealloc_array(n, "content", True)
  c = prealloc_array(n, "content", False, "some object {}".format, ("blah"), False)
  d = prealloc_array(n, "content", False, "some object {}".format, None, True)
  e = prealloc_array(n, "content", False, copy.deepcopy, "a", False)
  f = prealloc_array(n, "content", False, copy.deepcopy, (), False)
  g = prealloc_array(n, "content", False, copy.deepcopy, [], False)

  print ("x[5] = {}".format (x[5]))
  print ("y[5] = {}".format (y[5]))
  print ("a[5] = {}".format (a[5]))
  print ("b[5] = {}".format (b[5]))
  print ("c[5] = {}".format (c[5]))
  print ("d[5] = {}".format (d[5]))
  print ("e[5] = {}".format (e[5]))
  print ("f[5] = {}".format (f[5]))
  print ("g[5] = {}".format (g[5]))

if __name__ == '__main__':
  main()

Evaluation:

prealloc_array_by_appending took 118.00003051757812 ms
prealloc_array_by_extending took 102.99992561340332 ms
prealloc_array took 3.000020980834961 ms
prealloc_array took 49.00002479553223 ms
prealloc_array took 316.9999122619629 ms
prealloc_array took 473.00004959106445 ms
prealloc_array took 1677.9999732971191 ms
prealloc_array took 2729.999780654907 ms
prealloc_array took 3001.999855041504 ms
x[5] = None
y[5] = None
a[5] = None
b[5] = content
c[5] = some object blah
d[5] = some object 5
e[5] = a
f[5] = []
g[5] = ()

As you can see, just making a big list of references to the same None object takes very little time.

Extending or appending takes longer (I didn't average anything, but after running this a few times I can tell you that extending and appending take roughly the same time).

Allocating a new object for each element is what takes the most time. And S.Lott's answer does that: it formats a new string every time. That isn't strictly required: if you want to preallocate some space, just make a list of None, then assign data to the list elements at will. Either way it takes more time to generate the data than to append/extend a list, whether you generate it while creating the list or afterwards. But if you want a sparsely-populated list, then starting with a list of None is definitely faster.

For some applications, a dictionary may be what you are looking for. For example, in the find_totients method I found it more convenient to use a dictionary since I didn't have a zero index.

import math

def totient(n):
    totient = 0

    if n == 1:
        totient = 1
    else:
        for i in range(1, n):
            if math.gcd(i, n) == 1:
                totient += 1
    return totient

def find_totients(max):
    totients = dict()
    for i in range(1,max+1):
        totients[i] = totient(i)

    print('Totients:')
    for i in range(1,max+1):
        print(i,totients[i])

This problem could also be solved with a preallocated list:

def find_totients(max):
    totients = [None] * (max + 1)
    for i in range(1,max+1):
        totients[i] = totient(i)

    print('Totients:')
    for i in range(1,max+1):
        print(i,totients[i])

I feel this is less elegant and more prone to bugs, because I'm storing None, which could throw an exception if I accidentally use one the wrong way, and because I need to think about edge cases that the map lets me avoid.

It's true the dictionary won't be as efficient, but as others have commented, small differences in speed are not always worth significant maintenance hazards.
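If you want to see what that difference actually costs on your machine, a rough sketch along these lines will do (the helper names here are made up; they simply mirror the two variants above):

import math
import timeit

def totient(n):
    if n == 1:
        return 1
    return sum(1 for i in range(1, n) if math.gcd(i, n) == 1)

def find_totients_dict(limit):
    return {i: totient(i) for i in range(1, limit + 1)}

def find_totients_list(limit):
    totients = [None] * (limit + 1)  # slot 0 is never used
    for i in range(1, limit + 1):
        totients[i] = totient(i)
    return totients

# Time both variants; the container bookkeeping is usually dwarfed
# by the cost of computing the totients themselves.
print(timeit.timeit(lambda: find_totients_dict(500), number=10))
print(timeit.timeit(lambda: find_totients_list(500), number=10))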

The Pythonic way of doing this is:

x = [None] * numElements

or whatever default value you wish to prefill with, e.g.

bottles = [Beer()] * 99
sea = [Fish()] * many
vegetarianPizzas = [None] * peopleOrderingPizzaNotQuiche

(Caveat: the [Beer()] * 99 syntax creates one Beer and then populates an array with 99 references to that same, single instance.)
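That caveat bites as soon as the prefilled value is mutable; a quick sketch of the trap and the usual fix:

rows = [[]] * 3                 # three references to the SAME list
rows[0].append("x")
print(rows)                     # [['x'], ['x'], ['x']]

rows = [[] for _ in range(3)]   # three independent lists
rows[0].append("x")
print(rows)                     # [['x'], [], []]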

Python's default approach can be pretty efficient, although that efficiency decays as you increase the number of elements.

Comparing

import time

class Timer(object):
    def __enter__(self):
        self.start = time.time()
        return self

    def __exit__(self, *args):
        end = time.time()
        secs = end - self.start
        msecs = secs * 1000  # Millisecs
        print('%fms' % msecs)

Elements   = 100000
Iterations = 144

print('Elements: %d, Iterations: %d' % (Elements, Iterations))


def doAppend():
    result = []
    i = 0
    while i < Elements:
        result.append(i)
        i += 1

def doAllocate():
    result = [None] * Elements
    i = 0
    while i < Elements:
        result[i] = i
        i += 1

def doGenerator():
    return list(i for i in range(Elements))


def test(name, fn):
    print("%s: " % name, end="")
    with Timer() as t:
        x = 0
        while x < Iterations:
            fn()
            x += 1


test('doAppend', doAppend)
test('doAllocate', doAllocate)
test('doGenerator', doGenerator)

with

#include <vector>
typedef std::vector<unsigned int> Vec;

static const unsigned int Elements = 100000;
static const unsigned int Iterations = 144;

void doAppend()
{
    Vec v;
    for (unsigned int i = 0; i < Elements; ++i) {
        v.push_back(i);
    }
}

void doReserve()
{
    Vec v;
    v.reserve(Elements);
    for (unsigned int i = 0; i < Elements; ++i) {
        v.push_back(i);
    }
}

void doAllocate()
{
    Vec v;
    v.resize(Elements);
    for (unsigned int i = 0; i < Elements; ++i) {
        v[i] = i;
    }
}

#include <iostream>
#include <chrono>
using namespace std;

void test(const char* name, void(*fn)(void))
{
    cout << name << ": ";

    auto start = chrono::high_resolution_clock::now();
    for (unsigned int i = 0; i < Iterations; ++i) {
        fn();
    }
    auto end = chrono::high_resolution_clock::now();

    auto elapsed = end - start;
    cout << chrono::duration<double, milli>(elapsed).count() << "ms\n";
}

int main()
{
    cout << "Elements: " << Elements << ", Iterations: " << Iterations << '\n';

    test("doAppend", doAppend);
    test("doReserve", doReserve);
    test("doAllocate", doAllocate);
}

On my Windows 7 Core i7, 64-bit Python gives

Elements: 100000, Iterations: 144
doAppend: 3587.204933ms
doAllocate: 2701.154947ms
doGenerator: 1721.098185ms

while C++ gives (built with Microsoft Visual C++, 64-bit, optimizations enabled)

Elements: 100000, Iterations: 144
doAppend: 74.0042ms
doReserve: 27.0015ms
doAllocate: 5.0003ms

The C++ debug build gives:

Elements: 100000, Iterations: 144
doAppend: 2166.12ms
doReserve: 2082.12ms
doAllocate: 273.016ms

The point here being that with Python you can achieve a 7-8% performance improvement, and if you think you're writing a high-performance application (or if you're writing something that is used in a web service or the like) then this isn't to be sniffed at, but you may need to rethink your choice of language.

Also, the Python code here isn't really Python code. Switching to truly Pythonesque code gives better performance:

import time

class Timer(object):
    def __enter__(self):
        self.start = time.time()
        return self

    def __exit__(self, *args):
        end = time.time()
        secs = end - self.start
        msecs = secs * 1000  # millisecs
        print('%fms' % msecs)

Elements   = 100000
Iterations = 144

print('Elements: %d, Iterations: %d' % (Elements, Iterations))


def doAppend():
    for x in range(Iterations):
        result = []
        for i in range(Elements):
            result.append(i)

def doAllocate():
    for x in range(Iterations):
        result = [None] * Elements
        for i in range(Elements):
            result[i] = i

def doGenerator():
    for x in range(Iterations):
        result = list(i for i in range(Elements))


def test(name, fn):
    print("%s: " % name, end="")
    with Timer() as t:
        fn()


test('doAppend', doAppend)
test('doAllocate', doAllocate)
test('doGenerator', doGenerator)

This gives

Elements: 100000, Iterations: 144
doAppend: 2153.122902ms
doAllocate: 1346.076965ms
doGenerator: 1614.092112ms

(In 32-bit, doGenerator does better than doAllocate.)

Here the gap between doAppend and doAllocate is significantly larger.

Obviously, the differences here really only apply if you are doing this more than a handful of times, or if you are doing this on a heavily loaded system where those numbers are going to get scaled out by orders of magnitude, or if you are dealing with considerably larger lists.

The point here: do it the Pythonic way for the best performance.

But if you are worrying about general, high-level performance, Python is the wrong language. The fundamental problem is that Python function calls have traditionally been up to 300x slower than in other languages, due to Python features like decorators, etc. (PythonSpeed/PerformanceTips, Data Aggregation).

Warning: This answer is contested. See the comments.

def doAppend(size=10000):
    result = []
    for i in range(size):
        message = "some unique object %d" % (i,)
        result.append(message)
    return result

def doAllocate(size=10000):
    result = size * [None]
    for i in range(size):
        message = "some unique object %d" % (i,)
        result[i] = message
    return result

Results (each function timed 144 times, averaged):

simple append 0.0102
pre-allocate  0.0098

Conclusion: it doesn't matter.

Premature optimization is the root of all evil.