If you are concatenating many values, use neither. Appending lists is expensive. You can use StringIO instead, especially if you are building the result up through a large number of operations.
from cStringIO import StringIO
# python3: from io import StringIO
buf = StringIO()
buf.write('foo')
buf.write('foo')
buf.write('foo')
buf.getvalue()
# 'foofoofoo'
If you already have a complete list returned from some other operation, then just use ''.join(aList).
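For instance, joining an existing list is a single call (a minimal sketch; the variable names here are illustrative):

```python
# ''.join() concatenates every element of the list in one pass,
# avoiding the repeated reallocation of += in a loop.
parts = ['foo', 'bar', 'baz']
result = ''.join(parts)
# result == 'foobarbaz'
```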
From the Python FAQ: What is the most efficient way to concatenate many strings together?
str and bytes objects are immutable, therefore concatenating many
strings together is inefficient as each concatenation creates a new
object. In the general case, the total runtime cost is quadratic in
the total string length.
To accumulate many str objects, the recommended idiom is to place them
into a list and call str.join() at the end:
chunks = []
for s in my_strings:
    chunks.append(s)
result = ''.join(chunks)
(another reasonably efficient idiom is to use io.StringIO)
To accumulate many bytes objects, the recommended idiom is to extend a
bytearray object using in-place concatenation (the += operator):
result = bytearray()
for b in my_bytes_objects:
    result += b
Edit: I was being silly and pasted the results backwards, making it look like appending to a list was faster than cStringIO. I have also added tests for bytearray/str concatenation, plus a second round of tests using a larger list with bigger strings. (python 2.7.3)
Example IPython timing tests for a large list of strings
try:
    from cStringIO import StringIO
except ImportError:
    from io import StringIO
source = ['foo']*1000
%%timeit buf = StringIO()
for i in source:
    buf.write(i)
final = buf.getvalue()
# 1000 loops, best of 3: 1.27 ms per loop
%%timeit out = []
for i in source:
    out.append(i)
final = ''.join(out)
# 1000 loops, best of 3: 9.89 ms per loop
%%timeit out = bytearray()
for i in source:
    out += i
# 10000 loops, best of 3: 98.5 µs per loop
%%timeit out = ""
for i in source:
out += i
# 10000 loops, best of 3: 161 µs per loop
## Repeat the tests with a larger list, containing
## strings that are bigger than Python's small-string
## caching
source = ['foo']*1000
# cStringIO
# 10 loops, best of 3: 19.2 ms per loop
# list append and join
# 100 loops, best of 3: 144 ms per loop
# bytearray() +=
# 100 loops, best of 3: 3.8 ms per loop
# str() +=
# 100 loops, best of 3: 5.11 ms per loop
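The same four approaches can be compared outside IPython with the standard timeit module. This is a Python 3 sketch (so io.StringIO is used directly, and strings are encoded before going into the bytearray); the function names are illustrative and absolute timings will differ by machine:

```python
import timeit
from io import StringIO

source = ['foo'] * 1000

def use_stringio():
    # Write each chunk into an in-memory text buffer.
    buf = StringIO()
    for s in source:
        buf.write(s)
    return buf.getvalue()

def use_join():
    # Accumulate chunks in a list, join once at the end.
    out = []
    for s in source:
        out.append(s)
    return ''.join(out)

def use_bytearray():
    # bytearray supports efficient in-place extension;
    # str chunks must be encoded to bytes first in Python 3.
    out = bytearray()
    for s in source:
        out += s.encode()
    return out

def use_str_concat():
    # Naive += on str; each step may build a new object.
    out = ''
    for s in source:
        out += s
    return out

for fn in (use_stringio, use_join, use_bytearray, use_str_concat):
    t = timeit.timeit(fn, number=100)
    print(f'{fn.__name__}: {t:.4f}s for 100 runs')
```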