Is there any benefit to using re.compile for regular expressions in Python?
h = re.compile('hello')
h.match('hello world')
vs
re.match('hello', 'hello world')
Current answer
Here's an example where using re.compile is over 50 times faster, as requested.
This makes the same point I made in a comment above, namely that using re.compile can be a significant advantage when your usage doesn't benefit much from the compilation cache. That happens in at least one particular case (which I've run into in practice), namely when all of the following are true:
- You have a lot of regex patterns (more than re._MAXCACHE, whose default is currently 512), and
- you use these regexes many times, and
- consecutive uses of the same pattern are separated by more than re._MAXCACHE other regexes, so that each one gets flushed from the cache between consecutive uses.
import re
import time
def setup(N=1000):
    # Patterns 'a.*a', 'a.*b', ..., 'z.*z'
    patterns = [chr(i) + '.*' + chr(j)
                for i in range(ord('a'), ord('z') + 1)
                for j in range(ord('a'), ord('z') + 1)]
    # If the assertion below fails, just add more (distinct) patterns.
    # assert(re._MAXCACHE < len(patterns))
    # N strings. Increase N for larger effect.
    strings = ['abcdefghijklmnopqrstuvwxyzabcdefghijklmnopqrstuvwxyz'] * N
    return (patterns, strings)

def without_compile():
    print('Without re.compile:')
    patterns, strings = setup()
    print('searching')
    count = 0
    for s in strings:
        for pat in patterns:
            count += bool(re.search(pat, s))
    return count

def without_compile_cache_friendly():
    print('Without re.compile, cache-friendly order:')
    patterns, strings = setup()
    print('searching')
    count = 0
    for pat in patterns:
        for s in strings:
            count += bool(re.search(pat, s))
    return count

def with_compile():
    print('With re.compile:')
    patterns, strings = setup()
    print('compiling')
    compiled = [re.compile(pattern) for pattern in patterns]
    print('searching')
    count = 0
    for s in strings:
        for regex in compiled:
            count += bool(regex.search(s))
    return count
start = time.time()
print(with_compile())
d1 = time.time() - start
print(f'-- That took {d1:.2f} seconds.\n')
start = time.time()
print(without_compile_cache_friendly())
d2 = time.time() - start
print(f'-- That took {d2:.2f} seconds.\n')
start = time.time()
print(without_compile())
d3 = time.time() - start
print(f'-- That took {d3:.2f} seconds.\n')
print(f'Ratio: {d3/d1:.2f}')
Sample output I get on my laptop (Python 3.7.7):
With re.compile:
compiling
searching
676000
-- That took 0.33 seconds.
Without re.compile, cache-friendly order:
searching
676000
-- That took 0.67 seconds.
Without re.compile:
searching
676000
-- That took 23.54 seconds.
Ratio: 70.89
I didn't bother with timeit as the difference is so stark, but I get qualitatively similar numbers each time. Note that even without re.compile, using the same regex multiple times and moving on to the next one wasn't so bad (only about 2 times as slow as with re.compile), but in the other order (looping through many regexes), it is significantly worse, as expected. Also, increasing the cache size works too: simply setting re._MAXCACHE = len(patterns) in setup() above (of course I don't recommend doing such things in production as names with underscores are conventionally “private”) drops the ~23 seconds back down to ~0.7 seconds, which also matches our understanding.
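For illustration, that cache-size tweak is a one-liner. Note that re._MAXCACHE is an undocumented, private attribute, so this is only a sketch for experimentation, not something to rely on in production:

import re

# Hypothetical: suppose we have more patterns than the default cache size,
# built the same way as in setup() above.
patterns = [chr(i) + '.*' + chr(j)
            for i in range(ord('a'), ord('z') + 1)
            for j in range(ord('a'), ord('z') + 1)]
# Grow the internal cache so none of these patterns gets evicted
# between consecutive uses.
re._MAXCACHE = max(re._MAXCACHE, len(patterns))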
Other answers
FWIW:
$ python -m timeit -s "import re" "re.match('hello', 'hello world')"
100000 loops, best of 3: 3.82 usec per loop
$ python -m timeit -s "import re; h=re.compile('hello')" "h.match('hello world')"
1000000 loops, best of 3: 1.26 usec per loop
So, if you're going to be using the same regex often, it may be worth it to use re.compile (especially for more complex regexes).
The standard arguments against premature optimization apply, but I don't think you really lose much clarity or directness by using re.compile if you suspect your regexps may become a performance bottleneck.
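In code, the usual way to get the benefit without losing clarity is to compile once at module level and reuse the pattern object everywhere (a minimal illustrative sketch; the names are made up):

import re

# Compiled once at import time, reused on every call.
HELLO_RE = re.compile('hello')

def count_greetings(lines):
    # Pattern methods like .match() skip the per-call cache lookup
    # that module-level re.match() does on every invocation.
    return sum(1 for line in lines if HELLO_RE.match(line))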
Update:
Under Python 3.6 (I suspect the above timings were done using Python 2.x) and 2018 hardware (MacBook Pro), I now get the following timings:
% python -m timeit -s "import re" "re.match('hello', 'hello world')"
1000000 loops, best of 3: 0.661 usec per loop
% python -m timeit -s "import re; h=re.compile('hello')" "h.match('hello world')"
1000000 loops, best of 3: 0.285 usec per loop
% python -m timeit -s "import re" "h=re.compile('hello'); h.match('hello world')"
1000000 loops, best of 3: 0.65 usec per loop
% python --version
Python 3.6.5 :: Anaconda, Inc.
I also added a case (note the difference in quoting between the last two runs) showing that re.match(x, ...) is [roughly] equivalent to re.compile(x).match(...), i.e. no behind-the-scenes caching of the compiled representation seems to help here; or more precisely, the internal cache that re does keep (see the first answer) apparently doesn't make the on-the-fly form as fast as a precompiled pattern object.
My understanding is that those two examples are effectively equivalent. The only difference is that, in the first, you can reuse the compiled regular expression elsewhere without having it compiled again.
Here's a reference: http://diveintopython3.ep.io/refactoring.html
Calling the compiled pattern object's search function with the string 'M' accomplishes the same thing as calling re.search with both the regular expression and the string 'M'. Only much, much faster. (In fact, the re.search function simply compiles the regular expression and calls the resulting pattern object's search method for you.)
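That equivalence is easy to demonstrate (a small runnable check; in CPython the module-level re.search is essentially a one-line wrapper that calls the internal _compile() and then the pattern object's search method):

import re

# Both routes go through the same machinery and find the same match;
# the precompiled route just skips the per-call wrapper and cache lookup.
m1 = re.search('l+o', 'hello world')
m2 = re.compile('l+o').search('hello world')
assert m1.group() == m2.group() == 'llo'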
(months later) it's easy to add your own cache around re.match, or anything else for that matter --
""" Re.py: Re.match = re.match + cache
efficiency: re.py does this already (but what's _MAXCACHE ?)
readability, inline / separate: matter of taste
"""
import re
cache = {}
_re_type = type( re.compile( "" ))
def match( pattern, str, *opt ):
""" Re.match = re.match + cache re.compile( pattern )
"""
if type(pattern) == _re_type:
cpat = pattern
elif pattern in cache:
cpat = cache[pattern]
else:
cpat = cache[pattern] = re.compile( pattern, *opt )
return cpat.match( str )
# def search ...
A wibni (wouldn't it be nice if): cachehint(size=), cacheinfo() -> size, hits, nclear ...
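For what it's worth, functools.lru_cache gives you both of those wishes essentially for free (a sketch in the same spirit as the code above; the function name is mine):

import functools
import re

@functools.lru_cache(maxsize=512)        # the "cachehint(size=)" knob
def compile_cached(pattern, flags=0):
    return re.compile(pattern, flags)

def match(pattern, string, flags=0):
    return compile_cached(pattern, flags).match(string)

# The "cacheinfo()" wish:
# compile_cached.cache_info() reports hits, misses, maxsize and currsize,
# and compile_cached.cache_clear() resets the cache.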
In general, I find it easier (and easier to remember) to use flags when compiling patterns, like re.I below, than to use flags inline.
>>> foo_pat = re.compile('foo',re.I)
>>> foo_pat.findall('some string FoO bar')
['FoO']
vs
>>> re.findall('(?i)foo','some string FoO bar')
['FoO']
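Flags passed to re.compile can also be combined with bitwise OR, which I find clearer than stacking inline modifiers (a small illustrative example):

>>> pat = re.compile('^foo', re.I | re.M)
>>> pat.findall('FoO bar\nfoo baz')
['FoO', 'foo']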
Here's a simple test case:
~$ for x in 1 10 100 1000 10000 100000 1000000; do python -m timeit -n $x -s 'import re' 're.match("[0-9]{3}-[0-9]{3}-[0-9]{4}", "123-123-1234")'; done
1 loops, best of 3: 3.1 usec per loop
10 loops, best of 3: 2.41 usec per loop
100 loops, best of 3: 2.24 usec per loop
1000 loops, best of 3: 2.21 usec per loop
10000 loops, best of 3: 2.23 usec per loop
100000 loops, best of 3: 2.24 usec per loop
1000000 loops, best of 3: 2.31 usec per loop
re.compile:
~$ for x in 1 10 100 1000 10000 100000 1000000; do python -m timeit -n $x -s 'import re' 'r = re.compile("[0-9]{3}-[0-9]{3}-[0-9]{4}")' 'r.match("123-123-1234")'; done
1 loops, best of 3: 1.91 usec per loop
10 loops, best of 3: 0.691 usec per loop
100 loops, best of 3: 0.701 usec per loop
1000 loops, best of 3: 0.684 usec per loop
10000 loops, best of 3: 0.682 usec per loop
100000 loops, best of 3: 0.694 usec per loop
1000000 loops, best of 3: 0.702 usec per loop
So, it would seem that in this simple case compile is faster, even when matching only once.
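If you'd rather reproduce this from a script than from the shell, the timeit module works the same way. A minimal sketch (absolute numbers will vary by machine and Python version; unlike the shell runs above, here the compile happens once in setup, so this measures pure matching cost):

import timeit

number = 100_000
t_plain = timeit.timeit(
    "re.match('[0-9]{3}-[0-9]{3}-[0-9]{4}', '123-123-1234')",
    setup='import re', number=number)
t_compiled = timeit.timeit(
    "r.match('123-123-1234')",
    setup="import re; r = re.compile('[0-9]{3}-[0-9]{3}-[0-9]{4}')",
    number=number)
print(f'{t_plain / number * 1e6:.2f} usec vs '
      f'{t_compiled / number * 1e6:.2f} usec per loop')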