Is there any benefit in using compile for regular expressions in Python?
h = re.compile('hello')
h.match('hello world')
vs
re.match('hello', 'hello world')
Current answer
I have a lot of experience running a compiled regex versus compiling on the fly, and I have not noticed any perceivable difference. Obviously, this is anecdotal, and certainly not a strong argument against compiling, but I've found the difference to be negligible.
Edit: After a quick glance at the actual Python 2.5 library code, I see that Python internally compiles and caches regexes whenever you use them anyway (including calls to re.match()), so you're really only changing when the regex gets compiled, and shouldn't be saving much time at all: only the time it takes to check the cache (a key lookup on an internal dict type).
From the re.py module (the comments are mine):
def match(pattern, string, flags=0):
    return _compile(pattern, flags).match(string)

def _compile(*key):
    # Does cache check at top of function
    cachekey = (type(key[0]),) + key
    p = _cache.get(cachekey)
    if p is not None: return p

    # ...
    # Does actual compilation on cache miss
    # ...

    # Caches compiled regex
    if len(_cache) >= _MAXCACHE:
        _cache.clear()
    _cache[cachekey] = p
    return p
I still often precompile regular expressions, but only to bind them to a nice, reusable name, not for any expected performance gain.
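For instance, a minimal sketch of that naming habit (the pattern names and regexes here are my own, purely illustrative):

import re

# Precompiled purely to give the patterns readable, reusable names,
# not for any expected speedup.
PHONE_RE = re.compile(r'\d{3}-\d{3}-\d{4}')
EMAIL_RE = re.compile(r'[\w.+-]+@[\w-]+\.[\w.]+')

def redact(text):
    # Replace phone numbers and e-mail addresses with placeholders.
    text = PHONE_RE.sub('[phone]', text)
    return EMAIL_RE.sub('[email]', text)

print(redact('Call 555-123-4567 or mail bob@example.com'))
# prints: Call [phone] or mail [email]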
Other answers
This is a good question. You often see people use re.compile for no reason; it reduces readability. But there certainly are times when precompiling the expression pays off, such as when you reuse it repeatedly in a loop (as sketched below).
It's like everything in programming (and, really, everything in life): use common sense.
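As a hedged illustration of the loop case (the data and pattern here are made up), compiling once outside the loop keeps even the cache lookup out of the hot path:

import re

lines = ['id=1 ok', 'id=2 fail', 'id=3 ok'] * 1000

# Compile once, outside the loop; reuse the pattern object inside it.
status_re = re.compile(r'id=(\d+) (ok|fail)')

failed_ids = [m.group(1)
              for line in lines
              if (m := status_re.match(line)) and m.group(2) == 'fail']
print(len(failed_ids))  # 1000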
I agree with Honest Abe that the match(...) calls in the given examples are different. They are not one-to-one comparisons, so the outcomes differ. To simplify my reply, I use A, B, C, D for the functions involved. Oh yes, we are dealing with 4 functions in re.py instead of 3.
Running this piece of code:
h = re.compile('hello') # (A)
h.match('hello world') # (B)
is the same as running this code:
re.match('hello', 'hello world') # (C)
because, looking at the source code re.py, (A + B) means:
h = re._compile('hello') # (D)
h.match('hello world')
and (C) is actually:
re._compile('hello').match('hello world')
So, (C) is not the same as (B). In fact, (C) calls (B) after calling (D), which is also what (A) calls. In other words, (C) = (A) + (B). Therefore, comparing (A + B) inside a loop gives the same result as (C) inside a loop.
George's regexTest.py proved this for us.
noncompiled took 4.555 seconds. # (C) in a loop
compiledInLoop took 4.620 seconds. # (A + B) in a loop
compiled took 2.323 seconds. # (A) once + (B) in a loop
Everyone's interest is in how to get the 2.323-second result. To make sure compile(...) is called only once, we need to store the compiled regex object in memory. If we use a class, we can store the object and reuse it every time the function is called.
import re

class Foo:
    regex = re.compile('hello')

    def my_function(self, text):
        return self.regex.match(text)
If we are not using a class (which was my request today), then I have no comment. I'm still learning to use global variables in Python, and I know global variables are a bad thing.
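For completeness, the usual class-free idiom is simply a module-level name; a minimal sketch, mirroring the Foo example above:

import re

# Compiled once at import time; every call below reuses the same object.
HELLO_RE = re.compile('hello')

def my_function(text):
    return HELLO_RE.match(text)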
One more point: I believe using the (A) + (B) approach has an upper hand. Here are some facts I have observed (please correct me if I am wrong); a small observational sketch follows the list:
- Calling A once performs one search in the _cache followed by one sre_compile.compile() to create the regex object. Calling A twice performs two searches and one compile (because the regex object is cached).
- If the _cache gets flushed in between, the regex object is released from memory and Python needs to compile again. (Someone suggested that Python won't recompile.)
- If we keep the regex object by using (A), the regex object still gets into _cache and may get flushed somehow. But our code keeps a reference to it, so the regex object will not be released from memory, and Python need not compile it again.
- The 2-second difference in George's test, compiledInLoop vs compiled, is mainly the time required to build the cache key and search the _cache. It does not reflect the compile time of the regex.
- George's reallycompile test shows what happens if the compile really is redone every time: it is 100x slower (he reduced the loop from 1,000,000 to 10,000).
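At least in CPython these cache interactions can be observed directly, via the public re.purge() helper and the private re._cache dict (a sketch only; _cache is an implementation detail and may change between versions):

import re

re.purge()                     # empty the internal pattern cache (public API)
re.match('hello', 'hello')     # module-level call: compiles and caches
print(len(re._cache))          # 1  (CPython implementation detail)

h = re.compile('hello')        # (A): also goes through the cache, but...
re.purge()                     # ...even after a full cache flush,
print(h.match('hello world'))  # ...our reference still works, no recompile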
Here are the cases when (A + B) is better than (C):
- We can cache a reference to the regex object inside a class.
- We need to call (B) repeatedly (inside a loop or multiple times), so we must cache the reference to the regex object outside the loop.
And cases when (C) is good enough:
- We cannot cache a reference.
- We only use it once in a while.
- Overall, we don't have too many regexes (so the compiled ones never get flushed from the cache).
Just a quick recap, here are the ABCs:
h = re.compile('hello') # (A)
h.match('hello world') # (B)
re.match('hello', 'hello world') # (C)
Thanks for reading.
Ubuntu 22.04:
$ python --version
Python 3.10.6
$ for x in 1 10 100 1000 10000 100000 1000000; do python -m timeit -n $x -s 'import re' 're.match("[0-9]{3}-[0-9]{3}-[0-9]{4}", "123-123-1234")'; done
1 loop, best of 5: 972 nsec per loop
:0: UserWarning: The test results are likely unreliable. The worst time (186 usec) was more than four times slower than the best time (972 nsec).
10 loops, best of 5: 819 nsec per loop
:0: UserWarning: The test results are likely unreliable. The worst time (13.9 usec) was more than four times slower than the best time (819 nsec).
100 loops, best of 5: 763 nsec per loop
1000 loops, best of 5: 699 nsec per loop
10000 loops, best of 5: 653 nsec per loop
100000 loops, best of 5: 655 nsec per loop
1000000 loops, best of 5: 656 nsec per loop
$ for x in 1 10 100 1000 10000 100000 1000000; do python -m timeit -n $x -s 'import re' 'r = re.compile("[0-9]{3}-[0-9]{3}-[0-9]{4}")' 'r.match("123-123-1234")'; done
1 loop, best of 5: 985 nsec per loop
:0: UserWarning: The test results are likely unreliable. The worst time (134 usec) was more than four times slower than the best time (985 nsec).
10 loops, best of 5: 775 nsec per loop
:0: UserWarning: The test results are likely unreliable. The worst time (13.9 usec) was more than four times slower than the best time (775 nsec).
100 loops, best of 5: 756 nsec per loop
1000 loops, best of 5: 701 nsec per loop
10000 loops, best of 5: 704 nsec per loop
100000 loops, best of 5: 654 nsec per loop
1000000 loops, best of 5: 651 nsec per loop
I would argue that pre-compiling is both conceptually and "literately" (as in "literate programming") advantageous. Have a look at this code snippet:
from re import compile as _Re

class TYPO:
    def text_has_foobar( self, text ):
        return self._text_has_foobar_re_search( text ) is not None
    _text_has_foobar_re_search = _Re( r"""(?i)foobar""" ).search

TYPO = TYPO()
In your application, you would write:
from TYPO import TYPO
print( TYPO.text_has_foobar( 'FOObar' ) )
This is about as simple, in terms of functionality, as it can get. Because the example is so short, I conflated the way to get _text_has_foobar_re_search all into one line. The disadvantage of this code is that it occupies a little memory for the lifetime of the TYPO library object; the advantage is that when doing a foobar search, you get away with two function calls and two class dictionary lookups. How many regexes are cached by re, and the overhead of that cache, are irrelevant here.
Compare this with the more usual style, below:
import re

class Typo:
    def text_has_foobar( self, text ):
        return re.compile( r"""(?i)foobar""" ).search( text ) is not None
In the application:
typo = Typo()
print( typo.text_has_foobar( 'FOObar' ) )
I readily admit that my style is highly unusual for Python, maybe even debatable. But in the example that more closely matches how Python is mostly used, we must, in order to do a single match, instantiate an object, do three instance dictionary lookups, and perform three function calls; additionally, we might run into re-caching trouble when using more than 100 regexes. Also, the regex gets hidden inside the method body, which most of the time is not a good idea.
Be it said that every subset of these measures (targeted, aliased import statements; aliased methods where applicable; reduction of function calls and object dictionary lookups) can help reduce computational and conceptual complexity. A rough benchmark contrasting the two styles follows below.
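To put rough numbers on that call-count argument, here is a hedged timeit sketch contrasting the two styles above (absolute timings will vary by machine; the only expectation is that the prebound variant wins):

import re
import timeit

class TYPO:
    # search method prebound at class-creation time, as in the snippet above
    _text_has_foobar_re_search = re.compile(r'(?i)foobar').search
    def text_has_foobar(self, text):
        return self._text_has_foobar_re_search(text) is not None

class Typo:
    def text_has_foobar(self, text):
        return re.compile(r'(?i)foobar').search(text) is not None

for obj in (TYPO(), Typo()):
    t = timeit.timeit(lambda: obj.text_has_foobar('FOObar'), number=100_000)
    print(f'{type(obj).__name__}: {t:.3f}s')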
Here's an example where using re.compile is over 50 times faster, as requested.
The point is just the same as what I made in the comment above, namely that using re.compile can be a significant advantage when your usage doesn't benefit much from the compilation cache. This happens at least in one particular case (that I ran into in practice), namely when all of the following are true:
- You have a lot of regex patterns (more than re._MAXCACHE, whose default is currently 512), and
- you use these regexes a lot of times, and
- your consecutive usages of the same pattern are separated by more than re._MAXCACHE other regexes, so that each one gets flushed from the cache between consecutive usages.
import re
import time


def setup(N=1000):
    # Patterns 'a.*a', 'a.*b', ..., 'z.*z'
    patterns = [chr(i) + '.*' + chr(j)
                for i in range(ord('a'), ord('z') + 1)
                for j in range(ord('a'), ord('z') + 1)]
    # If this assertion below fails, just add more (distinct) patterns.
    # assert(re._MAXCACHE < len(patterns))
    # N strings. Increase N for larger effect.
    strings = ['abcdefghijklmnopqrstuvwxyzabcdefghijklmnopqrstuvwxyz'] * N
    return (patterns, strings)


def without_compile():
    print('Without re.compile:')
    patterns, strings = setup()
    print('searching')
    count = 0
    for s in strings:
        for pat in patterns:
            count += bool(re.search(pat, s))
    return count


def without_compile_cache_friendly():
    print('Without re.compile, cache-friendly order:')
    patterns, strings = setup()
    print('searching')
    count = 0
    for pat in patterns:
        for s in strings:
            count += bool(re.search(pat, s))
    return count


def with_compile():
    print('With re.compile:')
    patterns, strings = setup()
    print('compiling')
    compiled = [re.compile(pattern) for pattern in patterns]
    print('searching')
    count = 0
    for s in strings:
        for regex in compiled:
            count += bool(regex.search(s))
    return count


start = time.time()
print(with_compile())
d1 = time.time() - start
print(f'-- That took {d1:.2f} seconds.\n')

start = time.time()
print(without_compile_cache_friendly())
d2 = time.time() - start
print(f'-- That took {d2:.2f} seconds.\n')

start = time.time()
print(without_compile())
d3 = time.time() - start
print(f'-- That took {d3:.2f} seconds.\n')

print(f'Ratio: {d3/d1:.2f}')
Example output I get on my laptop (Python 3.7.7):
With re.compile:
compiling
searching
676000
-- That took 0.33 seconds.
Without re.compile, cache-friendly order:
searching
676000
-- That took 0.67 seconds.
Without re.compile:
searching
676000
-- That took 23.54 seconds.
Ratio: 70.89
I didn't bother with timeit, as the difference is so stark, but I get qualitatively similar numbers each time. Note that even without re.compile, using the same regex multiple times before moving on to the next one wasn't so bad (only about 2 times as slow as with re.compile), but in the other order (looping through many regexes), it is significantly worse, as expected. Also, increasing the cache size works: simply setting re._MAXCACHE = len(patterns) in setup() above (of course I don't recommend doing such things in production, as names with underscores are conventionally "private") drops the ~23 seconds back down to ~0.7 seconds, which also matches our understanding.
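For reference, the cache-size tweak mentioned above is a one-liner touching a private attribute (explicitly not production advice; the value 1000 is arbitrary, anything above the 676 patterns works):

import re

# CPython implementation detail: the default cache size is re._MAXCACHE == 512.
# Enlarging it lets all 676 patterns from setup() stay cached between uses.
re._MAXCACHE = 1000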