Is there any benefit to using compile for regular expressions in Python?

h = re.compile('hello')
h.match('hello world')

vs

re.match('hello', 'hello world')

Current answer

Although the two approaches are comparable in terms of speed for a single call, you should know that there is still a measurable time difference, which becomes significant if you are dealing with millions of iterations.

The following speed test:

import re
import time

SIZE = 100_000_000

start = time.time()
foo = re.compile('foo')
[foo.search('bar') for _ in range(SIZE)]
print('compiled:  ', time.time() - start)

start = time.time()
[re.search('foo', 'bar') for _ in range(SIZE)]
print('uncompiled:', time.time() - start)

gives these results:

compiled:   14.647532224655151
uncompiled: 61.483458042144775

The compiled method is consistently about 4x faster on my PC (running Python 3.7.0).

As explained in the documentation:

If you're going to use a regular expression in a loop, pre-compiling it will save a few function calls. Outside of loops, there's not much difference thanks to the internal cache.
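
In practice that advice just means hoisting the compile out of the loop. A minimal sketch (the pattern and data here are made up for illustration):

import re

lines = ['alpha 1', 'beta 2', 'gamma 3']

# Hoisting the compile out of the loop saves the per-call cache lookup:
word_re = re.compile(r'([a-z]+) (\d+)')
for line in lines:
    m = word_re.match(line)
    if m:
        print(m.group(1), m.group(2))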

Other answers

Ubuntu 22.04:

$ python --version
Python 3.10.6

$ for x in 1 10 100 1000 10000 100000 1000000; do python -m timeit -n $x -s 'import re' 're.match("[0-9]{3}-[0-9]{3}-[0-9]{4}", "123-123-1234")'; done
1 loop, best of 5: 972 nsec per loop
:0: UserWarning: The test results are likely unreliable. The worst time (186 usec) was more than four times slower than the best time (972 nsec).
10 loops, best of 5: 819 nsec per loop
:0: UserWarning: The test results are likely unreliable. The worst time (13.9 usec) was more than four times slower than the best time (819 nsec).
100 loops, best of 5: 763 nsec per loop
1000 loops, best of 5: 699 nsec per loop
10000 loops, best of 5: 653 nsec per loop
100000 loops, best of 5: 655 nsec per loop
1000000 loops, best of 5: 656 nsec per loop

$ for x in 1 10 100 1000 10000 100000 1000000; do python -m timeit -n $x -s 'import re' 'r = re.compile("[0-9]{3}-[0-9]{3}-[0-9]{4}")' 'r.match("123-123-1234")'; done
1 loop, best of 5: 985 nsec per loop
:0: UserWarning: The test results are likely unreliable. The worst time (134 usec) was more than four times slower than the best time (985 nsec).
10 loops, best of 5: 775 nsec per loop
:0: UserWarning: The test results are likely unreliable. The worst time (13.9 usec) was more than four times slower than the best time (775 nsec).
100 loops, best of 5: 756 nsec per loop
1000 loops, best of 5: 701 nsec per loop
10000 loops, best of 5: 704 nsec per loop
100000 loops, best of 5: 654 nsec per loop
1000000 loops, best of 5: 651 nsec per loop

In most cases it makes little difference whether you use re.compile or not. Internally, all of the module-level functions are implemented in terms of a compile step:

def match(pattern, string, flags=0):
    return _compile(pattern, flags).match(string)

def fullmatch(pattern, string, flags=0):
    return _compile(pattern, flags).fullmatch(string)

def search(pattern, string, flags=0):
    return _compile(pattern, flags).search(string)

def sub(pattern, repl, string, count=0, flags=0):
    return _compile(pattern, flags).sub(repl, string, count)

def subn(pattern, repl, string, count=0, flags=0):
    return _compile(pattern, flags).subn(repl, string, count)

def split(pattern, string, maxsplit=0, flags=0):
    return _compile(pattern, flags).split(string, maxsplit)

def findall(pattern, string, flags=0):
    return _compile(pattern, flags).findall(string)

def finditer(pattern, string, flags=0):
    return _compile(pattern, flags).finditer(string)

Additionally, re.compile() bypasses the extra indirection and caching logic:

_cache = {}

_pattern_type = type(sre_compile.compile("", 0))

_MAXCACHE = 512
def _compile(pattern, flags):
    # internal: compile pattern
    try:
        p, loc = _cache[type(pattern), pattern, flags]
        if loc is None or loc == _locale.setlocale(_locale.LC_CTYPE):
            return p
    except KeyError:
        pass
    if isinstance(pattern, _pattern_type):
        if flags:
            raise ValueError(
                "cannot process flags argument with a compiled pattern")
        return pattern
    if not sre_compile.isstring(pattern):
        raise TypeError("first argument must be string or compiled pattern")
    p = sre_compile.compile(pattern, flags)
    if not (flags & DEBUG):
        if len(_cache) >= _MAXCACHE:
            _cache.clear()
        if p.flags & LOCALE:
            if not _locale:
                return p
            loc = _locale.setlocale(_locale.LC_CTYPE)
        else:
            loc = None
        _cache[type(pattern), pattern, flags] = p, loc
    return p
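
One observable consequence of this cache: compiling the same pattern twice hands back the very same object. The snippet below is illustrative only, since it relies on a CPython implementation detail:

import re

# Both calls go through _compile and hit the same cache entry, so the
# identical pattern object is returned, as long as the entry has not
# been evicted in between (a CPython implementation detail):
a = re.compile('foo')
b = re.compile('foo')
print(a is b)  # True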

Besides the small speed benefit of using re.compile, people also like the readability that comes from naming potentially complex pattern specifications and separating them from the business logic where they are applied:

#### Patterns ############################################################
number_pattern = re.compile(r'\d+(\.\d*)?')    # Integer or decimal number
assign_pattern = re.compile(r':=')             # Assignment operator
identifier_pattern = re.compile(r'[A-Za-z]+')  # Identifiers
whitespace_pattern = re.compile(r'[\t ]+')     # Spaces and tabs

#### Applications ########################################################

if whitespace_pattern.match(s): business_logic_rule_1()
if assign_pattern.match(s): business_logic_rule_2()
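
For instance, a hypothetical classifier built on the named patterns above keeps the regex syntax entirely out of the decision logic:

def classify(token):
    # The business logic refers only to the pattern names defined above:
    if number_pattern.fullmatch(token):
        return 'number'
    if assign_pattern.fullmatch(token):
        return 'assign'
    if identifier_pattern.fullmatch(token):
        return 'identifier'
    return 'unknown'

print([classify(t) for t in ['count', ':=', '3.14']])
# ['identifier', 'assign', 'number']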

Note that one other answer here incorrectly assumed that pyc files store compiled patterns directly; in reality, however, they are rebuilt each time the PYC is loaded:

>>> import marshal
>>> from dis import dis
>>> with open('tmp.pyc', 'rb') as f:
...     f.read(8)      # skip past the pyc header
...     dis(marshal.load(f))

  1           0 LOAD_CONST               0 (-1)
              3 LOAD_CONST               1 (None)
              6 IMPORT_NAME              0 (re)
              9 STORE_NAME               0 (re)

  3          12 LOAD_NAME                0 (re)
             15 LOAD_ATTR                1 (compile)
             18 LOAD_CONST               2 ('[aeiou]{2,5}')
             21 CALL_FUNCTION            1
             24 STORE_NAME               2 (lc_vowels)
             27 LOAD_CONST               1 (None)
             30 RETURN_VALUE

The disassembly above comes from the PYC file for a tmp.py containing:

import re
lc_vowels = re.compile(r'[aeiou]{2,5}')

Performance differences aside, using re.compile and then using the compiled regular expression object for matching (or any other regex-related operation) makes the semantics clearer to the Python runtime.

I had a painful experience debugging some simple code:

compare = lambda s, p: re.match(p, s)

and later I would use compare in

[x for x in data if compare(patternPhrases, x[columnIndex])]

where patternPhrases is supposed to be a variable containing a regular expression string, and x[columnIndex] is a variable containing a string.

I ran into trouble: patternPhrases failed to match some expected strings!

But if I had used the re.compile form:

compare = lambda s, p: p.match(s)

then in

[x for x in data if compare(patternPhrases, x[columnIndex])]

Python would have complained that "'str' object has no attribute 'match'", because, by positional argument mapping in compare, x[columnIndex] was being used as the regular expression! What I actually meant was

compare = lambda p, s: p.match(s)

In my case, using re.compile is more explicit about the purpose of the regular expression when its value is hidden from the naked eye, so I could get more help from the Python runtime's checks.

So the moral of my lesson is: when a regular expression is not just a literal string, I should use re.compile to let Python help me assert my assumptions.
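
The same idea can be pushed further with type hints. A minimal sketch (the names here are mine, not from the original code; re.Pattern is usable as an annotation on Python 3.8+), where a static checker such as mypy would flag swapped arguments before the code even runs:

import re

def compare(pattern: re.Pattern, s: str) -> bool:
    # The annotation documents the argument order, and pattern.match
    # still fails fast at runtime if the arguments get swapped:
    return pattern.match(s) is not None

phrase_re = re.compile(r'\bfoo\b')   # hypothetical pattern
data = [['foo bar'], ['baz']]
print([row for row in data if compare(phrase_re, row[0])])
# [['foo bar']]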

Here is an example where using re.compile is over 50x faster, as requested.

The point is exactly the same as the one I made in a comment above, namely that using re.compile can be a significant advantage when your usage pattern gains little from the compilation cache. This happens at least in one particular case (which I ran into in practice), namely when all of the following are true:

- you have a lot of regex patterns (more than re._MAXCACHE, whose default is currently 512), and
- you use these regexes many times, and
- consecutive uses of the same pattern are separated by more than re._MAXCACHE other regexes, so that each pattern gets flushed from the cache between consecutive uses.

import re
import time

def setup(N=1000):
    # Patterns 'a.*a', 'a.*b', ..., 'z.*z'
    patterns = [chr(i) + '.*' + chr(j)
                    for i in range(ord('a'), ord('z') + 1)
                    for j in range(ord('a'), ord('z') + 1)]
    # If this assertion fails, just add more (distinct) patterns.
    assert re._MAXCACHE < len(patterns)
    # N strings. Increase N for larger effect.
    strings = ['abcdefghijklmnopqrstuvwxyzabcdefghijklmnopqrstuvwxyz'] * N
    return (patterns, strings)

def without_compile():
    print('Without re.compile:')
    patterns, strings = setup()
    print('searching')
    count = 0
    for s in strings:
        for pat in patterns:
            count += bool(re.search(pat, s))
    return count

def without_compile_cache_friendly():
    print('Without re.compile, cache-friendly order:')
    patterns, strings = setup()
    print('searching')
    count = 0
    for pat in patterns:
        for s in strings:
            count += bool(re.search(pat, s))
    return count

def with_compile():
    print('With re.compile:')
    patterns, strings = setup()
    print('compiling')
    compiled = [re.compile(pattern) for pattern in patterns]
    print('searching')
    count = 0
    for s in strings:
        for regex in compiled:
            count += bool(regex.search(s))
    return count

start = time.time()
print(with_compile())
d1 = time.time() - start
print(f'-- That took {d1:.2f} seconds.\n')

start = time.time()
print(without_compile_cache_friendly())
d2 = time.time() - start
print(f'-- That took {d2:.2f} seconds.\n')

start = time.time()
print(without_compile())
d3 = time.time() - start
print(f'-- That took {d3:.2f} seconds.\n')

print(f'Ratio: {d3/d1:.2f}')

Sample output that I get on my laptop (Python 3.7.7):

With re.compile:
compiling
searching
676000
-- That took 0.33 seconds.

Without re.compile, cache-friendly order:
searching
676000
-- That took 0.67 seconds.

Without re.compile:
searching
676000
-- That took 23.54 seconds.

Ratio: 70.89

I didn't bother with timeit as the difference is so stark, but I get qualitatively similar numbers each time. Note that even without re.compile, using the same regex multiple times and moving on to the next one wasn't so bad (only about 2 times as slow as with re.compile), but in the other order (looping through many regexes), it is significantly worse, as expected. Increasing the cache size works too: simply setting re._MAXCACHE = len(patterns) in setup() above (of course I don't recommend doing such things in production, as names with underscores are conventionally “private”) drops the ~23 seconds back down to ~0.7 seconds, which also matches our understanding.
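
For reference, the cache-size tweak mentioned above boils down to one assignment (again, _MAXCACHE is a private name, so treat this as an experiment-only sketch):

import re

patterns = [chr(i) + '.*' + chr(j)   # the same 676 patterns as in setup()
            for i in range(ord('a'), ord('z') + 1)
            for j in range(ord('a'), ord('z') + 1)]

# Enlarging the private cache keeps every pattern resident between
# consecutive uses; _MAXCACHE is internal, so experiments only:
re._MAXCACHE = len(patterns)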

It's a good question. You often see people use re.compile for no reason, and it lessens readability. But there are certainly plenty of times when pre-compiling the expression is called for, such as when you use it over and over in a loop.

It's like everything in programming (and everything in life, actually): use common sense.