Is there any benefit to using compile for regular expressions in Python?
h = re.compile('hello')
h.match('hello world')
vs
re.match('hello', 'hello world')
Current answer
Using re.compile() has the added benefit of letting you add comments to your regex pattern using re.VERBOSE:
pattern = '''
hello[ ]world # Some info on my pattern logic. [ ] to recognize space
'''
re.search(pattern, 'hello world', re.VERBOSE)
Although this has no effect on the speed at which the code runs, I like to do it this way because it is part of my commenting habit. I thoroughly dislike spending time trying to remember the logic behind my code when I come back to modify it.
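The same annotated pattern can of course also be compiled once and reused, combining both benefits; a minimal sketch (hello_re is an illustrative name):
import re

# Compile the annotated pattern once; re.VERBOSE ignores the whitespace
# and treats everything after '#' as a comment.
hello_re = re.compile(r'''
hello[ ]world  # Some info on my pattern logic. [ ] to recognize space
''', re.VERBOSE)

print(hello_re.search('hello world'))  # <re.Match object; span=(0, 11), match='hello world'>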
Other answers
I've had a lot of experience running a compiled regex thousands of times versus compiling on the fly, and have not noticed any perceivable difference.
The votes on the accepted answer lead to the assumption that what @Triptych says holds in all cases. This is not necessarily true. One big difference arises when you have to decide whether to accept a regex string or a compiled regex object as a parameter to a function:
>>> timeit.timeit(setup="""
... import re
... f=lambda x, y: x.match(y) # accepts compiled regex as parameter
... h=re.compile('hello')
... """, stmt="f(h, 'hello world')")
0.32881879806518555
>>> timeit.timeit(setup="""
... import re
... f=lambda x, y: re.compile(x).match(y) # compiles when called
... """, stmt="f('hello', 'hello world')")
0.809190034866333
It is always better to compile your regexes, in case you need to reuse them.
Note that the timeit examples above simulate creating the compiled regex object once at import time, rather than creating it "on the fly" each time a match is needed.
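In practice that usually means compiling once at module level and reusing the object inside functions; a minimal sketch of the idiom the timings above simulate (GREETING_RE and greet_match are illustrative names):
import re

# Compiled once, at import time.
GREETING_RE = re.compile('hello')

def greet_match(text):
    # Every call reuses the precompiled pattern object.
    return GREETING_RE.match(text)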
This answer might be arriving late, but it is an interesting find. Using compile can really save you time if you plan to use the regex multiple times (this is also mentioned in the docs). Below you can see that using a compiled regex is fastest when the match method is called on it directly. Passing a compiled regex to re.match makes it slower still, and passing re.match the pattern string sits somewhere in the middle.
>>> ipr = r'\D+((([0-2][0-5]?[0-5]?)\.){3}([0-2][0-5]?[0-5]?))\D+'
>>> average(*timeit.repeat("re.match(ipr, 'abcd100.10.255.255 ')", globals={'ipr': ipr, 're': re}))
1.5077415757028423
>>> ipr = re.compile(ipr)
>>> average(*timeit.repeat("re.match(ipr, 'abcd100.10.255.255 ')", globals={'ipr': ipr, 're': re}))
1.8324008992184038
>>> average(*timeit.repeat("ipr.match('abcd100.10.255.255 ')", globals={'ipr': ipr, 're': re}))
0.9187896518778871
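Note that average above is not a built-in; a hypothetical helper along these lines would reproduce the calls in the transcript:
def average(*numbers):
    # Mean of the repeated timings returned by timeit.repeat.
    return sum(numbers) / len(numbers)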
Here is an example where using re.compile is over 50 times faster, as requested.
This point is the same as the one I made in a comment above, namely that using re.compile can be a significant advantage when your usage benefits little from the compilation cache. This happens in at least one particular case (which I ran into in practice), namely when all of the following are true:
- You have a lot of regex patterns (more than re._MAXCACHE, whose default is currently 512), and
- you use these regexes a lot of times, and
- your consecutive usages of the same pattern are separated by more than re._MAXCACHE other regexes, so that each one gets flushed from the cache between consecutive usages.
import re
import time

def setup(N=1000):
    # Patterns 'a.*a', 'a.*b', ..., 'z.*z'
    patterns = [chr(i) + '.*' + chr(j)
                for i in range(ord('a'), ord('z') + 1)
                for j in range(ord('a'), ord('z') + 1)]
    # If this assertion below fails, just add more (distinct) patterns.
    # assert(re._MAXCACHE < len(patterns))
    # N strings. Increase N for larger effect.
    strings = ['abcdefghijklmnopqrstuvwxyzabcdefghijklmnopqrstuvwxyz'] * N
    return (patterns, strings)

def without_compile():
    print('Without re.compile:')
    patterns, strings = setup()
    print('searching')
    count = 0
    for s in strings:
        for pat in patterns:
            count += bool(re.search(pat, s))
    return count

def without_compile_cache_friendly():
    print('Without re.compile, cache-friendly order:')
    patterns, strings = setup()
    print('searching')
    count = 0
    for pat in patterns:
        for s in strings:
            count += bool(re.search(pat, s))
    return count

def with_compile():
    print('With re.compile:')
    patterns, strings = setup()
    print('compiling')
    compiled = [re.compile(pattern) for pattern in patterns]
    print('searching')
    count = 0
    for s in strings:
        for regex in compiled:
            count += bool(regex.search(s))
    return count

start = time.time()
print(with_compile())
d1 = time.time() - start
print(f'-- That took {d1:.2f} seconds.\n')

start = time.time()
print(without_compile_cache_friendly())
d2 = time.time() - start
print(f'-- That took {d2:.2f} seconds.\n')

start = time.time()
print(without_compile())
d3 = time.time() - start
print(f'-- That took {d3:.2f} seconds.\n')

print(f'Ratio: {d3/d1:.2f}')
Example output I get on my laptop (Python 3.7.7):
With re.compile:
compiling
searching
676000
-- That took 0.33 seconds.
Without re.compile, cache-friendly order:
searching
676000
-- That took 0.67 seconds.
Without re.compile:
searching
676000
-- That took 23.54 seconds.
Ratio: 70.89
I didn't bother with timeit as the difference is so stark, but I get qualitatively similar numbers each time. Note that even without re.compile, using the same regex multiple times and moving on to the next one wasn't so bad (only about 2 times as slow as with re.compile), but in the other order (looping through many regexes), it is significantly worse, as expected. Also, increasing the cache size works too: simply setting re._MAXCACHE = len(patterns) in setup() above (of course I don't recommend doing such things in production as names with underscores are conventionally “private”) drops the ~23 seconds back down to ~0.7 seconds, which also matches our understanding.
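For reference, the cache-size tweak described above amounts to one extra line in setup(); since _MAXCACHE is an underscore-prefixed (private) name, this is strictly for experiments:
# Inside setup(), after building `patterns`:
re._MAXCACHE = len(patterns)  # enlarge re's internal pattern cache (private API, experiment only)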
Performance aside, using compile helped me distinguish between the concepts of (1) the module (re), (2) the regex object, and (3) the match object when I started learning regular expressions:
#regex object
regex_object = re.compile(r'[a-zA-Z]+')
#match object
match_object = regex_object.search('1.Hello')
#matching content
match_object.group()
output:
Out[60]: 'Hello'
vs.
re.search(r'[a-zA-Z]+','1.Hello').group()
Out[61]: 'Hello'
As a supplement, I made an exhaustive cheat sheet of the re module for your reference:
regex = {
    'brackets': {'single_character': ['[]', '.', {'negate': '^'}],
                 'capturing_group': ['()', '(?:)', '(?!)', '|', '\\', 'backreferences and named group'],
                 'repetition': ['{}', '*?', '+?', '??', 'greedy vs. lazy ?']},
    'lookaround': {'lookahead': ['(?=...)', '(?!...)'],
                   'lookbehind': ['(?<=...)', '(?<!...)'],
                   'capturing': ['(?P<name>...)', '(?P=name)', '(?:)']},
    'escapes': {'anchor': [r'^', r'\b', r'$'],
                'non_printable': [r'\n', r'\t', r'\r', r'\f', r'\v'],
                'shorthand': [r'\d', r'\w', r'\s']},
    'methods': [['search', 'match', 'findall', 'finditer'],
                ['split', 'sub']],
    'match_object': ['group', 'groups', 'groupdict', 'start', 'end', 'span']
}
For me, the biggest benefit of re.compile is being able to separate the definition of the regex from its use.
Even a simple expression such as 0|[1-9][0-9]* (an integer in base 10 without leading zeros) can be complex enough that you would rather not have to retype it, check for typos, and then recheck for typos again when you start debugging. Besides, it is nicer to refer to a variable name such as num or num_b10 than to 0|[1-9][0-9]*.
It is certainly possible to store strings and pass them to re.match; however, that is less readable:
num = "..."
# then, much later:
m = re.match(num, input)
Versus compiling:
num = re.compile("...")
# then, much later:
m = num.match(input)
Although they are quite close, the last line of the second version feels more natural and simpler when used repeatedly.
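A minimal sketch tying this together, using the integer pattern quoted earlier (num_b10 is an illustrative name):
import re

# Definition: named, compiled once, easy to proofread in isolation.
num_b10 = re.compile(r'0|[1-9][0-9]*')

# Use, much later:
m = num_b10.match('42 apples')
print(m.group())  # '42'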