I think what I want to do is a fairly common task, but I couldn't find any reference on the web. I have text with punctuation, and I want a list of the words.

"Hey, you - what are you doing here!?"

should become

['hey', 'you', 'what', 'are', 'you', 'doing', 'here']

But Python's str.split() only works with one argument, so after splitting on whitespace I have all the words with the punctuation still attached. Any ideas?
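For illustration (this snippet is mine, not part of the question), plain str.split() leaves the punctuation stuck to the words:

>>> "Hey, you - what are you doing here!?".split()
['Hey,', 'you', '-', 'what', 'are', 'you', 'doing', 'here!?']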


Current answer

A case for regular expressions:

import re
DATA = "Hey, you - what are you doing here!?"
print(re.findall(r"[\w']+", DATA))
# Prints ['Hey', 'you', 'what', 'are', 'you', 'doing', 'here']
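Note that this prints 'Hey' with a capital letter, while the expected output in the question is lower-case. If that matters, lower-casing the input first (a small tweak of mine, not part of the answer above) gives an exact match:

import re
DATA = "Hey, you - what are you doing here!?"
print(re.findall(r"[\w']+", DATA.lower()))
# Prints ['hey', 'you', 'what', 'are', 'you', 'doing', 'here']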

Other answers


Another way to achieve this is to use the Natural Language Toolkit (nltk).

import nltk
data = "Hey, you - what are you doing here!?"
word_tokens = nltk.tokenize.regexp_tokenize(data, r'\w+')
print(word_tokens)

This prints: ['Hey', 'you', 'what', 'are', 'you', 'doing', 'here']

The big disadvantage of this method is that you need to install the nltk package.

The benefit is that you can do a lot of fun stuff with the rest of the nltk package once you have your tokens.
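For example (a sketch of mine, not something the answer above shows), counting token frequencies with nltk.FreqDist works directly on the tokens and needs no extra corpus downloads:

import nltk

data = "Hey, you - what are you doing here!?"
word_tokens = nltk.tokenize.regexp_tokenize(data, r'\w+')

# FreqDist behaves like collections.Counter
freq = nltk.FreqDist(token.lower() for token in word_tokens)
print(freq.most_common(3))
# [('you', 2), ('hey', 1), ('what', 1)]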

I like pprzemek's solution because it doesn't assume that the delimiters are single characters and it doesn't try to leverage a regex (which wouldn't work well if the list of separators got crazy long).

For clarity, here is a more readable version of the above solution:

def split_string_on_multiple_separators(input_string, separators):
    # start with the whole string as the only fragment
    buffer = [input_string]
    for sep in separators:
        strings = buffer
        buffer = []  # reset the buffer
        for s in strings:
            # split every fragment accumulated so far on the current separator
            buffer = buffer + s.split(sep)

    return buffer
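A minimal usage sketch (the call below is my own example, not from the original answer); note that adjacent separators, or a separator at the edge of the string, leave empty strings in the result that you may want to filter out:

parts = split_string_on_multiple_separators("a,b;c|d", [',', ';', '|'])
print(parts)
# ['a', 'b', 'c', 'd']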

Here's an answer with some explanation.

st = "Hey, you - what are you doing here!?"

# replace every non-alphanumeric character with a space, then join
new_string = ''.join([x.replace(x, ' ') if not x.isalnum() else x for x in st])
# output of new_string
'Hey  you   what are you doing here  '

# str.split() removes all the empty strings when no separator is provided
new_list = new_string.split()

# output of new_list
['Hey', 'you', 'what', 'are', 'you', 'doing', 'here']

# we can join the list to get back a complete string without any non-alphanumeric characters
' '.join(new_list)
# output
'Hey you what are you doing here'

Or we can do the same thing in a single line:

(''.join([x.replace(x, ' ') if not x.isalnum() else x for x in st])).split()

# output
['Hey', 'you', 'what', 'are', 'you', 'doing', 'here']

Updated answer

I had to come up with my own solution, since everything I had tested so far failed at some point.

>>> import re
>>> def split_words(text):
...     rgx = re.compile(r"((?:(?<!'|\w)(?:\w-?'?)+(?<!-))|(?:(?<='|\w)(?:\w-?'?)+(?=')))")
...     return rgx.findall(text)

It seems to be working fine, at least for the examples below.

>>> split_words("The hill-tops gleam in morning's spring.")
['The', 'hill-tops', 'gleam', 'in', "morning's", 'spring']
>>> split_words("I'd say it's James' 'time'.")
["I'd", 'say', "it's", "James'", 'time']
>>> split_words("tic-tac-toe's tic-tac-toe'll tic-tac'tic-tac we'll--if tic-tac")
["tic-tac-toe's", "tic-tac-toe'll", "tic-tac'tic-tac", "we'll", 'if', 'tic-tac']
>>> split_words("google.com email@google.com split_words")
['google', 'com', 'email', 'google', 'com', 'split_words']
>>> split_words("Kurt Friedrich Gödel (/ˈɡɜːrdəl/;[2] German: [ˈkʊɐ̯t ˈɡøːdl̩] (listen);")
['Kurt', 'Friedrich', 'Gödel', 'ˈɡɜːrdəl', '2', 'German', 'ˈkʊɐ', 't', 'ˈɡøːdl', 'listen']
>>> split_words("April 28, 1906 – January 14, 1978) was an Austro-Hungarian-born Austrian...")
['April', '28', '1906', 'January', '14', '1978', 'was', 'an', 'Austro-Hungarian-born', 'Austrian']