I am looking for an NLP project, in any programming language (though Python would be my preference).
I want to take two documents and determine how similar they are.
Current answer
Same as @larsman, but with some preprocessing:
import nltk, string
from sklearn.feature_extraction.text import TfidfVectorizer

nltk.download('punkt')  # if necessary...

stemmer = nltk.stem.porter.PorterStemmer()
remove_punctuation_map = dict((ord(char), None) for char in string.punctuation)

def stem_tokens(tokens):
    return [stemmer.stem(item) for item in tokens]

def normalize(text):
    '''remove punctuation, lowercase, stem'''
    return stem_tokens(nltk.word_tokenize(text.lower().translate(remove_punctuation_map)))

vectorizer = TfidfVectorizer(tokenizer=normalize, stop_words='english')

def cosine_sim(text1, text2):
    tfidf = vectorizer.fit_transform([text1, text2])
    return (tfidf * tfidf.T).toarray()[0, 1]

print(cosine_sim('a little bird', 'a little bird'))
print(cosine_sim('a little bird', 'a little bird chirps'))
print(cosine_sim('a little bird', 'a big dog barks'))
Other answers
You might want to try the online service for cosine document similarity at http://www.scurtu.it/documentSimilarity.html
import json
import urllib.parse
import urllib.request

API_URL = "http://www.scurtu.it/apis/documentSimilarity"

inputDict = {}
inputDict['doc1'] = 'Document with some text'
inputDict['doc2'] = 'Other document with some text'

params = urllib.parse.urlencode(inputDict).encode('utf-8')
f = urllib.request.urlopen(API_URL, params)
response = f.read()
responseObject = json.loads(response)
print(responseObject)
Here is a little app to get you started...
import difflib as dl

a = open('file').read()
b = open('file1').read()

sim = dl.get_close_matches

s = 0
wa = a.split()
wb = b.split()

for i in wa:
    if sim(i, wb):
        s += 1

n = float(s) / float(len(wa))
print('%d%% similarity' % int(n * 100))
Syntactic similarity: there are 3 simple ways to detect similarity.
Word2Vec, GloVe, and TF-IDF or CountVectorizer.
Semantic similarity: you can use BERT embeddings and try different word-pooling strategies to get document embeddings, then apply cosine similarity to the document embeddings (see the sketch below).
A more advanced approach is to use BERTScore to obtain similarity.
Research paper link: https://arxiv.org/abs/1904.09675
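For the semantic-similarity route, here is a minimal sketch, assuming the sentence-transformers package and the all-MiniLM-L6-v2 checkpoint (both are choices made for this illustration, not part of the original answer):

from sentence_transformers import SentenceTransformer, util

# Load a pretrained sentence-embedding model (any comparable model can be swapped in).
model = SentenceTransformer('all-MiniLM-L6-v2')

doc1 = "The cat sat quietly on the warm windowsill."
doc2 = "A cat was resting on the sunny window ledge."

# Encode both documents into fixed-length vectors.
embeddings = model.encode([doc1, doc2])

# Cosine similarity between the two document embeddings.
score = util.cos_sim(embeddings[0], embeddings[1])
print(float(score))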
If you are looking for something very accurate, you need a better tool than tf-idf. The Universal Sentence Encoder is one of the most accurate encoders for finding the similarity between any two pieces of text. Google provides pretrained models that you can use for your own application without having to train anything from scratch. First, you have to install tensorflow and tensorflow-hub:
pip install tensorflow
pip install tensorflow_hub
The code below lets you convert any text into a fixed-length vector representation; you can then use the dot product to find the similarity between them.
import numpy as np
import tensorflow as tf  # this example uses the TF1-style API (tf.placeholder, tf.Session)
import tensorflow_hub as hub
module_url = "https://tfhub.dev/google/universal-sentence-encoder/1?tf-hub-format=compressed"
# Import the Universal Sentence Encoder's TF Hub module
embed = hub.Module(module_url)
# sample text
messages = [
# Smartphones
"My phone is not good.",
"Your cellphone looks great.",
# Weather
"Will it snow tomorrow?",
"Recently a lot of hurricanes have hit the US",
# Food and health
"An apple a day, keeps the doctors away",
"Eating strawberries is healthy",
]
similarity_input_placeholder = tf.placeholder(tf.string, shape=(None))
similarity_message_encodings = embed(similarity_input_placeholder)
with tf.Session() as session:
    session.run(tf.global_variables_initializer())
    session.run(tf.tables_initializer())
    message_embeddings_ = session.run(
        similarity_message_encodings,
        feed_dict={similarity_input_placeholder: messages})

corr = np.inner(message_embeddings_, message_embeddings_)
print(corr)
heatmap(messages, messages, corr)  # heatmap() is defined in the plotting code below
The code for plotting:
import matplotlib.pyplot as plt
import numpy as np

def heatmap(x_labels, y_labels, values):
    fig, ax = plt.subplots()
    im = ax.imshow(values)

    # We want to show all ticks...
    ax.set_xticks(np.arange(len(x_labels)))
    ax.set_yticks(np.arange(len(y_labels)))
    # ... and label them with the respective list entries
    ax.set_xticklabels(x_labels)
    ax.set_yticklabels(y_labels)

    # Rotate the tick labels and set their alignment.
    plt.setp(ax.get_xticklabels(), rotation=45, ha="right", fontsize=10,
             rotation_mode="anchor")

    # Loop over data dimensions and create text annotations.
    for i in range(len(y_labels)):
        for j in range(len(x_labels)):
            text = ax.text(j, i, "%.2f" % values[i, j],
                           ha="center", va="center", color="w",
                           fontsize=6)

    fig.tight_layout()
    plt.show()
The result will be a heatmap of the pairwise similarities.
As you can see, the highest similarity is between each text and itself, and between texts that are close in meaning.
Important: the first time you run the code it will be slow, because it needs to download the model. If you want to keep it from downloading the model again and use the local copy instead, you have to create a folder for the cache, add it to the environment variable, and then after the first run use that path:
import os

tf_hub_cache_dir = "universal_encoder_cached/"
os.environ["TFHUB_CACHE_DIR"] = tf_hub_cache_dir
# pointing to the folder inside cache dir, it will be unique on your system
module_url = tf_hub_cache_dir+"/d8fbeb5c580e50f975ef73e80bebba9654228449/"
embed = hub.Module(module_url)
More information: https://tfhub.dev/google/universal-sentence-encoder/2
Generally, the cosine similarity between two documents is used as a measure of document similarity. In Java, you can use Lucene (if your collection is pretty large) or LingPipe to do this. The basic concept is to count the terms in every document and calculate the dot product of the term vectors. The libraries provide several improvements over this general approach, e.g. using inverse document frequencies and calculating tf-idf vectors. If you are looking to do something complex, LingPipe also provides methods to calculate LSA similarity between documents, which gives better results than cosine similarity. For Python, you can use NLTK.
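As a rough Python illustration of that basic concept (counting terms and taking the dot product of the term vectors, normalised to a cosine; a library such as scikit-learn, Lucene, or LingPipe will do this far better in practice):

import math
from collections import Counter

def cosine_similarity(doc1, doc2):
    """Cosine similarity between two documents using raw term counts."""
    v1, v2 = Counter(doc1.lower().split()), Counter(doc2.lower().split())
    # Dot product over the terms the two documents share.
    dot = sum(v1[t] * v2[t] for t in v1.keys() & v2.keys())
    norm1 = math.sqrt(sum(c * c for c in v1.values()))
    norm2 = math.sqrt(sum(c * c for c in v2.values()))
    return dot / (norm1 * norm2) if norm1 and norm2 else 0.0

print(cosine_similarity("the quick brown fox", "the quick red fox"))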