Can I extract the underlying decision rules (or "decision paths") from a trained decision tree as a textual list?
Something like:
if A>0.4 then if B<0.2 then if C>0.8 then class='X'
Current answer
Here is a function that generates Python code from a decision tree by converting the output of export_text:
import string
from sklearn.tree import export_text

def export_py_code(tree, feature_names, max_depth=100, spacing=4):
    if spacing < 2:
        raise ValueError('spacing must be > 1')

    # Clean up feature names (for correctness)
    nums = string.digits
    alnums = string.ascii_letters + nums
    clean = lambda s: ''.join(c if c in alnums else '_' for c in s)
    features = [clean(x) for x in feature_names]
    features = ['_'+x if x[0] in nums else x for x in features if x]
    if len(set(features)) != len(feature_names):
        raise ValueError('invalid feature names')

    # First: export tree to text
    res = export_text(tree, feature_names=features,
                      max_depth=max_depth,
                      decimals=6,
                      spacing=spacing-1)

    # Second: generate Python code from the text
    skip, dash = ' '*spacing, '-'*(spacing-1)
    code = 'def decision_tree({}):\n'.format(', '.join(features))
    for line in repr(tree).split('\n'):
        code += skip + "# " + line + '\n'
    for line in res.split('\n'):
        line = line.rstrip().replace('|', ' ')
        if '<' in line or '>' in line:
            line, val = line.rsplit(maxsplit=1)
            line = line.replace(' ' + dash, 'if')
            line = '{} {:g}:'.format(line, float(val))
        else:
            line = line.replace(' {} class:'.format(dash), 'return')
        code += skip + line + '\n'
    return code
Example usage:
res = export_py_code(tree, feature_names=names, spacing=4)
print(res)
Sample output:
def decision_tree(f1, f2, f3):
    # DecisionTreeClassifier(class_weight=None, criterion='gini', max_depth=3,
    #                        max_features=None, max_leaf_nodes=None,
    #                        min_impurity_decrease=0.0, min_impurity_split=None,
    #                        min_samples_leaf=1, min_samples_split=2,
    #                        min_weight_fraction_leaf=0.0, presort=False,
    #                        random_state=42, splitter='best')
    if f1 <= 12.5:
        if f2 <= 17.5:
            if f1 <= 10.5:
                return 2
            if f1 > 10.5:
                return 3
        if f2 > 17.5:
            if f2 <= 22.5:
                return 1
            if f2 > 22.5:
                return 1
    if f1 > 12.5:
        if f1 <= 17.5:
            if f3 <= 23.5:
                return 2
            if f3 > 23.5:
                return 3
        if f1 > 17.5:
            if f1 <= 25:
                return 1
            if f1 > 25:
                return 2
The example above was generated with names = ['f'+str(j+1) for j in range(NUM_FEATURES)].
One handy feature is that it can generate smaller file sizes with reduced spacing. Just set spacing=2.
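As a hedged illustration (not part of the original answer) of how the generated source can then be used, assuming `tree` and `names` from the example above and the hypothetical feature names f1, f2, f3 from the sample output, the string can be written to a module or executed in memory:

# Minimal sketch: run the code string produced by export_py_code.
code = export_py_code(tree, feature_names=names, spacing=2)

# Persist it as a module for later reuse ...
with open('decision_tree.py', 'w') as f:
    f.write(code)

# ... or execute it in memory and call the resulting function directly.
namespace = {}
exec(code, namespace)
print(namespace['decision_tree'](f1=11.0, f2=16.0, f3=20.0))  # hypothetical inputs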
Other answers
I believe that this answer is more correct than the other answers here:
from sklearn.tree import _tree

def tree_to_code(tree, feature_names):
    tree_ = tree.tree_
    feature_name = [
        feature_names[i] if i != _tree.TREE_UNDEFINED else "undefined!"
        for i in tree_.feature
    ]
    print("def tree({}):".format(", ".join(feature_names)))

    def recurse(node, depth):
        indent = "  " * depth
        if tree_.feature[node] != _tree.TREE_UNDEFINED:
            name = feature_name[node]
            threshold = tree_.threshold[node]
            print("{}if {} <= {}:".format(indent, name, threshold))
            recurse(tree_.children_left[node], depth + 1)
            print("{}else:  # if {} > {}".format(indent, name, threshold))
            recurse(tree_.children_right[node], depth + 1)
        else:
            print("{}return {}".format(indent, tree_.value[node]))

    recurse(0, 1)
This prints out a valid Python function. Here is an example output for a tree that is trying to return its input, a number between 0 and 10:
def tree(f0):
  if f0 <= 6.0:
    if f0 <= 1.5:
      return [[ 0.]]
    else:  # if f0 > 1.5
      if f0 <= 4.5:
        if f0 <= 3.5:
          return [[ 3.]]
        else:  # if f0 > 3.5
          return [[ 4.]]
      else:  # if f0 > 4.5
        return [[ 5.]]
  else:  # if f0 > 6.0
    if f0 <= 8.5:
      if f0 <= 7.5:
        return [[ 7.]]
      else:  # if f0 > 7.5
        return [[ 8.]]
    else:  # if f0 > 8.5
      return [[ 9.]]
Here are some stumbling blocks that I see in other answers:
- Using tree_.threshold == -2 to decide whether a node is a leaf is not a good idea. What if it is a real decision node with a threshold of -2? Instead, you should look at tree.feature or tree.children_* (see the leaf-check sketch after this list).
- The line features = [feature_names[i] for i in tree_.feature] crashed with my version of sklearn, because some values of tree_.feature are -2 (specifically for leaf nodes).
- There is no need to have multiple if statements in the recursive function; just one is fine.
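For reference, a minimal, hedged sketch (not from the original answer) of a leaf test based on the children arrays rather than the threshold value; `clf` stands for any fitted DecisionTreeClassifier:

from sklearn.tree import _tree

def is_leaf(tree_, node):
    # A leaf has no children; sklearn marks missing children with
    # _tree.TREE_LEAF (-1), so this does not depend on threshold == -2.
    return tree_.children_left[node] == _tree.TREE_LEAF

# Example: count the leaves of a fitted classifier `clf` (hypothetical name)
# n_leaves = sum(is_leaf(clf.tree_, i) for i in range(clf.tree_.node_count))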
Just use the function from sklearn.tree, like this:
from sklearn.tree import export_graphviz

export_graphviz(tree,
                out_file="tree.dot",
                feature_names=tree.columns)  # or just ["petal length", "petal width"]
Then look in your project folder for the file tree.dot, copy all of its content, paste it at http://www.webgraphviz.com/, and generate your graph :)
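If you prefer to stay offline, a hedged alternative (assuming the graphviz Python package and the Graphviz binaries are installed) is to render the .dot file locally:

import graphviz

# Read the exported .dot file and render it to tree.png next to it.
with open("tree.dot") as f:
    dot_source = f.read()

graphviz.Source(dot_source).render("tree", format="png", cleanup=True)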
Scikit-learn introduced a new method called export_text in version 0.21 (May 2019) for extracting the rules from a tree. Documentation here. It is no longer necessary to create a custom function.
Once you have fitted your model, you only need two lines of code. First, import export_text:
from sklearn.tree import export_text
Second, create an object that will contain the rules. To make the rules more readable, use the feature_names argument and pass it a list of your feature names. For example, if your model is called model and your features are named in a dataframe called X_train, you can create an object called tree_rules:
tree_rules = export_text(model, feature_names=list(X_train.columns))
Then just print or save tree_rules. The output will look like this:
|--- Age <= 0.63
| |--- EstimatedSalary <= 0.61
| | |--- Age <= -0.16
| | | |--- class: 0
| | |--- Age > -0.16
| | | |--- EstimatedSalary <= -0.06
| | | | |--- class: 0
| | | |--- EstimatedSalary > -0.06
| | | | |--- EstimatedSalary <= 0.40
| | | | | |--- EstimatedSalary <= 0.03
| | | | | | |--- class: 1
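For context, here is a minimal, hedged sketch of the setup this assumes, with made-up column names and data mirroring the Age/EstimatedSalary features shown in the output above:

import pandas as pd
from sklearn.tree import DecisionTreeClassifier, export_text

# Toy data with made-up values; in practice X_train comes from your own dataset.
X_train = pd.DataFrame({
    "Age": [0.1, -0.3, 0.7, 0.9, -0.2, 0.5],
    "EstimatedSalary": [-0.1, 0.2, 0.5, -0.4, 0.6, 0.1],
})
y_train = [0, 0, 1, 1, 0, 1]

model = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X_train, y_train)
tree_rules = export_text(model, feature_names=list(X_train.columns))
print(tree_rules)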
You can now use export_text:
from sklearn.tree import export_text
r = export_text(loan_tree, feature_names=(list(X_train.columns)))
print(r)
A complete example from sklearn:
from sklearn.datasets import load_iris
from sklearn.tree import DecisionTreeClassifier
from sklearn.tree import export_text
iris = load_iris()
X = iris['data']
y = iris['target']
decision_tree = DecisionTreeClassifier(random_state=0, max_depth=2)
decision_tree = decision_tree.fit(X, y)
r = export_text(decision_tree, feature_names=iris['feature_names'])
print(r)
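To recover the actual "decision path" taken by individual samples, which is closer to what the question asks for, here is a hedged follow-up sketch using the same fitted iris tree and sklearn's decision_path/apply methods:

# Node indicators for the first two iris samples; decision_path returns a
# sparse matrix with one row per sample and one column per tree node.
node_indicator = decision_tree.decision_path(X[:2])
leaf_ids = decision_tree.apply(X[:2])

for sample_id in range(2):
    # Nodes visited by this sample, from the root down to its leaf.
    visited = node_indicator.indices[
        node_indicator.indptr[sample_id]:node_indicator.indptr[sample_id + 1]
    ]
    print("sample {} visits nodes {} and ends in leaf {}".format(
        sample_id, visited.tolist(), leaf_ids[sample_id]))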