Can I extract the underlying decision rules (or "decision paths") from a trained decision tree as a textual list?
Something like:
if A>0.4 then if B<0.2 then if C>0.8 then class='X'
Current answer
I modified the code submitted by Zelazny7 to print some pseudocode:
def get_code(tree, feature_names):
    left = tree.tree_.children_left
    right = tree.tree_.children_right
    threshold = tree.tree_.threshold
    features = [feature_names[i] for i in tree.tree_.feature]
    value = tree.tree_.value

    def recurse(left, right, threshold, features, node):
        if threshold[node] != -2:  # -2 marks a leaf (no split)
            print("if ( " + features[node] + " <= " + str(threshold[node]) + " ) {")
            if left[node] != -1:
                recurse(left, right, threshold, features, left[node])
            print("} else {")
            if right[node] != -1:
                recurse(left, right, threshold, features, right[node])
            print("}")
        else:
            print("return " + str(value[node]))

    recurse(left, right, threshold, features, 0)
If you call get_code(dt, df.columns) on the same example, you get:
if ( col1 <= 0.5 ) {
    return [[ 1. 0.]]
} else {
    if ( col2 <= 4.5 ) {
        return [[ 0. 1.]]
    } else {
        if ( col1 <= 2.5 ) {
            return [[ 1. 0.]]
        } else {
            return [[ 0. 1.]]
        }
    }
}
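For context, here is a minimal sketch of the kind of setup this example assumes (the DataFrame df and fitted tree dt are not shown in the original answer, so the data below is purely illustrative; the printed rules will depend on your actual data):

import pandas as pd
from sklearn.tree import DecisionTreeClassifier

# Hypothetical toy data standing in for the original example's df.
df = pd.DataFrame({'col1': [0, 1, 2, 3], 'col2': [3, 4, 5, 6]})
y = [0, 1, 0, 1]

dt = DecisionTreeClassifier(random_state=0).fit(df, y)
get_code(dt, df.columns)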
Other answers
Here is my way of extracting the decision rules in a form that can be used directly in SQL, so that the data can be grouped by node. (Following the approaches of previous posters.)
The result is a sequence of CASE clauses that can be copied into an SQL statement, e.g.:
SELECT COALESCE(CASE WHEN <conditions> THEN <NodeA>,
                CASE WHEN <conditions> THEN <NodeB>,
                ...)
FROM <table or view>
import numpy as np
import pickle

feature_names = .............  # fill in the model's feature names here
features = [feature_names[i] for i in range(len(feature_names))]
clf = pickle.loads(trained_model)
impurity = clf.tree_.impurity
importances = clf.feature_importances_

SqlOut = ""
global ContsNode
global Path
ContsNode = []
Path = []
global Results
Results = []

def print_decision_tree(tree, feature_names, offset_unit='    '):
    left = tree.tree_.children_left
    right = tree.tree_.children_right
    threshold = tree.tree_.threshold
    value = tree.tree_.value
    if feature_names is None:
        features = ['f%d' % i for i in tree.tree_.feature]
    else:
        features = [feature_names[i] for i in tree.tree_.feature]

    def recurse(left, right, threshold, features, node, depth=0, ParentNode=0, IsElse=0):
        global ContsNode
        global Path
        global Results
        global LeftParents
        LeftParents = []
        global RightParents
        RightParents = []
        for i in range(len(left)):  # initialize one slot per node
            LeftParents.append(-1)
            RightParents.append(-1)
            ContsNode.append("")
            Path.append("")
        for i in range(len(left)):  # i is the node index
            if left[i] == -1 and right[i] == -1:  # leaf node: emit a CASE clause
                if LeftParents[i] >= 0:
                    if Path[LeftParents[i]] > " ":
                        Path[i] = Path[LeftParents[i]] + " AND " + ContsNode[LeftParents[i]]
                    else:
                        Path[i] = ContsNode[LeftParents[i]]
                if RightParents[i] >= 0:
                    if Path[RightParents[i]] > " ":
                        Path[i] = Path[RightParents[i]] + " AND not " + ContsNode[RightParents[i]]
                    else:
                        Path[i] = " not " + ContsNode[RightParents[i]]
                Results.append(" case when " + Path[i] + " then '" + "{:4d}".format(i)
                               + " " + "{:2.2f}".format(impurity[i]) + " " + Path[i][0:180] + "'")
            else:  # internal node: extend the path and hand it down to the children
                if LeftParents[i] >= 0:
                    if Path[LeftParents[i]] > " ":
                        Path[i] = Path[LeftParents[i]] + " AND " + ContsNode[LeftParents[i]]
                    else:
                        Path[i] = ContsNode[LeftParents[i]]
                if RightParents[i] >= 0:
                    if Path[RightParents[i]] > " ":
                        Path[i] = Path[RightParents[i]] + " AND not " + ContsNode[RightParents[i]]
                    else:
                        Path[i] = " not " + ContsNode[RightParents[i]]
                if left[i] != -1:
                    LeftParents[left[i]] = i
                if right[i] != -1:
                    RightParents[right[i]] = i
                ContsNode[i] = "( " + features[i] + " <= " + str(threshold[i]) + " ) "

    recurse(left, right, threshold, features, 0, 0, 0, 0)

print_decision_tree(clf, features)
SqlOut = ""
for i in range(len(Results)):
    SqlOut = SqlOut + Results[i] + " end," + chr(13) + chr(10)
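To turn the accumulated clauses into one runnable query, you still have to wrap SqlOut in a SELECT yourself; a minimal sketch, assuming a hypothetical table name my_table whose columns match the feature names:

# my_table is a placeholder, not part of the original answer.
full_sql = ("SELECT COALESCE(\n"
            + SqlOut.rstrip(",\r\n")
            + "\n) AS node\nFROM my_table")
print(full_sql)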
The code below is my approach under Anaconda Python 2.7, plus the package "pydot-ng", to produce a PDF file with the decision rules. Hope it helps.
from sklearn import tree

clf = tree.DecisionTreeClassifier(max_leaf_nodes=n)
clf_ = clf.fit(X, data_y)

feature_names = X.columns
class_name = clf_.classes_.astype(int).astype(str)

def output_pdf(clf_, name):
    from sklearn import tree
    from sklearn.externals.six import StringIO
    import pydot_ng as pydot
    dot_data = StringIO()
    tree.export_graphviz(clf_, out_file=dot_data,
                         feature_names=feature_names,
                         class_names=class_name,
                         filled=True, rounded=True,
                         special_characters=True,
                         node_ids=1,)
    graph = pydot.graph_from_dot_data(dot_data.getvalue())
    graph.write_pdf("%s.pdf" % name)

output_pdf(clf_, name='filename%s' % n)
The output is a tree diagram of the decision rules (shown as an image in the original answer).
Nowadays you can use export_text:
from sklearn.tree import export_text
r = export_text(loan_tree, feature_names=(list(X_train.columns)))
print(r)
A complete example from the sklearn documentation:
from sklearn.datasets import load_iris
from sklearn.tree import DecisionTreeClassifier
from sklearn.tree import export_text
iris = load_iris()
X = iris['data']
y = iris['target']
decision_tree = DecisionTreeClassifier(random_state=0, max_depth=2)
decision_tree = decision_tree.fit(X, y)
r = export_text(decision_tree, feature_names=iris['feature_names'])
print(r)
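For reference, the printed rules for this tree look like the following (matching the example in the export_text documentation; exact thresholds can vary by sklearn version):

|--- petal width (cm) <= 0.80
|   |--- class: 0
|--- petal width (cm) >  0.80
|   |--- petal width (cm) <= 1.75
|   |   |--- class: 1
|   |--- petal width (cm) >  1.75
|   |   |--- class: 2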
Here is a function that prints the rules of a scikit-learn decision tree under Python 3, indenting the conditional blocks to make the structure more readable:
def print_decision_tree(tree, feature_names=None, offset_unit='    '):
    '''Prints a textual representation of the rules of a decision tree

    tree: scikit-learn representation of tree
    feature_names: list of feature names. They are set to f1,f2,f3,... if not specified
    offset_unit: a string of offset of the conditional block'''

    left = tree.tree_.children_left
    right = tree.tree_.children_right
    threshold = tree.tree_.threshold
    value = tree.tree_.value
    if feature_names is None:
        features = ['f%d' % i for i in tree.tree_.feature]
    else:
        features = [feature_names[i] for i in tree.tree_.feature]

    def recurse(left, right, threshold, features, node, depth=0):
        offset = offset_unit * depth
        if threshold[node] != -2:  # internal node
            print(offset + "if ( " + features[node] + " <= " + str(threshold[node]) + " ) {")
            if left[node] != -1:
                recurse(left, right, threshold, features, left[node], depth + 1)
            print(offset + "} else {")
            if right[node] != -1:
                recurse(left, right, threshold, features, right[node], depth + 1)
            print(offset + "}")
        else:  # leaf
            print(offset + "return " + str(value[node]))

    recurse(left, right, threshold, features, 0, 0)
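A minimal usage sketch, reusing the iris data from the export_text example above (the variable names here are illustrative):

from sklearn.datasets import load_iris
from sklearn.tree import DecisionTreeClassifier

iris = load_iris()
clf = DecisionTreeClassifier(random_state=0, max_depth=2).fit(iris['data'], iris['target'])
print_decision_tree(clf, feature_names=iris['feature_names'])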
Thanks for the wonderful solution of @paulkerfeld. On top of his solution, for all those who want a serialized version of trees, just use tree.threshold, tree.children_left, tree.children_right, tree.feature and tree.value. Since the leaves have no splits and hence no feature names and children, their placeholders in tree.feature and tree.children_left/tree.children_right are _tree.TREE_UNDEFINED and _tree.TREE_LEAF respectively. Every split is assigned a unique index by depth-first search. Note that tree.value is of shape [n, 1, 1].
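A minimal sketch of such a serialization, assuming a fitted classifier clf (the dictionary layout is my own choice, not prescribed by the answer):

from sklearn.tree import _tree

def serialize_tree(clf):
    t = clf.tree_
    return {
        'children_left': t.children_left.tolist(),   # _tree.TREE_LEAF (-1) at leaves
        'children_right': t.children_right.tolist(),
        'feature': t.feature.tolist(),               # _tree.TREE_UNDEFINED (-2) at leaves
        'threshold': t.threshold.tolist(),
        'value': t.value.tolist(),
    }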