ML in Action: Decision Trees
Project Address:
https://github.com/TheOneAC/ML.git
Dataset: `ML/ML_ation/tree`
Decision Trees
- Low computational complexity, insensitive to missing intermediate values, can handle irrelevant feature data
- May overfit (over-classification)
- Applicable to: numeric and nominal values
Decision tree pseudocode, createBranch:

```
check whether every item in the dataset belongs to the same class
if so:
    return the class tag
else:
    find the best feature for splitting the dataset
    split the dataset
    create a branch node
    for each subset, recursively call createBranch
    return the branch node
```

Recursion stops when every feature has been used, or when all records in the dataset belong to the same class.
Shannon Entropy
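For reference, Shannon entropy over the class proportions $p_k$ of a dataset $D$ is

$$H(D) = -\sum_{k} p_k \log_2 p_k,$$

which is 0 when every record has the same class and maximal when the classes are evenly mixed. The function below computes it from the last column of each record.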
```python
from math import log

def calcShannonEnt(dataSet):
    # entropy of the class labels (last column) in dataSet
    numEntries = len(dataSet)
    labelCounts = {}
    for featVec in dataSet:
        currentLabel = featVec[-1]
        if currentLabel not in labelCounts.keys():
            labelCounts[currentLabel] = 0
        labelCounts[currentLabel] += 1
    shannonEnt = 0.0
    for key in labelCounts:
        prob = float(labelCounts[key]) / numEntries
        shannonEnt -= prob * log(prob, 2)
    return shannonEnt
```
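The interactive sessions later in this post call `trees.createDataSet()`, which is not listed here. A minimal sketch consistent with the output shown further down (an assumption about the repository code, not a verbatim copy), plus a quick entropy check:

```python
def createDataSet():
    # toy dataset: two binary features, class label in the last column
    dataSet = [[1, 1, 'yes'],
               [1, 1, 'yes'],
               [1, 0, 'no'],
               [0, 1, 'no'],
               [0, 1, 'no']]
    labels = ['no surfacing', 'flippers']
    return dataSet, labels

myDat, labels = createDataSet()
print(calcShannonEnt(myDat))   # 2 'yes' vs 3 'no' -> roughly 0.971
```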
Splitting the dataset and choosing the best feature (minimum entropy)

```python
def splitDataSet(dataSet, axis, value):
    # return all records whose feature `axis` equals `value`, with that feature removed
    retDataSet = []
    for featVec in dataSet:
        if featVec[axis] == value:
            reduceFeatVec = featVec[:axis]
            reduceFeatVec.extend(featVec[axis + 1:])
            retDataSet.append(reduceFeatVec)
    return retDataSet

def chooseBestFeatureToSplit(dataSet):
    # pick the feature whose split yields the largest information gain
    numFeatures = len(dataSet[0]) - 1
    baseEntropy = calcShannonEnt(dataSet)
    bestInfoGain = 0.0
    bestFeature = -1
    for i in range(numFeatures):
        featList = [example[i] for example in dataSet]
        uniqueVals = set(featList)
        newEntropy = 0.0
        for value in uniqueVals:
            subDataSet = splitDataSet(dataSet, i, value)
            prob = len(subDataSet) / float(len(dataSet))
            newEntropy += prob * calcShannonEnt(subDataSet)
        infoGain = baseEntropy - newEntropy
        if infoGain > bestInfoGain:
            bestInfoGain = infoGain
            bestFeature = i
    return bestFeature
```
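A quick sanity check on the toy dataset sketched above (assumed): splitting on feature 0 yields the larger information gain, so it is chosen first, which matches the tree rooted at 'no surfacing' shown later.

```python
>>> splitDataSet(myDat, 0, 1)
[[1, 'yes'], [1, 'yes'], [0, 'no']]
>>> splitDataSet(myDat, 0, 0)
[[1, 'no'], [1, 'no']]
>>> chooseBestFeatureToSplit(myDat)
0
```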
When all features have been used but the class label still cannot be decided, the leaf's class is chosen by majority vote:

```python
import operator

def majorityCnt(classList):
    # return the class that occurs most often in classList
    classCount = {}
    for vote in classList:
        if vote not in classCount.keys():
            classCount[vote] = 0
        classCount[vote] += 1
    sortedClassCount = sorted(classCount.iteritems(), key=operator.itemgetter(1), reverse=True)
    return sortedClassCount[0][0]
```
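A trivial check (under Python 2, which the iteritems call above requires):

```python
>>> majorityCnt(['yes', 'no', 'no'])
'no'
```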
Creating the tree

```python
def createTree(dataSet, labels):
    classList = [example[-1] for example in dataSet]
    if classList.count(classList[0]) == len(classList):
        return classList[0]                 # all records share one class: leaf
    if len(dataSet[0]) == 1:
        return majorityCnt(classList)       # no features left: majority vote
    bestFeat = chooseBestFeatureToSplit(dataSet)
    bestFeatureLabel = labels[bestFeat]
    myTree = {bestFeatureLabel: {}}
    del(labels[bestFeat])                   # this feature is now consumed
    featValues = [example[bestFeat] for example in dataSet]
    uniqueVals = set(featValues)
    for value in uniqueVals:
        subLabels = labels[:]               # copy, so recursion does not clobber labels
        myTree[bestFeatureLabel][value] = createTree(splitDataSet(dataSet, bestFeat, value), subLabels)
    return myTree
```
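Note that createTree deletes entries from labels as features are consumed, so pass a copy if the original label list is still needed later. Assuming the toy dataset sketched above:

```python
>>> myDat, labels = trees.createDataSet()
>>> trees.createTree(myDat, labels[:])   # labels[:] keeps the original list intact
{'no surfacing': {0: 'no', 1: {'flippers': {0: 'no', 1: 'yes'}}}}
```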
Testing

```python
def classify(inputTree, featLabels, testVec):
    # walk the nested-dict tree until a leaf (non-dict) value is reached
    firstStr = inputTree.keys()[0]
    secondDict = inputTree[firstStr]
    featIndex = featLabels.index(firstStr)    # map the node label back to a feature index
    for key in secondDict.keys():
        if testVec[featIndex] == key:
            if type(secondDict[key]).__name__ == 'dict':
                classLabel = classify(secondDict[key], featLabels, testVec)
            else:
                classLabel = secondDict[key]
    return classLabel
```

```python
>>> import trees
>>> import treePlotter
>>> myDat, labels = trees.createDataSet()
>>> labels
['no surfacing', 'flippers']
>>> myTree = treePlotter.retrieveTree(0)
>>> myTree
{'no surfacing': {0: 'no', 1: {'flippers': {0: 'no', 1: 'yes'}}}}
>>> trees.classify(myTree, labels, [1, 0])
'no'
>>> trees.classify(myTree, labels, [1, 1])
'yes'
```
Storing and reloading the tree

```python
def storeTree(inputTree, filename):
    # serialize the tree to disk with pickle
    import pickle
    fw = open(filename, 'w')
    pickle.dump(inputTree, fw)
    fw.close()

def grabTree(filename):
    # load a tree previously written by storeTree
    import pickle
    fr = open(filename)
    return pickle.load(fr)
```
Test:

```python
#!/usr/bin/python
import trees

myDat, labels = trees.createDataSet()
myTree = trees.createTree(myDat, labels)
trees.storeTree(myTree, 'classifierStorage.txt')
print(trees.grabTree('classifierStorage.txt'))
```
Plotting the tree with Matplotlib

```python
#!/usr/bin/python
import matplotlib.pyplot as plt

# box and arrow styles for the annotations
decisionNode = dict(boxstyle="sawtooth", fc="0.8")
leafNode = dict(boxstyle="round4", fc="0.8")
arrow_args = dict(arrowstyle="<-")

def plotNode(nodeTxt, centerPt, parentPt, nodeType):
    # draw one annotated node, with an arrow from parentPt to centerPt
    createPlot.ax1.annotate(nodeTxt, xy=parentPt, xycoords="axes fraction",
                            xytext=centerPt, textcoords="axes fraction",
                            va="center", ha="center", bbox=nodeType, arrowprops=arrow_args)
```
Drawing the nodes

```python
def createPlot():
    fig = plt.figure(1, facecolor="white")
    fig.clf()
    createPlot.ax1 = plt.subplot(111, frameon=False)
    plotNode("a decision node", (0.5, 0.1), (0.1, 0.5), decisionNode)
    plotNode("a leaf node", (0.8, 0.1), (0.3, 0.8), leafNode)
    plt.show()
```
Run it from the Python interpreter like this:

```python
import treePlotter
treePlotter.createPlot()
```

The result is a simple figure with one decision node and one leaf node, each connected to its parent point by an arrow.
Extended test: lens.py
Project Address: `https://github.com/TheOneAC/ML.git`, dataset: `lens.txt` in `ML/ML_ation/tree`

```python
#!/usr/bin/python
import trees
import treePlotter

# each line of lenses.txt is a tab-separated record; the last field is the lens type
fr = open("lenses.txt")
lenses = [inst.strip().split('\t') for inst in fr.readlines()]
lensesLabels = ['age', 'prescript', 'astigmatic', 'tearRate']

lensesTree = trees.createTree(lenses, lensesLabels)
print(lensesTree)
treePlotter.createPlot(lensesTree)
```

The final line passes a tree to createPlot, which requires the full tree-plotting version of `createPlot(inTree)` from the book's treePlotter module rather than the two-node demo shown above.

Reposted from: https://www.cnblogs.com/zeroArn/p/6691287.html