Decision Trees: Algorithm Principles and Code
A decision tree can take an unfamiliar dataset and extract a series of rules from it. This process, in which the machine builds rules from a dataset, is the machine-learning process itself. Let's work through a small example:

We want to decide whether a creature is a fish from two features: "no surfacing" (can it survive without coming to the surface of the water?) and "flippers" (does it have flippers?).
We first introduce the concepts of information gain and information entropy.
Information entropy: to compute the entropy, we take the expected value of the information carried by every possible value of every class:

$H(X) = -\sum_{i=1}^{n} p(x_i) \log_2 p(x_i)$

where $p(x_i)$ is the probability that class $x_i$ occurs.
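As a quick worked example: a dataset of five samples with two 'yes' labels and three 'no' labels has entropy $H = -\frac{2}{5}\log_2\frac{2}{5} - \frac{3}{5}\log_2\frac{3}{5} \approx 0.971$.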
Conditional entropy (the uncertainty remaining in random variable Y once random variable X is known):

$H(Y|X) = \sum_{i=1}^{n} p(x_i)\,H(Y|X=x_i)$
Information gain (the change in information before and after splitting the dataset; put simply, information entropy minus conditional entropy):

$g(Y,X) = H(Y) - H(Y|X)$
Code implementation:
Loading the data:
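The loading code appeared as a screenshot in the original post; below is a minimal sketch of the createDataSet function that the test calls further down assume. The exact rows are an assumption chosen to match the fish example, with two binary features plus a class label per row:

def createDataSet():
    # assumed toy data: [no surfacing, flippers, class label]
    dataSet = [[1, 1, 'yes'],
               [1, 1, 'yes'],
               [1, 0, 'no'],
               [0, 1, 'no'],
               [0, 1, 'no']]
    labels = ['no surfacing', 'flippers']
    return dataSet, labels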
Computing the base entropy:
from math import log

def calcShannonEnt(dataSet):
    numEntries = len(dataSet)
    labelCounts = {}
    for featVec in dataSet:
        currentLabel = featVec[-1]              # the class label is the last column
        if currentLabel not in labelCounts.keys():
            labelCounts[currentLabel] = 0
        labelCounts[currentLabel] += 1
    shannonEnt = 0.0
    for key in labelCounts:
        prob = float(labelCounts[key]) / numEntries
        shannonEnt -= prob * log(prob, 2)       # accumulate -p * log2(p)
    return shannonEnt

Splitting the dataset:

def splitDataSet(dataSet, axis, value):
    # dataSet: the data to split; axis: index of the feature to split on;
    # value: the feature value the returned rows must match
    retDataSet = []
    for featVec in dataSet:
        if featVec[axis] == value:
            reduceFeatVec = featVec[:axis]            # features before axis (axis itself excluded)
            reduceFeatVec.extend(featVec[axis + 1:])  # features after axis
            retDataSet.append(reduceFeatVec)          # the row minus the split feature
    return retDataSet
Test data and results:
splitDataSet(myDat, 0, 1): myDat is the dataset, 0 is the index of the feature to split on, and 1 keeps the rows whose first feature equals 1.
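Assuming the sketched createDataSet above, the calls give:

myDat, labels = createDataSet()
print(calcShannonEnt(myDat))      # 0.9709505944546686
print(splitDataSet(myDat, 0, 1))  # [[1, 'yes'], [1, 'yes'], [0, 'no']]
print(splitDataSet(myDat, 0, 0))  # [[1, 'no'], [1, 'no']]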
Next we compute the conditional entropy for each feature and subtract it from the base entropy to get that feature's information gain; the feature with the largest information gain is the best one to split the dataset on.
def chooseBestFeatureToSplit(dataSet):
    numFeatures = len(dataSet[0]) - 1        # the last column is the class label
    baseEntropy = calcShannonEnt(dataSet)    # the original Shannon entropy
    bestInfoGain = 0.0
    bestFeature = -1
    for i in range(numFeatures):
        # build the list of values feature i takes, then deduplicate
        featList = [example[i] for example in dataSet]
        uniqueVals = set(featList)
        newEntropy = 0.0                     # conditional entropy accumulator
        for value in uniqueVals:
            subDataSet = splitDataSet(dataSet, i, value)
            prob = len(subDataSet) / float(len(dataSet))     # p(feature i == value)
            newEntropy += prob * calcShannonEnt(subDataSet)  # weighted subset entropy
        infoGain = baseEntropy - newEntropy  # information gain for feature i
        if infoGain > bestInfoGain:
            bestInfoGain = infoGain          # keep the largest gain seen so far
            bestFeature = i                  # and the best feature to split on
    return bestFeature

Test:
dataSet, labels = createDataSet()
print(dataSet)
print(chooseBestFeatureToSplit(dataSet))

Output: with the toy fish data this prints 0, i.e. "no surfacing" yields the largest information gain and becomes the root split.
Majority voting (used when the features are exhausted but a leaf still contains a mix of classes):
import operator

def majorityCnt(classList):
    classCount = {}
    for vote in classList:
        if vote not in classCount.keys():
            classCount[vote] = 0
        classCount[vote] += 1                # tally this class
    # sort the classes by count, largest first
    sortedClassCount = sorted(classCount.items(), key=operator.itemgetter(1), reverse=True)
    return sortedClassCount[0][0]            # the majority class
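A quick sanity check:

print(majorityCnt(['yes', 'no', 'yes']))   # prints 'yes', the majority class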
Building the tree:
def createTree(dataSet, labels):
    classList = [example[-1] for example in dataSet]
    if classList.count(classList[0]) == len(classList):
        return classList[0]                  # all samples agree: return that class
    if len(dataSet[0]) == 1:
        return majorityCnt(classList)        # no features left: majority vote
    bestFeat = chooseBestFeatureToSplit(dataSet)
    bestFeatLabel = labels[bestFeat]
    myTree = {bestFeatLabel: {}}
    del(labels[bestFeat])                    # this feature is now consumed
    featValues = [example[bestFeat] for example in dataSet]
    uniqueVals = set(featValues)
    for value in uniqueVals:
        subLabels = labels[:]                # copy so recursion cannot mutate labels
        myTree[bestFeatLabel][value] = createTree(splitDataSet(dataSet, bestFeat, value), subLabels)
    return myTree

Result:
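With the toy fish dataset sketched earlier, building and printing the tree gives the nested dictionary below; the variable myTree is also what the plotting code at the end expects:

myDat, labels = createDataSet()
myTree = createTree(myDat, labels)
print(myTree)   # {'no surfacing': {0: 'no', 1: {'flippers': {0: 'no', 1: 'yes'}}}}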
The method above builds the tree using information gain; consulting other blogs turns up the following:
ID3 picks the feature with the largest information gain as the current node. The algorithm has several shortcomings: first, it does nothing about overfitting or the handling of missing values; second, information gain is biased toward features with many distinct values; third, it cannot handle continuous features.
C4.5 was therefore introduced: it replaces information gain with the information gain ratio. To reduce overfitting, redundant branches are removed by pruning: deciding whether to cut a branch while the tree is being grown is called pre-pruning, while pruning a fully grown tree with the help of cross-validation is called post-pruning.
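For reference, the standard definition of the gain ratio (not spelled out in the original post) divides the gain by the entropy of the partition induced by feature A itself:

$\text{GainRatio}(D,A) = \frac{g(D,A)}{H_A(D)}, \qquad H_A(D) = -\sum_{v=1}^{V} \frac{|D_v|}{|D|} \log_2 \frac{|D_v|}{|D|}$

where $D_v$ is the subset of $D$ on which feature $A$ takes its $v$-th value; the denominator penalizes features with many distinct values.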
There is also the CART tree, which splits on the Gini index instead; it will be introduced in a later post.
Plotting the tree:
import matplotlib.pyplot as plt

# node and arrow styles for the annotations
decisionNode = dict(boxstyle="sawtooth", fc="0.8")
leafNode = dict(boxstyle="round4", fc="0.8")
arrow_args = dict(arrowstyle="<-")

def plotNode(nodeTxt, centerPt, parentPt, nodeType):
    # draw one node at centerPt with an arrow coming from parentPt
    createPlot.ax1.annotate(nodeTxt, xy=parentPt, xycoords='axes fraction',
                            xytext=centerPt, textcoords='axes fraction',
                            va="center", ha="center", bbox=nodeType,
                            arrowprops=arrow_args)

def getNumLeafs(myTree):
    # count the leaf nodes recursively
    numLeafs = 0
    firstStr = list(myTree.keys())[0]
    secondDict = myTree[firstStr]
    for key in secondDict.keys():
        if type(secondDict[key]).__name__ == 'dict':
            numLeafs += getNumLeafs(secondDict[key])
        else:
            numLeafs += 1
    return numLeafs

def getTreeDepth(myTree):
    # depth of the deepest branch
    maxDepth = 0
    firstStr = list(myTree.keys())[0]
    secondDict = myTree[firstStr]
    for key in secondDict.keys():
        if type(secondDict[key]).__name__ == 'dict':
            thisDepth = 1 + getTreeDepth(secondDict[key])
        else:
            thisDepth = 1
        if thisDepth > maxDepth:
            maxDepth = thisDepth
    return maxDepth

def plotMidText(cntrPt, parentPt, txtString):
    # label the edge between parent and child with the feature value
    xMid = (parentPt[0] - cntrPt[0]) / 2.0 + cntrPt[0]
    yMid = (parentPt[1] - cntrPt[1]) / 2.0 + cntrPt[1]
    createPlot.ax1.text(xMid, yMid, txtString)

def plotTree(myTree, parentPt, nodeTxt):
    numLeafs = getNumLeafs(myTree)
    depth = getTreeDepth(myTree)   # computed but not used below
    firstStr = list(myTree.keys())[0]
    cntrPt = (plotTree.xOff + (1.0 + float(numLeafs)) / 2.0 / plotTree.totalW,
              plotTree.yOff)
    plotMidText(cntrPt, parentPt, nodeTxt)
    plotNode(firstStr, cntrPt, parentPt, decisionNode)
    secondDict = myTree[firstStr]
    plotTree.yOff = plotTree.yOff - 1.0 / plotTree.totalD   # descend one level
    for key in secondDict.keys():
        if type(secondDict[key]).__name__ == 'dict':
            plotTree(secondDict[key], cntrPt, str(key))     # recurse into the subtree
        else:
            plotTree.xOff = plotTree.xOff + 1.0 / plotTree.totalW
            plotNode(secondDict[key], (plotTree.xOff, plotTree.yOff), cntrPt, leafNode)
            plotMidText((plotTree.xOff, plotTree.yOff), cntrPt, str(key))
    plotTree.yOff = plotTree.yOff + 1.0 / plotTree.totalD   # back up one level
def createPlot(inTree):
    fig = plt.figure(1, facecolor='white')
    fig.clf()
    axprops = dict(xticks=[], yticks=[])
    createPlot.ax1 = plt.subplot(111, frameon=False, **axprops)
    plotTree.totalW = float(getNumLeafs(inTree))   # total width, in leaves
    plotTree.totalD = float(getTreeDepth(inTree))  # total depth, in levels
    plotTree.xOff = -0.5 / plotTree.totalW
    plotTree.yOff = 1.0
    plotTree(inTree, (0.5, 1.0), '')
    plt.show()

createPlot(myTree)

Calling createPlot(myTree) renders the tree as a figure: decision nodes in sawtooth boxes, leaves in rounded boxes, and each edge labelled with its feature value.