【Systematic Data Science Study】Machine Learning Algorithms # Watermelon Book (西瓜书) Study Notes [12]: Ensemble Learning in Practice
This post walks through the program listings of Chapter 7 of Machine Learning in Action, "Improving classification with the AdaBoost meta-algorithm." All code is Python 3.
AdaBoost
Pros: low generalization error, easy to code, works with most classifiers, no parameters to adjust.
Cons: sensitive to outliers.
Applicable data types: numeric and nominal values.
Boosting comes in many versions; here we focus only on the most popular one, AdaBoost.
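Two formulas drive the algorithm, and both appear in the training code below. Each weak classifier gets a vote weight based on its weighted error rate $\varepsilon$:

$$\alpha = \frac{1}{2}\ln\left(\frac{1-\varepsilon}{\varepsilon}\right)$$

and after each round the weight of every training example is updated as

$$D_i^{(t+1)} = \frac{D_i^{(t)}\, e^{-\alpha\, y_i\, h_t(x_i)}}{\sum_j D_j^{(t)}\, e^{-\alpha\, y_j\, h_t(x_j)}}$$

where $h_t(x_i)$ is the weak classifier's prediction and $y_i$ the true label, so misclassified examples ($y_i h_t(x_i) = -1$) gain weight and correctly classified ones lose it.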
When building the AdaBoost code, we first use a simple dataset to make sure the implementation is in working order. We use the following dataset:
```python
from numpy import matrix

def loadSimpData():
    datMat = matrix([[1. , 2.1],
                     [2. , 1.1],
                     [1.3, 1. ],
                     [1. , 1. ],
                     [2. , 1. ]])
    classLabels = [1.0, 1.0, -1.0, -1.0, 1.0]
    return datMat, classLabels
```

At the Python prompt, load the dataset:
```python
>>> import adaboost
>>> datMat, classLabels = adaboost.loadSimpData()
```

We first give the pseudocode for the function buildStump():
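In outline:

```
Set the minimum error minError to +∞
For every feature in the dataset (first loop):
    For every step size (second loop):
        For each inequality (third loop):
            Build a decision stump and test it on the weighted dataset
            If its error is less than minError: keep this stump as the best one
Return the best stump
```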
Listing 7-1 Decision stump (single-level decision tree) generating functions
```python
'''
Created on Sep 20, 2018
@author: yufei
Adaboost is short for Adaptive Boosting
'''
from numpy import *


def stumpClassify(dataMatrix, dimen, threshVal, threshIneq):
    """Test whether values fall below or above the threshold being tried."""
    retArray = ones((shape(dataMatrix)[0], 1))
    if threshIneq == 'lt':
        retArray[dataMatrix[:, dimen] <= threshVal] = -1.0
    else:
        retArray[dataMatrix[:, dimen] > threshVal] = -1.0
    return retArray


def buildStump(dataArr, classLabels, D):
    """Loop over a weighted dataset: try every possible input to
    stumpClassify() and return the stump with the lowest weighted error."""
    dataMatrix = mat(dataArr); labelMat = mat(classLabels).T
    m, n = shape(dataMatrix)
    # numSteps controls how finely we scan the range of each feature
    numSteps = 10.0
    # bestStump stores the best stump found for the given weight vector D
    bestStump = {}; bestClasEst = mat(zeros((m, 1)))
    # initialize to +infinity; used to track the minimum error found
    minError = inf
    # first loop: over every feature of the dataset
    for i in range(n):
        rangeMin = dataMatrix[:, i].min(); rangeMax = dataMatrix[:, i].max()
        # compute the step size
        stepSize = (rangeMax - rangeMin) / numSteps
        # second loop: over the threshold values along this feature
        for j in range(-1, int(numSteps) + 1):
            # third loop: toggle the inequality between 'lt' and 'gt'
            for inequal in ['lt', 'gt']:
                threshVal = rangeMin + float(j) * stepSize
                # call stumpClassify() to get the predicted classes
                predictedVals = stumpClassify(dataMatrix, i, threshVal, inequal)
                errArr = mat(ones((m, 1)))
                errArr[predictedVals == labelMat] = 0
                # total error weighted by D
                weightedError = D.T * errArr
                # print("split: dim %d, thresh %.2f, thresh ineqal: %s, the weighted error is %.3f" % (i, threshVal, inequal, weightedError))
                # keep this stump if it beats the lowest error so far
                if weightedError < minError:
                    minError = weightedError
                    bestClasEst = predictedVals.copy()
                    bestStump['dim'] = i
                    bestStump['thresh'] = threshVal
                    bestStump['ineq'] = inequal
    return bestStump, minError, bestClasEst
```

To see how it runs in practice, execute the code at the Python prompt:
```python
>>> D = mat(ones((5,1))/5)
>>> adaboost.buildStump(datMat, classLabels, D)
split: dim 0, thresh 0.90, thresh ineqal: lt, the weighted error is 0.400
split: dim 0, thresh 0.90, thresh ineqal: gt, the weighted error is 0.600
split: dim 0, thresh 1.00, thresh ineqal: lt, the weighted error is 0.400
split: dim 0, thresh 1.00, thresh ineqal: gt, the weighted error is 0.600
split: dim 0, thresh 1.10, thresh ineqal: lt, the weighted error is 0.400
split: dim 0, thresh 1.10, thresh ineqal: gt, the weighted error is 0.600
split: dim 0, thresh 1.20, thresh ineqal: lt, the weighted error is 0.400
split: dim 0, thresh 1.20, thresh ineqal: gt, the weighted error is 0.600
split: dim 0, thresh 1.30, thresh ineqal: lt, the weighted error is 0.200
split: dim 0, thresh 1.30, thresh ineqal: gt, the weighted error is 0.800
split: dim 0, thresh 1.40, thresh ineqal: lt, the weighted error is 0.200
split: dim 0, thresh 1.40, thresh ineqal: gt, the weighted error is 0.800
split: dim 0, thresh 1.50, thresh ineqal: lt, the weighted error is 0.200
split: dim 0, thresh 1.50, thresh ineqal: gt, the weighted error is 0.800
split: dim 0, thresh 1.60, thresh ineqal: lt, the weighted error is 0.200
split: dim 0, thresh 1.60, thresh ineqal: gt, the weighted error is 0.800
split: dim 0, thresh 1.70, thresh ineqal: lt, the weighted error is 0.200
split: dim 0, thresh 1.70, thresh ineqal: gt, the weighted error is 0.800
split: dim 0, thresh 1.80, thresh ineqal: lt, the weighted error is 0.200
split: dim 0, thresh 1.80, thresh ineqal: gt, the weighted error is 0.800
split: dim 0, thresh 1.90, thresh ineqal: lt, the weighted error is 0.200
split: dim 0, thresh 1.90, thresh ineqal: gt, the weighted error is 0.800
split: dim 0, thresh 2.00, thresh ineqal: lt, the weighted error is 0.600
split: dim 0, thresh 2.00, thresh ineqal: gt, the weighted error is 0.400
split: dim 1, thresh 0.89, thresh ineqal: lt, the weighted error is 0.400
split: dim 1, thresh 0.89, thresh ineqal: gt, the weighted error is 0.600
split: dim 1, thresh 1.00, thresh ineqal: lt, the weighted error is 0.200
split: dim 1, thresh 1.00, thresh ineqal: gt, the weighted error is 0.800
split: dim 1, thresh 1.11, thresh ineqal: lt, the weighted error is 0.400
split: dim 1, thresh 1.11, thresh ineqal: gt, the weighted error is 0.600
split: dim 1, thresh 1.22, thresh ineqal: lt, the weighted error is 0.400
split: dim 1, thresh 1.22, thresh ineqal: gt, the weighted error is 0.600
split: dim 1, thresh 1.33, thresh ineqal: lt, the weighted error is 0.400
split: dim 1, thresh 1.33, thresh ineqal: gt, the weighted error is 0.600
split: dim 1, thresh 1.44, thresh ineqal: lt, the weighted error is 0.400
split: dim 1, thresh 1.44, thresh ineqal: gt, the weighted error is 0.600
split: dim 1, thresh 1.55, thresh ineqal: lt, the weighted error is 0.400
split: dim 1, thresh 1.55, thresh ineqal: gt, the weighted error is 0.600
split: dim 1, thresh 1.66, thresh ineqal: lt, the weighted error is 0.400
split: dim 1, thresh 1.66, thresh ineqal: gt, the weighted error is 0.600
split: dim 1, thresh 1.77, thresh ineqal: lt, the weighted error is 0.400
split: dim 1, thresh 1.77, thresh ineqal: gt, the weighted error is 0.600
split: dim 1, thresh 1.88, thresh ineqal: lt, the weighted error is 0.400
split: dim 1, thresh 1.88, thresh ineqal: gt, the weighted error is 0.600
split: dim 1, thresh 1.99, thresh ineqal: lt, the weighted error is 0.400
split: dim 1, thresh 1.99, thresh ineqal: gt, the weighted error is 0.600
split: dim 1, thresh 2.10, thresh ineqal: lt, the weighted error is 0.600
split: dim 1, thresh 2.10, thresh ineqal: gt, the weighted error is 0.400
({'dim': 0, 'thresh': 1.3, 'ineq': 'lt'}, matrix([[0.2]]), array([[-1.],
       [ 1.],
       [-1.],
       [-1.],
       [ 1.]]))
```

The print statement inside buildStump() can be commented out; it is enabled here so we can watch how the function works.
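As a quick sanity check on the winning stump (dim 0, thresh 1.3, 'lt'): it misclassifies exactly one of the five equally weighted points, [1.0, 2.1], which is where the reported minError of 0.2 comes from. A minimal sketch of that weighted-error computation:

```python
from numpy import mat, ones

D = mat(ones((5, 1)) / 5)   # every point carries weight 0.2
# a 1 marks the single misclassified point, [1.0, 2.1] (a +1 predicted as -1)
errArr = mat([[1.], [0.], [0.], [0.], [0.]])
print(D.T * errArr)         # [[0.2]] -- the minError reported above
```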
After comparing the current error rate against the minimum error found so far, if the current value is smaller, the stump is saved in the dictionary bestStump. The dictionary, the error rate, and the class estimates are all returned to the AdaBoost algorithm.
We have now built the decision stump and obtained our weak learner. Next, we combine multiple weak classifiers to build the AdaBoost code.
We first give the pseudocode for the whole implementation:
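In outline, each training round does the following:

```
For each iteration:
    Find the best decision stump using buildStump()
    Add the best stump to the stump array
    Compute alpha
    Compute the new weight vector D
    Update the aggregate class estimate aggClassEst
    If the error rate equals 0.0, break out of the loop
```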
Listing 7-2 AdaBoost training with decision stumps
```python
def adaBoostTrainDS(dataArr, classLabels, numIt=40):
    """Inputs: dataset, class labels, number of iterations (user-specified)."""
    weakClassArr = []
    m = shape(dataArr)[0]
    # D holds one weight per data point, initialized to 1/m
    D = mat(ones((m, 1)) / m)
    # aggClassEst accumulates the class estimate of every data point
    aggClassEst = mat(zeros((m, 1)))
    for i in range(numIt):
        # build a decision stump with buildStump()
        bestStump, error, classEst = buildStump(dataArr, classLabels, D)
        print("D:", D.T)
        # compute alpha, the weight of this stump's vote;
        # max(error, 1e-16) guards against division by zero when error == 0
        alpha = float(0.5 * log((1.0 - error) / max(error, 1e-16)))
        bestStump['alpha'] = alpha
        weakClassArr.append(bestStump)   # store the stump's parameters
        print("classEst: ", classEst.T)
        # compute D for the next iteration
        expon = multiply(-1 * alpha * mat(classLabels).T, classEst)
        D = multiply(D, exp(expon))
        D = D / D.sum()
        # accumulate the class estimates of all classifiers so far
        aggClassEst += alpha * classEst
        print("aggClassEst: ", aggClassEst.T)
        # call sign() to get a binary classification
        aggErrors = multiply(sign(aggClassEst) != mat(classLabels).T, ones((m, 1)))
        errorRate = aggErrors.sum() / m
        print("total error: ", errorRate)
        # stop the for loop early once the total error rate reaches 0
        if errorRate == 0.0: break
    return weakClassArr, aggClassEst
```

At the Python prompt, execute the code and observe the result:
```python
>>> classifierArray = adaboost.adaBoostTrainDS(datMat, classLabels, 9)
D: [[0.2 0.2 0.2 0.2 0.2]]
classEst:  [[-1.  1. -1. -1.  1.]]
aggClassEst:  [[-0.69314718  0.69314718 -0.69314718 -0.69314718  0.69314718]]
total error:  0.2
D: [[0.5   0.125 0.125 0.125 0.125]]
classEst:  [[ 1.  1. -1. -1. -1.]]
aggClassEst:  [[ 0.27980789  1.66610226 -1.66610226 -1.66610226 -0.27980789]]
total error:  0.2
D: [[0.28571429 0.07142857 0.07142857 0.07142857 0.5       ]]
classEst:  [[1. 1. 1. 1. 1.]]
aggClassEst:  [[ 1.17568763  2.56198199 -0.77022252 -0.77022252  0.61607184]]
total error:  0.0
```
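These numbers are easy to verify by hand. In the first round the best stump's weighted error is 0.2, so its vote weight works out to the 0.69314718 that appears in aggClassEst:

```python
from math import log

alpha = 0.5 * log((1 - 0.2) / 0.2)   # 0.5 * ln(4)
print(alpha)                         # 0.6931471805599453
```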
Finally, let's look at the test error.

Listing 7-3 AdaBoost classification function
```python
def adaClassify(datToClass, classifierArr):
    """Extract the weak classifiers produced by training and apply
    them to a concrete instance.
    datToClass: one or more examples to classify
    classifierArr: the array of weak classifiers; here it is the
        (weakClassArr, aggClassEst) tuple returned by adaBoostTrainDS,
        so classifierArr[0] is the list of stumps
    Returns the sign of aggClassEst: +1 if above 0, -1 if below 0."""
    dataMatrix = mat(datToClass)
    # same aggregation as at the end of adaBoostTrainDS
    m = shape(dataMatrix)[0]
    aggClassEst = mat(zeros((m, 1)))
    # loop over every stored stump
    for i in range(len(classifierArr[0])):
        classEst = stumpClassify(dataMatrix,
                                 classifierArr[0][i]['dim'],
                                 classifierArr[0][i]['thresh'],
                                 classifierArr[0][i]['ineq'])
        aggClassEst += classifierArr[0][i]['alpha'] * classEst
        print(aggClassEst)
    return sign(aggClassEst)
```

At the Python prompt, execute the code and get the result:
```python
>>> datArr, labelArr = adaboost.loadSimpData()
>>> classifierArr = adaboost.adaBoostTrainDS(datArr, labelArr, 30)
D: [[0.2 0.2 0.2 0.2 0.2]]
classEst:  [[-1.  1. -1. -1.  1.]]
aggClassEst:  [[-0.69314718  0.69314718 -0.69314718 -0.69314718  0.69314718]]
total error:  0.2
D: [[0.5   0.125 0.125 0.125 0.125]]
classEst:  [[ 1.  1. -1. -1. -1.]]
aggClassEst:  [[ 0.27980789  1.66610226 -1.66610226 -1.66610226 -0.27980789]]
total error:  0.2
D: [[0.28571429 0.07142857 0.07142857 0.07142857 0.5       ]]
classEst:  [[1. 1. 1. 1. 1.]]
aggClassEst:  [[ 1.17568763  2.56198199 -0.77022252 -0.77022252  0.61607184]]
total error:  0.0
```

Enter the following command to classify:
```python
>>> adaboost.adaClassify([0,0], classifierArr)
[[-0.69314718]]
[[-1.66610226]]
[[-2.56198199]]
matrix([[-1.]])
```

As the iterations proceed, the classification of the data point [0, 0] grows stronger and stronger. We can classify other points as well:
```python
>>> adaboost.adaClassify([[5,5],[0,0]], classifierArr)
[[ 0.69314718]
 [-0.69314718]]
[[ 1.66610226]
 [-1.66610226]]
[[ 2.56198199]
 [-2.56198199]]
matrix([[ 1.],
        [-1.]])
```

The classification of these two points likewise grows stronger as the iterations proceed.
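As an optional cross-check, not part of the book's code and assuming scikit-learn is available, the same toy problem can be handed to the library's AdaBoost implementation, whose default base learner is also a depth-1 decision tree; its predictions on these two points should agree with ours:

```python
# Hypothetical cross-check with scikit-learn; not from Machine Learning in Action.
from sklearn.ensemble import AdaBoostClassifier

X = [[1., 2.1], [2., 1.1], [1.3, 1.], [1., 1.], [2., 1.]]
y = [1, 1, -1, -1, 1]
clf = AdaBoostClassifier(n_estimators=9).fit(X, y)
print(clf.predict([[5., 5.], [0., 0.]]))  # expected: [ 1 -1]
```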
Comments and corrections are welcome.