Machine Learning | SVM-Based Classification of the Iris Dataset
The iris dataset (Anderson's Iris data set) contains 150 samples, one per row. Each row holds a sample's four features plus its class label, so the dataset is a 150-row, 5-column table. Put simply, it is a dataset for classifying flowers: each sample records sepal length, sepal width, petal length, and petal width (the first 4 columns), and the goal is to build a classifier that uses these four features to decide whether a sample is Iris setosa, Iris versicolor, or Iris virginica (the three species in the dataset).
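The code below assumes iris.csv has a header row (so that importdata returns a struct with a .data field) and that the species column is already encoded numerically as 0, 1, 2. If you only have MATLAB's built-in fisheriris data (Statistics and Machine Learning Toolbox), a minimal sketch for producing such a file might look like this:
% Sketch: build an iris.csv in the layout assumed below -- a header row,
% four numeric feature columns, and a numeric class label 0/1/2 in column 5.
load fisheriris                                % meas: 150x4 features, species: 150x1 cell of names
[~,~,label] = unique(species);                 % setosa->1, versicolor->2, virginica->3
T = array2table([meas, label-1], ...
    'VariableNames',{'SepalLength','SepalWidth','PetalLength','PetalWidth','Class'});
writetable(T,'iris.csv');                      % importdata('iris.csv') then returns a struct with .data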
Loading the data:
file=importdata('iris.csv');%read iris.csv; with a header row, importdata returns a struct whose .data field holds the numeric table
data=file.data;
features=data(:,1:4);%the four feature columns
classlabel=data(:,5);%the corresponding class labels
n = randperm(size(features,1));%random permutation of the 150 row indices, used to split training and test sets
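A quick sanity check (a sketch, assuming the numeric labels 0/1/2 described above) confirms the shape of the table and that each class contributes 50 samples:
% Sketch: verify dimensions and per-class sample counts
fprintf('features: %d x %d\n', size(features,1), size(features,2));  % expect 150 x 4
counts = accumarray(classlabel+1, 1);                                % counts(k) = number of samples with label k-1
disp(counts')                                                        % expect [50 50 50]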
Plot scatter plots to inspect the data:
%% Scatter plots
class_0 = find(data(:,5)==0);  % row indices of class 0
class_1 = find(data(:,5)==1);  % row indices of class 1
class_2 = find(data(:,5)==2);  % row indices of class 2
subplot(3,2,1)
hold on
scatter(features(class_0,1),features(class_0,2),'x','b')
scatter(features(class_1,1),features(class_1,2),'+','g')
scatter(features(class_2,1),features(class_2,2),'o','r')
subplot(3,2,2)
hold on
scatter(features(class_0,1),features(class_0,3),'x','b')
scatter(features(class_1,1),features(class_1,3),'+','g')
scatter(features(class_2,1),features(class_2,3),'o','r')
subplot(3,2,3)
hold on
scatter(features(class_0,1),features(class_0,4),'x','b')
scatter(features(class_1,1),features(class_1,4),'+','g')
scatter(features(class_2,1),features(class_2,4),'o','r')
subplot(3,2,4)
hold on
scatter(features(class_0,2),features(class_0,3),'x','b')
scatter(features(class_1,2),features(class_1,3),'+','g')
scatter(features(class_2,2),features(class_2,3),'o','r')
subplot(3,2,5)
hold on
scatter(features(class_0,2),features(class_0,4),'x','b')
scatter(features(class_1,2),features(class_1,4),'+','g')
scatter(features(class_2,2),features(class_2,4),'o','r')
subplot(3,2,6)
hold on
scatter(features(class_0,3),features(class_0,4),'x','b')
scatter(features(class_1,3),features(class_1,4),'+','g')
scatter(features(class_2,3),features(class_2,4),'o','r')
The figure shows pairwise scatter plots of sepal length, sepal width, petal length, and petal width, with the three classes drawn in different markers and colors.
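The panels are easier to read with axis labels and a legend; a minimal sketch for the first panel, assuming labels 0/1/2 correspond to setosa, versicolor, and virginica in that order:
% Sketch: annotate the first subplot (sepal length vs. sepal width)
subplot(3,2,1)
xlabel('sepal length'); ylabel('sepal width');
legend('setosa','versicolor','virginica','Location','best');  % order must match the three scatter calls above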
Training and test sets:
%% Training set -- 70 samples
train_features=features(n(1:70),:);
train_label=classlabel(n(1:70),:);
%% Test set -- the remaining 80 samples
test_features=features(n(71:end),:);
test_label=classlabel(n(71:end),:);
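Note that randperm does not guarantee the 70 training samples are balanced across the three classes. A sketch of a stratified alternative, drawing roughly the same proportion from each class (assuming labels 0/1/2):
% Sketch: stratified split -- draw about 70/150 of each class for training
frac = 70/150;
train_idx = []; test_idx = [];
for c = 0:2
    idx = find(classlabel == c);          % all samples of class c
    idx = idx(randperm(numel(idx)));      % shuffle within the class
    k = round(frac*numel(idx));           % ~23 samples per class for training
    train_idx = [train_idx; idx(1:k)];
    test_idx  = [test_idx;  idx(k+1:end)];
end
train_features = features(train_idx,:);  train_label = classlabel(train_idx);
test_features  = features(test_idx,:);   test_label  = classlabel(test_idx);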
Data normalization:
%% Data normalization (mapminmax maps each feature to [-1,1])
[Train_features,PS] = mapminmax(train_features');
Train_features = Train_features';
Test_features = mapminmax('apply',test_features',PS);
Test_features = Test_features';
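mapminmax (Deep Learning Toolbox) rescales each row of its input to [-1, 1], which is why the feature matrices are transposed before and after the call, and why the mapping PS learned on the training set must be reused on the test set. A small self-contained example:
% Sketch: mapminmax works row-wise, mapping each row's min and max to -1 and +1
A = [1 2 3; 10 20 40];
[B, PSdemo] = mapminmax(A);               % B = [-1 0 1; -1 -0.3333 1]
C = mapminmax('apply', [2 30]', PSdemo);  % reuse the same per-row scaling on new data -> [0; 0.3333]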
Classification with SVM (svmtrain/svmpredict from the libsvm MATLAB interface):
%% Create/train the SVM model
model = svmtrain(train_label,Train_features);
%% Run the SVM on the training and test data
[predict_train_label] = svmpredict(train_label,Train_features,model);
[predict_test_label] = svmpredict(test_label,Test_features,model);
%% Print accuracy
compare_train = (train_label == predict_train_label);
accuracy_train = sum(compare_train)/size(train_label,1)*100;
fprintf('Training set accuracy: %f\n',accuracy_train)
compare_test = (test_label == predict_test_label);
accuracy_test = sum(compare_test)/size(test_label,1)*100;
fprintf('Test set accuracy: %f\n',accuracy_test)
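Beyond the overall accuracy, a confusion matrix shows where the few misclassifications occur; a minimal sketch using accumarray, again assuming labels 0/1/2:
% Sketch: 3x3 confusion matrix for the test set (rows = true class, columns = predicted class)
conf = accumarray([test_label, predict_test_label] + 1, 1, [3 3]);
disp(conf)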
Results:
*
optimization finished, #iter = 18
nu = 0.668633
obj = -21.678546, rho = 0.380620
nSV = 30, nBSV = 28
*
optimization finished, #iter = 29
nu = 0.145900
obj = -3.676315, rho = -0.010665
nSV = 9, nBSV = 4
*
optimization finished, #iter = 21
nu = 0.088102
obj = -2.256080, rho = -0.133432
nSV = 7, nBSV = 2
Total nSV = 40
Accuracy = 97.1429% (68/70) (classification)
Accuracy = 97.5% (78/80) (classification)
Training set accuracy: 97.142857
Test set accuracy: 97.500000
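svmtrain was called above without an option string, so libsvm falls back on its defaults (C-SVC with an RBF kernel). Kernel type, cost, and gamma can be set explicitly, and -v runs cross-validation; a sketch with illustrative (untuned) values:
% Sketch: explicit libsvm options (-t 2 = RBF kernel, -c = cost, -g = gamma)
model_rbf = svmtrain(train_label, Train_features, '-t 2 -c 1 -g 0.25');
% -v k performs k-fold cross-validation and returns the accuracy instead of a model
cv_acc = svmtrain(train_label, Train_features, '-t 2 -c 1 -g 0.25 -v 5');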
Summary
With min-max normalization and libsvm's default parameters, the SVM classifier reaches about 97% accuracy on both the training set (68/70) and the test set (78/80).