My MATLAB Implementation of the K-Means Algorithm

This is my first blog post.
The K-means algorithm itself is well known, so I'll skip the description.
This was a course assignment: use the K-means clustering method we learned to cluster the Iris dataset, then evaluate the clustering against the known class labels.

My K-means implementation uses iris.data as the example (attached at the end of the post).
數(shù)據(jù)集:Iris數(shù)據(jù)集
?(http://archive.ics.uci.edu/ml/datasets/Iris)
數(shù)據(jù)描述:Iris數(shù)據(jù)集包含150個鳶尾花模式樣本,其中 每個模式樣本采用5維的特征描述
X = (x1,x2,x3,x4,w);
x1: 萼片長度(厘米)
x2: 萼片寬度(厘米)
x3: 花瓣長度(厘米)
x4:花瓣寬度(厘米)
w(類別屬性 ): 山鳶尾 (Iris setosa),變色鳶尾(Iris versicolor)和維吉尼亞鳶尾(Iris virginica)
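For readers who want to parse this file outside MATLAB, here is a minimal Python sketch. The function name `load_iris` is my own; the 1/2/3 label mapping mirrors the one used in the MATLAB script later in this post:

```python
def load_iris(path):
    """Read iris.data: four float features plus a class name per line."""
    label_map = {"Iris-setosa": 1, "Iris-versicolor": 2, "Iris-virginica": 3}
    X, y = [], []
    with open(path) as f:
        for line in f:
            line = line.strip()
            if not line:
                continue  # skip blank lines at the end of the file
            *feats, name = line.split(",")
            X.append([float(v) for v in feats])
            y.append(label_map[name])
    return X, y
```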
先貼上我的函數(shù)結(jié)構(gòu):
函數(shù)結(jié)構(gòu)—— FindCluster(~) 聚類算法主函數(shù)| MyKmeans —— MyPlot2(~),MyPlot3(~) 畫圖|—— Accuracy(~) 聚類精度評價?
MyKmean.m, the main script of the program:
% Author  : YG1501 LQY
% Date    : 2018.01.13 (Saturday)
% Purpose : cluster the iris.data dataset and evaluate the clustering
%           accuracy against the known labels

clear; clc; close all;

% Pick the data file by hand; choose iris.data
[filename, fpath] = uigetfile( ...
    {'*.m;*.txt;*.data', 'Data(*.m;*.txt;*.data)'; ...
     '*.*',              'All Files(*.*)'}, 'Select data');
if isequal(filename, 0)   % dialog cancelled
    return
end
% filename = 'iris.data';  % use this if the dialog is a hassle

% Read the data
[X1, X2, X3, X4, X5] = textread(filename, '%f%f%f%f%s', 'delimiter', ',');
clear filename fpath

X = [X1 X2 X3 X4];   % not actually used yet
[m, ~] = size(X);

% Assign numeric labels
DataLabel = zeros(m, 1);
DataLabel(strcmp(X5, 'Iris-setosa'))     = 1;
DataLabel(strcmp(X5, 'Iris-versicolor')) = 2;
DataLabel(strcmp(X5, 'Iris-virginica'))  = 3;
clear m X5

% 2-D results
[MyCenter1, ClusterLabel12] = FindCluster([X1 X2], 3, DataLabel);
[MyCenter2, ClusterLabel13] = FindCluster([X1 X3], 3, DataLabel);
[MyCenter3, ClusterLabel14] = FindCluster([X1 X4], 3, DataLabel);
[MyCenter4, ClusterLabel23] = FindCluster([X2 X3], 3, DataLabel);
[MyCenter5, ClusterLabel24] = FindCluster([X2 X4], 3, DataLabel);
[MyCenter6, ClusterLabel34] = FindCluster([X3 X4], 3, DataLabel);

hold on;
subplot(231), MyPlot2(X1, X2, DataLabel, MyCenter1), xlabel('X1'), ylabel('X2')
subplot(232), MyPlot2(X1, X3, DataLabel, MyCenter2), xlabel('X1'), ylabel('X3')
subplot(233), MyPlot2(X1, X4, DataLabel, MyCenter3), xlabel('X1'), ylabel('X4')
subplot(234), MyPlot2(X2, X3, DataLabel, MyCenter4), xlabel('X2'), ylabel('X3')
subplot(235), MyPlot2(X2, X4, DataLabel, MyCenter5), xlabel('X2'), ylabel('X4')
subplot(236), MyPlot2(X3, X4, DataLabel, MyCenter6), xlabel('X3'), ylabel('X4')
clear MyCenter1 MyCenter2 MyCenter3 MyCenter4 MyCenter5 MyCenter6

% 3-D results
[MyCenter7,  ClusterLabel123] = FindCluster([X1, X2, X3], 3, DataLabel);
[MyCenter8,  ClusterLabel124] = FindCluster([X1, X2, X4], 3, DataLabel);
[MyCenter9,  ClusterLabel134] = FindCluster([X1, X3, X4], 3, DataLabel);
[MyCenter10, ClusterLabel234] = FindCluster([X2, X3, X4], 3, DataLabel);

hold on;
figure, title('3D');
subplot(221), MyPlot3(X1, X2, X3, DataLabel, MyCenter7),  xlabel('X1'), ylabel('X2'), zlabel('X3');
subplot(222), MyPlot3(X1, X2, X4, DataLabel, MyCenter8),  xlabel('X1'), ylabel('X2'), zlabel('X4');
subplot(223), MyPlot3(X1, X3, X4, DataLabel, MyCenter9),  xlabel('X1'), ylabel('X3'), zlabel('X4');
subplot(224), MyPlot3(X2, X3, X4, DataLabel, MyCenter10), xlabel('X2'), ylabel('X3'), zlabel('X4');
clear MyCenter7 MyCenter8 MyCenter9 MyCenter10

% Clustering accuracy evaluation
% 2-D results
ClusterLabel_2D = [ClusterLabel12, ClusterLabel13, ClusterLabel14, ...
                   ClusterLabel23, ClusterLabel24, ClusterLabel34];
ClusterAccuracy_2D = Accuracy(DataLabel, ClusterLabel_2D);
clear ClusterLabel12 ClusterLabel13 ClusterLabel14
clear ClusterLabel23 ClusterLabel24 ClusterLabel34 ClusterLabel_2D

% 3-D results
ClusterLabel_3D = [ClusterLabel123, ClusterLabel124, ClusterLabel134, ClusterLabel234];
ClusterAccuracy_3D = Accuracy(DataLabel, ClusterLabel_3D);
clear ClusterLabel123 ClusterLabel124 ClusterLabel134 ClusterLabel234
clear ClusterLabel_3D

FindCluster.m:
%函數(shù)功能 : 輸入數(shù)據(jù)集、聚類中心個數(shù)與樣本標(biāo)簽 % 得到聚類中心與聚類樣本標(biāo)簽function [ClusterCenter,ClusterLabel] = FindCluster(MyData,ClusterCounts,DataLabel) [m,n] = size(MyData);ClusterLabel = zeros(m,1); %用于存儲聚類標(biāo)簽% MyLabel = unique(DataLabel,'rows'); % for i = 1:size(MyLabel,2); % LabelIndex(1,i) = i; %為數(shù)據(jù)標(biāo)簽創(chuàng)建索引 % end%已知數(shù)據(jù)集的每個樣本的中心 OriginCenter = zeros(ClusterCounts,n); for q = 1:ClusterCountsDataCounts = 0;for p = 1:m%按照數(shù)據(jù)標(biāo)簽,計算樣本中心if DataLabel(p) == qOriginCenter(q,:) = OriginCenter(q,:) + MyData(p,:);DataCounts = DataCounts + 1;endendOriginCenter(q,:) = OriginCenter(q,:) ./ DataCounts; end %按照第一列對樣本中心排序 %排序是為了解決新聚類中心因隨機分配而與樣本最初的聚類中心不匹配的問題 SortCenter1 = sortrows(OriginCenter,1);FalseTimes = 0; CalcuateTimes = 0; %此循環(huán)用于糾正分類錯誤的情況 while (CalcuateTimes < 15)ClusterCenter = zeros(ClusterCounts,n); %初始化聚類中心for p = 1:ClusterCountsClusterCenter(p,:) = MyData( randi(m,1),:); %隨機選取一個點作為中心end%此循環(huán)用于尋找聚類中心%目前還未解決該循環(huán)陷入死循環(huán)的問題,所以設(shè)置一個參數(shù)來終止循環(huán)kk = 0;while (kk < 15)Distance = zeros(1,ClusterCounts); %存儲單個樣本到每個聚類中心的距離DataCounts = zeros(1,ClusterCounts); %記錄每個聚類的樣本數(shù)目NewCenter = zeros(ClusterCounts,n);for p = 1:mfor q = 1:ClusterCountsDistance(q) = norm(MyData(p,:) - ClusterCenter(q,:));end%index返回最小距離的索引,亦即聚類中心的標(biāo)號[~,index] = min(Distance);ClusterLabel(p) = index;end k = 0;for q = 1:ClusterCountsfor p = 1:m%按照標(biāo)記,對樣本進(jìn)行分類,并計算聚類中心if ClusterLabel(p) == qNewCenter(q,:) = NewCenter(q,:) + MyData(p,:);DataCounts(q) = DataCounts(q) + 1;endendNewCenter(q,:) = NewCenter(q,:) ./ DataCounts(q);%若新的聚類中心與上一個聚類中心相同,則認(rèn)為算法收斂if norm(NewCenter(q,:) - ClusterCenter(q,:)) < 0.1k = k + 1;endendClusterCenter = NewCenter;%判斷是否全部收斂if k == ClusterCountsbreak;endkk = kk + 1 ;end%再次判斷每個聚類是否分類正確,若分類錯誤則進(jìn)行懲罰trueCounts = ClusterCounts;SortCenter2 = sortrows(ClusterCenter,1);for p = 1:ClusterCountsif norm(SortCenter1(p,:) - SortCenter2(p,:)) > 0.5trueCounts = trueCounts - 1;FalseTimes = 
FalseTimes + 1;break;endendif trueCounts == ClusterCountsbreak;endCalcuateTimes = CalcuateTimes + 1; end% FalseTimes % CalcuateTimes % trueCounts% kk % DataCounts % OriginCenter % NewCenter%理論上每個聚類的標(biāo)簽應(yīng)是123排列的,但實際上,由于每個聚類中心都是隨機選取的, %實際分類的順序可能是213,132等,所以需要對分類標(biāo)簽進(jìn)行糾正,這對之后的精度評 %價帶來了方便,如果不需要進(jìn)行精度評價,可以注釋下方代碼。 %對分類標(biāo)簽進(jìn)行糾正: %算法原理:從第一個已知的樣本中心開始,尋找離其最近的聚類中心,然后將歸類于該 % 聚類中心的樣本的聚類標(biāo)簽更換為i for i = 1:ClusterCounts %遍歷原始樣本中心for j = 1:ClusterCounts %遍歷聚類中心,與原樣本中心比較if norm(OriginCenter(i,:) - ClusterCenter(j,:)) < 0.6% for p = 1:m% if ClusterLabel(p) == j% ClusterLabel(p) = 2 * ClusterCounts + i;% end% endClusterLabel(ClusterLabel == j) = 2 * ClusterCounts + i;endend end ClusterLabel = ClusterLabel - 2 * ClusterCounts; %Temp = [MyData(:,:),ClusterLabel]
至此已經(jīng)完成了聚類中心的計算。該方法基本解決了因隨機選取初始中心而導(dǎo)致最后聚類中心明顯錯誤的情況,但缺點在于循環(huán)太多,時間復(fù)雜度O(?),而且偶爾會陷入死循環(huán),原因在于有兩個或者以上的聚類中心被選到了同一點。若超過10s還沒跑出結(jié)果,還是重新運行下程序吧
The plotting functions:
MyPlot2.m
%函數(shù)功能 : 輸入樣本,樣本標(biāo)簽及求出的聚類中心,顯示二維圖像,實現(xiàn)數(shù)據(jù)可視化function MyPlot2(X1,X2,DataLabel,ClusterCenter) [m,~] = size(X1);hold on; % for p = 1:m % if(DataLabel(p) == 1) % plot(X1(p),X2(p),'*r') % elseif(DataLabel(p) == 2) % plot(X1(p),X2(p),'*g') % else % plot(X1(p),X2(p),'*b') % end % endp = find(DataLabel == 1); plot(X1(p),X2(p),'*r') p = find(DataLabel == 2); plot(X1(p),X2(p),'*g') p = find(DataLabel == 3); plot(X1(p),X2(p),'*b')% xlabel(who('X1')); % ylabel(who('X2')); %PS : 我想在坐標(biāo)軸中根據(jù)輸入的形參的矩陣的名字,轉(zhuǎn)換成字符串,來動態(tài)輸出 % 坐標(biāo)軸名稱,不知道該怎么做?用上面注釋的語句不行。。 [n,~] = size(ClusterCenter);plot(ClusterCenter(1:1:n,1),ClusterCenter(1:1:n,2),'ok')grid on;MyPlot3.m
function MyPlot3(X1, X2, X3, DataLabel, ClusterCenter)

[m, ~] = size(X1);
hold on;
% for i = 1:m
%     if (DataLabel(i) == 1)
%         plot3(X1(i), X2(i), X3(i), '.r')
%     elseif (DataLabel(i) == 2)
%         plot3(X1(i), X2(i), X3(i), '.g')
%     else
%         plot3(X1(i), X2(i), X3(i), '.b')
%     end
% end
p = find(DataLabel == 1); plot3(X1(p), X2(p), X3(p), '.r')
p = find(DataLabel == 2); plot3(X1(p), X2(p), X3(p), '.g')
p = find(DataLabel == 3); plot3(X1(p), X2(p), X3(p), '.b')

% xlabel('X1');
% ylabel('X2');
% zlabel('X3');
% PS: same question as in MyPlot2 -- how to derive the axis labels from
% the input variable names automatically?
[n, ~] = size(ClusterCenter);
% for i = 1:n
%     plot3(ClusterCenter(i,1), ClusterCenter(i,2), ClusterCenter(i,3), 'ok')
% end
plot3(ClusterCenter(1:1:n, 1), ClusterCenter(1:1:n, 2), ClusterCenter(1:1:n, 3), 'ok')
view([1 1 1]);
grid on;

Accuracy.m, the accuracy evaluation:
%函數(shù)功能:根據(jù)聚類結(jié)果進(jìn)行精度評價 %精度評價,返回每一種分類的精度值(正確率) function ClusterAccuracy = Accuracy(DataLable,CLusterLabel) [m,n] = size(CLusterLabel); ClusterAccuracy = zeros(1,n);%理論上的聚類標(biāo)簽應(yīng)為 1,2,3,但實際上可能變成了 213 , 132等,導(dǎo)致計算失誤 %因此需要對分類標(biāo)簽進(jìn)行糾正,而這一步驟已經(jīng)在FindCluster函數(shù)中完成了 for i = 1:n%原理:假設(shè)某樣本在已知數(shù)據(jù)集中屬于第一類,而其聚類后的也同樣被分到了第一類,那么它們的標(biāo)簽%都是1,這樣相減后結(jié)果就為0,表明已經(jīng)分類正確,否則不為0,分類錯誤Temp(:,i) = DataLable - CLusterLabel(:,i); endfor j = 1:nfor i = 1:mif Temp(i,j) == 0ClusterAccuracy(1,j) = ClusterAccuracy(1,j) + 1;endend endClusterAccuracy = ClusterAccuracy ./ m;?
運行結(jié)果展示(二維結(jié)果):
可以看到,在二維的情況下,比較X3: 花瓣長度(厘米)和X4:花瓣寬度(厘米)的精度更高(94.67%),也就是說只比較兩種特征時,比較花瓣長度和花瓣寬度,區(qū)分三種花的效果更好,分類結(jié)果更可靠。
三維分類結(jié)果:
可以看到,在三維的情況下,比較X2:萼片寬度(厘米),X3: 花瓣長度(厘米)和X4:花瓣寬度(厘米)的精度更高(93.33%),也就是說比較三種特征時,取X2,X3,X4,區(qū)分三種花的效果更好,分類結(jié)果更可靠。
Finally: this program still has plenty of shortcomings, e.g. high time complexity and low efficiency. I'm not very familiar with MATLAB yet, so the code I write is rather C-style. If you have suggestions for improvement, please leave a comment or contact me directly (contact info at the very end of the post) and let's discuss :D
Appendix: the iris.data dataset:
5.1,3.5,1.4,0.2,Iris-setosa
4.9,3.0,1.4,0.2,Iris-setosa
4.7,3.2,1.3,0.2,Iris-setosa
4.6,3.1,1.5,0.2,Iris-setosa
5.0,3.6,1.4,0.2,Iris-setosa
5.4,3.9,1.7,0.4,Iris-setosa
4.6,3.4,1.4,0.3,Iris-setosa
5.0,3.4,1.5,0.2,Iris-setosa
4.4,2.9,1.4,0.2,Iris-setosa
4.9,3.1,1.5,0.1,Iris-setosa
5.4,3.7,1.5,0.2,Iris-setosa
4.8,3.4,1.6,0.2,Iris-setosa
4.8,3.0,1.4,0.1,Iris-setosa
4.3,3.0,1.1,0.1,Iris-setosa
5.8,4.0,1.2,0.2,Iris-setosa
5.7,4.4,1.5,0.4,Iris-setosa
5.4,3.9,1.3,0.4,Iris-setosa
5.1,3.5,1.4,0.3,Iris-setosa
5.7,3.8,1.7,0.3,Iris-setosa
5.1,3.8,1.5,0.3,Iris-setosa
5.4,3.4,1.7,0.2,Iris-setosa
5.1,3.7,1.5,0.4,Iris-setosa
4.6,3.6,1.0,0.2,Iris-setosa
5.1,3.3,1.7,0.5,Iris-setosa
4.8,3.4,1.9,0.2,Iris-setosa
5.0,3.0,1.6,0.2,Iris-setosa
5.0,3.4,1.6,0.4,Iris-setosa
5.2,3.5,1.5,0.2,Iris-setosa
5.2,3.4,1.4,0.2,Iris-setosa
4.7,3.2,1.6,0.2,Iris-setosa
4.8,3.1,1.6,0.2,Iris-setosa
5.4,3.4,1.5,0.4,Iris-setosa
5.2,4.1,1.5,0.1,Iris-setosa
5.5,4.2,1.4,0.2,Iris-setosa
4.9,3.1,1.5,0.1,Iris-setosa
5.0,3.2,1.2,0.2,Iris-setosa
5.5,3.5,1.3,0.2,Iris-setosa
4.9,3.1,1.5,0.1,Iris-setosa
4.4,3.0,1.3,0.2,Iris-setosa
5.1,3.4,1.5,0.2,Iris-setosa
5.0,3.5,1.3,0.3,Iris-setosa
4.5,2.3,1.3,0.3,Iris-setosa
4.4,3.2,1.3,0.2,Iris-setosa
5.0,3.5,1.6,0.6,Iris-setosa
5.1,3.8,1.9,0.4,Iris-setosa
4.8,3.0,1.4,0.3,Iris-setosa
5.1,3.8,1.6,0.2,Iris-setosa
4.6,3.2,1.4,0.2,Iris-setosa
5.3,3.7,1.5,0.2,Iris-setosa
5.0,3.3,1.4,0.2,Iris-setosa
7.0,3.2,4.7,1.4,Iris-versicolor
6.4,3.2,4.5,1.5,Iris-versicolor
6.9,3.1,4.9,1.5,Iris-versicolor
5.5,2.3,4.0,1.3,Iris-versicolor
6.5,2.8,4.6,1.5,Iris-versicolor
5.7,2.8,4.5,1.3,Iris-versicolor
6.3,3.3,4.7,1.6,Iris-versicolor
4.9,2.4,3.3,1.0,Iris-versicolor
6.6,2.9,4.6,1.3,Iris-versicolor
5.2,2.7,3.9,1.4,Iris-versicolor
5.0,2.0,3.5,1.0,Iris-versicolor
5.9,3.0,4.2,1.5,Iris-versicolor
6.0,2.2,4.0,1.0,Iris-versicolor
6.1,2.9,4.7,1.4,Iris-versicolor
5.6,2.9,3.6,1.3,Iris-versicolor
6.7,3.1,4.4,1.4,Iris-versicolor
5.6,3.0,4.5,1.5,Iris-versicolor
5.8,2.7,4.1,1.0,Iris-versicolor
6.2,2.2,4.5,1.5,Iris-versicolor
5.6,2.5,3.9,1.1,Iris-versicolor
5.9,3.2,4.8,1.8,Iris-versicolor
6.1,2.8,4.0,1.3,Iris-versicolor
6.3,2.5,4.9,1.5,Iris-versicolor
6.1,2.8,4.7,1.2,Iris-versicolor
6.4,2.9,4.3,1.3,Iris-versicolor
6.6,3.0,4.4,1.4,Iris-versicolor
6.8,2.8,4.8,1.4,Iris-versicolor
6.7,3.0,5.0,1.7,Iris-versicolor
6.0,2.9,4.5,1.5,Iris-versicolor
5.7,2.6,3.5,1.0,Iris-versicolor
5.5,2.4,3.8,1.1,Iris-versicolor
5.5,2.4,3.7,1.0,Iris-versicolor
5.8,2.7,3.9,1.2,Iris-versicolor
6.0,2.7,5.1,1.6,Iris-versicolor
5.4,3.0,4.5,1.5,Iris-versicolor
6.0,3.4,4.5,1.6,Iris-versicolor
6.7,3.1,4.7,1.5,Iris-versicolor
6.3,2.3,4.4,1.3,Iris-versicolor
5.6,3.0,4.1,1.3,Iris-versicolor
5.5,2.5,4.0,1.3,Iris-versicolor
5.5,2.6,4.4,1.2,Iris-versicolor
6.1,3.0,4.6,1.4,Iris-versicolor
5.8,2.6,4.0,1.2,Iris-versicolor
5.0,2.3,3.3,1.0,Iris-versicolor
5.6,2.7,4.2,1.3,Iris-versicolor
5.7,3.0,4.2,1.2,Iris-versicolor
5.7,2.9,4.2,1.3,Iris-versicolor
6.2,2.9,4.3,1.3,Iris-versicolor
5.1,2.5,3.0,1.1,Iris-versicolor
5.7,2.8,4.1,1.3,Iris-versicolor
6.3,3.3,6.0,2.5,Iris-virginica
5.8,2.7,5.1,1.9,Iris-virginica
7.1,3.0,5.9,2.1,Iris-virginica
6.3,2.9,5.6,1.8,Iris-virginica
6.5,3.0,5.8,2.2,Iris-virginica
7.6,3.0,6.6,2.1,Iris-virginica
4.9,2.5,4.5,1.7,Iris-virginica
7.3,2.9,6.3,1.8,Iris-virginica
6.7,2.5,5.8,1.8,Iris-virginica
7.2,3.6,6.1,2.5,Iris-virginica
6.5,3.2,5.1,2.0,Iris-virginica
6.4,2.7,5.3,1.9,Iris-virginica
6.8,3.0,5.5,2.1,Iris-virginica
5.7,2.5,5.0,2.0,Iris-virginica
5.8,2.8,5.1,2.4,Iris-virginica
6.4,3.2,5.3,2.3,Iris-virginica
6.5,3.0,5.5,1.8,Iris-virginica
7.7,3.8,6.7,2.2,Iris-virginica
7.7,2.6,6.9,2.3,Iris-virginica
6.0,2.2,5.0,1.5,Iris-virginica
6.9,3.2,5.7,2.3,Iris-virginica
5.6,2.8,4.9,2.0,Iris-virginica
7.7,2.8,6.7,2.0,Iris-virginica
6.3,2.7,4.9,1.8,Iris-virginica
6.7,3.3,5.7,2.1,Iris-virginica
7.2,3.2,6.0,1.8,Iris-virginica
6.2,2.8,4.8,1.8,Iris-virginica
6.1,3.0,4.9,1.8,Iris-virginica
6.4,2.8,5.6,2.1,Iris-virginica
7.2,3.0,5.8,1.6,Iris-virginica
7.4,2.8,6.1,1.9,Iris-virginica
7.9,3.8,6.4,2.0,Iris-virginica
6.4,2.8,5.6,2.2,Iris-virginica
6.3,2.8,5.1,1.5,Iris-virginica
6.1,2.6,5.6,1.4,Iris-virginica
7.7,3.0,6.1,2.3,Iris-virginica
6.3,3.4,5.6,2.4,Iris-virginica
6.4,3.1,5.5,1.8,Iris-virginica
6.0,3.0,4.8,1.8,Iris-virginica
6.9,3.1,5.4,2.1,Iris-virginica
6.7,3.1,5.6,2.4,Iris-virginica
6.9,3.1,5.1,2.3,Iris-virginica
5.8,2.7,5.1,1.9,Iris-virginica
6.8,3.2,5.9,2.3,Iris-virginica
6.7,3.3,5.7,2.5,Iris-virginica
6.7,3.0,5.2,2.3,Iris-virginica
6.3,2.5,5.0,1.9,Iris-virginica
6.5,3.0,5.2,2.0,Iris-virginica
6.2,3.4,5.4,2.3,Iris-virginica
5.9,3.0,5.1,1.8,Iris-virginica
代碼與數(shù)據(jù)可直接下載,
鏈接:https://pan.baidu.com/s/1fVzbH5fJnRKjIBCe8ir0Gw?
提取碼:gekv
Addendum (2018-01-27):

I've thought of some ways to improve the running efficiency, mainly: reduce for loops as much as possible, because MATLAB executes for loops very slowly!

Optimization ideas worth considering:
1) use parfor;
2) use find;
3) vectorize, i.e. replace loops with vector operations;
4) bsxfun(), sum(), './', '.*';
5) more to come.
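Idea (3) applies directly to the per-sample distance loop in FindCluster. As a sketch of what vectorization buys, here is the assignment step in NumPy, where broadcasting plays the role MATLAB's bsxfun plays (function name mine; X is assumed m-by-n, centers k-by-n):

```python
import numpy as np

def assign_labels(X, centers):
    """Assign each row of X to its nearest center, with no explicit loops."""
    # (m,1,n) - (1,k,n) broadcasts to (m,k,n); summing over the feature
    # axis gives the m-by-k matrix of squared distances in one expression.
    d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
    return d2.argmin(axis=1)   # index of the nearest center per sample
```

The MATLAB equivalent replaces the double for loop over samples and centers with one bsxfun/sum expression followed by min along the second dimension.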
Update (2018-05-03):

The program was slow because the plotting functions used for loops; removing the loops and calling plot directly makes it much faster.
Update (2018-07-12):

Optimized the plotting functions; the results now appear quickly.

If you copy the code straight from the web page, the functions will break. To fix it, go through FindCluster, find every while loop and if statement, and change every &lt; back to < and every &gt; back to >. (I couldn't fix this on the page itself: it displays correctly while editing, but breaks again after saving...)
CSDN已經(jīng)很少登錄了,消息幾乎不看,如有疑問,請用以下方式聯(lián)系博主:
郵箱:1765928683@qq.com