
Ten Problems in Cluster Analysis and Discriminant Analysis (Mathematical Modeling Series)



1.

[Problem]:

Five sales agents rated a certain product on four indicators:

      x1   x2   x3   x4
1      2    4    6   32
2      5    2    5   38
3      3    3    7   30
4      1    2    3   16
5      4    3    2   30

Here x1, x2 and x3 are attitude measures scored on a 1–7 scale, and x4 is an interest measure taking values in 11–40. Find the absolute-value distance matrix and the Euclidean (sum-of-squares) distance matrix.

[Solution]:
Write the following MATLAB program (the original post is typeset in LaTeX; the markup is omitted from here on):

X = [2 4 6 32; 5 2 5 38; 3 3 7 30; 1 2 3 16; 4 3 2 30];
D1 = pdist(X, 'cityblock')
D2 = pdist(X, 'euclidean')

This yields the following results:

(1) The absolute-value distance matrix:

D1 = \begin{pmatrix} 0 & 12 & 5 & 22 & 9 \\ 12 & 0 & 13 & 28 & 13 \\ 5 & 13 & 0 & 21 & 6 \\ 22 & 28 & 21 & 0 & 19 \\ 9 & 13 & 6 & 19 & 0 \end{pmatrix}

(2) The Euclidean (sum-of-squares) distance matrix:

D2 = \begin{pmatrix} 0 & 7.0711 & 2.6458 & 16.4317 & 5.0000 \\ 7.0711 & 0 & 8.5440 & 22.4499 & 8.6603 \\ 2.6458 & 8.5440 & 0 & 14.7309 & 5.0990 \\ 16.4317 & 22.4499 & 14.7309 & 0 & 14.3875 \\ 5.0000 & 8.6603 & 5.0990 & 14.3875 & 0 \end{pmatrix}
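As a quick cross-check of the two matrices, here is a minimal NumPy sketch (independent of the MATLAB code above):

```python
import numpy as np

X = np.array([[2, 4, 6, 32],
              [5, 2, 5, 38],
              [3, 3, 7, 30],
              [1, 2, 3, 16],
              [4, 3, 2, 30]], dtype=float)

diff = X[:, None, :] - X[None, :, :]     # pairwise coordinate differences
D1 = np.abs(diff).sum(axis=2)            # absolute-value (cityblock) distances
D2 = np.sqrt((diff ** 2).sum(axis=2))    # Euclidean distances

print(D1[0, 1], D1[0, 3])   # 12.0 22.0
print(round(D2[0, 1], 4))   # 7.0711
```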

2.

[Problem]:

To inspect the weight of a certain product, six samples were drawn and a single indicator was measured on each: 1, 2, 3, 6, 9, 11. Cluster the samples using the single-linkage (nearest-neighbor) method and the centroid method.

[Solution]:
(1) Single-linkage clustering. Write the following R program:

X <- data.frame(x1 = c(1, 2, 3, 6, 9, 11),
                row.names = c("1", "2", "3", "4", "5", "6"))
d <- dist(scale(X), method = 'euclidean')
heatmap(as.matrix(d), labRow = rownames(d), labCol = colnames(d))

kinds <- 2
model1 <- hclust(d, method = 'single')
result <- cutree(model1, k = kinds)
plot(model1, hang = -1)
rect.hclust(model1, k = kinds, border = "red")


↑ First we obtain the pairwise relationships between the samples: the darker the color, the closer the two samples, and the more likely they belong to the same class.



↑ Cutting the tree into two classes gives the dendrogram.

The samples are therefore partitioned as {1,2,3,4}, {5,6}.

(2) Centroid-method clustering. Write the following R program:

kinds <- 2
model1 <- hclust(d, method = 'centroid')
result <- cutree(model1, k = kinds)
plot(model1, hang = -1)
rect.hclust(model1, k = kinds, border = "red")


↑ This yields the dendrogram.

The samples are therefore partitioned as {1,2,3}, {4,5,6}.
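For one-dimensional data, single-linkage clustering reduces to splitting the sorted values at the largest gaps, which makes the partition above easy to verify by hand. A pure-Python sketch (scaling a single column preserves order and relative gaps, so it is omitted):

```python
def single_linkage_1d(values, k):
    """Cut 1-D data into k clusters; for sorted 1-D data, single-linkage
    clustering is equivalent to splitting at the k-1 largest gaps."""
    xs = sorted(values)
    gaps = [(xs[i + 1] - xs[i], i) for i in range(len(xs) - 1)]
    cut_after = sorted(i for _, i in sorted(gaps, reverse=True)[:k - 1])
    clusters, start = [], 0
    for i in cut_after:
        clusters.append(xs[start:i + 1])
        start = i + 1
    clusters.append(xs[start:])
    return clusters

print(single_linkage_1d([1, 2, 3, 6, 9, 11], 2))  # [[1, 2, 3, 6], [9, 11]]
```

(With a tie between the gaps 3–6 and 6–9, this tie-break reproduces the {1,2,3,4}, {5,6} split reported above.)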

3.

[Problem]:

The table below scores five salespeople in a shop on sales volume x1 and education level x2. Use the single-linkage method to cluster them.

      x1   x2
1      1    1
2      1    2
3      6    3
4      8    2
5      8    0

[Solution]:
Write the following R program:

X <- data.frame(x1 = c(1, 1, 6, 8, 8),
                x2 = c(1, 2, 3, 2, 0),
                row.names = c("1", "2", "3", "4", "5"))
d <- dist(scale(X), method = 'euclidean')
heatmap(as.matrix(d), labRow = rownames(d), labCol = colnames(d))

kinds <- 3
model1 <- hclust(d, method = 'single')
result <- cutree(model1, k = kinds)
plot(model1, hang = -1)
rect.hclust(model1, k = kinds, border = "red")

mds <- cmdscale(d, k = 2, eig = TRUE)
x <- mds$points[, 1]
y <- mds$points[, 2]
library(ggplot2)
p <- ggplot(data.frame(x, y), aes(x, y))
p + geom_point(size = kinds, alpha = 0.8,
               aes(colour = factor(result), shape = factor(result)))


↑ First we obtain the pairwise relationships between the samples: the darker the color, the closer the two samples, and the more likely they belong to the same class.


↑ Partitioning the samples into 3 classes gives the dendrogram.


↑ To show the relationship between the 3 classes more intuitively, the samples are transformed with classical MDS and drawn as a scatter plot in the plane.

The 3-class partition looks reasonable, so the samples are grouped as {1,2}, {3,4}, {5}.
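R's cmdscale implements classical (Torgerson) MDS: double-center the squared distance matrix and take the top eigenvector directions. A minimal NumPy sketch of the same algorithm, applied to the five (x1, x2) points above (illustrative, not the original R code):

```python
import numpy as np

def classical_mds(D, k=2):
    """Classical (Torgerson) MDS, the algorithm behind R's cmdscale:
    double-center the squared distance matrix, eigendecompose, and keep
    the top-k directions scaled by sqrt(eigenvalue)."""
    n = D.shape[0]
    J = np.eye(n) - np.ones((n, n)) / n    # centering matrix
    B = -0.5 * J @ (D ** 2) @ J            # double-centered Gram matrix
    w, V = np.linalg.eigh(B)               # eigenvalues in ascending order
    idx = np.argsort(w)[::-1][:k]
    return V[:, idx] * np.sqrt(np.maximum(w[idx], 0))

X = np.array([[1, 1], [1, 2], [6, 3], [8, 2], [8, 0]], dtype=float)
D = np.sqrt(((X[:, None] - X[None, :]) ** 2).sum(-1))
Y = classical_mds(D, k=2)
# For Euclidean input, a 2-D embedding reproduces the original distances:
D2 = np.sqrt(((Y[:, None] - Y[None, :]) ** 2).sum(-1))
print(np.allclose(D, D2))   # True
```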

4.

[Problem]:

The matrix of pairwise Euclidean distances between seven samples is given below:

D = \begin{pmatrix} 0 & & & & & & \\ 4 & 0 & & & & & \\ 7 & 3 & 0 & & & & \\ 12 & 8 & 5 & 0 & & & \\ 18 & 14 & 11 & 6 & 0 & & \\ 19 & 15 & 12 & 7 & 1 & 0 & \\ 21 & 17 & 14 & 9 & 3 & 2 & 0 \end{pmatrix}

Cluster the samples with the single-linkage, complete-linkage and centroid methods, and draw the dendrograms.

[Solution]:
Write the following MATLAB program:

d = [4, 7, 12, 18, 19, 21, 3, 8, 14, 15, 17, 5, 11, 12, 14, 6, 7, 9, 1, 3, 2];
z1 = linkage(d);               % single linkage (the default)
z2 = linkage(d, 'complete');
z3 = linkage(d, 'centroid');
k = 3;
figure(1); H1 = dendrogram(z1); T1 = cluster(z1, k)
figure(2); H2 = dendrogram(z2); T2 = cluster(z2, k)
figure(3); H3 = dendrogram(z3); T3 = cluster(z3, k)

(1) Single linkage gives the clusters {1,2,3}, {4}, {5,6,7}; its dendrogram is shown below.

(2) Complete linkage gives the same clusters {1,2,3}, {4}, {5,6,7}; its dendrogram is shown below.

(3) The centroid method also gives {1,2,3}, {4}, {5,6,7}; its dendrogram is shown below.
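The first two linkage rules can be checked with a naive agglomerative routine on the full distance matrix (a pure-Python sketch, not MATLAB's linkage; link=min gives single linkage, link=max complete linkage):

```python
def agglomerate(D, k, link=max):
    """Naive agglomerative clustering on a full distance matrix:
    repeatedly merge the two closest clusters until k remain."""
    clusters = [[i] for i in range(len(D))]
    while len(clusters) > k:
        best = None
        for a in range(len(clusters)):
            for b in range(a + 1, len(clusters)):
                d = link(D[p][q] for p in clusters[a] for q in clusters[b])
                if best is None or d < best[0]:
                    best = (d, a, b)
        _, a, b = best
        clusters[a] = sorted(clusters[a] + clusters[b])
        del clusters[b]
    return sorted(clusters)

# Expand the condensed distances from the problem into a full matrix:
d = [4, 7, 12, 18, 19, 21, 3, 8, 14, 15, 17, 5, 11, 12, 14, 6, 7, 9, 1, 3, 2]
n = 7
D = [[0.0] * n for _ in range(n)]
it = iter(d)
for i in range(n):
    for j in range(i + 1, n):
        D[i][j] = D[j][i] = next(it)

print(agglomerate(D, 3, link=min))  # single:   [[0, 1, 2], [3], [4, 5, 6]]
print(agglomerate(D, 3, link=max))  # complete: [[0, 1, 2], [3], [4, 5, 6]]
```

Both rules recover the {1,2,3}, {4}, {5,6,7} partition (0-based indices in the output).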

5.

[Problem]:

Forecast the July–August precipitation Y at five North China stations (Beijing, Tianjin, Yingkou, Taiyuan, Shijiazhuang) for 1968 (and 1969).

(1) According to professional statistical analysis, Y depends mainly on the following factors:

X1: mean April temperature in Shanghai;
X2: total March precipitation in Beijing;
X3: the geomagnetic Ci index in May;
X4: the number of days in April with the 500 mb W circulation pattern.

(2) The historical data for 1961–1967, together with the predictor values for 1968–1969, are:

Year    Y/mm   x1     x2     x3     x4
1961    410    14.8   20.1   0.69   13
1962    255    12.5    2.3   0.36    4
1963    527    14.5   12.4   0.69   12
1964    510    16.4   10.6   0.58   26
1965    226    12.2    0.3   0.35    4
1966    456    13.8   12.3   0.42   23
1967    389    13.6    7.7   0.82   25
1968    —      13.7    0.6   0.68   12.5
1969    —      14.2   16.5   0.65   15

[Solution]:

First, examine the pairwise relationships between the variables with the following R program:

X1 <- c(14.8, 12.5, 14.5, 16.4, 12.2, 13.8, 13.6)
X2 <- c(20.1, 2.3, 12.4, 10.6, 0.3, 12.3, 7.7)
X3 <- c(0.69, 0.36, 0.69, 0.58, 0.35, 0.42, 0.82)
X4 <- c(13, 4, 12, 26, 4, 23, 25)
Y  <- c(410, 255, 527, 510, 226, 456, 389)

testData <- data.frame(X1, X2, X3, X4, Y)
cor(testData)

library(car)
scatterplotMatrix(testData, spread = FALSE, lty.smooth = 2,
                  main = "Scatter Plot Matrix")

The correlation matrix (the Y row follows by symmetry):

        X1         X2         X3         X4         Y
X1  1.0000000  0.6950138  0.5142621  0.6627185  0.8497245
X2  0.6950138  1.0000000  0.5762508  0.4386373  0.7227803
X3  0.5142621  0.5762508  1.0000000  0.5283724  0.5735971
X4  0.6627185  0.4386373  0.5283724  1.0000000  0.6979025
Y   0.8497245  0.7227803  0.5735971  0.6979025  1.0000000

The scatter-plot matrix is shown below.

Next, fit a multiple linear regression in R:

fit <- lm(Y ~ X1 + X2 + X3 + X4, data = testData)
summary(fit)
vif(fit)

This produces the regression summary:

lm(formula = Y ~ X1 + X2 + X3 + X4, data = testData)

Residuals:
     1      2      3      4      5      6      7
-63.59 -11.54 101.28 -22.20 -19.41  37.15 -21.69

Coefficients:
             Estimate Std. Error t value Pr(>|t|)
(Intercept) -278.856    541.448  -0.515    0.658
X1            40.935     44.060   0.929    0.451
X2             4.251      8.506   0.500    0.667
X3            34.616    269.756   0.128    0.910
X4             2.869      5.642   0.508    0.662

Residual standard error: 92.62 on 2 degrees of freedom
Multiple R-squared: 0.7928,  Adjusted R-squared: 0.3783
F-statistic: 1.913 on 4 and 2 DF,  p-value: 0.3715

and the variance inflation factors:

   X1       X2       X3       X4
2.791265 2.272406 1.743121 2.001629

Although the VIFs are all below the usual warning thresholds, none of the coefficients is statistically significant, suggesting redundancy among the predictors.

X3, the least significant predictor, is therefore removed and the model refitted:

fit <- lm(Y ~ X1 + X2 + X4, data = testData)
summary(fit)
vif(fit)

This produces the regression summary:

lm(formula = Y ~ X1 + X2 + X4, data = testData)

Residuals:
     1      2      3      4      5      6      7
-62.90 -12.92 105.41 -24.26 -20.33  29.29 -14.28

Coefficients:
             Estimate Std. Error t value Pr(>|t|)
(Intercept) -265.627    435.785  -0.610    0.585
X1            40.830     36.116   1.131    0.340
X2             4.669      6.439   0.725    0.521
X4            3.106      4.369    0.711    0.528

Residual standard error: 75.93 on 3 degrees of freedom
Multiple R-squared: 0.791,  Adjusted R-squared: 0.5821
F-statistic: 3.786 on 3 and 3 DF,  p-value: 0.1516

and the variance inflation factors:

   X1       X2       X4
2.790316 1.937625 1.786125

The coefficients are still insignificant, so X4 is removed as well and the model refitted:

fit <- lm(Y ~ X1 + X2, data = testData)
summary(fit)
vif(fit)

This produces the regression summary:

lm(formula = Y ~ X1 + X2, data = testData)

Residuals:
     1      2      3      4      5      6      7
-79.79 -28.38  88.25 -25.28 -31.89  56.23  20.86

Coefficients:
             Estimate Std. Error t value Pr(>|t|)
(Intercept) -415.076    357.354  -1.162    0.310
X1            55.051     28.151   1.956    0.122
X2             4.483      6.023   0.744    0.498

Residual standard error: 71.08 on 4 degrees of freedom
Multiple R-squared: 0.7558,  Adjusted R-squared: 0.6338
F-statistic: 6.191 on 2 and 4 DF,  p-value: 0.05961

and the variance inflation factors:

   X1       X2
1.934401 1.934401

This model shows no multicollinearity. If X2 is removed as well and a simple regression is fitted:

fit <- lm(Y ~ X1, data = testData)
summary(fit)

the regression summary is:

Call: lm(formula = Y ~ X1, data = testData)

Residuals:
     1      2      3      4      5      6      7
-43.82 -38.71  94.06 -55.20 -46.83  71.79  18.71

Coefficients:
            Estimate Std. Error t value Pr(>|t|)
(Intercept) -576.45     271.09  -2.126   0.0868 .
X1            69.61      19.32   3.604   0.0155 *
---
Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1

Residual standard error: 67.84 on 5 degrees of freedom
Multiple R-squared: 0.722,  Adjusted R-squared: 0.6664
F-statistic: 12.99 on 1 and 5 DF,  p-value: 0.01548

We can therefore adopt the regression model Y = -576.45 + 69.61 X1. The scatter plot and fitted line are shown below:

At the 95% confidence level, the predicted July–August precipitation for the five North China stations is 377.2479 mm for 1968, with interval [309.9719, 444.5238], and 412.0544 mm for 1969, with interval [345.1723, 478.9366].
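The final one-variable model can be reproduced with an ordinary least-squares fit; a NumPy cross-check of the R output, including the 1968 and 1969 point predictions:

```python
import numpy as np

x1 = np.array([14.8, 12.5, 14.5, 16.4, 12.2, 13.8, 13.6])
y  = np.array([410, 255, 527, 510, 226, 456, 389], dtype=float)

# Design matrix with an intercept column; solve min ||A @ beta - y||^2.
A = np.column_stack([np.ones_like(x1), x1])
beta, *_ = np.linalg.lstsq(A, y, rcond=None)   # [intercept, slope]
print(np.round(beta, 2))                       # ≈ [-576.45, 69.61]

for year, x in [(1968, 13.7), (1969, 14.2)]:
    print(year, round(beta[0] + beta[1] * x, 1))
```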

6.

[Problem]: The table below gives the main 2001 financial indicators of 15 listed companies. Use hierarchical clustering and the K-means method to cluster the companies, and compare the two results. Here x1: company ID, x2: return on equity, x3: net profit per share, x4: total-asset turnover, x5: debt-to-asset ratio, x6: current-liability ratio, x7: net assets per share, x8: net-profit growth rate, x9: total-asset growth rate.

x1     x2     x3     x4     x5      x6      x7       x8       x9
1    11.09   0.21   0.05   96.98   70.53   1.86    -44.04    81.99
2    11.96   0.59   0.74   51.78   90.73   4.95      7.02    16.11
3     0.00   0.03   0.03  181.99  100.00  -2.98    103.33    21.18
4    11.58   0.13   0.17   46.07   92.18   1.14      6.55   -56.32
5    -6.19  -0.09   0.03   43.30   82.24   1.52  -1713.50    -3.36
6    10.00   0.47   0.48   68.40   86.00   4.70    -11.56     0.85
7    10.49   0.11   0.35   82.98   99.87   1.02    100.23    30.32
8    11.12  -1.69   0.12  132.14  100.00  -0.66  -4454.39   -62.75
9     3.41   0.04   0.20   67.86   98.51   1.25    -11.25   -11.43
10    1.16   0.01   0.54   43.70  100.00   1.03    -87.18    -7.41
11   30.22   0.16   0.40   87.36   94.88   0.53    729.41    -9.97
12    8.19   0.22   0.38   30.31  100.00   2.73    -12.31    -2.77
13   95.79  -5.20   0.50  252.34   99.34  -5.42  -9816.52   -46.82
14   16.55   0.35   0.93   72.31   84.05   2.14    115.95   123.41
15  -24.18  -1.16   0.79   56.26   97.80   4.81   -533.89   -27.74

[Solution]:

(1) Hierarchical clustering. Write the following R program:

X <- data.frame(
  x2 = c(11.09, 11.96, 0, 11.58, -6.19, 10, 10.49, 11.12, 3.41, 1.16,
         30.22, 8.19, 95.79, 16.55, -24.18),
  x3 = c(0.21, 0.59, 0.03, 0.13, -0.09, 0.47, 0.11, -1.69, 0.04, 0.01,
         0.16, 0.22, -5.2, 0.35, -1.16),
  x4 = c(0.05, 0.74, 0.03, 0.17, 0.03, 0.48, 0.35, 0.12, 0.2, 0.54,
         0.4, 0.38, 0.5, 0.93, 0.79),
  x5 = c(96.98, 51.78, 181.99, 46.07, 43.3, 68.4, 82.98, 132.14, 67.86,
         43.7, 87.36, 30.31, 252.34, 72.31, 56.26),
  x6 = c(70.53, 90.73, 100, 92.18, 82.24, 86, 99.87, 100, 98.51, 100,
         94.88, 100, 99.34, 84.05, 97.8),
  x7 = c(1.86, 4.95, -2.98, 1.14, 1.52, 4.7, 1.02, -0.66, 1.25, 1.03,
         0.53, 2.73, -5.42, 2.14, 4.81),
  x8 = c(-44.04, 7.02, 103.33, 6.55, -1713.5, -11.56, 100.23, -4454.39,
         -11.25, -87.18, 729.41, -12.31, -9816.52, 115.95, -533.89),
  x9 = c(81.99, 16.11, 21.18, -56.32, -3.36, 0.85, 30.32, -62.75,
         -11.43, -7.41, -9.97, -2.77, -46.82, 123.41, -27.74),
  row.names = as.character(1:15)
)
d <- dist(scale(X), method = 'euclidean')
heatmap(as.matrix(d), labRow = rownames(d), labCol = colnames(d))

kinds <- 3
model1 <- hclust(d, method = 'average')
result <- cutree(model1, k = kinds)
plot(model1, hang = -1)
rect.hclust(model1, k = kinds, border = "red")

mds <- cmdscale(d, k = 2, eig = TRUE)
x <- mds$points[, 1]
y <- mds$points[, 2]
library(ggplot2)
p <- ggplot(data.frame(x, y), aes(x, y))
p + geom_point(size = kinds, alpha = 0.8,
               aes(colour = factor(result), shape = factor(result)))


↑ First we obtain the heatmap.

Partitioning the samples into 3 classes,

↑ we obtain the dendrogram.

Then, for a clearer visualization, the data are transformed with classical MDS and drawn as a 2-D scatter plot ↓

So companies 1 and 14 form one group, company 13 is a group of its own, and the remaining companies form the third group.

(2) K-means. Again dividing the samples into 3 classes, write the following R program:

model <- kmeans(scale(X), centers = 3, nstart = 10)
model$cluster

This gives the cluster assignments:

Company:  1  2  3  4  5  6  7  8  9 10 11 12 13 14 15
Cluster:  2  2  1  1  1  2  1  1  1  1  1  1  3  2  1

So companies 1, 2, 6 and 14 form one group, company 13 is a group of its own, and the rest form the third group.

(3) Density-based clustering (DBSCAN), via the fpc package: (R)

library(fpc)
model <- dbscan(X, eps = 2.5, MinPts = 5, scale = TRUE, showplot = 1,
                method = "hybrid")
model$cluster

This produces the assignments:

0 1 0 1 1 1 1 0 1 1 1 1 0 0 0

i.e. the algorithm finds one cluster (label 1) and treats the remaining companies (label 0) as noise points, effectively a two-way split.
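K-means itself is just Lloyd's algorithm. A self-contained NumPy sketch with a deterministic initialization on toy data (real use would add random restarts, as kmeans(..., nstart = 10) does above):

```python
import numpy as np

def kmeans(X, k, iters=100):
    """Plain Lloyd's algorithm: alternate assigning points to the nearest
    centroid and recomputing centroids, until the centroids stop moving.
    Initialization here is simply the first k rows (deterministic)."""
    centers = X[:k].copy()
    for _ in range(iters):
        labels = np.argmin(((X[:, None] - centers[None, :]) ** 2).sum(-1), axis=1)
        new = np.array([X[labels == j].mean(axis=0) for j in range(k)])
        if np.allclose(new, centers):
            break
        centers = new
    return labels, centers

# Two well-separated toy blobs; rows 0 and 1 seed the two centroids.
X = np.array([[0, 0], [10, 10], [0, 1], [1, 0], [10, 11], [11, 10]], float)
labels, centers = kmeans(X, 2)
print(labels)   # [0 1 0 0 1 1]
```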

7.

[Problem]:

The table below contains sample-survey data on rural residents' per-capita living expenditure in 16 regions of China in a given year, measured by six economic indicators. Use a statistics package to cluster the regions with different methods, and judge which method comes closest to what is actually observed.

Region          Food    Clothing  Fuel    Housing  Transport&Comms  Culture&Recreation
Beijing         190.33  43.77      9.73    60.54   49.01            9.04
Tianjin         135.20  36.40     10.47    44.16   36.49            3.94
Hebei            95.21  22.83      9.30    22.44   22.81            2.80
Shanxi          104.78  25.11      6.40     9.89   18.17            3.25
InnerMongolia   128.41  27.63      8.94    12.58   23.99            2.27
Liaoning        145.68  32.83     17.79    27.29   39.09            3.47
Jilin           159.37  33.38     19.27    11.81   25.29            5.22
Heilongjiang    116.22  29.57     13.24    11.81   25.29            5.22
Shanghai        221.11  38.64     12.53   115.65   50.82            5.89
Jiangsu         144.98  29.12     11.67    42.60   27.30            5.74
Zhejiang        169.92  32.75     12.72    47.12   34.35            5.00
Anhui           135.11  23.09     15.62    23.54   18.18            6.39
Fujian          144.92  21.26     16.96    19.52   21.75            6.73
Jiangxi         140.54  21.50     17.64    19.19   15.97            4.94
Shandong        115.84  30.26     12.20    33.60   33.77            3.85
Henan           101.18  23.26      8.46    20.20   20.50            4.30

[Solution]:
(1) Hierarchical clustering. Write the following R program, reading the data from a CSV file:

consumptionData <- read.csv("C:\\Users\\lenovo\\Desktop\\Book1.csv",
                            header = TRUE, sep = ",")
X <- consumptionData[,-1]

d <- dist(scale(X), method = 'euclidean')
heatmap(as.matrix(d), labRow = consumptionData$city,
        labCol = consumptionData$city)

kinds <- 5
model1 <- hclust(d, method = 'average')
result <- cutree(model1, k = kinds)
plot(model1, hang = -1, labels = consumptionData$city)
rect.hclust(model1, k = kinds, border = "red")

mds <- cmdscale(d, k = 2, eig = TRUE)
x <- mds$points[, 1]
y <- mds$points[, 2]
library(ggplot2)
p <- ggplot(data.frame(x, y), aes(x, y))
p + geom_point(size = kinds, alpha = 0.8,
               aes(colour = factor(result), shape = factor(result)))


↑首先得到的是熱力圖。

考慮將樣本劃分為5類,


↑得到譜系聚類圖。

接著,為了使得可視化效果更好,使用經典MDS對數據進行變換,畫出散點圖↓

可見,北京、上海為一線城市,自成一類;江蘇、浙江等為第二線省份;內蒙古等為一類省份;安徽等為一類省份。

(2) K-means. Write the following R program:

model <- kmeans(scale(X), centers = 5, nstart = 10)
table(consumptionData$city, model$cluster)

This gives the following assignment of regions to the five clusters:

                1  2  3  4  5
Anhui           1  0  0  0  0
Beijing         0  0  0  0  1
Fujian          1  0  0  0  0
Hebei           0  0  1  0  0
Heilongjiang    1  0  0  0  0
Henan           0  0  1  0  0
InnerMongolia   0  0  1  0  0
Jiangsu         0  0  0  1  0
Jiangxi         1  0  0  0  0
Jilin           1  0  0  0  0
Liaoning        0  0  0  1  0
Shandong        0  0  0  1  0
Shanghai        0  1  0  0  0
Shanxi          0  0  1  0  0
Tianjin         0  0  0  1  0
Zhejiang        0  0  0  1  0

This result closely matches that of the hierarchical clustering.
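Every script above standardizes with scale() before computing distances; without it, a wide-range column such as the net-profit growth rate x8 in Problem 6 would dominate the Euclidean distance. A minimal NumPy equivalent of R's scale():

```python
import numpy as np

def scale(X):
    """Column-wise z-scores, like R's scale(): center each column and
    divide by its sample standard deviation (ddof=1)."""
    X = np.asarray(X, dtype=float)
    return (X - X.mean(axis=0)) / X.std(axis=0, ddof=1)

# Two columns on wildly different scales (e.g. a ratio and a growth rate):
X = np.array([[11.09, -44.04], [11.96, 7.02], [0.0, 103.33], [30.22, 729.41]])
Z = scale(X)
print(Z.mean(axis=0))          # ≈ [0, 0]
print(Z.std(axis=0, ddof=1))   # ≈ [1, 1]
```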

8.

[Problem]:

To gauge the reception of a new product, a company surveyed 12 sales agents. Each rated the product's style, packaging and durability on a 10-point scale (high scores meaning good performance) and stated whether they would buy it. Using the results below, carry out a Fisher discriminant analysis.

Buy group
Sample   Style  Packaging  Durability
1         9       8          7
2         7       6          6
3        10       7          8
4         8       4          5
5         9       9          7
6         8       6          7
7         7       5          6

Non-buy group
Sample   Style  Packaging  Durability
1         4       4          4
2         3       6          6
3         6       3          3
4         2       4          5
5         1       2          2

[Solution]:

Implement a Fisher linear classifier in MATLAB:

function [w, y1, y2, Jw] = FisherLinearDiscriminate(data, label)
% FLD  Fisher Linear Discriminant.
% data:  D*N data matrix
% label: {+1, -1}
% Reference: C. M. Bishop, Pattern Recognition and Machine Learning, pp. 186-189
% Copyright: LiFeitengup@CSDN

% compute class means and scatter matrices
inx1 = find(label == 1);
inx2 = find(label == -1);
n1 = length(inx1);
n2 = length(inx2);
m1 = mean(data(:, inx1), 2);
m2 = mean(data(:, inx2), 2);
S1 = (data(:, inx1) - m1*ones(1, n1)) * (data(:, inx1) - m1*ones(1, n1))';
S2 = (data(:, inx2) - m2*ones(1, n2)) * (data(:, inx2) - m2*ones(1, n2))';
Sw = S1 + S2;

% compute the FLD direction
W = Sw \ (m1 - m2);
y1 = W'*m1;   % projected mean, label = +1
y2 = W'*m2;   % projected mean, label = -1
w = W;
Jw = (y1 - y2)^2 / (W'*Sw*W);
end

data = [9 7 10 8 9 8 7 4 3 6 2 1;
        8 6 7 4 9 6 5 4 6 3 4 2;
        7 6 8 5 7 7 6 4 6 3 5 2];
label = [1, 1, 1, 1, 1, 1, 1, -1, -1, -1, -1, -1];
[w, y1, y2, Jw] = FisherLinearDiscriminate(data, label);
w0 = (y1 + y2) / 2;
figure(1);
scatter3(data(1,1:7), data(2,1:7), data(3,1:7)); hold on;
scatter3(data(1,8:12), data(2,8:12), data(3,8:12)); hold on;
[x, y] = meshgrid(0:0.01:10);
z = (w0 - w(1)*x - w(2)*y) / w(3);
mesh(x, y, z);

A Fisher classifier can therefore be designed as

\begin{cases} w^{T}x > w_0 \Rightarrow x \in \omega_1 \\ w^{T}x < w_0 \Rightarrow x \in \omega_2 \end{cases}

where w = (0.2197, -0.0792, 0.1778)^{T} and w_0 = 1.7964.

Plotting the sample points in three-dimensional space, we obtain the figure above.

↑ The Fisher discriminant plane separates the two training classes with no misclassification.
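The same w and w0 fall out of a few lines of NumPy; a sketch of the textbook two-class Fisher discriminant, mirroring the MATLAB function above (samples as rows rather than columns):

```python
import numpy as np

def fisher_ld(X1, X2):
    """Two-class Fisher linear discriminant: w = Sw^{-1}(m1 - m2),
    with threshold w0 halfway between the projected class means."""
    m1, m2 = X1.mean(axis=0), X2.mean(axis=0)
    S1 = (X1 - m1).T @ (X1 - m1)
    S2 = (X2 - m2).T @ (X2 - m2)
    w = np.linalg.solve(S1 + S2, m1 - m2)
    w0 = (w @ m1 + w @ m2) / 2
    return w, w0

buy = np.array([[9, 8, 7], [7, 6, 6], [10, 7, 8], [8, 4, 5],
                [9, 9, 7], [8, 6, 7], [7, 5, 6]], float)
no  = np.array([[4, 4, 4], [3, 6, 6], [6, 3, 3], [2, 4, 5], [1, 2, 2]], float)
w, w0 = fisher_ld(buy, no)
print(np.round(w, 4), round(w0, 4))   # ≈ [0.2197 -0.0792 0.1778] 1.7964
# All training points are classified correctly by the sign of w.x - w0:
print(all(x @ w > w0 for x in buy), all(x @ w < w0 for x in no))  # True True
```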

9.

[Problem]:

The Human Development Index was introduced in UNDP's first Human Development Report, published in May 1990. The report recommends that human development be measured in terms of the three key dimensions of human life, using three indicator variables: life expectancy at birth, adult literacy rate, and real GDP per capita, combined into a single composite index (source: UNDP, Human Development Report, 1995). From the 1995 HDI rankings, five high-development and five medium-development countries are taken as the two training groups, and four further countries are to be classified by distance discriminant analysis. (The values for sample 11 follow the MATLAB data below.)

Group                 No.  Country        Life expectancy  Adult literacy  GDP per capita
Class 1                1   USA                76               99             5374
(high development)     2   Japan              79.5             99             5359
                       3   Switzerland        78               99             5372
                       4   Argentina          72.1             95.9           5242
                       5   UAE                73.8             77.7           5370
Class 2                6   Bulgaria           71.2             93             4250
(medium development)   7   Cuba               75.3             94.9           3412
                       8   Paraguay           70               91.2           3390
                       9   Georgia            72.8             99             2300
                      10   South Africa       62.9             80.6           3799
To classify           11   China              68.5             79.3           1950
                      12   Romania            69.9             96.9           2840
                      13   Greece             77.6             93.8           5233
                      14   Colombia           69.3             90.3           5158

[Solution]:

Write the following MATLAB program:

training = [76 99 5374; 79.5 99 5359; 78 99 5372; 72.1 95.9 5242;
            73.8 77.7 5370; 71.2 93 4250; 75.3 94.9 3412; 70 91.2 3390;
            72.8 99 2300; 62.9 80.6 3799];
group = ['level1'; 'level1'; 'level1'; 'level1'; 'level1';
         'level2'; 'level2'; 'level2'; 'level2'; 'level2'];
sample = [68.5 79.3 1950; 69.9 96.9 2840; 77.6 93.8 5233; 69.3 90.3 5158];
[class, err] = classify(sample, training, group, 'linear');
class
err

figure(1);
scatter3(training(1:5, 1), training(1:5, 2), training(1:5, 3), 'bl'); hold on;
scatter3(training(6:10, 1), training(6:10, 2), training(6:10, 3), 'r'); hold on;
scatter3(sample(1:2, 1), sample(1:2, 2), sample(1:2, 3), 'g'); hold on;
scatter3(sample(3:4, 1), sample(3:4, 2), sample(3:4, 3), 'y');

↓ The discriminant result: China and Romania belong to class 2 (medium-development countries), while Greece and Colombia belong to class 1 (high-development countries). In the scatter plot below, green marks China and Romania, and yellow marks Greece and Colombia.
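MATLAB's classify(..., 'linear') is a linear discriminant based on a pooled covariance matrix. A NumPy sketch that reproduces the class assignments (equal priors assumed, which holds here since the groups are the same size):

```python
import numpy as np

def linear_discriminant(G1, G2):
    """Distance discriminant with a pooled covariance: assign x to group 1
    when (x - (m1+m2)/2) . Sp^{-1}(m1 - m2) > 0, else to group 2."""
    m1, m2 = G1.mean(axis=0), G2.mean(axis=0)
    S = (G1 - m1).T @ (G1 - m1) + (G2 - m2).T @ (G2 - m2)
    Sp = S / (len(G1) + len(G2) - 2)        # pooled covariance
    w = np.linalg.solve(Sp, m1 - m2)
    mid = (m1 + m2) / 2
    return lambda x: 1 if (np.asarray(x) - mid) @ w > 0 else 2

high = np.array([[76, 99, 5374], [79.5, 99, 5359], [78, 99, 5372],
                 [72.1, 95.9, 5242], [73.8, 77.7, 5370]], float)
medium = np.array([[71.2, 93, 4250], [75.3, 94.9, 3412], [70, 91.2, 3390],
                   [72.8, 99, 2300], [62.9, 80.6, 3799]], float)
classify = linear_discriminant(high, medium)
samples = [[68.5, 79.3, 1950], [69.9, 96.9, 2840],
           [77.6, 93.8, 5233], [69.3, 90.3, 5158]]
print([classify(s) for s in samples])   # [2, 2, 1, 1]
```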

10.

【問題描述】:

為了更深入地了解我國人口的文化程度狀況,現利用1990年全國人口普查數據對全國30個省、直轄市、自治區進行聚類分析,原始數據如下表。分析選用了三個指標:大學以上文化程度的人口比例(DXBZ)、初中文化程度的人口比例(CZBZ)、文盲半文盲人口比例(WMBZ)來反映較高、中等、較低的文化程度人口的狀況。
(1)計算樣本的Euclid距離,分別用最長距離法、均值法、重心法和Ward法作聚類分析,并畫出相應的譜系圖。如果將所有樣本分為四類,試寫出各種方法的分類結果。
(2)用動態規劃方法分四類,寫出相應的分類結果。

Region         DXBZ   CZBZ   WMBZ    Region     DXBZ   CZBZ   WMBZ
Beijing        9.30   30.55   8.70   Henan      0.85   26.55  16.15
Tianjin        4.67   29.38   8.92   Hubei      1.57   23.16  15.79
Hebei          0.96   24.69  15.21   Hunan      1.14   22.57  12.10
Shanxi         1.38   29.24  11.30   Guangdong  1.34   23.04  10.45
InnerMongolia  1.48   25.47  15.39   Guangxi    0.79   19.14  10.61
Liaoning       2.60   32.32   8.81   Hainan     1.24   22.53  13.97
Jilin          2.15   26.31  10.49   Sichuan    0.96   21.65  16.24
Heilongjiang   2.14   28.46  10.87   Sichuan    0.96   21.65  16.24
Shanghai       6.53   31.59  11.04   Yunnan     0.81   13.85  25.44
Jiangsu        1.47   26.43  17.23   Tibet      0.57    3.85  44.43
Zhejiang       1.17   23.74  17.46   Shaanxi    1.67   24.36  17.62
Anhui          0.88   19.97  24.43   Gansu      1.10   16.85  27.93
Fujian         1.23   16.87  15.63   Qinghai    1.49   17.76  27.70
Jiangxi        0.99   18.84  16.22   Ningxia    1.61   20.27  22.06
Shandong       0.98   25.18  16.87   Xinjiang   1.85   20.66  12.75

[Solution]:
(1) Write the following R program to compute the Euclidean distances between the standardized samples:

eduLevel <- read.csv("C:\\Users\\lenovo\\Desktop\\Book1.csv",
                     header = TRUE, sep = ",")
X <- eduLevel[,-1]
d <- dist(scale(X), method = 'euclidean')

The computed distance matrix (R console output) is:

1 2 3 4 5 6 2 2.5079170 3 4.6857035 2.3019024 4 4.2959228 1.8039758 0.9399503 5 4.3944976 2.0267019 0.3100576 0.8254167 6 3.6299646 1.2195791 1.7576168 0.8952417 1.5493019 7 3.9315549 1.4674278 0.9317467 0.6482392 0.7504160 1.0491364 8 3.8923692 1.3982869 1.0586160 0.4339814 0.8509429 0.7377850 9 1.5366735 1.1047217 3.2642179 2.8084129 2.9651112 2.1456970 10 4.4267279 2.1006361 0.4787978 0.9077872 0.2891008 1.5953600 11 4.6764200 2.3876265 0.3531529 1.2235189 0.4289354 1.9765734 12 5.2920620 3.2801539 1.4408375 2.3237337 1.5302906 3.0415855 13 4.9962087 2.9202433 1.3060070 2.1298048 1.4327314 2.8124863 14 4.9879677 2.8142055 0.9791512 1.8532057 1.1359749 2.5876656 15 4.7038653 2.3542003 0.2325878 1.0165142 0.3359695 1.8121611 16 4.7133609 2.3182390 0.3374037 0.8278320 0.3971652 1.6537074 17 4.4486265 2.1627689 0.4226744 1.1719777 0.3896500 1.8584956 18 4.6223959 2.2544768 0.5471008 1.1184841 0.6715998 1.8497048 19 4.4810853 2.0926126 0.7117096 1.0343031 0.7665879 1.6961709 20 4.9759926 2.7059410 1.1041512 1.7071680 1.2780311 2.4056610 21 4.6032501 2.2714186 0.4214002 1.1688495 0.5376976 1.9058726 22 4.8413151 2.5647399 0.5218575 1.4335069 0.7017735 2.2053846 23 5.6821710 3.7986133 2.0475042 2.9749037 2.1719924 3.6959492 24 5.7883753 3.9583294 2.2442609 3.1693233 2.3621934 3.8818716 25 7.9859267 6.6710568 5.1638710 6.0652753 5.2529014 6.7307005 26 4.4043230 2.1491260 0.4998567 1.1687817 0.3604245 1.8245068 27 5.5790971 3.7744757 2.1160352 2.9997768 2.1883371 3.6771920 28 5.3377510 3.5672580 2.0210296 2.8725387 2.0590996 3.5104481 29 4.8186167 2.8250665 1.2112484 2.0538539 1.2301677 2.7012342 30 4.3767932 2.1591311 0.8840645 1.4576084 0.8921172 2.04181007 8 9 10 11 12 2 3 4 5 6 7 8 0.3600082 9 2.5231009 2.4268835 10 0.9572380 0.9693732 2.9760847 11 1.1388353 1.2781240 3.2834039 0.4755084 12 2.2173503 2.3677507 4.0129802 1.4629511 1.1182973 13 1.7751660 2.0795075 3.8093320 1.6042588 1.1645993 1.2772731 14 1.5783324 1.8499334 3.7257041 1.2917584 0.8342850 1.0943322 15 
1.0649649 1.1433110 3.2705295 0.3394169 0.2711551 1.3160227 16 1.0223564 1.0319408 3.2489007 0.3640672 0.5258015 1.5393033 17 0.9240584 1.1328132 3.0847800 0.5766857 0.3222702 1.3046557 18 0.8524209 1.1275936 3.2753820 0.9454631 0.7292930 1.6791419 19 0.6966700 0.9986770 3.1417674 1.0541664 0.9310212 1.9186413 20 1.3975247 1.7090549 3.7246088 1.5328149 1.1961048 1.8180130 21 0.9179096 1.1697352 3.2503633 0.7850780 0.5011037 1.4488522 22 1.2563742 1.4754466 3.4969424 0.8490626 0.3981838 1.1102947 23 2.7477729 2.9783102 4.5321877 2.1923434 1.7645284 0.8840365 24 2.9385431 3.1677404 4.6646561 2.3742691 1.9549944 1.0240915 25 5.8654520 6.0610060 7.1193300 5.1937573 4.8465005 3.7483477 26 1.0226485 1.1445478 3.0115315 0.3634436 0.2896743 1.2285151 27 2.8304711 3.0042608 4.4132326 2.1287997 1.7864622 0.7016984 28 2.6887646 2.8532727 4.1759308 1.9878753 1.6780856 0.6531804 29 1.8411181 2.0196721 3.5594982 1.2042119 0.8667955 0.5044283 30 0.9958719 1.3259214 3.1179869 1.1413015 0.8815281 1.622674313 14 15 16 17 18 2 3 4 5 6 7 8 9 10 11 12 13 14 0.3598412 15 1.3940355 1.0547307 16 1.6195897 1.2807005 0.2558243 17 1.0592217 0.7838317 0.4834798 0.6851052 18 1.0535236 0.8251935 0.7654856 0.8614349 0.5455278 19 1.2294583 1.0455410 0.9340074 0.9835525 0.7114148 0.2540625 20 0.7946200 0.7451584 1.2989826 1.4277362 1.0407148 0.6303693 21 0.9634393 0.6925593 0.5978235 0.7552413 0.3156283 0.2511668 22 0.8098546 0.4662293 0.5912265 0.8147478 0.4179470 0.5722001 23 1.2157545 1.2687608 2.0004555 2.2424544 1.8465004 2.0758068 24 1.3988772 1.4682225 2.1910441 2.4328922 2.0378131 2.2764274 25 4.3645250 4.4623215 5.0614000 5.2859615 4.9645580 5.2632010 26 1.2911220 1.0031510 0.4086485 0.6042337 0.3163675 0.8330112 27 1.6143654 1.5716495 2.0038010 2.2340859 1.9217564 2.2822358 28 1.5957303 1.5397850 1.8990088 2.1301350 1.8006958 2.2036566 29 1.0348036 0.8687461 1.1143224 1.3613674 0.9518609 1.3840183 30 0.8060034 0.7167807 1.0364833 1.2017547 0.5946436 0.504546619 20 21 22 23 24 2 3 4 5 6 7 
8 9 10 11 12 13 14 15 16 17 18 19 20 0.7119306 21 0.4723393 0.7544083 22 0.8195330 0.8524282 0.3643507 23 2.3045003 1.9397409 1.8955337 1.5701437 24 2.5034838 2.1332893 2.0946242 1.7704832 0.2034588 25 5.4909486 5.1095948 5.0672944 4.7348903 3.1949646 2.9944379 26 0.9816173 1.3490393 0.6124201 0.6177919 1.8930502 2.0746495 27 2.5147225 2.3086834 2.0599749 1.7288197 0.6271003 0.6152875 28 2.4267821 2.2840717 1.9710517 1.6601331 0.7843551 0.8018722 29 1.5968088 1.5764910 1.1426498 0.8706076 1.0739024 1.2313761 30 0.5679284 0.6855035 0.4798381 0.6836312 1.8996725 2.087895125 26 27 28 29 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 4.9271870 27 3.0674899 1.8636170 28 3.2216611 1.7187512 0.2608285 29 4.0413404 0.8943896 0.9949259 0.8510835 30 5.0500586 0.8908598 2.1272359 2.0277721 1.2293341

The heatmap is drawn with: (R)

heatmap(as.matrix(d), labRow = eduLevel$region, labCol = eduLevel$region)

Now the cluster analyses with the various methods:

① Complete-linkage (farthest-neighbor) method:

kinds <- 4
model1 <- hclust(d, method = 'complete')
result <- cutree(model1, k = kinds)
plot(model1, hang = -1, labels = eduLevel$region)
rect.hclust(model1, k = kinds, border = "red")

mds <- cmdscale(d, k = 2, eig = TRUE)
x <- mds$points[, 1]
y <- mds$points[, 2]
library(ggplot2)
p <- ggplot(data.frame(x, y), aes(x, y))
p + geom_point(size = kinds, alpha = 0.8,
               aes(colour = factor(result), shape = factor(result)))

↓ This yields the dendrogram.

A classical MDS transform gives the scatter plot ↓

② Average-linkage method:

kinds <- 4
model1 <- hclust(d, method = 'average')
result <- cutree(model1, k = kinds)
plot(model1, hang = -1, labels = eduLevel$region)
rect.hclust(model1, k = kinds, border = "red")

mds <- cmdscale(d, k = 2, eig = TRUE)
x <- mds$points[, 1]
y <- mds$points[, 2]
library(ggplot2)
p <- ggplot(data.frame(x, y), aes(x, y))
p + geom_point(size = kinds, alpha = 0.8,
               aes(colour = factor(result), shape = factor(result)))

↓ This yields the dendrogram.

A classical MDS transform gives the scatter plot ↓

③ Centroid method:

kinds <- 4
model1 <- hclust(d, method = 'centroid')
result <- cutree(model1, k = kinds)
plot(model1, hang = -1, labels = eduLevel$region)
rect.hclust(model1, k = kinds, border = "red")

mds <- cmdscale(d, k = 2, eig = TRUE)
x <- mds$points[, 1]
y <- mds$points[, 2]
library(ggplot2)
p <- ggplot(data.frame(x, y), aes(x, y))
p + geom_point(size = kinds, alpha = 0.8,
               aes(colour = factor(result), shape = factor(result)))

↓ This yields the dendrogram.

A classical MDS transform gives the scatter plot ↓

④ Ward's (minimum-variance) method:

kinds <- 4
model1 <- hclust(d, method = 'ward.D')   # 'ward' in old versions of R
result <- cutree(model1, k = kinds)
plot(model1, hang = -1, labels = eduLevel$region)
rect.hclust(model1, k = kinds, border = "red")

mds <- cmdscale(d, k = 2, eig = TRUE)
x <- mds$points[, 1]
y <- mds$points[, 2]
library(ggplot2)
p <- ggplot(data.frame(x, y), aes(x, y))
p + geom_point(size = kinds, alpha = 0.8,
               aes(colour = factor(result), shape = factor(result)))

↓ This yields the dendrogram.

A classical MDS transform gives the scatter plot ↓
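Ward's criterion merges, at each step, the pair of clusters whose union least increases the within-cluster sum of squares, Δ = n_i·n_j/(n_i+n_j)·||c_i − c_j||². A small pure-Python sketch on illustrative toy data (not the census data):

```python
def ward(points, k):
    """Ward's method: repeatedly merge the two clusters with the smallest
    merge cost n_i*n_j/(n_i+n_j) * ||centroid_i - centroid_j||^2."""
    clusters = [([i], list(p)) for i, p in enumerate(points)]  # (members, centroid)
    while len(clusters) > k:
        best = None
        for a in range(len(clusters)):
            for b in range(a + 1, len(clusters)):
                (ma, ca), (mb, cb) = clusters[a], clusters[b]
                gap = sum((x - y) ** 2 for x, y in zip(ca, cb))
                cost = len(ma) * len(mb) / (len(ma) + len(mb)) * gap
                if best is None or cost < best[0]:
                    best = (cost, a, b)
        _, a, b = best
        (ma, ca), (mb, cb) = clusters[a], clusters[b]
        n = len(ma) + len(mb)
        merged = (sorted(ma + mb),
                  [(len(ma) * x + len(mb) * y) / n for x, y in zip(ca, cb)])
        clusters = [c for i, c in enumerate(clusters) if i not in (a, b)] + [merged]
    return sorted(m for m, _ in clusters)

print(ward([[0], [1], [10], [12]], 2))  # [[0, 1], [2, 3]]
```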

(2) For the dynamic (partitioning) clustering, use K-means with the following R program:

model <- kmeans(scale(X), centers = 4, nstart = 10)
table(eduLevel$region, model$cluster)

This gives the following four-class result:

                 1  2  3  4
Anhui            0  1  0  0
Beijing          0  0  0  1
Fujian           0  1  0  0
Gansu            0  1  0  0
Guangdong        1  0  0  0
Guangxi          1  0  0  0
Guizhou          0  1  0  0
Hainan           1  0  0  0
Hebei            1  0  0  0
Heilongjiang     1  0  0  0
Henan            1  0  0  0
Hubei            1  0  0  0
Hunan            1  0  0  0
InnerMongolia    1  0  0  0
Jiangsu          1  0  0  0
Jiangxi          0  1  0  0
Jilin            1  0  0  0
Liaoning         1  0  0  0
Ningxia          0  1  0  0
Qinghai          0  1  0  0
Shandong         1  0  0  0
Shanghai         0  0  0  1
Shanxi(Taiyuan)  1  0  0  0
Shanxi(Xi'an)    1  0  0  0
Sichuan          1  0  0  0
Tianjin          0  0  0  1
Tibet            0  0  1  0
Xinjiang         1  0  0  0
Yunnan           0  1  0  0
Zhejiang         1  0  0  0

Appendix:
The ggfortify package can produce nicer two-dimensional visualizations.

This article was written by the "Lizhi Daifei" team at Xiamen University.
