Machine Learning in Practice: the PCA Algorithm (Part 26)


PCA: Handwritten Digit Visualization via Dimensionality Reduction

from sklearn.neural_network import MLPClassifier
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report, confusion_matrix
import numpy as np
import matplotlib.pyplot as plt

digits = load_digits()          # load the dataset
x_data = digits.data            # features
y_data = digits.target          # labels

# split the data: 1/4 for testing, 3/4 for training (sklearn's default)
x_train, x_test, y_train, y_test = train_test_split(x_data, y_data)

# train a neural network on the original 64-dimensional features
mlp = MLPClassifier(hidden_layer_sizes=(100, 50), max_iter=500)
mlp.fit(x_train, y_train)

# center the data (subtract the mean of each column)
def zeroMean(dataMat):
    # column-wise mean, i.e. the mean of each feature
    meanVal = np.mean(dataMat, axis=0)
    newData = dataMat - meanVal
    return newData, meanVal

def pca(dataMat, top):
    # center the data
    newData, meanVal = zeroMean(dataMat)
    # np.cov computes the covariance matrix; rowvar=0 means each row is a sample
    covMat = np.cov(newData, rowvar=0)
    # np.linalg.eig returns the eigenvalues and eigenvectors of the matrix
    eigVals, eigVects = np.linalg.eig(np.mat(covMat))
    # sort the eigenvalues in ascending order
    eigValIndice = np.argsort(eigVals)
    # indices of the `top` largest eigenvalues
    n_eigValIndice = eigValIndice[-1:-(top + 1):-1]
    # eigenvectors corresponding to the `top` largest eigenvalues
    n_eigVect = eigVects[:, n_eigValIndice]
    # project the data onto the low-dimensional feature space
    lowDDataMat = newData * n_eigVect
    # reconstruct the data from the low-dimensional representation
    reconMat = (lowDDataMat * n_eigVect.T) + meanVal
    # return the low-dimensional data and the reconstructed matrix
    return lowDDataMat, reconMat

# reduce the digits data to 2 dimensions
lowDDataMat, reconMat = pca(x_data, 2)

# plot the 2-dimensional projection
x = np.array(lowDDataMat)[:, 0]
y = np.array(lowDDataMat)[:, 1]
plt.scatter(x, y, c='r')
plt.show()
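As a quick sanity check (this snippet is not part of the original tutorial), the hand-rolled projection can be compared against scikit-learn's PCA on the same data. Principal components are only defined up to sign, so the two 2-D projections should typically agree column by column up to a sign flip.

# Sanity check (assumption: not in the original tutorial): compare the
# hand-rolled 2-D projection with scikit-learn's PCA.
from sklearn.decomposition import PCA

sk_lowD = PCA(n_components=2).fit_transform(x_data)
ours = np.asarray(lowDDataMat)

# eigenvectors are defined only up to sign, so compare absolute values
print(np.allclose(np.abs(ours), np.abs(sk_lowD)))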

# predict with the network trained on the original 64-dimensional features
predictions = mlp.predict(x_data)

# plot the 2-dimensional projection again, coloured by the true labels
# (the predictions computed above could be used for the colours instead,
# to visualise the model's output in the reduced space)
x = np.array(lowDDataMat)[:, 0]
y = np.array(lowDDataMat)[:, 1]
plt.scatter(x, y, c=y_data)
plt.show()

# reduce to 3 dimensions and plot in 3D
lowDDataMat, reconMat = pca(x_data, 3)

from mpl_toolkits.mplot3d import Axes3D
x = np.array(lowDDataMat)[:, 0]
y = np.array(lowDDataMat)[:, 1]
z = np.array(lowDDataMat)[:, 2]
ax = plt.figure().add_subplot(111, projection='3d')
ax.scatter(x, y, z, c=y_data, s=10)   # points coloured by the true labels
plt.show()

PCA: Handwritten Digit Prediction After Dimensionality Reduction

from sklearn.neural_network import MLPClassifier
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report, confusion_matrix
from sklearn import decomposition
import matplotlib.pyplot as plt

digits = load_digits()          # load the dataset
x_data = digits.data            # features
y_data = digits.target          # labels

# split the data: 1/4 for testing, 3/4 for training
x_train, x_test, y_train, y_test = train_test_split(x_data, y_data)

# baseline: train a neural network on the original 64-dimensional features
mlp = MLPClassifier(hidden_layer_sizes=(100, 50), max_iter=500)
mlp.fit(x_train, y_train)

# evaluate the baseline on the test set
# (classification_report and confusion_matrix expect y_true first, then y_pred)
predictions = mlp.predict(x_test)
print(classification_report(y_test, predictions))
print(confusion_matrix(y_test, predictions))

# fit PCA on the full data, keeping all components for now
pca = decomposition.PCA()
pca.fit(x_data)

# variance explained by each principal component
pca.explained_variance_

# fraction of the total variance explained by each component
pca.explained_variance_ratio_
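To make explicit what explained_variance_ratio_ represents, here is a small check that is not in the original post: since PCA() was fitted with all 64 components kept, each ratio is simply that component's variance divided by the total variance, and the ratios sum to one.

# Assumption: this check is not in the original post.
import numpy as np

print(np.allclose(pca.explained_variance_ratio_,
                  pca.explained_variance_ / pca.explained_variance_.sum()))
print(pca.explained_variance_ratio_.sum())   # ~1.0 when no component is discarded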

# cumulative explained-variance ratio as more components are kept
variance = []
for i in range(len(pca.explained_variance_ratio_)):
    variance.append(sum(pca.explained_variance_ratio_[:i + 1]))
plt.plot(range(1, len(pca.explained_variance_ratio_) + 1), variance)
plt.show()
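The same cumulative curve can be produced more concisely with numpy's cumulative sum; this variant is not in the original code but draws an identical plot.

# Equivalent, more concise version of the loop above (not in the original).
import numpy as np

cumulative = np.cumsum(pca.explained_variance_ratio_)
plt.plot(range(1, len(cumulative) + 1), cumulative)
plt.show()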

# keep enough components to explain 80% of the variance, with whitening
pca = decomposition.PCA(whiten=True, n_components=0.8)
pca.fit(x_data)
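When n_components is given as a float between 0 and 1, scikit-learn keeps the smallest number of leading components whose cumulative explained-variance ratio reaches that fraction. A small check (not in the original) of how many components were actually retained:

# How many components were needed to reach 80% of the variance?
# The exact number depends on the data; it is stored on the fitted estimator.
print(pca.n_components_)
print(pca.explained_variance_ratio_.sum())   # should be >= 0.8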

# variance ratios of the retained components only
pca.explained_variance_ratio_

# project the training data into the PCA space and retrain the network
x_train_pca = pca.transform(x_train)
mlp = MLPClassifier(hidden_layer_sizes=(100, 50), max_iter=500)
mlp.fit(x_train_pca, y_train)

# evaluate on the PCA-transformed test data
x_test_pca = pca.transform(x_test)
predictions = mlp.predict(x_test_pca)
print(classification_report(y_test, predictions))
print(confusion_matrix(y_test, predictions))
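An optional side-by-side comparison (not part of the original tutorial): accuracy with the original 64-dimensional features versus the PCA-reduced features. Because the earlier mlp object was overwritten when it was refit on x_train_pca, a fresh baseline is trained here; exact numbers will vary from run to run.

# Optional comparison (assumption: not in the original tutorial).
from sklearn.metrics import accuracy_score

baseline = MLPClassifier(hidden_layer_sizes=(100, 50), max_iter=500)
baseline.fit(x_train, y_train)

print("64-dim features:", accuracy_score(y_test, baseline.predict(x_test)))
print("PCA features:   ", accuracy_score(y_test, mlp.predict(x_test_pca)))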
