
ML_SVM


Study notes for the "100 Days of ML Code" series. References: 100 Days of ML Code (Chinese translation) and 100 Days of ML Code (English original).

Code walkthrough:

Step 1: Import the libraries

# Step 1: Importing the libraries
import numpy as np
import matplotlib.pyplot as plt
import pandas as pd

Step 2: Import the dataset

# Step 2: Importing the dataset
dataset = pd.read_csv('D:/daily/機器學習100天/100-Days-Of-ML-Code-中文版本/100-Days-Of-ML-Code-master/datasets/Social_Network_Ads.csv')
X = dataset.iloc[:, [2, 3]].values
y = dataset.iloc[:, 4].values
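Here columns 2 and 3 appear to be the Age and EstimatedSalary features (the same names used as axis labels in Step 8), and column 4 the Purchased label; a quick inspection such as the following (my own small addition) confirms the layout:

# Inspect the columns to confirm which indices hold the features and the label.
print(dataset.columns.tolist())
print(dataset.head())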

Step 3: Split the dataset into training and test sets

# Step 3: Splitting the dataset into the Training set and Test set
from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=0)
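With test_size=0.25 a quarter of the rows are held out for testing, which matches the 100 test samples visible later in the classification report's support column; a quick shape check (my own addition) makes this explicit:

# Sanity-check the split sizes (25% of the rows go to the test set).
print(X_train.shape, X_test.shape)
print(y_train.shape, y_test.shape)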

Step 4: Feature scaling

# Step 4: Feature Scaling
from sklearn.preprocessing import StandardScaler
sc = StandardScaler()
X_train = sc.fit_transform(X_train)
X_test = sc.transform(X_test)

X_train after feature scaling:

[[ 0.58164944 -0.88670699]
 [-0.60673761  1.46173768]
 [-0.01254409 -0.5677824 ]
 [-0.60673761  1.89663484]
 [ 1.37390747 -1.40858358]
 [ 1.47293972  0.99784738]
 [ 0.08648817 -0.79972756]
 [-0.01254409 -0.24885782]
 [-0.21060859 -0.5677824 ]
 ...]

In my opinion, feature scaling is a very important step: it speeds up convergence, and it matters even more in deep learning, where it helps avoid the exploding-gradient problem.
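As a tiny standalone demo (made-up numbers, not from the article), StandardScaler simply subtracts the column mean and divides by the column standard deviation computed on the data passed to fit():

import numpy as np
from sklearn.preprocessing import StandardScaler

# Made-up Age / Salary values purely for illustration.
demo = np.array([[20.0, 20000.0],
                 [40.0, 60000.0],
                 [60.0, 100000.0]])
sc_demo = StandardScaler()
scaled = sc_demo.fit_transform(demo)
manual = (demo - demo.mean(axis=0)) / demo.std(axis=0)
print(np.allclose(scaled, manual))   # True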

Step 5: Support Vector Machine

# Step 5: Fitting SVM to the Training set
from sklearn.svm import SVC
classifier = SVC(kernel='linear', random_state=0)
classifier.fit(X_train, y_train)

The SVC here uses a linear kernel. The kernel parameter selects the kernel function; the available options include 'linear' (linear kernel), 'poly' (polynomial kernel), 'rbf' (radial basis function / Gaussian kernel), 'sigmoid' (sigmoid kernel), and 'precomputed' (user-supplied kernel matrix).
Note: 'precomputed' means you compute the kernel (Gram) matrix yourself in advance; the algorithm then uses the matrix you pass in directly instead of evaluating a kernel function internally. A short sketch of both options follows.
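As an illustration (my own sketch, not from the original notes), the same classifier could be fitted with the RBF kernel or with a precomputed kernel matrix; the Gram matrix below uses a plain dot product purely for demonstration:

from sklearn.svm import SVC

# RBF kernel: the kernel matrix is computed internally from the data.
rbf_clf = SVC(kernel='rbf', random_state=0)
rbf_clf.fit(X_train, y_train)

# Precomputed kernel: we supply the Gram matrix ourselves.
# A linear Gram matrix (plain dot products) is used here just for illustration.
gram_train = X_train @ X_train.T             # shape (n_train, n_train)
pre_clf = SVC(kernel='precomputed', random_state=0)
pre_clf.fit(gram_train, y_train)

# At prediction time the kernel between test and training samples is required.
gram_test = X_test @ X_train.T               # shape (n_test, n_train)
y_pred_pre = pre_clf.predict(gram_test)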

Step 6: Prediction

# Step 6: Predicting the Test set results
y_pred = classifier.predict(X_test)

Step 7: Confusion matrix

# Step 7: Making the Confusion Matrix
from sklearn.metrics import confusion_matrix
from sklearn.metrics import classification_report
cm = confusion_matrix(y_test, y_pred)
print(cm)                                      # print the confusion matrix
print(classification_report(y_test, y_pred))   # print the classification report

"Confusion" here simply means one class being predicted as another class. See the linked reference on confusion matrices for background.
The output of the classification_report function is shown below.
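As a quick reminder (my own illustration), scikit-learn's confusion_matrix puts the true labels on the rows and the predicted labels on the columns:

from sklearn.metrics import confusion_matrix

y_true_demo = [0, 0, 1, 1, 1, 0]
y_pred_demo = [0, 1, 1, 1, 0, 0]
# Layout for the binary 0/1 case:
# [[TN FP]
#  [FN TP]]
print(confusion_matrix(y_true_demo, y_pred_demo))
# [[2 1]
#  [1 2]]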

Output:

[[66  2]
 [ 8 24]]
              precision    recall  f1-score   support

           0       0.89      0.97      0.93        68
           1       0.92      0.75      0.83        32

    accuracy                           0.90       100
   macro avg       0.91      0.86      0.88       100
weighted avg       0.90      0.90      0.90       100

precision: precision of each class;
recall: recall of each class;
f1-score: the harmonic mean of precision and recall; the closer to 1, the better;
support: the number of occurrences of each label in the test set;
the macro avg / weighted avg rows are averages over the per-class columns (the support column shows the total).
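The class-1 row of the report can be reproduced by hand from the confusion matrix above (a small check I added; the numbers are taken from the linear-kernel output):

# Confusion matrix entries from the linear-kernel run (rows = true, cols = predicted).
tn, fp = 66, 2    # true class 0
fn, tp = 8, 24    # true class 1

precision_1 = tp / (tp + fp)                                    # 24 / 26 ≈ 0.92
recall_1 = tp / (tp + fn)                                       # 24 / 32 = 0.75
f1_1 = 2 * precision_1 * recall_1 / (precision_1 + recall_1)    # ≈ 0.83
accuracy = (tp + tn) / (tp + tn + fp + fn)                      # 90 / 100 = 0.90
print(precision_1, recall_1, f1_1, accuracy)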

Step 8: Visualization

# Step 8: Visualization
from matplotlib.colors import ListedColormap

# Training set decision regions
X_set, y_set = X_train, y_train
X1, X2 = np.meshgrid(np.arange(start=X_set[:, 0].min() - 1, stop=X_set[:, 0].max() + 1, step=0.01),
                     np.arange(start=X_set[:, 1].min() - 1, stop=X_set[:, 1].max() + 1, step=0.01))
plt.contourf(X1, X2, classifier.predict(np.array([X1.ravel(), X2.ravel()]).T).reshape(X1.shape),
             alpha=0.75, cmap=ListedColormap(('red', 'green')))
plt.xlim(X1.min(), X1.max())
plt.ylim(X2.min(), X2.max())
for i, j in enumerate(np.unique(y_set)):
    plt.scatter(X_set[y_set == j, 0], X_set[y_set == j, 1],
                c=ListedColormap(('red', 'green'))(i), label=j)
plt.title('SVM (Training set)')
plt.xlabel('Age')
plt.ylabel('Estimated Salary')
plt.legend()
plt.show()

# Test set decision regions
X_set, y_set = X_test, y_test
X1, X2 = np.meshgrid(np.arange(start=X_set[:, 0].min() - 1, stop=X_set[:, 0].max() + 1, step=0.01),
                     np.arange(start=X_set[:, 1].min() - 1, stop=X_set[:, 1].max() + 1, step=0.01))
plt.contourf(X1, X2, classifier.predict(np.array([X1.ravel(), X2.ravel()]).T).reshape(X1.shape),
             alpha=0.75, cmap=ListedColormap(('red', 'green')))
plt.xlim(X1.min(), X1.max())
plt.ylim(X2.min(), X2.max())
for i, j in enumerate(np.unique(y_set)):
    plt.scatter(X_set[y_set == j, 0], X_set[y_set == j, 1],
                c=ListedColormap(('red', 'green'))(i), label=j)
plt.title('SVM (Test set)')
plt.xlabel('Age')
plt.ylabel('Estimated Salary')
plt.legend()
plt.show()
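Since the classifier uses a linear kernel, the separating line can also be read directly off the fitted model; this small sketch (my own addition) computes that line, and adding it just before plt.show() in the training-set plot above would overlay it on the scatter:

# For kernel='linear', SVC exposes the hyperplane w·x + b = 0 via coef_ and intercept_.
w = classifier.coef_[0]            # weights for [Age, EstimatedSalary] in scaled units
b = classifier.intercept_[0]
x1_vals = np.linspace(X_train[:, 0].min(), X_train[:, 0].max(), 100)
x2_vals = -(w[0] * x1_vals + b) / w[1]   # boundary: x2 = -(w1*x1 + b) / w2
plt.plot(x1_vals, x2_vals, 'k--', label='decision boundary')
plt.legend()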


When using the RBF kernel:

[[64  4]
 [ 3 29]]
              precision    recall  f1-score   support

           0       0.96      0.94      0.95        68
           1       0.88      0.91      0.89        32

    accuracy                           0.93       100
   macro avg       0.92      0.92      0.92       100
weighted avg       0.93      0.93      0.93       100
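These numbers are presumably obtained by changing only the kernel argument in Step 5 and rerunning the remaining steps unchanged:

# Step 5 with the RBF (Gaussian) kernel; Steps 6-8 stay the same.
classifier = SVC(kernel='rbf', random_state=0)
classifier.fit(X_train, y_train)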


Full SVM script (Anaconda environment)

Complete code:

# ML_SVM - complete script
# Step 1: Importing the libraries
import numpy as np
import matplotlib.pyplot as plt
import pandas as pd

# Step 2: Importing the dataset
dataset = pd.read_csv('D:/daily/機器學習100/100-Days-Of-ML-Code-中文版本/100-Days-Of-ML-Code-master/datasets/Social_Network_Ads.csv')
X = dataset.iloc[:, [2, 3]].values
y = dataset.iloc[:, 4].values

# Step 3: Splitting the dataset into the Training set and Test set
from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=0)

# Step 4: Feature Scaling
from sklearn.preprocessing import StandardScaler
sc = StandardScaler()
X_train = sc.fit_transform(X_train)
X_test = sc.transform(X_test)

# Step 5: Fitting SVM to the Training set
from sklearn.svm import SVC
classifier = SVC(kernel='linear', random_state=0)
classifier.fit(X_train, y_train)

# Step 6: Predicting the Test set results
y_pred = classifier.predict(X_test)

# Step 7: Making the Confusion Matrix
from sklearn.metrics import confusion_matrix
from sklearn.metrics import classification_report
cm = confusion_matrix(y_test, y_pred)
print(cm)
print(classification_report(y_test, y_pred))

# Step 8: Visualization
from matplotlib.colors import ListedColormap

X_set, y_set = X_train, y_train
X1, X2 = np.meshgrid(np.arange(start=X_set[:, 0].min() - 1, stop=X_set[:, 0].max() + 1, step=0.01),
                     np.arange(start=X_set[:, 1].min() - 1, stop=X_set[:, 1].max() + 1, step=0.01))
plt.contourf(X1, X2, classifier.predict(np.array([X1.ravel(), X2.ravel()]).T).reshape(X1.shape),
             alpha=0.75, cmap=ListedColormap(('red', 'green')))
plt.xlim(X1.min(), X1.max())
plt.ylim(X2.min(), X2.max())
for i, j in enumerate(np.unique(y_set)):
    plt.scatter(X_set[y_set == j, 0], X_set[y_set == j, 1],
                c=ListedColormap(('red', 'green'))(i), label=j)
plt.title('SVM (Training set)')
plt.xlabel('Age')
plt.ylabel('Estimated Salary')
plt.legend()
plt.show()

X_set, y_set = X_test, y_test
X1, X2 = np.meshgrid(np.arange(start=X_set[:, 0].min() - 1, stop=X_set[:, 0].max() + 1, step=0.01),
                     np.arange(start=X_set[:, 1].min() - 1, stop=X_set[:, 1].max() + 1, step=0.01))
plt.contourf(X1, X2, classifier.predict(np.array([X1.ravel(), X2.ravel()]).T).reshape(X1.shape),
             alpha=0.75, cmap=ListedColormap(('red', 'green')))
plt.xlim(X1.min(), X1.max())
plt.ylim(X2.min(), X2.max())
for i, j in enumerate(np.unique(y_set)):
    plt.scatter(X_set[y_set == j, 0], X_set[y_set == j, 1],
                c=ListedColormap(('red', 'green'))(i), label=j)
plt.title('SVM (Test set)')
plt.xlabel('Age')
plt.ylabel('Estimated Salary')
plt.legend()
plt.show()
