Entropy basics, feature engineering, feature normalization, cross-validation, grid search, and model saving/loading


1. Self-information: the information gained from observing an event of probability $p(x)$ is
$$I(x) = -\log p(x)$$

2. Information entropy: the expected self-information under a distribution $P$,
$$H(P) = -\sum_x p(x)\log p(x)$$

3. KL divergence of P from Q (relative entropy):
$$D_{KL}(P\|Q) = \sum_x p(x)\log\frac{p(x)}{q(x)}$$

Proof that the KL divergence is non-negative: using $\log t \le t - 1$ for $t > 0$,
$$-D_{KL}(P\|Q) = \sum_x p(x)\log\frac{q(x)}{p(x)} \le \sum_x p(x)\left(\frac{q(x)}{p(x)} - 1\right) = \sum_x q(x) - \sum_x p(x) = 0,$$
so $D_{KL}(P\|Q) \ge 0$, with equality exactly when $P = Q$.

4. Cross entropy:
$$H(P, Q) = -\sum_x p(x)\log q(x)$$

It can be seen that cross entropy = information entropy + relative entropy, i.e. $H(P, Q) = H(P) + D_{KL}(P\|Q)$.
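To make the identity concrete, here is a minimal numeric check (a sketch: the distributions p and q are made-up; scipy.stats.entropy returns $H(P)$ with one argument and $D_{KL}(P\|Q)$ with two, both in nats):

import numpy as np
from scipy.stats import entropy

# Two made-up discrete distributions on the same support
p = np.array([0.5, 0.3, 0.2])
q = np.array([0.4, 0.4, 0.2])

H_p  = entropy(p)                 # information entropy H(P)
D_kl = entropy(p, q)              # relative entropy D_KL(P || Q)
H_pq = -np.sum(p * np.log(q))     # cross entropy H(P, Q)

print(H_pq, H_p + D_kl)           # both print the same value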

Dataset: the fruit dataset (a machine-learning fruit recognition / fruit classification dataset), downloadable from CSDN.

1. Converting categorical and ordinal features to one-hot encodings

import numpy as np
from sklearn.preprocessing import LabelEncoder, OneHotEncoder

def one_hot():
    # Hand-made ordinal and categorical features as an example
    X_train = np.array([['male', 'low'],
                        ['female', 'low'],
                        ['female', 'middle'],
                        ['male', 'low'],
                        ['female', 'high'],
                        ['male', 'low'],
                        ['female', 'low'],
                        ['female', 'high'],
                        ['male', 'low'],
                        ['male', 'high']])
    X_test = np.array([['male', 'low'],
                       ['male', 'low'],
                       ['female', 'middle'],
                       ['female', 'low'],
                       ['female', 'high']])

    # Fit the encoders on the training set
    label_enc1 = LabelEncoder()    # first map male/female to integers
    one_hot_enc = OneHotEncoder()  # then turn the integer codes into one-hot vectors
    label_enc2 = LabelEncoder()    # map low/middle/high to integers
                                   # (note: codes are assigned alphabetically, so high=0, low=1, middle=2)

    tr_feat1_tmp = label_enc1.fit_transform(X_train[:, 0]).reshape(-1, 1)  # reshape(-1, 1) makes a column vector
    tr_feat1 = one_hot_enc.fit_transform(tr_feat1_tmp)
    tr_feat1 = tr_feat1.todense()
    print('=====male female one hot====')
    print(tr_feat1)

    tr_feat2 = label_enc2.fit_transform(X_train[:, 1]).reshape(-1, 1)
    print('===low middle high class====')
    print(tr_feat2)

    X_train_enc = np.hstack((tr_feat1, tr_feat2))
    print('=====train encode====')
    print(X_train_enc)

    # Apply the fitted encoders to the test set (transform only, never fit on test data)
    te_feat1_tmp = label_enc1.transform(X_test[:, 0]).reshape(-1, 1)
    te_feat1 = one_hot_enc.transform(te_feat1_tmp)
    te_feat1 = te_feat1.todense()
    te_feat2 = label_enc2.transform(X_test[:, 1]).reshape(-1, 1)
    X_test_enc = np.hstack((te_feat1, te_feat2))
    print('====test encode====')
    print(X_test_enc)
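One caveat in the snippet above: LabelEncoder assigns integer codes alphabetically, so low/middle/high become 1/2/0 and the ordinal relationship is lost. A minimal sketch of preserving the order with OrdinalEncoder (available in scikit-learn 0.20+), passing the category order explicitly:

import numpy as np
from sklearn.preprocessing import OrdinalEncoder

X_train = np.array([['male', 'low'],
                    ['female', 'middle'],
                    ['female', 'high']])

# Explicit category order gives low=0, middle=1, high=2
ord_enc = OrdinalEncoder(categories=[['low', 'middle', 'high']])
feat2 = ord_enc.fit_transform(X_train[:, 1].reshape(-1, 1))
print(feat2.ravel())  # [0. 1. 2.]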


2. Feature normalization

import numpy as np
import matplotlib.pyplot as plt
import pandas as pd
from sklearn.model_selection import train_test_split, cross_val_score
from sklearn.preprocessing import MinMaxScaler
from mpl_toolkits.mplot3d import Axes3D  # registers the 3d projection
from sklearn.neighbors import KNeighborsClassifier

def load_data():
    # Load the dataset
    fruits_df = pd.read_table('fruit_data_with_colors.txt')
    print('number of samples:', len(fruits_df))

    # Build a dict from target label to fruit name
    fruit_name_dict = dict(zip(fruits_df['fruit_label'], fruits_df['fruit_name']))

    # Split the dataset
    X = fruits_df[['mass', 'width', 'height', 'color_score']]
    y = fruits_df['fruit_label']
    X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=1/4, random_state=0)
    print('total samples: {}, training samples: {}, test samples: {}'.format(
        len(X), len(X_train), len(X_test)))
    return X_train, X_test, y_train, y_test

# Feature normalization
def minmax_scaler(X_train, X_test):
    scaler = MinMaxScaler()
    X_train_scaled = scaler.fit_transform(X_train)
    # The scaler has learned the min/max on the training set, so the test set is only transformed
    X_test_scaled = scaler.transform(X_test)
    for i in range(4):
        print('before scaling, training feature {}: max {:.3f}, min {:.3f}'.format(
            i + 1, X_train.iloc[:, i].max(), X_train.iloc[:, i].min()))
        print('after scaling, training feature {}: max {:.3f}, min {:.3f}'.format(
            i + 1, X_train_scaled[:, i].max(), X_train_scaled[:, i].min()))
    return X_train_scaled, X_test_scaled

def show_3D(X_train, X_train_scaled, y_train):
    # Color each sample by its fruit label
    label_color_dict = {1: 'red', 2: 'green', 3: 'blue', 4: 'yellow'}
    colors = list(map(lambda label: label_color_dict[label], y_train))

    fig = plt.figure()
    ax1 = fig.add_subplot(111, projection='3d')
    ax1.scatter(X_train['width'], X_train['height'], X_train['color_score'], c=colors, marker='o', s=100)
    ax1.set_xlabel('width')
    ax1.set_ylabel('height')
    ax1.set_zlabel('color_score')
    plt.show()

    fig = plt.figure()
    ax2 = fig.add_subplot(111, projection='3d')
    ax2.scatter(X_train_scaled[:, 1], X_train_scaled[:, 2], X_train_scaled[:, 3], c=colors, marker='o', s=100)
    ax2.set_xlabel('width')
    ax2.set_ylabel('height')
    ax2.set_zlabel('color_score')
    plt.show()
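The functions above are defined but never called in the snippet; a minimal driver, assuming fruit_data_with_colors.txt sits next to the script as load_data expects, might look like this:

if __name__ == '__main__':
    X_train, X_test, y_train, y_test = load_data()
    X_train_scaled, X_test_scaled = minmax_scaler(X_train, X_test)
    show_3D(X_train, X_train_scaled, y_train)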


3. Cross-validation

# Cross-validation
def cross_val(X_train_scaled, y_train, X_test_scaled, y_test):
    k_range = [2, 4, 5, 10]
    cv_scores = []
    for k in k_range:
        knn = KNeighborsClassifier(n_neighbors=k)
        scores = cross_val_score(knn, X_train_scaled, y_train, cv=3)
        cv_score = np.mean(scores)
        print('k={}, validation accuracy={:.3f}'.format(k, cv_score))
        cv_scores.append(cv_score)

    # Pick the k with the best mean validation score and retrain on the full training set
    print('np.argmax(cv_scores)=', np.argmax(cv_scores))
    best_k = k_range[np.argmax(cv_scores)]
    best_knn = KNeighborsClassifier(n_neighbors=best_k)
    best_knn.fit(X_train_scaled, y_train)
    print('test accuracy:', best_knn.score(X_test_scaled, y_test))
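With a classifier target, cv=3 in cross_val_score already uses stratified folds; to control shuffling and the random seed explicitly, you can pass a StratifiedKFold object instead. A self-contained sketch, with toy data standing in for the scaled fruit features:

import numpy as np
from sklearn.model_selection import StratifiedKFold, cross_val_score
from sklearn.neighbors import KNeighborsClassifier

# Toy stand-ins for X_train_scaled / y_train from the earlier steps
rng = np.random.RandomState(0)
X = rng.rand(60, 4)
y = np.repeat([1, 2, 3], 20)

skf = StratifiedKFold(n_splits=3, shuffle=True, random_state=0)
scores = cross_val_score(KNeighborsClassifier(n_neighbors=5), X, y, cv=skf)
print('per-fold accuracy:', scores, 'mean:', scores.mean())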

4. Using validation_curve to see how a hyperparameter affects training and validation scores

# Use validation_curve to see how a hyperparameter affects training and validation scores
def show_effect(X_train_scaled, y_train, X_test_scaled, y_test):
    from sklearn.model_selection import validation_curve
    from sklearn.svm import SVC

    c_range = [1e-3, 1e-2, 0.1, 1, 10, 100, 1000, 10000]
    train_scores, test_scores = validation_curve(SVC(kernel='linear'), X_train_scaled, y_train,
                                                 param_name='C', param_range=c_range,
                                                 cv=5, scoring='accuracy')
    print(train_scores)
    print(train_scores.shape)
    train_scores_mean = np.mean(train_scores, axis=1)
    train_scores_std = np.std(train_scores, axis=1)
    test_scores_mean = np.mean(test_scores, axis=1)
    test_scores_std = np.std(test_scores, axis=1)

    # Plot mean scores with a band of +/- one standard deviation
    plt.title('Validation Curve with SVM')
    plt.xlabel('C')
    plt.ylabel('Score')
    plt.ylim(0.0, 1.1)
    lw = 2
    plt.semilogx(c_range, train_scores_mean, label="Training score",
                 color="darkorange", lw=lw)
    plt.fill_between(c_range, train_scores_mean - train_scores_std,
                     train_scores_mean + train_scores_std, alpha=0.2,
                     color="darkorange", lw=lw)
    plt.semilogx(c_range, test_scores_mean, label="Cross-validation score",
                 color="navy", lw=lw)
    plt.fill_between(c_range, test_scores_mean - test_scores_std,
                     test_scores_mean + test_scores_std, alpha=0.2,
                     color="navy", lw=lw)
    plt.legend(loc="best")
    plt.show()

    # From the curve above, C=100 is the best parameter for this SVM
    svm_model = SVC(kernel='linear', C=100)
    svm_model.fit(X_train_scaled, y_train)
    print(svm_model.score(X_test_scaled, y_test))

As the curve shows, the variance is large at first, then the model stabilizes, and for very large C it overfits; C=100 is the optimal parameter.

5. Grid search, model saving and loading

def grid_search(X_train_scaled, y_train, X_test_scaled, y_test):
    from sklearn.model_selection import GridSearchCV
    from sklearn.tree import DecisionTreeClassifier

    parameters = {'max_depth': [3, 5, 7, 9], 'min_samples_leaf': [1, 2, 3, 4]}
    clf = GridSearchCV(DecisionTreeClassifier(), parameters, cv=3, scoring='accuracy')
    print(clf)
    clf.fit(X_train_scaled, y_train)
    print('best parameters:', clf.best_params_)
    print('best validation score:', clf.best_score_)

    # Get the best model found by the grid search
    best_model = clf.best_estimator_
    print('test accuracy:', best_model.score(X_test_scaled, y_test))
    return best_model

def model_save(best_model):
    # Persist the model with pickle
    import pickle
    model_path1 = './trained_model1.pkl'
    # Save the model to disk
    with open(model_path1, 'wb') as f:
        pickle.dump(best_model, f)

def load_model(X_test_scaled, y_test):
    import pickle
    # Load the saved model
    model_path1 = './trained_model1.pkl'
    with open(model_path1, 'rb') as f:
        model = pickle.load(f)
    # Predict on the first test sample
    print('predicted:', model.predict([X_test_scaled[0, :]]))
    print('actual:', y_test.values[0])
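Besides pickle, joblib is the persistence route suggested in scikit-learn's documentation for models holding large numpy arrays. A minimal sketch (the file name trained_model2.joblib and the iris stand-in model are illustrative assumptions, not from the original post):

import joblib
from sklearn.datasets import load_iris
from sklearn.tree import DecisionTreeClassifier

# Toy model standing in for best_model returned by grid_search
X, y = load_iris(return_X_y=True)
model = DecisionTreeClassifier(max_depth=3).fit(X, y)

joblib.dump(model, './trained_model2.joblib')      # save to disk
restored = joblib.load('./trained_model2.joblib')  # load back
print(restored.predict(X[:1]), y[0])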
