Hands-On with Overfitting in TensorFlow

Published: 2025/3/21, by 豆豆

1.構(gòu)建數(shù)據(jù)集

Each sample in our dataset has a feature vector of length 2 and a label of 0 or 1, representing the two classes. With the make_moons utility from scikit-learn we can generate a training set of any size.
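As a rough illustration only (this is not scikit-learn's actual implementation), the two-moons distribution can be sketched in plain numpy: two interleaved half-circle arcs plus Gaussian noise, which is essentially the shape make_moons produces:

```python
import numpy as np

def toy_two_moons(n_samples=1000, noise=0.25, seed=100):
    # each class lies on a half-circle; Gaussian noise jitters the points
    rng = np.random.default_rng(seed)
    n = n_samples // 2
    t = np.linspace(0, np.pi, n)
    upper = np.stack([np.cos(t), np.sin(t)], axis=1)             # class 0: upper arc
    lower = np.stack([1 - np.cos(t), -np.sin(t) + 0.5], axis=1)  # class 1: shifted lower arc
    X = np.vstack([upper, lower]) + rng.normal(0, noise, (2 * n, 2))
    y = np.concatenate([np.zeros(n, dtype=int), np.ones(n, dtype=int)])
    return X, y

X_demo, y_demo = toy_two_moons()
print(X_demo.shape, y_demo.shape)  # (1000, 2) (1000,)
```

The feature matrix has one row per point and two columns (the coordinates), which matches the input_dim=2 used by every model below.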

```python
import matplotlib.pyplot as plt
import numpy as np
import seaborn as sns
from sklearn.datasets import make_moons          # dataset generation utility
from sklearn.model_selection import train_test_split
from tensorflow.keras import layers, Sequential, regularizers
from mpl_toolkits.mplot3d import Axes3D  # noqa: F401 (registers the 3D projection)

# these two globals are used throughout but were not defined in the original post
OUTPUT_DIR = '.'   # directory for saved figures; adjust as needed
N_EPOCHS = 500     # training epochs for every experiment
```

To demonstrate overfitting, we sample only 1000 points and add Gaussian noise with a standard deviation of 0.25:

```python
def load_dataset():
    # number of sample points
    N_SAMPLES = 1000
    # test-set ratio (None lets train_test_split fall back to its default of 0.25)
    TEST_SIZE = None
    # sample 1000 points from the two-moons distribution, then split into train/test sets
    X, y = make_moons(n_samples=N_SAMPLES, noise=0.25, random_state=100)
    X_train, X_test, y_train, y_test = train_test_split(
        X, y, test_size=TEST_SIZE, random_state=42)
    return X, y, X_train, X_test, y_train, y_test
```

The make_plot function draws the data distribution from the sample coordinates X and the sample labels y:

```python
def make_plot(X, y, plot_name, file_name, XX=None, YY=None, preds=None,
              dark=False, output_dir=OUTPUT_DIR):
    # plot the dataset distribution: X holds the 2D coordinates, y the point labels
    if dark:
        plt.style.use('dark_background')
    else:
        sns.set_style("whitegrid")
    axes = plt.gca()
    axes.set_xlim([-2, 3])
    axes.set_ylim([-1.5, 2])
    axes.set(xlabel="$x_1$", ylabel="$x_2$")
    plt.title(plot_name, fontsize=20, fontproperties='SimHei')
    plt.subplots_adjust(left=0.20)
    plt.subplots_adjust(right=0.80)
    if XX is not None and YY is not None and preds is not None:
        plt.contourf(XX, YY, preds.reshape(XX.shape), 25, alpha=0.08, cmap=plt.cm.Spectral)
        plt.contour(XX, YY, preds.reshape(XX.shape), levels=[.5],
                    cmap="Greys", vmin=0, vmax=.6)
    # scatter plot, marker shape chosen per label
    markers = ['o' if i == 1 else 's' for i in y.ravel()]
    mscatter(X[:, 0], X[:, 1], c=y.ravel(), s=20, cmap=plt.cm.Spectral,
             edgecolors='none', m=markers, ax=axes)
    # save the figure
    plt.savefig(output_dir + '/' + file_name)
    plt.close()
```
```python
def mscatter(x, y, ax=None, m=None, **kw):
    # scatter plot that supports a different marker per point
    import matplotlib.markers as mmarkers
    if not ax:
        ax = plt.gca()
    sc = ax.scatter(x, y, **kw)
    if (m is not None) and (len(m) == len(x)):
        paths = []
        for marker in m:
            if isinstance(marker, mmarkers.MarkerStyle):
                marker_obj = marker
            else:
                marker_obj = mmarkers.MarkerStyle(marker)
            path = marker_obj.get_path().transformed(marker_obj.get_transform())
            paths.append(path)
        sc.set_paths(paths)
    return sc
```
```python
X, y, X_train, X_test, y_train, y_test = load_dataset()
make_plot(X, y, "haha", '月牙形狀二分類數據集分布.svg')
```

2.網(wǎng)絡(luò)層數(shù)的影響

To examine how overfitting varies with network depth, we run 5 training experiments. For n ∈ [0, 4] we build a fully connected network with n + 2 layers and train it with the Adam optimizer for 500 epochs.
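As a side note, capacity grows quickly with depth. A quick sketch of the dense-layer parameter totals, assuming the widths used below (2 inputs, a first hidden layer of 8 units, 32 units per added layer, 1 output):

```python
def mlp_param_count(n_extra):
    # a Dense layer has inputs * units weights plus units biases
    widths = [2, 8] + [32] * n_extra + [1]
    return sum(w_in * w_out + w_out for w_in, w_out in zip(widths, widths[1:]))

for n in range(5):
    print(n + 2, "layers:", mlp_param_count(n))
```

Each extra 32-unit layer adds roughly a thousand parameters, which is why the deeper models can fit the 1000-point training set almost perfectly.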

```python
def network_layers_influence(X_train, y_train):
    # build networks of 5 different depths
    for n in range(5):
        # create the container
        model = Sequential()
        # first layer
        model.add(layers.Dense(8, input_dim=2, activation='relu'))
        # add n more hidden layers, for n + 2 layers in total
        for _ in range(n):
            model.add(layers.Dense(32, activation='relu'))
        # output layer
        model.add(layers.Dense(1, activation='sigmoid'))
        # compile and train
        model.compile(loss='binary_crossentropy', optimizer='adam', metrics=['accuracy'])
        model.fit(X_train, y_train, epochs=N_EPOCHS, verbose=1)
        # plot the decision boundary for each depth
        # visualized x range: [-2, 3]
        xx = np.arange(-2, 3, 0.01)
        # visualized y range: [-1.5, 2]
        yy = np.arange(-1.5, 2, 0.01)
        # build the sampling grid on the x-y plane for visualization
        XX, YY = np.meshgrid(xx, yy)
        # predict_classes was removed in TF 2.6+; threshold the sigmoid output instead
        preds = (model.predict(np.c_[XX.ravel(), YY.ravel()]) > 0.5).astype(int)
        print(preds)
        title = "網絡層數:{0}".format(2 + n)
        file = "網絡容量_%i.png" % (2 + n)
        make_plot(X_train, y_train, title, file, XX, YY, preds,
                  output_dir=OUTPUT_DIR + '/network_layers')
```
```python
network_layers_influence(X_train, y_train)
```




3. The Effect of Dropout

To examine how Dropout layers affect training, we run 5 experiments. Each uses a 7-layer fully connected network, but we insert between 0 and 4 Dropout layers among the dense layers and train with the Adam optimizer for 500 epochs.

```python
def dropout_influence(X_train, y_train):
    # build 5 networks with different numbers of Dropout layers
    for n in range(5):
        # create the container
        model = Sequential()
        # first layer
        model.add(layers.Dense(8, input_dim=2, activation='relu'))
        counter = 0
        # the number of hidden layers is fixed at 5
        for _ in range(5):
            model.add(layers.Dense(64, activation='relu'))
            # insert up to n Dropout layers
            if counter < n:
                counter += 1
                model.add(layers.Dropout(rate=0.5))
        # output layer
        model.add(layers.Dense(1, activation='sigmoid'))
        # compile
        model.compile(loss='binary_crossentropy', optimizer='adam', metrics=['accuracy'])
        # train
        model.fit(X_train, y_train, epochs=N_EPOCHS, verbose=1)
        # plot the decision boundary for each Dropout count
        # visualized x range: [-2, 3]
        xx = np.arange(-2, 3, 0.01)
        # visualized y range: [-1.5, 2]
        yy = np.arange(-1.5, 2, 0.01)
        # build the sampling grid on the x-y plane for visualization
        XX, YY = np.meshgrid(xx, yy)
        # predict_classes was removed in TF 2.6+; threshold the sigmoid output instead
        preds = (model.predict(np.c_[XX.ravel(), YY.ravel()]) > 0.5).astype(int)
        title = "無Dropout層" if n == 0 else "{0}層 Dropout層".format(n)
        file = "Dropout_%i.png" % n
        make_plot(X_train, y_train, title, file, XX, YY, preds,
                  output_dir=OUTPUT_DIR + '/dropout')
```
```python
dropout_influence(X_train, y_train)
```
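To see what a Dropout(rate=0.5) layer actually does during training, here is a minimal numpy sketch of inverted dropout, the variant Keras uses: surviving activations are rescaled by 1/(1 - rate) so the expected activation is unchanged, and at inference the layer is an identity:

```python
import numpy as np

def dropout_forward(x, rate=0.5, training=True, rng=None):
    # training: zero each unit with probability `rate`, rescale the survivors
    if not training:
        return x  # inference: identity, no rescaling needed
    rng = rng or np.random.default_rng(0)
    mask = rng.random(x.shape) >= rate
    return x * mask / (1.0 - rate)

x = np.ones((4, 8))
out = dropout_forward(x, rate=0.5)  # entries are either 0.0 or 2.0
```

Because a different random subset of units is silenced on every batch, no single unit can dominate, which is what pushes the decision boundaries above toward smoother shapes as n grows.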




4. The Effect of Regularization

To examine how the regularization coefficient λ affects training, we use L2 regularization and build a 5-layer network in which the weight tensors W of layers 2, 3 and 4 each carry an L2 penalty term:

```python
def build_model_with_regularization(_lambda):
    # build a network whose middle layers carry an L2 penalty
    model = Sequential()
    model.add(layers.Dense(8, input_dim=2, activation='relu'))  # no regularization
    # layers 2-4 carry the L2 regularization term
    model.add(layers.Dense(256, activation='relu',
                           kernel_regularizer=regularizers.l2(_lambda)))
    model.add(layers.Dense(256, activation='relu',
                           kernel_regularizer=regularizers.l2(_lambda)))
    model.add(layers.Dense(256, activation='relu',
                           kernel_regularizer=regularizers.l2(_lambda)))
    # output layer
    model.add(layers.Dense(1, activation='sigmoid'))
    # compile
    model.compile(loss='binary_crossentropy', optimizer='adam', metrics=['accuracy'])
    return model
```
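Under the hood, regularizers.l2(λ) simply adds λ · Σ wᵢ² over the layer's kernel to the total loss, penalizing large weights; a minimal numpy sketch of that extra loss term:

```python
import numpy as np

def l2_penalty(weights, _lambda):
    # the extra loss contributed by one kernel under L2 regularization
    return _lambda * np.sum(np.square(weights))

W = np.array([[1.0, -2.0], [0.5, 0.0]])
print(l2_penalty(W, 0.1))  # 0.1 * (1 + 4 + 0.25 + 0) = 0.525
```

The larger λ is, the more the optimizer trades training accuracy for small weights, which is exactly the effect the weight-surface plots below make visible.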

We first implement a helper that visualizes a layer's weight matrix:

```python
def plot_weights_matrix(model, layer_index, plot_name, file_name, output_dir=OUTPUT_DIR):
    # plot the range of a layer's weight values
    # extract the weight matrix of the given layer
    weights = model.layers[layer_index].get_weights()[0]
    shape = weights.shape
    # build a grid the same size as the weight matrix
    X = np.array(range(shape[1]))
    Y = np.array(range(shape[0]))
    X, Y = np.meshgrid(X, Y)
    # draw the 3D surface (fig.gca(projection='3d') was removed in matplotlib 3.6+)
    fig = plt.figure()
    ax = fig.add_subplot(projection='3d')
    ax.xaxis.set_pane_color((1.0, 1.0, 1.0, 0.0))
    ax.yaxis.set_pane_color((1.0, 1.0, 1.0, 0.0))
    ax.zaxis.set_pane_color((1.0, 1.0, 1.0, 0.0))
    plt.title(plot_name, fontsize=20, fontproperties='SimHei')
    # plot the weight surface
    ax.plot_surface(X, Y, weights, cmap=plt.get_cmap('rainbow'), linewidth=0)
    # axis labels
    ax.set_xlabel('網格x坐標', fontsize=16, rotation=0, fontproperties='SimHei')
    ax.set_ylabel('網格y坐標', fontsize=16, rotation=0, fontproperties='SimHei')
    ax.set_zlabel('權值', fontsize=16, rotation=90, fontproperties='SimHei')
    # save the figure
    plt.savefig(output_dir + "/" + file_name + ".svg")
    plt.close(fig)
```

在保持網(wǎng)絡(luò)結(jié)構(gòu)不變的條件下,我們通過調(diào)節(jié)正則化系數(shù)?𝜆 = 0.00001,0.001,0.1,0.12,0.13?來測(cè)試網(wǎng)絡(luò)的訓(xùn)練效果,并繪制出學(xué)習(xí)模型在訓(xùn)練集上的決策邊界曲線

```python
def regularizers_influence(X_train, y_train):
    for _lambda in [1e-5, 1e-3, 1e-1, 0.12, 0.13]:  # sweep the regularization coefficient
        # build the regularized model
        model = build_model_with_regularization(_lambda)
        # train
        model.fit(X_train, y_train, epochs=N_EPOCHS, verbose=1)
        # visualize the weight range of one hidden layer
        layer_index = 2
        plot_title = "正則化系數:{}".format(_lambda)
        file_name = "正則化網絡權值_" + str(_lambda)
        # plot the network's weight range
        plot_weights_matrix(model, layer_index, plot_title, file_name,
                            output_dir=OUTPUT_DIR + '/regularizers')
        # plot the decision boundary for each coefficient
        # visualized x range: [-2, 3]
        xx = np.arange(-2, 3, 0.01)
        # visualized y range: [-1.5, 2]
        yy = np.arange(-1.5, 2, 0.01)
        # build the sampling grid on the x-y plane for visualization
        XX, YY = np.meshgrid(xx, yy)
        # predict_classes was removed in TF 2.6+; threshold the sigmoid output instead
        preds = (model.predict(np.c_[XX.ravel(), YY.ravel()]) > 0.5).astype(int)
        title = "正則化系數:{}".format(_lambda)
        file = "正則化_%g.svg" % _lambda
        make_plot(X_train, y_train, title, file, XX, YY, preds,
                  output_dir=OUTPUT_DIR + '/regularizers')
```
```python
regularizers_influence(X_train, y_train)
```
  • 2










Summary

That wraps up this hands-on look at the overfitting problem in TensorFlow; hopefully it helps you diagnose and control overfitting in your own models.