
05_Quickstart for Experts: Loading the MNIST Dataset, Batching and Shuffling the Dataset, Building a Model by Subclassing, Choosing an Optimizer and Loss Function, Training the Model, and Testing Model Accuracy

Published: 2024/9/27

https://tensorflow.google.cn/tutorials/quickstart/advanced

Import TensorFlow into your program:

import tensorflow as tf
from tensorflow.keras.layers import Dense, Flatten, Conv2D
from tensorflow.keras import Model
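
This walkthrough targets TensorFlow 2.x. If you are unsure which version is installed, a quick check (my addition, not part of the original code) is:

print(tf.__version__)  # should report a 2.x release for this tutorial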

Load and prepare the MNIST dataset:

mnist = tf.keras.datasets.mnist

(x_train, y_train), (x_test, y_test) = mnist.load_data()
x_train, x_test = x_train / 255.0, x_test / 255.0

# Add a channels dimension
x_train = x_train[..., tf.newaxis]
x_test = x_test[..., tf.newaxis]
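
As a quick sanity check (my addition, not part of the tutorial), you can print the array shapes and dtypes. Note that dividing by 255.0 produces float64 arrays, which is exactly what triggers the cast warning shown in the output at the end:

print(x_train.shape, x_train.dtype)  # (60000, 28, 28, 1) float64
print(x_test.shape, x_test.dtype)    # (10000, 28, 28, 1) float64
print(y_train.shape, y_train.dtype)  # (60000,) uint8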

Use tf.data to batch and shuffle the dataset:

train_ds = tf.data.Dataset.from_tensor_slices(
    (x_train, y_train)).shuffle(10000).batch(32)

test_ds = tf.data.Dataset.from_tensor_slices((x_test, y_test)).batch(32)
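
To see what the pipeline produces, take() pulls a fixed number of batches; this snippet (an illustration, not from the tutorial) prints the shape of the first one:

for image_batch, label_batch in train_ds.take(1):
    print(image_batch.shape)  # (32, 28, 28, 1)
    print(label_batch.shape)  # (32,)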

Build the tf.keras model using the Keras model subclassing API:

class MyModel(Model):
    def __init__(self):
        super(MyModel, self).__init__()
        self.conv1 = Conv2D(32, 3, activation='relu')
        self.flatten = Flatten()
        self.d1 = Dense(128, activation='relu')
        self.d2 = Dense(10, activation='softmax')

    def call(self, x):
        x = self.conv1(x)
        x = self.flatten(x)
        x = self.d1(x)
        return self.d2(x)

model = MyModel()
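
A subclassed model creates its variables on first call. As a small sketch (not in the tutorial), running one fake image through the model confirms the output is a 10-way probability distribution:

sample = tf.zeros((1, 28, 28, 1))    # one all-zero 28x28 grayscale image
probs = model(sample)
print(probs.shape)                   # (1, 10): one probability per digit class
print(float(tf.reduce_sum(probs)))   # ~1.0, since the last layer is softmax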

Choose an optimizer and a loss function for training:

loss_object = tf.keras.losses.SparseCategoricalCrossentropy()
optimizer = tf.keras.optimizers.Adam()
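
Because the model's final layer already applies softmax, the default from_logits=False is correct here. A toy example (illustrative values, not from the tutorial) shows how this loss pairs integer labels with per-class probabilities:

y_true = [1, 2]   # integer class labels, no one-hot encoding needed
y_pred = [[0.05, 0.90, 0.05, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0],
          [0.10, 0.10, 0.80, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0]]
print(loss_object(y_true, y_pred).numpy())  # ~0.16: mean of -log(0.9) and -log(0.8)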

Select metrics to measure the model's loss and accuracy. These metrics accumulate values over each epoch and then print the overall result.

train_loss = tf.keras.metrics.Mean(name='train_loss')
train_accuracy = tf.keras.metrics.SparseCategoricalAccuracy(name='train_accuracy')

test_loss = tf.keras.metrics.Mean(name='test_loss')
test_accuracy = tf.keras.metrics.SparseCategoricalAccuracy(name='test_accuracy')
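
These Keras metrics are stateful: calling one updates its running state, result() reads the aggregate, and reset_states() clears it, which is why the training loop below resets them each epoch. A minimal illustration, separate from the tutorial code:

m = tf.keras.metrics.Mean()
m(2.0)                      # each call accumulates a value
m(4.0)
print(m.result().numpy())   # 3.0, the running mean
m.reset_states()            # clear the state, as done at the start of each epoch
print(m.result().numpy())   # 0.0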

Use tf.GradientTape to train the model:

@tf.function
def train_step(images, labels):
    with tf.GradientTape() as tape:
        predictions = model(images)
        loss = loss_object(labels, predictions)
    gradients = tape.gradient(loss, model.trainable_variables)
    optimizer.apply_gradients(zip(gradients, model.trainable_variables))

    train_loss(loss)
    train_accuracy(labels, predictions)
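
Before running the full loop, you can smoke-test train_step on a single batch (my addition, not from the tutorial; the metrics it touches are reset at the start of each epoch anyway):

for images, labels in train_ds.take(1):
    train_step(images, labels)
print(train_loss.result().numpy())      # loss on that one batch
print(train_accuracy.result().numpy())  # accuracy on that one batch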

Test the model:

@tf.function
def test_step(images, labels):
    predictions = model(images)
    t_loss = loss_object(labels, predictions)

    test_loss(t_loss)
    test_accuracy(labels, predictions)

EPOCHS = 5

for epoch in range(EPOCHS):
    # Reset the metrics at the start of the next epoch
    train_loss.reset_states()
    train_accuracy.reset_states()
    test_loss.reset_states()
    test_accuracy.reset_states()

    for images, labels in train_ds:
        train_step(images, labels)

    for test_images, test_labels in test_ds:
        test_step(test_images, test_labels)

    template = "Epoch {}, Loss: {}, Accuracy: {}, Test Loss: {}, Test Accuracy: {}"
    print(template.format(epoch + 1,
                          train_loss.result(),
                          train_accuracy.result() * 100,
                          test_loss.result(),
                          test_accuracy.result() * 100))

Output:

WARNING:tensorflow:Layer my_model is casting an input tensor from dtype float64 to the layer's dtype of float32, which is new behavior in TensorFlow 2. The layer has dtype float32 because it's dtype defaults to floatx. If you intended to run this layer in float32, you can safely ignore this warning. If in doubt, this warning is likely only an issue if you are porting a TensorFlow 1.X model to TensorFlow 2. To change all layers to have dtype float64 by default, call `tf.keras.backend.set_floatx('float64')`. To change just this layer, pass dtype='float64' to the layer constructor. If you are the author of this layer, you can disable autocasting by passing autocast=False to the base Layer constructor.

Epoch 1, Loss: 0.13633669912815094, Accuracy: 95.92000579833984, Test Loss: 0.054682306945323944, Test Accuracy: 98.19999694824219
Epoch 2, Loss: 0.041911669075489044, Accuracy: 98.70333099365234, Test Loss: 0.04665009677410126, Test Accuracy: 98.4000015258789
Epoch 3, Loss: 0.021748166531324387, Accuracy: 99.31666564941406, Test Loss: 0.05017175152897835, Test Accuracy: 98.36000061035156
Epoch 4, Loss: 0.01320651639252901, Accuracy: 99.55166625976562, Test Loss: 0.058168746531009674, Test Accuracy: 98.30999755859375
Epoch 5, Loss: 0.008145572617650032, Accuracy: 99.7316665649414, Test Loss: 0.06632857024669647, Test Accuracy: 98.30999755859375
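
The tutorial stops here, but a natural follow-up is to persist what you trained. A minimal sketch, assuming the TensorFlow checkpoint format is acceptable (the path below is hypothetical):

model.save_weights('./checkpoints/mnist_expert')   # hypothetical path

# A fresh subclassed model has no variables until it is called once,
# so run a dummy batch through it before loading the weights:
restored = MyModel()
restored(tf.zeros((1, 28, 28, 1)))
restored.load_weights('./checkpoints/mnist_expert')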
