

2.2 TensorFlow 2 official demo

Published: 2024/10/8

Contents

    • Differences between TF 1.x and 2.x
    • Installation
    • Tensor layout in TensorFlow
    • model
      • Built in a PyTorch-like style
    • train

Differences between TF 1.x and 2.x

Eager execution (dynamic graph): the state of every variable you define can be inspected in real time.
The @tf.function decorator: compiles code for efficiency and speed (it has quite a few pitfalls; the official docs are the best reference).
Keras: originally a third-party package, now the officially recommended high-level API.

安裝

TensorFlow 2.1 (CPU) installation (fixing the missing msvcp140_1.dll)

Tensor layout in TensorFlow

TensorFlow arranges image tensors as [batch, height, width, channels] (NHWC), whereas PyTorch uses [batch, channels, height, width] (NCHW).
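A small sketch of the two layouts (my own example, not from the post), using a batch of 32 grayscale 28x28 MNIST-style images:

```python
import numpy as np

# TensorFlow's default image layout is NHWC: [batch, height, width, channels].
# A batch of 32 grayscale 28x28 images:
images_nhwc = np.zeros((32, 28, 28, 1), dtype=np.float32)

# PyTorch's layout is NCHW: [batch, channels, height, width].
# Converting between the two is a transpose of the axes:
images_nchw = images_nhwc.transpose(0, 3, 1, 2)  # -> (32, 1, 28, 28)
```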

model

Built in a style similar to PyTorch's module-subclassing API

```python
from tensorflow.keras.layers import Dense, Flatten, Conv2D
from tensorflow.keras import Model


# Create a MyModel class that inherits from Model
class MyModel(Model):
    def __init__(self):
        super(MyModel, self).__init__()
        # One conv layer. Conv2D comes from tensorflow.keras.layers:
        # (filters = number of kernels, kernel_size, strides = step in height
        # and width, padding — unlike PyTorch it is just 'valid' (drop pixels
        # that don't fit) or 'same' (pad to keep the size), data_format = where
        # the channel axis sits, activation, use_bias, kernel_initializer, ...)
        self.conv1 = Conv2D(32, 3, activation='relu')
        # Flatten "squashes" the input, turning the multi-dimensional tensor
        # into 1-D; commonly used in the transition from conv to dense layers
        self.flatten = Flatten()
        # Two fully connected layers; the input size of each layer is inferred
        # automatically from the previous layer, so it need not be set
        self.d1 = Dense(128, activation='relu')
        self.d2 = Dense(10, activation='softmax')

    # call defines the forward pass
    def call(self, x):
        x = self.conv1(x)    # input [batch, 28, 28, 1] -> output [batch, 26, 26, 32]
        x = self.flatten(x)  # output [batch, 21632]
        x = self.d1(x)       # output [batch, 128]
        return self.d2(x)    # output [batch, 10]
```
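The shapes in the `call` comments above can be checked with a quick standalone sketch (my own, not from the post): with 'valid' padding and a 3x3 kernel, 28 shrinks to 28 - 3 + 1 = 26, and flattening gives 26 * 26 * 32 = 21632.

```python
import tensorflow as tf
from tensorflow.keras.layers import Conv2D, Flatten

x = tf.random.normal((8, 28, 28, 1))       # [batch, 28, 28, 1]
y = Conv2D(32, 3, activation='relu')(x)    # 'valid' padding: 28-3+1 = 26 -> [8, 26, 26, 32]
z = Flatten()(y)                           # 26*26*32 = 21632 -> [8, 21632]
print(y.shape, z.shape)
```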

train

```python
from __future__ import absolute_import, division, print_function, unicode_literals

import tensorflow as tf
from model import MyModel

# MNIST handwritten-digit dataset; load_data() downloads it
# into the user's .keras directory on first run
mnist = tf.keras.datasets.mnist
(x_train, y_train), (x_test, y_test) = mnist.load_data()
x_train, x_test = x_train / 255.0, x_test / 255.0

# Peek at the data (it is loaded as numpy arrays):
# import numpy as np
# import matplotlib.pyplot as plt
# imgs = x_test[:3]
# labs = y_test[:3]
# print(labs)
# plot_imgs = np.hstack(imgs)         # stack horizontally
# plt.imshow(plot_imgs, cmap='gray')  # grayscale
# plt.show()

# Add a channels dimension (the raw Keras images have none)
x_train = x_train[..., tf.newaxis]
x_test = x_test[..., tf.newaxis]

# Create data generators: pair each image with its label as a tuple,
# shuffle within a 10000-image buffer, and emit batches of 32
# (the buffer is reshuffled after every batch is drawn)
train_ds = tf.data.Dataset.from_tensor_slices(
    (x_train, y_train)).shuffle(10000).batch(32)
test_ds = tf.data.Dataset.from_tensor_slices((x_test, y_test)).batch(32)

# Create model
model = MyModel()

# Define loss: sparse categorical cross-entropy
# (labels are class indices, not one-hot vectors)
loss_object = tf.keras.losses.SparseCategoricalCrossentropy()

# Define optimizer: Adam
optimizer = tf.keras.optimizers.Adam()

# Define train_loss and train_accuracy:
# the running mean of the loss, and the accuracy
train_loss = tf.keras.metrics.Mean(name='train_loss')
train_accuracy = tf.keras.metrics.SparseCategoricalAccuracy(name='train_accuracy')

# Define test_loss and test_accuracy
test_loss = tf.keras.metrics.Mean(name='test_loss')
test_accuracy = tf.keras.metrics.SparseCategoricalAccuracy(name='test_accuracy')


# Train step: compute the loss, apply the gradients, update the metrics.
# The @tf.function decorator compiles the function into a TensorFlow graph
# (fast, runs on GPU), but normal breakpoint debugging no longer works inside it.
@tf.function
def train_step(images, labels):
    # Unlike PyTorch, TensorFlow does not track every trainable parameter
    # automatically, so the forward pass is recorded on a GradientTape
    with tf.GradientTape() as tape:
        predictions = model(images)              # feed the data through the model
        loss = loss_object(labels, predictions)  # loss function defined above
    # Backpropagate the loss to every trainable parameter via the tape
    gradients = tape.gradient(loss, model.trainable_variables)
    # The Adam optimizer defined above applies each parameter's gradient;
    # zip pairs each gradient with its parameter
    optimizer.apply_gradients(zip(gradients, model.trainable_variables))
    train_loss(loss)  # accumulate the running loss with the metric defined above
    train_accuracy(labels, predictions)


# Test step: loss and accuracy only;
# no tape, because no parameters are updated
@tf.function
def test_step(images, labels):
    predictions = model(images)
    t_loss = loss_object(labels, predictions)
    test_loss(t_loss)
    test_accuracy(labels, predictions)


EPOCHS = 5  # train for 5 epochs

for epoch in range(EPOCHS):
    # Reset the loss and accuracy metrics, otherwise the previous
    # epoch's history would be accumulated into this one
    train_loss.reset_states()      # clear history info
    train_accuracy.reset_states()  # clear history info
    test_loss.reset_states()       # clear history info
    test_accuracy.reset_states()   # clear history info

    # Iterate over the training dataset built above
    for images, labels in train_ds:
        train_step(images, labels)  # compute loss, backpropagate, update parameters

    for test_images, test_labels in test_ds:
        test_step(test_images, test_labels)

    # Print the metrics after each epoch
    template = 'Epoch {}, Loss: {}, Accuracy: {}, Test Loss: {}, Test Accuracy: {}'
    print(template.format(epoch + 1,                     # current epoch
                          train_loss.result(),           # running mean of the loss
                          train_accuracy.result() * 100, # accuracy as a percentage
                          test_loss.result(),
                          test_accuracy.result() * 100))
```
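The tape mechanism inside `train_step` can be reduced to a minimal sketch (my own toy example, not from the post): record an op on the tape, then ask the tape for the derivative of the result with respect to a variable.

```python
import tensorflow as tf

# Minimal GradientTape example: record loss = w^2, then differentiate.
w = tf.Variable(3.0)
with tf.GradientTape() as tape:
    loss = w * w               # loss = w^2
grad = tape.gradient(loss, w)  # d(w^2)/dw = 2w = 6.0
print(grad)
```

In the real training loop the same call computes the gradient of the batch loss with respect to every entry of `model.trainable_variables`, and the optimizer consumes those gradients.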

Summary

The above is the complete walkthrough of the TensorFlow 2 official demo; hopefully it helps you solve the problems you ran into.