Building a CIFAR-10 Training and Prediction Pipeline with Keras

鋼筆先生 · 2019.01.17

Pipeline

The training data comes directly from keras.datasets.cifar10.load_data(), and the model is built with the Sequential API.
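
For orientation, a quick look at what load_data() actually returns (shapes and label format are standard CIFAR-10 facts; the snippet itself is just a sketch, not part of the original script):

from keras.datasets import cifar10

# load_data() returns NumPy arrays: 50,000 training and 10,000 test RGB images
# of shape (32, 32, 3), with integer labels of shape (n, 1) in the range 0-9
(x_train, y_train), (x_test, y_test) = cifar10.load_data()
print(x_train.shape, y_train.shape)   # (50000, 32, 32, 3) (50000, 1)
print(x_test.shape, y_test.shape)     # (10000, 32, 32, 3) (10000, 1)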

The main focus is how to apply the trained model to actual prediction, which involves a few details worth noting. In addition, Keras's ImageDataGenerator is used to feed data to the model during training; I have summarized this class before and showed how to load raw image data from the file system.
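
For reference, loading raw images from a directory (instead of the built-in dataset) is usually done with ImageDataGenerator.flow_from_directory. Below is a minimal sketch under the assumption of a hypothetical data/train/<class_name>/ folder layout; the path and batch size are illustrative only:

from keras.preprocessing.image import ImageDataGenerator

# Rescale pixel values to [0, 1]; the directory layout data/train/<class_name>/*.jpg is an assumption
train_datagen = ImageDataGenerator(rescale=1. / 255)
train_generator = train_datagen.flow_from_directory(
    'data/train',             # hypothetical path: one sub-folder per class
    target_size=(32, 32),     # resize every image to the CIFAR-10 input size
    batch_size=32,
    class_mode='categorical'  # yields one-hot labels, matching categorical_crossentropy
)
# The generator can then be passed to model.fit_generator(train_generator, ...)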

Building the Model

from __future__ import print_function
import keras
from keras.datasets import cifar10
from keras.preprocessing.image import ImageDataGenerator
from keras.models import Sequential
from keras.layers import Dense, Dropout, Activation, Flatten
from keras.layers import Conv2D, MaxPooling2D
import os

# Hyperparameters
batch_size = 32
num_classes = 10
epochs = 50
data_augmentation = True  # whether to use data augmentation
num_predictions = 20
save_dir = os.path.join(os.getcwd(), 'saved_models')
model_name = 'keras_cifar10_trained_model.h5'

# The data, split between train and test sets:
(x_train, y_train), (x_test, y_test) = cifar10.load_data()
print('x_train shape:', x_train.shape)
print(x_train.shape[0], 'train samples')
print(x_test.shape[0], 'test samples')

# Convert class vectors to binary class matrices.
y_train = keras.utils.to_categorical(y_train, num_classes)
y_test = keras.utils.to_categorical(y_test, num_classes)

# Build the model
model = Sequential()
model.add(Conv2D(32, (3, 3), padding='same', input_shape=x_train.shape[1:]))
model.add(Activation('relu'))
model.add(Conv2D(32, (3, 3)))
model.add(Activation('relu'))
model.add(MaxPooling2D(pool_size=(2, 2)))
model.add(Dropout(0.25))

model.add(Conv2D(64, (3, 3), padding='same'))
model.add(Activation('relu'))
model.add(Conv2D(64, (3, 3)))
model.add(Activation('relu'))
model.add(MaxPooling2D(pool_size=(2, 2)))
model.add(Dropout(0.25))

model.add(Flatten())
model.add(Dense(512))
model.add(Activation('relu'))
model.add(Dropout(0.5))
model.add(Dense(num_classes))
model.add(Activation('softmax'))

# Initiate RMSprop optimizer
opt = keras.optimizers.rmsprop(lr=0.0001, decay=1e-6)

# Train the model using RMSprop
model.compile(loss='categorical_crossentropy',
              optimizer=opt,
              metrics=['accuracy'])

x_train = x_train.astype('float32')
x_test = x_test.astype('float32')
x_train /= 255
x_test /= 255

# Without data augmentation
if not data_augmentation:
    print('Not using data augmentation.')
    model.fit(x_train, y_train,
              batch_size=batch_size,
              epochs=epochs,
              validation_data=(x_test, y_test),
              shuffle=True)
# With data augmentation
else:
    print('Using real-time data augmentation.')
    # This will do preprocessing and realtime data augmentation:
    datagen = ImageDataGenerator(
        featurewise_center=False,             # set input mean to 0 over the dataset
        samplewise_center=False,              # set each sample mean to 0
        featurewise_std_normalization=False,  # divide inputs by std of the dataset
        samplewise_std_normalization=False,   # divide each input by its std
        zca_whitening=False,                  # apply ZCA whitening
        zca_epsilon=1e-06,                    # epsilon for ZCA whitening
        rotation_range=0,                     # randomly rotate images in the range (degrees, 0 to 180)
        width_shift_range=0.1,                # randomly shift images horizontally (fraction of total width)
        height_shift_range=0.1,               # randomly shift images vertically (fraction of total height)
        shear_range=0.,                       # set range for random shear
        zoom_range=0.,                        # set range for random zoom
        channel_shift_range=0.,               # set range for random channel shifts
        fill_mode='nearest',                  # mode for filling points outside the input boundaries
        cval=0.,                              # value used for fill_mode = "constant"
        horizontal_flip=True,                 # randomly flip images horizontally
        vertical_flip=False,                  # randomly flip images vertically
        rescale=None,                         # rescaling factor (applied before any other transformation)
        preprocessing_function=None,          # function applied to each input
        data_format=None,                     # "channels_first" or "channels_last"
        validation_split=0.0)                 # fraction of images reserved for validation

    # Compute quantities required for feature-wise normalization
    # (std, mean, and principal components if ZCA whitening is applied).
    datagen.fit(x_train)

    # Fit the model on the batches generated by datagen.flow().
    history = model.fit_generator(datagen.flow(x_train, y_train,
                                               batch_size=batch_size),
                                  epochs=epochs,
                                  steps_per_epoch=600,
                                  validation_data=(x_test, y_test),
                                  validation_steps=10,
                                  workers=4)

# Save model and weights
if not os.path.isdir(save_dir):
    os.makedirs(save_dir)
model_path = os.path.join(save_dir, model_name)
model.save(model_path)
print('Saved trained model at %s ' % model_path)

# Score trained model.
scores = model.evaluate(x_test, y_test, verbose=1)
print('Test loss:', scores[0])
print('Test accuracy:', scores[1])
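
The fit_generator call stores its return value in history but the script never uses it. As an optional addition (not part of the original post), the training curves can be plotted from that History object; note that the accuracy key is 'acc' in older Keras versions and 'accuracy' in newer ones:

import matplotlib.pyplot as plt

# history.history holds one list per metric, with one value per epoch
acc_key = 'acc' if 'acc' in history.history else 'accuracy'
plt.figure(figsize=(10, 4))

plt.subplot(1, 2, 1)
plt.plot(history.history['loss'], label='train loss')
plt.plot(history.history['val_loss'], label='val loss')
plt.xlabel('epoch')
plt.legend()

plt.subplot(1, 2, 2)
plt.plot(history.history[acc_key], label='train acc')
plt.plot(history.history['val_' + acc_key], label='val acc')
plt.xlabel('epoch')
plt.legend()

plt.show()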

After training, the model is saved as keras_cifar10_trained_model.h5.

Using the Trained Model

# Load the model with the already-trained weights
from keras.models import load_model

model = load_model('./saved_models/keras_cifar10_trained_model.h5')
model.summary()

'''
Layer (type)                 Output Shape              Param #
=================================================================
conv2d_9 (Conv2D)            (None, 32, 32, 32)        896
activation_13 (Activation)   (None, 32, 32, 32)        0
conv2d_10 (Conv2D)           (None, 30, 30, 32)        9248
activation_14 (Activation)   (None, 30, 30, 32)        0
max_pooling2d_5 (MaxPooling2 (None, 15, 15, 32)        0
dropout_7 (Dropout)          (None, 15, 15, 32)        0
conv2d_11 (Conv2D)           (None, 15, 15, 64)        18496
activation_15 (Activation)   (None, 15, 15, 64)        0
conv2d_12 (Conv2D)           (None, 13, 13, 64)        36928
activation_16 (Activation)   (None, 13, 13, 64)        0
max_pooling2d_6 (MaxPooling2 (None, 6, 6, 64)          0
dropout_8 (Dropout)          (None, 6, 6, 64)          0
flatten_3 (Flatten)          (None, 2304)              0
dense_5 (Dense)              (None, 512)               1180160
activation_17 (Activation)   (None, 512)               0
dropout_9 (Dropout)          (None, 512)               0
dense_6 (Dense)              (None, 10)                5130
activation_18 (Activation)   (None, 10)                0
=================================================================
Total params: 1,250,858
Trainable params: 1,250,858
Non-trainable params: 0
'''
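
A quick sanity check (not in the original post) is to evaluate the reloaded model on the test set and confirm it reproduces the scores printed after training. This sketch assumes x_test and y_test are still in memory and already scaled to [0, 1] as in the training script:

# Evaluate the reloaded model on the normalized test set
scores = model.evaluate(x_test, y_test, verbose=1)
print('Reloaded model - test loss:', scores[0])
print('Reloaded model - test accuracy:', scores[1])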

Recognizing Test-Set Images

lst = ['airplane', 'automobile', 'bird', 'cat', 'deer',
       'dog', 'frog', 'horse', 'ship', 'truck']

def onehot_to_label(res):
    # Map a one-hot vector back to its class name
    label = ''
    for i in range(len(res[0])):
        if res[0][i] == 1:
            label = lst[i]
    return label

def softmax_to_label(res):
    # Take the argmax of the softmax output and map it to a class name
    index = res[0].argmax()
    label = lst[index]
    return label

# Recognize an image from the test set
test_image = x_test[100].reshape([1, 32, 32, 3])
print(test_image.shape)   # (1, 32, 32, 3)
res = model.predict(test_image)
label = softmax_to_label(res)
print(label)
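
Since y_test was converted to one-hot vectors with to_categorical earlier, the prediction can be checked against the ground truth of the same sample; a small sketch:

import numpy as np

true_label = lst[int(np.argmax(y_test[100]))]  # y_test is one-hot after to_categorical
print('Predicted:', label, '| Ground truth:', true_label)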

Recognizing a Locally Loaded Image

# Load a raw image yourself and recognize it
from PIL import Image
from keras.preprocessing.image import img_to_array
import numpy as np

image = Image.open('./images/airplane.jpeg')  # load the image
image = image.resize((32, 32))
image = img_to_array(image)

# Predict after loading
image = image.reshape([1, 32, 32, 3])   # must be reshaped to a 4-D tensor
image = image / 255.0                   # scale to [0, 1] to match the training preprocessing
res = model.predict(image)
label = softmax_to_label(res)
print("The image is: ", label)

# Or wrap everything into a single function
def image_to_array(path):
    image = Image.open(path)
    image = image.resize((32, 32), Image.NEAREST)  # rescales the whole image, not a crop
    image = img_to_array(image)                    # convert to an array
    image = image.reshape([1, 32, 32, 3])          # reshape to a 4-D tensor
    image = image / 255.0                          # same scaling as during training
    return image
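
The image_to_array helper above is defined but never called in the original snippet; putting it to use is straightforward (the image path below is just the placeholder from the example above):

# Predict the class of an arbitrary local image with the helper defined above
img = image_to_array('./images/airplane.jpeg')
res = model.predict(img)
print('The image is:', softmax_to_label(res))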

Note that the network expects a four-dimensional tensor of shape (batch, height, width, channels), because data is always fed in batches; a single image is simply a batch of size one, used the same way. Several images can also be predicted in one call, as shown in the sketch below.
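
A minimal batch-prediction sketch using test-set images (assumes model, x_test, and lst are defined as above):

import numpy as np

batch = x_test[:8]                # shape (8, 32, 32, 3): already a 4-D batch
preds = model.predict(batch)      # shape (8, 10): one softmax vector per image
pred_labels = [lst[i] for i in np.argmax(preds, axis=1)]
print(pred_labels)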

