
Generating a *_deploy.prototxt file from a *_train_test.prototxt file




This article draws on the following posts:

(1) The differences between the *_train_test.prototxt file and the *_deploy.prototxt file: http://blog.csdn.net/sunshine_in_moon/article/details/49472901

(2) Python code for generating the deploy file: http://www.cnblogs.com/denny402/p/5685818.html


The *_train_test.prototxt file

This is the network configuration file used for training and testing.


The *_deploy.prototxt file

This is the network definition file used at deployment time.

The post http://www.cnblogs.com/denny402/p/5685818.html gives Python source code for generating a deploy.prototxt file, but every network is different, so adapting it takes some work. Below is that post's code for generating the deploy file for mnist; adjust it to your own network's settings as needed. (The code below is untested.)


# -*- coding: utf-8 -*-
from caffe import layers as L, params as P, to_proto

root = '/home/xxx/'
deploy = root + 'mnist/deploy.prototxt'   # path where the deploy file is saved

def create_deploy():
    # The first layer (the data layer) is omitted here
    conv1 = L.Convolution(bottom='data', kernel_size=5, stride=1, num_output=20, pad=0, weight_filler=dict(type='xavier'))
    pool1 = L.Pooling(conv1, pool=P.Pooling.MAX, kernel_size=2, stride=2)
    conv2 = L.Convolution(pool1, kernel_size=5, stride=1, num_output=50, pad=0, weight_filler=dict(type='xavier'))
    pool2 = L.Pooling(conv2, pool=P.Pooling.MAX, kernel_size=2, stride=2)
    fc3 = L.InnerProduct(pool2, num_output=500, weight_filler=dict(type='xavier'))
    relu3 = L.ReLU(fc3, in_place=True)
    fc4 = L.InnerProduct(relu3, num_output=10, weight_filler=dict(type='xavier'))
    # No Accuracy layer at the end, but there is a Softmax layer
    prob = L.Softmax(fc4)
    return to_proto(prob)

def write_deploy():
    with open(deploy, 'w') as f:
        f.write('name:"Lenet"\n')
        f.write('input:"data"\n')
        f.write('input_dim:1\n')
        f.write('input_dim:1\n')   # MNIST images are single-channel (grayscale)
        f.write('input_dim:28\n')
        f.write('input_dim:28\n')
        f.write(str(create_deploy()))

if __name__ == '__main__':
    write_deploy()
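Note that write_deploy() emits the legacy four-field input: / input_dim: header. Later Caffe versions express the same information as an Input layer, which is the form used in the rest of this article; the two are equivalent for this purpose.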


Generating the deploy file with code is still fairly cumbersome. When building a deep network we will in any case have defined the training/testing configuration file first, the *_train_test.prototxt file, so we can produce the deploy file by editing *_train_test.prototxt directly. Taking cifar10 as an example, here is a brief overview of the differences between the two files.


(1) The data layer in the deploy file is simpler: delete the two data layers in the *_train_test.prototxt file (the training-lmdb input and the test-lmdb input) and replace them with:

layer {
  name: "data"
  type: "Input"
  top: "data"
  input_param { shape: { dim: 1 dim: 3 dim: 32 dim: 32 } }
}

Note the meaning of shape: { dim: 1 dim: 3 dim: 32 dim: 32 }:

shape {
  dim: 1   # num: how many augmented copies are fed in per sample; user-defined. A common scheme is 5 crops, each then flipped; a value of 10 means one sample becomes 10 inputs to the network. Use 1 if no augmentation is performed.
  dim: 3   # number of channels, i.e. the three RGB channels
  dim: 32  # image height and width, taken from crop_size in the data layer of the *_train_test.prototxt file
  dim: 32
}
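As an aside, the "5 crops, each flipped" scheme mentioned in the dim: 1 comment matches Caffe's ten-crop helper caffe.io.oversample (4 corners plus the center, each also mirrored). A minimal sketch, untested, with a random array standing in for a real image:

import numpy as np
import caffe

img = np.random.rand(40, 40, 3).astype(np.float32)   # stand-in for a real H x W x C image
crops = caffe.io.oversample([img], (32, 32))          # 4 corners + center, plus their mirrors
print(crops.shape)                                    # (10, 32, 32, 3); with dim: 1 raised to 10, all crops fit in one batch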


(2) The weight_filler{} and bias_filler{} parameters in the convolution and fully connected layers no longer need to be specified, because their values are supplied by the trained model, the *.caffemodel file. As shown in the code below, delete all weight_filler and bias_filler blocks from the *_train_test.prototxt file. (Strictly speaking, leaving the fillers in place is harmless, since the weights loaded from the *.caffemodel overwrite any initialization, but removing them keeps the file clean.)


layer {                       # delete weight_filler and bias_filler
  name: "ip2"
  type: "InnerProduct"
  bottom: "ip1"
  top: "ip2"
  param {
    lr_mult: 1    # learning-rate multiplier for the weights w
  }
  param {
    lr_mult: 2    # learning-rate multiplier for the bias b
  }
  inner_product_param {
    num_output: 10
    weight_filler {
      type: "gaussian"
      std: 0.1
    }
    bias_filler {
      type: "constant"
    }
  }
}


After deletion this becomes:


layer {
  name: "ip2"
  type: "InnerProduct"
  bottom: "ip1"
  top: "ip2"
  param {
    lr_mult: 1
  }
  param {
    lr_mult: 2
  }
  inner_product_param {
    num_output: 10
  }
}
(3) Changes to the output layers

1) The test-phase accuracy module is gone; delete that layer. The most visible difference between the *_deploy.prototxt file and the *_train_test.prototxt file is that the deploy file has no test module: remove the accuracy-test block at the end of *_train_test.prototxt, i.e. delete the following code.
layer {             # delete this layer
  name: "accuracy"
  type: "Accuracy"
  bottom: "ip2"
  bottom: "label"
  top: "accuracy"
  include {
    phase: TEST
  }
}

2) The output layer

In the *_train_test.prototxt file:


layer {
  name: "loss"              # note: the layer name differs from the deploy version below
  type: "SoftmaxWithLoss"   # note: the type differs from the deploy version below
  bottom: "ip2"
  bottom: "label"           # the label input is gone below: at deployment we are predicting the label, so it cannot be supplied
  top: "loss"
}


In the *_deploy.prototxt file:

layer {
  name: "prob"
  type: "Softmax"
  bottom: "ip2"
  top: "prob"
}


Note that the output layer's type changes between the two files: SoftmaxWithLoss in one and Softmax in the other. In addition, to keep the training output and the deployment output easy to tell apart, the top is named loss during training and prob at deployment.
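Since the three edits above (replace the data layers with an Input layer, strip the fillers, swap the output layer) are mechanical, they can also be scripted against Caffe's protobuf definitions. Below is a minimal, untested sketch that assumes a simple network like the ones here; the file paths are placeholders, and TEST-only layers other than Data and Accuracy are not handled:

# -*- coding: utf-8 -*-
from caffe.proto import caffe_pb2
from google.protobuf import text_format

def train_test_to_deploy(src_path, dst_path, shape=(1, 3, 32, 32)):
    net = caffe_pb2.NetParameter()
    with open(src_path) as f:
        text_format.Merge(f.read(), net)

    deploy = caffe_pb2.NetParameter()
    deploy.name = net.name

    # (1) Replace the two lmdb Data layers with a single Input layer.
    data = deploy.layer.add()
    data.name, data.type = 'data', 'Input'
    data.top.append('data')
    data.input_param.shape.add().dim.extend(shape)

    for layer in net.layer:
        if layer.type in ('Data', 'Accuracy'):   # (1) and (3.1): drop these layers
            continue
        new = deploy.layer.add()
        new.CopyFrom(layer)
        # (2) Drop the fillers; the weights come from the trained *.caffemodel.
        if new.type == 'Convolution':
            new.convolution_param.ClearField('weight_filler')
            new.convolution_param.ClearField('bias_filler')
        elif new.type == 'InnerProduct':
            new.inner_product_param.ClearField('weight_filler')
            new.inner_product_param.ClearField('bias_filler')
        # (3.2) SoftmaxWithLoss -> Softmax, drop the label bottom, rename loss -> prob.
        elif new.type == 'SoftmaxWithLoss':
            new.type = 'Softmax'
            new.name = 'prob'
            del new.bottom[:]
            new.bottom.append(layer.bottom[0])
            del new.top[:]
            new.top.append('prob')

    with open(dst_path, 'w') as f:
        f.write(text_format.MessageToString(deploy))

if __name__ == '__main__':
    train_test_to_deploy('cifar10_quick_train_test.prototxt', 'cifar10_quick_deploy.prototxt')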


To show the differences side by side, below are the CIFAR10 configuration file cifar10_quick_train_test.prototxt and its deployment counterpart cifar10_quick.prototxt.


The cifar10_quick_train_test.prototxt file:

name: "CIFAR10_quick"
layer {                         # delete this layer
  name: "cifar"
  type: "Data"
  top: "data"
  top: "label"
  include {
    phase: TRAIN
  }
  transform_param {
    mean_file: "examples/cifar10/mean.binaryproto"
  }
  data_param {
    source: "examples/cifar10/cifar10_train_lmdb"
    batch_size: 100
    backend: LMDB
  }
}
layer {                         # delete this layer
  name: "cifar"
  type: "Data"
  top: "data"
  top: "label"
  include {
    phase: TEST
  }
  transform_param {
    mean_file: "examples/cifar10/mean.binaryproto"
  }
  data_param {
    source: "examples/cifar10/cifar10_test_lmdb"
    batch_size: 100
    backend: LMDB
  }
}
layer {                         # delete weight_filler and bias_filler below
  name: "conv1"
  type: "Convolution"
  bottom: "data"
  top: "conv1"
  param {
    lr_mult: 1
  }
  param {
    lr_mult: 2
  }
  convolution_param {
    num_output: 32
    pad: 2
    kernel_size: 5
    stride: 1
    weight_filler {
      type: "gaussian"
      std: 0.0001
    }
    bias_filler {
      type: "constant"
    }
  }
}
layer {
  name: "pool1"
  type: "Pooling"
  bottom: "conv1"
  top: "pool1"
  pooling_param {
    pool: MAX
    kernel_size: 3
    stride: 2
  }
}
layer {
  name: "relu1"
  type: "ReLU"
  bottom: "pool1"
  top: "pool1"
}
layer {                         # delete weight_filler and bias_filler
  name: "conv2"
  type: "Convolution"
  bottom: "pool1"
  top: "conv2"
  param {
    lr_mult: 1
  }
  param {
    lr_mult: 2
  }
  convolution_param {
    num_output: 32
    pad: 2
    kernel_size: 5
    stride: 1
    weight_filler {
      type: "gaussian"
      std: 0.01
    }
    bias_filler {
      type: "constant"
    }
  }
}
layer {
  name: "relu2"
  type: "ReLU"
  bottom: "conv2"
  top: "conv2"
}
layer {
  name: "pool2"
  type: "Pooling"
  bottom: "conv2"
  top: "pool2"
  pooling_param {
    pool: AVE
    kernel_size: 3
    stride: 2
  }
}
layer {                         # delete weight_filler and bias_filler
  name: "conv3"
  type: "Convolution"
  bottom: "pool2"
  top: "conv3"
  param {
    lr_mult: 1
  }
  param {
    lr_mult: 2
  }
  convolution_param {
    num_output: 64
    pad: 2
    kernel_size: 5
    stride: 1
    weight_filler {
      type: "gaussian"
      std: 0.01
    }
    bias_filler {
      type: "constant"
    }
  }
}
layer {
  name: "relu3"
  type: "ReLU"
  bottom: "conv3"
  top: "conv3"
}
layer {
  name: "pool3"
  type: "Pooling"
  bottom: "conv3"
  top: "pool3"
  pooling_param {
    pool: AVE
    kernel_size: 3
    stride: 2
  }
}
layer {                         # delete weight_filler and bias_filler
  name: "ip1"
  type: "InnerProduct"
  bottom: "pool3"
  top: "ip1"
  param {
    lr_mult: 1
  }
  param {
    lr_mult: 2
  }
  inner_product_param {
    num_output: 64
    weight_filler {
      type: "gaussian"
      std: 0.1
    }
    bias_filler {
      type: "constant"
    }
  }
}
layer {                         # delete weight_filler and bias_filler
  name: "ip2"
  type: "InnerProduct"
  bottom: "ip1"
  top: "ip2"
  param {
    lr_mult: 1
  }
  param {
    lr_mult: 2
  }
  inner_product_param {
    num_output: 10
    weight_filler {
      type: "gaussian"
      std: 0.1
    }
    bias_filler {
      type: "constant"
    }
  }
}
layer {                         # delete this layer
  name: "accuracy"
  type: "Accuracy"
  bottom: "ip2"
  bottom: "label"
  top: "accuracy"
  include {
    phase: TEST
  }
}
layer {                         # modify this layer
  name: "loss"                  # change "loss" to "prob"
  type: "SoftmaxWithLoss"       # change "SoftmaxWithLoss" to "Softmax"
  bottom: "ip2"
  bottom: "label"               # delete this line
  top: "loss"
}


The corresponding cifar10_quick.prototxt file:

layer {                         # the two data layers above are replaced by this layer
  name: "data"
  type: "Input"
  top: "data"
  input_param { shape: { dim: 1 dim: 3 dim: 32 dim: 32 } }   # note the values in shape; the CIFAR10 *_train_test.prototxt file has no crop_size
}
layer {
  name: "conv1"
  type: "Convolution"
  bottom: "data"
  top: "conv1"
  param {
    lr_mult: 1    # learning-rate multiplier for the weights W
  }
  param {
    lr_mult: 2    # learning-rate multiplier for the bias b
  }
  convolution_param {
    num_output: 32
    pad: 2        # padding of 2
    kernel_size: 5
    stride: 1
  }
}
layer {
  name: "pool1"
  type: "Pooling"
  bottom: "conv1"
  top: "pool1"
  pooling_param {
    pool: MAX     # max pooling
    kernel_size: 3
    stride: 2
  }
}
layer {
  name: "relu1"
  type: "ReLU"
  bottom: "pool1"
  top: "pool1"
}
layer {
  name: "conv2"
  type: "Convolution"
  bottom: "pool1"
  top: "conv2"
  param {
    lr_mult: 1
  }
  param {
    lr_mult: 2
  }
  convolution_param {
    num_output: 32
    pad: 2
    kernel_size: 5
    stride: 1
  }
}
layer {
  name: "relu2"
  type: "ReLU"
  bottom: "conv2"
  top: "conv2"
}
layer {
  name: "pool2"
  type: "Pooling"
  bottom: "conv2"
  top: "pool2"
  pooling_param {
    pool: AVE     # average pooling
    kernel_size: 3
    stride: 2
  }
}
layer {
  name: "conv3"
  type: "Convolution"
  bottom: "pool2"
  top: "conv3"
  param {
    lr_mult: 1
  }
  param {
    lr_mult: 2
  }
  convolution_param {
    num_output: 64
    pad: 2
    kernel_size: 5
    stride: 1
  }
}
layer {
  name: "relu3"
  type: "ReLU"   # ReLU activation; note that both bottom and top of this layer are conv3
  bottom: "conv3"
  top: "conv3"
}
layer {
  name: "pool3"
  type: "Pooling"
  bottom: "conv3"
  top: "pool3"
  pooling_param {
    pool: AVE
    kernel_size: 3
    stride: 2
  }
}
layer {
  name: "ip1"
  type: "InnerProduct"
  bottom: "pool3"
  top: "ip1"
  param {
    lr_mult: 1
  }
  param {
    lr_mult: 2
  }
  inner_product_param {
    num_output: 64
  }
}
layer {
  name: "ip2"
  type: "InnerProduct"
  bottom: "ip1"
  top: "ip2"
  param {
    lr_mult: 1
  }
  param {
    lr_mult: 2
  }
  inner_product_param {
    num_output: 10
  }
}
layer {
  name: "prob"
  type: "Softmax"
  bottom: "ip2"
  top: "prob"
}
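With the deploy file in hand, it is used together with the trained weights. A minimal usage sketch, untested, where the .caffemodel filename is a placeholder for your own snapshot:

import numpy as np
import caffe

# Load the deploy definition together with the trained weights; every
# learnable blob is filled from the .caffemodel, which is why the deploy
# file needs no weight_filler/bias_filler.
net = caffe.Net('cifar10_quick.prototxt',              # the deploy file above
                'cifar10_quick_iter_4000.caffemodel',  # placeholder: your trained snapshot
                caffe.TEST)

# The Input layer fixed the data blob to 1 x 3 x 32 x 32.
net.blobs['data'].data[0] = np.random.rand(3, 32, 32).astype(np.float32)  # stand-in for a real, preprocessed image
out = net.forward()
print(out['prob'][0].argmax())  # index of the most probable of the 10 classes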
