Factory Pattern - CaffeNet Training
Reference: http://blog.csdn.net/lingerlanlan/article/details/32329761
R-CNN detection example: http://nbviewer.ipython.org/github/BVLC/caffe/blob/master/examples/detection.ipynb
Official classification example: http://nbviewer.ipython.org/github/BVLC/caffe/blob/master/examples/classification.ipynb
Reference (Caffe hands-on tutorial): http://suanfazu.com/t/caffe-shen-du-xue-xi-kuang-jia-shang-shou-jiao-cheng/281/3
One point in the model definition is easily misunderstood: signals flow bottom-up through the directed graph, not top-down.
A layer is defined by the following fields:
1. name: the layer's name
2. type: the layer type
3. top: the output (exit) blob
4. bottom: the input (entry) blob
Each layer type defines three critical computations: setup, forward, and backward.
- Setup: initialize the layer and its connections once at model initialization.
- Forward: given input from bottom compute the output and send to the top.
- Backward: given the gradient w.r.t. the top output, compute the gradient w.r.t. the input and send to the bottom. A layer with parameters computes the gradient w.r.t. its parameters and stores it internally.
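The setup/forward/backward contract above can be sketched in plain NumPy. This is an illustrative toy only, not Caffe's actual C++ or Python API; the class and method names (`InnerProductLayer`, `setup`, `forward`, `backward`) are hypothetical, chosen to mirror the three computations just described for an inner-product (fully connected) layer.

```python
import numpy as np

class InnerProductLayer:
    """Toy fully connected layer following Caffe's setup/forward/backward contract."""

    def setup(self, bottom_dim, num_output, std=0.01):
        # Setup: initialize parameters once at model initialization,
        # the way a gaussian weight_filler would.
        self.W = np.random.normal(0.0, std, (bottom_dim, num_output))
        self.b = np.zeros(num_output)

    def forward(self, bottom):
        # Forward: given input from bottom, compute the output for the top.
        self.bottom = bottom  # cached for the backward pass
        return bottom @ self.W + self.b

    def backward(self, top_grad):
        # Backward: store the parameter gradients internally and
        # return the gradient w.r.t. the input for the layer below.
        self.dW = self.bottom.T @ top_grad
        self.db = top_grad.sum(axis=0)
        return top_grad @ self.W.T

layer = InnerProductLayer()
layer.setup(bottom_dim=4, num_output=2)
top = layer.forward(np.ones((3, 4)))           # batch of 3 samples
bottom_grad = layer.backward(np.ones((3, 2)))  # pretend upstream gradient
print(top.shape, bottom_grad.shape)  # (3, 2) (3, 4)
```

Note how the bottom blob is cached in `forward`: the backward pass needs it to form the parameter gradient, which is why Caffe layers hold state between the two passes.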
/home/wishchin/caffe-master/examples/hdf5_classification/train_val2.prototxt
name: "LogisticRegressionNet"
layer {
  name: "data"
  type: "HDF5Data"
  top: "data"
  top: "label"
  include { phase: TRAIN }
  hdf5_data_param {
    source: "hdf5_classification/data/train.txt"
    batch_size: 10
  }
}
layer {
  name: "data"
  type: "HDF5Data"
  top: "data"
  top: "label"
  include { phase: TEST }
  hdf5_data_param {
    source: "hdf5_classification/data/test.txt"
    batch_size: 10
  }
}
layer {
  name: "fc1"
  type: "InnerProduct"
  bottom: "data"
  top: "fc1"
  param { lr_mult: 1 decay_mult: 1 }
  param { lr_mult: 2 decay_mult: 0 }
  inner_product_param {
    num_output: 40
    weight_filler { type: "gaussian" std: 0.01 }
    bias_filler { type: "constant" value: 0 }
  }
}
layer {
  name: "relu1"
  type: "ReLU"
  bottom: "fc1"
  top: "fc1"
}
layer {
  name: "fc2"
  type: "InnerProduct"
  bottom: "fc1"
  top: "fc2"
  param { lr_mult: 1 decay_mult: 1 }
  param { lr_mult: 2 decay_mult: 0 }
  inner_product_param {
    num_output: 2
    weight_filler { type: "gaussian" std: 0.01 }
    bias_filler { type: "constant" value: 0 }
  }
}
layer {
  name: "loss"
  type: "SoftmaxWithLoss"
  bottom: "fc2"
  bottom: "label"
  top: "loss"
}
layer {
  name: "accuracy"
  type: "Accuracy"
  bottom: "fc2"
  bottom: "label"
  top: "accuracy"
  include { phase: TEST }
}

On the relationship between parameters and results: accuracy stayed around 0.7 across repeated training runs until the initialization of the fully connected layers was changed, reducing the standard deviation of the gaussian filler from 0.001 to 0.0001, that is, making it smaller. My results were somewhat similar.
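To make the tuning note above concrete, the gaussian `weight_filler` simply draws initial weights from a zero-mean normal distribution with the given std. The NumPy sketch below (an illustration, not Caffe's own filler code) compares the two std values mentioned; the shape `(40, 4)` is an arbitrary example.

```python
import numpy as np

rng = np.random.default_rng(0)

def gaussian_filler(shape, std):
    # Equivalent in spirit to weight_filler { type: "gaussian" std: ... }
    return rng.normal(0.0, std, shape)

w_before = gaussian_filler((40, 4), std=0.001)
w_after = gaussian_filler((40, 4), std=0.0001)

# Smaller std means initial weights start closer to zero, which can change
# whether training escapes a plateau like the 0.7 accuracy reported above.
print(w_before.std(), w_after.std())
```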