Factory Pattern - CaffeNet Training
Reference link: http://blog.csdn.net/lingerlanlan/article/details/32329761
R-CNN detection example: http://nbviewer.ipython.org/github/BVLC/caffe/blob/master/examples/detection.ipynb
Official link: http://nbviewer.ipython.org/github/BVLC/caffe/blob/master/examples/classification.ipynb
Reference link: http://suanfazu.com/t/caffe-shen-du-xue-xi-kuang-jia-shang-shou-jiao-cheng/281/3
One point in the model definition that is easy to misunderstand: signals flow bottom-up through the directed graph, not top-down.
A layer is defined by the following fields (a short pycaffe sketch follows the list):
- name: the layer's name
- type: the layer type
- top: the output blob(s) (the layer's exit)
- bottom: the input blob(s) (the layer's entrance)
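To see how these four fields show up in practice, here is a minimal pycaffe sketch (assuming pycaffe is built and importable as caffe; the HDF5 source path and layer names simply mirror the prototxt shown further below). It builds one data layer and one InnerProduct layer with NetSpec and prints the generated definition, in which name, type, bottom and top appear exactly as listed above.

import caffe
from caffe import layers as L

# Sketch only: assumes pycaffe is importable; the source path and names
# mirror the HDF5 logistic-regression example later in this post.
n = caffe.NetSpec()

# One HDF5 data layer with two tops, "data" and "label" (ntop=2).
n.data, n.label = L.HDF5Data(source='hdf5_classification/data/train.txt',
                             batch_size=10, ntop=2)

# Generates: name: "fc1", type: "InnerProduct",
# bottom: "data" (entrance), top: "fc1" (exit).
n.fc1 = L.InnerProduct(n.data, num_output=40)

print(n.to_proto())  # inspect the generated name/type/bottom/top fields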
Each layer type defines three critical computations: setup, forward, and backward (a minimal Python-layer sketch follows this list).
- Setup: initialize the layer and its connections once at model initialization.
- Forward: given input from bottom compute the output and send to the top.
- Backward: given the gradient w.r.t. the top output, compute the gradient w.r.t. the input and send it to the bottom. A layer with parameters computes the gradient w.r.t. its parameters and stores it internally.
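To make the three computations concrete, here is a minimal sketch of a custom Python layer implementing setup, forward and backward for a simple element-wise scaling. The class name and the fixed scale factor of 2.0 are illustrative assumptions, not part of the original post, and running it requires a Caffe build with WITH_PYTHON_LAYER enabled.

import caffe

# Sketch only: a hypothetical layer that multiplies its input by 2.
class ScaleByTwoLayer(caffe.Layer):
    def setup(self, bottom, top):
        # Setup: runs once at model initialization; check the connections here.
        if len(bottom) != 1:
            raise Exception('ScaleByTwoLayer expects exactly one bottom blob.')

    def reshape(self, bottom, top):
        # The output blob has the same shape as the input blob.
        top[0].reshape(*bottom[0].data.shape)

    def forward(self, bottom, top):
        # Forward: take the input from bottom, compute the output, send it to top.
        top[0].data[...] = 2.0 * bottom[0].data

    def backward(self, top, propagate_down, bottom):
        # Backward: given the gradient w.r.t. the top, compute the gradient
        # w.r.t. the bottom (this layer has no parameters of its own).
        if propagate_down[0]:
            bottom[0].diff[...] = 2.0 * top[0].diff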
The net used here is defined in /home/wishchin/caffe-master/examples/hdf5_classification/train_val2.prototxt:
name: "LogisticRegressionNet" layer {name: "data"type: "HDF5Data"top: "data"top: "label"include {phase: TRAIN}hdf5_data_param {source: "hdf5_classification/data/train.txt"batch_size: 10} } layer {name: "data"type: "HDF5Data"top: "data"top: "label"include {phase: TEST}hdf5_data_param {source: "hdf5_classification/data/test.txt"batch_size: 10} } layer {name: "fc1"type: "InnerProduct"bottom: "data"top: "fc1"param {lr_mult: 1decay_mult: 1}param {lr_mult: 2decay_mult: 0}inner_product_param {num_output: 40weight_filler {type: "gaussian"std: 0.01}bias_filler {type: "constant"value: 0}} } layer {name: "relu1"type: "ReLU"bottom: "fc1"top: "fc1" } layer {name: "fc2"type: "InnerProduct"bottom: "fc1"top: "fc2"param {lr_mult: 1decay_mult: 1}param {lr_mult: 2decay_mult: 0}inner_product_param {num_output: 2weight_filler {type: "gaussian"std: 0.01}bias_filler {type: "constant"value: 0}} } layer {name: "loss"type: "SoftmaxWithLoss"bottom: "fc2"bottom: "label"top: "loss" } layer {name: "accuracy"type: "Accuracy"bottom: "fc2"bottom: "label"top: "accuracy"include {phase: TEST} }關(guān)于參數(shù)與結(jié)果的關(guān)系:多次訓(xùn)練效果一直在0.7,后來(lái)改動(dòng)了全鏈接層的初始化參數(shù)。高斯分布的標(biāo)準(zhǔn)差由0.001改為0.0001,就是調(diào)小了。 我的結(jié)果有點(diǎn)相似。