Caffe Layers
A convolutional neural network (CNN) is a feed-forward neural network whose artificial neurons respond to surrounding units within a limited coverage area (the receptive field),[1] and it performs very well on large-scale image processing.
The deep neural network (DNN) model, a feed-forward network with multiple hidden layers, is the basic framework of deep learning.
"Recurrent neural network" (RNN) is an umbrella term for two kinds of artificial neural networks: the recurrent neural network, which unrolls over time, and the recursive neural network, which unrolls over a structure. In a recurrent neural network the connections between neurons form directed cycles along the time sequence, while a recursive neural network applies the same network structure recursively to build a more complex deep network. RNN usually refers to the recurrent (time) variant. A plain recurrent network cannot cope with gradients that explode or vanish exponentially as the recursion deepens (the vanishing gradient problem), so it has difficulty capturing long-range temporal dependencies; LSTM units solve this problem well.
```
# bottom = last top (each layer's bottom is the previous layer's top)
name: "LeNet"
# data layer (train)
layer {
  name: "mnist"
  type: "Data"
  top: "data"
  top: "label"
  include { phase: TRAIN }
  transform_param { scale: 0.00390625 }
  data_param {
    source: "mnist_train_lmdb"
    batch_size: 64
    backend: LMDB
  }
}
# data layer (test)
layer {
  name: "mnist"
  type: "Data"
  top: "data"
  top: "label"
  include { phase: TEST }
  transform_param { scale: 0.00390625 }
  data_param {
    source: "mnist_test_lmdb"
    batch_size: 100
    backend: LMDB
  }
}
# convolution layer
layer {
  name: "conv1"
  type: "Convolution"
  bottom: "data"
  top: "conv1"
  param { lr_mult: 1 }
  param { lr_mult: 2 }
  convolution_param {
    num_output: 20
    kernel_size: 5
    stride: 1
    weight_filler { type: "xavier" }
    bias_filler { type: "constant" }
  }
}
# pooling layer
layer {
  name: "pool1"
  type: "Pooling"
  bottom: "conv1"
  top: "pool1"
  pooling_param {
    pool: MAX
    kernel_size: 2
    stride: 2
  }
}
# convolution layer
layer {
  name: "conv2"
  type: "Convolution"
  bottom: "pool1"
  top: "conv2"
  param { lr_mult: 1 }
  param { lr_mult: 2 }
  convolution_param {
    num_output: 50
    kernel_size: 5
    stride: 1
    weight_filler { type: "xavier" }
    bias_filler { type: "constant" }
  }
}
# pooling layer
layer {
  name: "pool2"
  type: "Pooling"
  bottom: "conv2"
  top: "pool2"
  pooling_param {
    pool: MAX
    kernel_size: 2
    stride: 2
  }
}
# fully connected layer
layer {
  name: "ip1"
  type: "InnerProduct"
  bottom: "pool2"
  top: "ip1"
  param { lr_mult: 1 }
  param { lr_mult: 2 }
  inner_product_param {
    num_output: 500
    weight_filler { type: "xavier" }
    bias_filler { type: "constant" }
  }
}
# ReLU layer (in-place)
layer {
  name: "relu1"
  type: "ReLU"
  bottom: "ip1"
  top: "ip1"
}
# fully connected layer
layer {
  name: "ip2"
  type: "InnerProduct"
  bottom: "ip1"
  top: "ip2"
  param { lr_mult: 1 }
  param { lr_mult: 2 }
  inner_product_param {
    num_output: 10
    weight_filler { type: "xavier" }
    bias_filler { type: "constant" }
  }
}
# prediction accuracy (test phase only)
layer {
  name: "accuracy"
  type: "Accuracy"
  bottom: "ip2"
  bottom: "label"
  top: "accuracy"
  include { phase: TEST }
}
# loss layer
layer {
  name: "loss"
  type: "SoftmaxWithLoss"
  bottom: "ip2"
  bottom: "label"
  top: "loss"
}
```
Data Layers
- Image Data - read raw images.
- Database - read data from LEVELDB or LMDB.
- HDF5 Input - read HDF5 data, allows data of arbitrary dimensions (see the sketch after this list).
- HDF5 Output - write data as HDF5.
- Input - typically used for networks that are being deployed.
- Window Data - read window data file.
- Memory Data - read data directly from memory.
- Dummy Data - for static data and debugging.
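The LMDB-backed Data layer is already shown in the LeNet definition above. As a minimal sketch of another data layer, the snippet below declares an HDF5 input layer; the list file name `train_h5.txt` and the top blob names are illustrative assumptions, not part of the original post.

```
# HDF5 data layer: "source" is a text file listing one .h5 file per line.
# Each HDF5 file must contain datasets named after the top blobs ("data", "label").
layer {
  name: "hdf5_train"
  type: "HDF5Data"
  top: "data"
  top: "label"
  include { phase: TRAIN }
  hdf5_data_param {
    source: "train_h5.txt"   # assumed path to the HDF5 file list
    batch_size: 64
  }
}
```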
Vision Layers
- Convolution Layer - convolves the input image with a set of learnable filters, each producing one feature map in the output image.
- Pooling Layer - max, average, or stochastic pooling.
- Spatial Pyramid Pooling (SPP)
- Crop - perform cropping transformation.
- Deconvolution Layer - transposed convolution (sketched after this list).
- Im2Col - relic helper layer that is not used much anymore.
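Convolution and Pooling layers already appear in the LeNet definition above. As a further sketch, a Deconvolution (transposed convolution) layer reuses `convolution_param`, here configured as a learnable 2x upsampler; the bottom blob name and the filler choice are illustrative assumptions.

```
# Transposed convolution used as a learnable 2x upsampler.
layer {
  name: "upsample"
  type: "Deconvolution"
  bottom: "pool1"        # assumed input blob
  top: "upsample"
  convolution_param {
    num_output: 20
    kernel_size: 4
    stride: 2
    pad: 1
    weight_filler { type: "xavier" }
    bias_term: false
  }
}
```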
Recurrent Layers
- Recurrent
- RNN
- Long-Short Term Memory (LSTM)
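A minimal sketch of an LSTM layer as defined in Caffe's master branch. The bottom blob names are assumptions for illustration: `data` holds the input sequence (shaped T x N x ...), and `cont` holds per-timestep sequence-continuation indicators (shaped T x N).

```
# LSTM over a sequence: bottom[0] is the input (T x N x ...),
# bottom[1] marks sequence continuation (0 at the start of each sequence, 1 otherwise).
layer {
  name: "lstm1"
  type: "LSTM"
  bottom: "data"
  bottom: "cont"
  top: "lstm1"
  recurrent_param {
    num_output: 256
    weight_filler { type: "xavier" }
    bias_filler { type: "constant" }
  }
}
```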
Common Layers
- Inner Product - fully connected layer.
- Dropout - randomly zeroes a fraction of its inputs during training to reduce overfitting (sketched after this list).
- Embed - for learning embeddings of one-hot encoded vector (takes index as input).
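InnerProduct is already shown in the LeNet definition above. As a sketch, a Dropout layer is typically inserted in-place after a fully connected layer; the blob name `ip1` refers to the LeNet example and the ratio is illustrative.

```
# Dropout applied in-place on ip1; active only during training,
# a pass-through at test time.
layer {
  name: "drop1"
  type: "Dropout"
  bottom: "ip1"
  top: "ip1"
  dropout_param { dropout_ratio: 0.5 }
}
```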
Normalization Layers
- Local Response Normalization (LRN) - performs a kind of “lateral inhibition” by normalizing over local input regions.
- Mean Variance Normalization (MVN) - performs contrast normalization / instance normalization.
- Batch Normalization - performs normalization over mini-batches.
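A common pattern (a sketch, not from the original post) is a BatchNorm layer immediately followed by a Scale layer with a learned bias, since Caffe's BatchNorm only normalizes and does not learn the affine parameters itself; the blob name `conv1` is illustrative.

```
# Normalize conv1 over the mini-batch, then apply a learned scale and shift.
layer {
  name: "bn1"
  type: "BatchNorm"
  bottom: "conv1"
  top: "conv1"
  batch_norm_param { use_global_stats: false }  # false for TRAIN, true for TEST
}
layer {
  name: "scale1"
  type: "Scale"
  bottom: "conv1"
  top: "conv1"
  scale_param { bias_term: true }
}
```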
Activation / Neuron Layers
- ReLU / Rectified-Linear and Leaky-ReLU - ReLU and Leaky-ReLU rectification.
- PReLU - parametric ReLU.
- ELU - exponential linear rectification.
- Sigmoid
- TanH
- Absolute Value
- Power - f(x) = (shift + scale * x) ^ power; see the sketch after this list.
- Exp - f(x) = base ^ (shift + scale * x).
- Log - f(x) = log(x).
- BNLL - f(x) = log(1 + exp(x)).
- Threshold - performs step function at user defined threshold.
- Bias - adds a bias to a blob that can either be learned or fixed.
- Scale - scales a blob by an amount that can either be learned or fixed.
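As a sketch of how the parameterized activations above are configured, the snippet below defines a Power layer computing f(x) = (shift + scale * x) ^ power; the bottom blob name and the parameter values are illustrative assumptions.

```
# f(x) = (0.5 + 2 * x) ^ 1.5, applied element-wise.
layer {
  name: "power1"
  type: "Power"
  bottom: "ip1"
  top: "power1"
  power_param {
    power: 1.5
    scale: 2.0
    shift: 0.5
  }
}
```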
Utility Layers
- Flatten
- Reshape
- Batch Reindex
- Split
- Concat
- Slicing - slices an input blob along a given axis into multiple top blobs (used with Concat in the sketch after this list).
- Eltwise - element-wise operations such as product or sum between two blobs.
- Filter / Mask - mask or select output using last blob.
- Parameter - enable parameters to be shared between layers.
- Reduction - reduce input blob to scalar blob using operations such as sum or mean.
- Silence - prevent top-level blobs from being printed during training.
- ArgMax
- Softmax
- Python - allows custom Python layers.
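A sketch of the Slice and Concat utility layers working as a pair; the blob names, the axis, and the slice point are illustrative assumptions.

```
# Split a blob into two parts along the channel axis ...
layer {
  name: "slice1"
  type: "Slice"
  bottom: "conv2"
  top: "part_a"
  top: "part_b"
  slice_param {
    axis: 1          # channel axis
    slice_point: 25  # first 25 channels go to part_a, the rest to part_b
  }
}
# ... and stitch them back together along the same axis.
layer {
  name: "concat1"
  type: "Concat"
  bottom: "part_a"
  bottom: "part_b"
  top: "concat1"
  concat_param { axis: 1 }
}
```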
Loss Layers
- Multinomial Logistic Loss
- Infogain Loss - a generalization of MultinomialLogisticLossLayer.
- Softmax with Loss - computes the multinomial logistic loss of the softmax of its inputs. It’s conceptually identical to a softmax layer followed by a multinomial logistic loss layer, but provides a more numerically stable gradient.
- Sum-of-Squares / Euclidean - computes the sum of squares of differences of its two inputs, $\frac{1}{2N}\sum_{i=1}^{N}\|x_i^1 - x_i^2\|_2^2$.
- Hinge / Margin - The hinge loss layer computes a one-vs-all hinge (L1) or squared hinge loss (L2).
- Sigmoid Cross-Entropy Loss - computes the cross-entropy (logistic) loss, often used for predicting targets interpreted as probabilities.
- Accuracy / Top-k layer - scores the output as an accuracy with respect to target – it is not actually a loss and has no backward step.
- Contrastive Loss
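SoftmaxWithLoss and Accuracy already appear in the LeNet definition above. As a further sketch, a Euclidean (sum-of-squares) loss for a regression output could be declared as below; the bottom blob names are illustrative assumptions, and `loss_weight` scales this loss's contribution to the total objective.

```
# Sum-of-squares loss between the network's prediction and the regression target.
layer {
  name: "l2_loss"
  type: "EuclideanLoss"
  bottom: "pred"      # assumed prediction blob
  bottom: "target"    # assumed ground-truth blob
  top: "l2_loss"
  loss_weight: 1.0    # weight of this loss in the overall objective
}
```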
Reposted from: https://www.cnblogs.com/cheungxiongwei/articles/7746386.html