Designing Siamese Multi-Branch Networks with MatConvNet
An earlier post discussed using vl_nndist as the feature-similarity measure of a multi-branch network, fusing the local outputs of several branches into one; see https://blog.csdn.net/shenziheng1/article/details/81263547. As many papers also point out, instead of an explicit distance measure we can design the comparison with fully connected layers, and the key step there is concatenating the outputs of the branch networks into a single output. MatConvNet already provides this functionality: dagnn.Concat and vl_nnconcat.
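To make the idea concrete, here is a minimal sketch (all layer and variable names are hypothetical): the two branch outputs are concatenated along the channel dimension, and a 1x1 convolution then plays the role of the fully connected comparison layer. It relies on the dagnn.Concat block described below.

% Minimal sketch of FC-based fusion for a two-branch (Siamese) network.
% Assumes each branch ends in a 1 x 1 x 256 feature vector named
% 'feat_a' / 'feat_b' (hypothetical names).
net = dagnn.DagNN() ;
% ... add the two branch networks here, producing 'feat_a' and 'feat_b' ...
net.addLayer('fuse', dagnn.Concat('dim', 3), {'feat_a', 'feat_b'}, {'fused'}, {}) ;
fc = dagnn.Conv('size', [1, 1, 512, 2], 'hasBias', true) ;  % 256+256 input channels, 2-way output
net.addLayer('fc', fc, {'fused'}, {'score'}, {'fc_f', 'fc_b'}) ;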
1. vl_nnconcat
function y = vl_nnconcat(inputs, dim, dzdy, varargin)
% VL_NNCONCAT CNN block that concatenates multiple inputs.
%   Y = VL_NNCONCAT(INPUTS, DIM) concatenates the arrays in the cell
%   array INPUTS along dimension DIM.
%
%   DZDINPUTS = VL_NNCONCAT(INPUTS, DIM, DZDY) computes the derivatives
%   of the block projected onto DZDY. DZDINPUTS has one element for
%   each element of INPUTS, each of which is an array that has the same
%   dimensions of the corresponding array in INPUTS.

opts.inputSizes = [] ;
opts = vl_argparse(opts, varargin, 'nonrecursive') ;

if nargin < 2, dim = 3; end
if nargin < 3, dzdy = []; end

if isempty(dzdy)
  % forward pass: simply concatenate the inputs along DIM
  y = cat(dim, inputs{:}) ;
else
  % backward pass: slice DZDY back into one derivative per input
  if isempty(opts.inputSizes)
    opts.inputSizes = cellfun(@(inp) [size(inp,1), size(inp,2), size(inp,3), size(inp,4)], ...
                              inputs, 'UniformOutput', false) ;
  end
  start = 1 ;
  y = cell(1, numel(opts.inputSizes)) ;
  s.type = '()' ;
  s.subs = {':', ':', ':', ':'} ;
  for i = 1:numel(opts.inputSizes)
    stop = start + opts.inputSizes{i}(dim) ;
    s.subs{dim} = start:stop-1 ;
    y{i} = subsref(dzdy, s) ;
    start = stop ;
  end
end
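A quick way to sanity-check both passes (a minimal sketch with random data; the shapes are arbitrary):

% Forward: concatenate two feature maps along the channel dimension.
a = randn(8, 8, 16, 4, 'single') ;
b = randn(8, 8, 32, 4, 'single') ;
y = vl_nnconcat({a, b}, 3) ;                 % y is 8 x 8 x 48 x 4

% Backward: split a derivative of the output back into per-input parts.
dzdy = randn(size(y), 'single') ;
dzdinputs = vl_nnconcat({a, b}, 3, dzdy) ;   % {8x8x16x4, 8x8x32x4}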
2. dagnn.Concat

classdef Concat < dagnn.ElementWise
  properties
    dim = 3  % concatenate along the third (channel) dimension by default; set this when adding the layer
  end

  properties (Transient)
    inputSizes = {}
  end

  methods
    function outputs = forward(obj, inputs, params)
      outputs{1} = vl_nnconcat(inputs, obj.dim) ;
      obj.inputSizes = cellfun(@size, inputs, 'UniformOutput', false) ;
    end

    function [derInputs, derParams] = backward(obj, inputs, params, derOutputs)
      derInputs = vl_nnconcat(inputs, obj.dim, derOutputs{1}, 'inputSizes', obj.inputSizes) ;
      derParams = {} ;
    end

    function reset(obj)
      obj.inputSizes = {} ;
    end

    function outputSizes = getOutputSizes(obj, inputSizes)
      sz = inputSizes{1} ;
      for k = 2:numel(inputSizes)
        sz(obj.dim) = sz(obj.dim) + inputSizes{k}(obj.dim) ;
      end
      outputSizes{1} = sz ;
    end

    function rfs = getReceptiveFields(obj)
      numInputs = numel(obj.net.layers(obj.layerIndex).inputs) ;
      if obj.dim == 3 || obj.dim == 4
        rfs = getReceptiveFields@dagnn.ElementWise(obj) ;
        rfs = repmat(rfs, numInputs, 1) ;
      else
        for i = 1:numInputs
          rfs(i,1).size = [NaN NaN] ;
          rfs(i,1).stride = [NaN NaN] ;
          rfs(i,1).offset = [NaN NaN] ;
        end
      end
    end

    function load(obj, varargin)
      s = dagnn.Layer.argsToStruct(varargin{:}) ;
      % backward file compatibility
      if isfield(s, 'numInputs'), s = rmfield(s, 'numInputs') ; end
      load@dagnn.Layer(obj, s) ;
    end

    function obj = Concat(varargin)
      obj.load(varargin{:}) ;
    end
  end
end
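One practical caveat: Concat only sums sizes along obj.dim, so the inputs must already agree in every other dimension. In the U-Net example below this is exactly why the crop parameters of the upsampling layers are chosen to restore the encoder's spatial size. The bookkeeping can be checked directly (a small sketch using the getOutputSizes method shown above):

% getOutputSizes sums the sizes along the concatenation dimension
% and keeps the others, e.g. two 32 x 32 x 512 x 1 inputs give:
concat = dagnn.Concat('dim', 3) ;
sz = concat.getOutputSizes({[32 32 512 1], [32 32 512 1]}) ;
disp(sz{1})   % prints: 32 32 1024 1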
An application example (note that dagnn.Smooth and dagnn.PrjCompare near the end are custom loss layers, not part of stock MatConvNet; conv14 is fixed to use the standard dagnn.Conv like the other blocks):

function net = initializeUnet()

net = dagnn.DagNN();

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
% STAGE I
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
% ----------------------------------------------
% Stage 1: 1st conv block : conv-batchnorm-relu
% ----------------------------------------------
conv1 = dagnn.Conv('size',[3,3,1,64], 'pad', 1, 'stride', 1, 'hasBias', true);
net.addLayer('conv1', conv1, {'FBP'},{'conv_x1'},{'conv_f1','conv_b1'});
net.addLayer('bn1', dagnn.BatchNorm('numChannels', 64), {'conv_x1'}, {'bn_x1'}, {'bn1f', 'bn1b', 'bn1m'}); % note: numChannels must match the conv output depth
relu1 = dagnn.ReLU();
net.addLayer('relu1', relu1, {'bn_x1'}, {'relu_x1'}, {});
% ----------------------------------------------
% Stage 1: 2nd conv block : conv-batchnorm-relu
% ----------------------------------------------
conv2 = dagnn.Conv('size',[3,3,64,64], 'pad', 1, 'stride', 1, 'hasBias', true);
net.addLayer('conv2', conv2, {'relu_x1'},{'conv_x2'},{'conv_f2','conv_b2'});
net.addLayer('bn2', dagnn.BatchNorm('numChannels', 64), {'conv_x2'}, {'bn_x2'}, {'bn2f', 'bn2b', 'bn2m'});
relu2 = dagnn.ReLU();
net.addLayer('relu2', relu2, {'bn_x2'}, {'relu_x2'}, {});
% ----------------------------------------------
% Stage 1: pooling
% ----------------------------------------------
pool1 = dagnn.Pooling('method', 'max', 'poolSize', [2 2], 'stride', 2, 'pad', 0);
net.addLayer('pool1', pool1, {'relu_x2'}, {'pool_x1'}, {});

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
% STAGE II
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
% ----------------------------------------------
% Stage 2: 1st conv block : conv-batchnorm-relu
% ----------------------------------------------
conv3 = dagnn.Conv('size',[3,3,64,128], 'pad', 1, 'stride', 1, 'hasBias', true);
net.addLayer('conv3', conv3, {'pool_x1'},{'conv_x3'},{'conv_f3','conv_b3'});
net.addLayer('bn3', dagnn.BatchNorm('numChannels', 128), {'conv_x3'}, {'bn_x3'}, {'bn3f', 'bn3b', 'bn3m'});
relu3 = dagnn.ReLU();
net.addLayer('relu3', relu3, {'bn_x3'}, {'relu_x3'}, {});
% ----------------------------------------------
% Stage 2: 2nd conv block : conv-batchnorm-relu
% ----------------------------------------------
conv4 = dagnn.Conv('size',[3,3,128,128], 'pad', 1, 'stride', 1, 'hasBias', true);
net.addLayer('conv4', conv4, {'relu_x3'},{'conv_x4'},{'conv_f4','conv_b4'});
net.addLayer('bn4', dagnn.BatchNorm('numChannels', 128), {'conv_x4'}, {'bn_x4'}, {'bn4f', 'bn4b', 'bn4m'});
relu4 = dagnn.ReLU();
net.addLayer('relu4', relu4, {'bn_x4'}, {'relu_x4'}, {});
% ----------------------------------------------
% Stage 2: pooling
% ----------------------------------------------
pool2 = dagnn.Pooling('method', 'max', 'poolSize', [2 2], 'stride', 2);
net.addLayer('pool2', pool2, {'relu_x4'}, {'pool_x2'}, {});

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
% STAGE III
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
% ----------------------------------------------
% Stage 3: 1st conv block : conv-batchnorm-relu
% ----------------------------------------------
conv5 = dagnn.Conv('size',[3,3,128,256], 'pad', 1, 'stride', 1, 'hasBias', true);
net.addLayer('conv5', conv5, {'pool_x2'},{'conv_x5'},{'conv_f5','conv_b5'});
net.addLayer('bn5', dagnn.BatchNorm('numChannels', 256), {'conv_x5'}, {'bn_x5'}, {'bn5f', 'bn5b', 'bn5m'});
relu5 = dagnn.ReLU();
net.addLayer('relu5', relu5, {'bn_x5'}, {'relu_x5'}, {});
% ----------------------------------------------
% Stage 3: 2nd conv block : conv-batchnorm-relu
% ----------------------------------------------
conv6 = dagnn.Conv('size',[3,3,256,256], 'pad', 1, 'stride', 1, 'hasBias', true);
net.addLayer('conv6', conv6, {'relu_x5'},{'conv_x6'},{'conv_f6','conv_b6'});
net.addLayer('bn6', dagnn.BatchNorm('numChannels', 256), {'conv_x6'}, {'bn_x6'}, {'bn6f', 'bn6b', 'bn6m'});
relu6 = dagnn.ReLU();
net.addLayer('relu6', relu6, {'bn_x6'}, {'relu_x6'}, {});
% ----------------------------------------------
% Stage 3: pooling
% ----------------------------------------------
pool3 = dagnn.Pooling('method', 'max', 'poolSize', [2 2], 'stride', 2);
net.addLayer('pool3', pool3, {'relu_x6'}, {'pool_x3'}, {});

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
% STAGE IV
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
% ----------------------------------------------
% Stage 4: 1st conv block : conv-batchnorm-relu
% ----------------------------------------------
conv7 = dagnn.Conv('size',[3,3,256,512], 'pad', 1, 'stride', 1, 'hasBias', true);
net.addLayer('conv7', conv7, {'pool_x3'},{'conv_x7'},{'conv_f7','conv_b7'});
net.addLayer('bn7', dagnn.BatchNorm('numChannels', 512), {'conv_x7'}, {'bn_x7'}, {'bn7f', 'bn7b', 'bn7m'});
relu7 = dagnn.ReLU();
net.addLayer('relu7', relu7, {'bn_x7'}, {'relu_x7'}, {});
% ----------------------------------------------
% Stage 4: 2nd conv block : conv-batchnorm-relu
% ----------------------------------------------
conv8 = dagnn.Conv('size',[3,3,512,512], 'pad', 1, 'stride', 1, 'hasBias', true);
net.addLayer('conv8', conv8, {'relu_x7'},{'conv_x8'},{'conv_f8','conv_b8'});
net.addLayer('bn8', dagnn.BatchNorm('numChannels', 512), {'conv_x8'}, {'bn_x8'}, {'bn8f', 'bn8b', 'bn8m'});
relu8 = dagnn.ReLU();
net.addLayer('relu8', relu8, {'bn_x8'}, {'relu_x8'}, {});
% ----------------------------------------------
% Stage 4: pooling
% ----------------------------------------------
pool4 = dagnn.Pooling('method', 'max', 'poolSize', [2 2], 'stride', 2);
net.addLayer('pool4', pool4, {'relu_x8'}, {'pool_x4'}, {});

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
% STAGE V
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
% ----------------------------------------------
% Stage 5: 1st conv block : conv-batchnorm-relu
% ----------------------------------------------
conv9 = dagnn.Conv('size',[3,3,512,1024], 'pad', 1, 'stride', 1, 'hasBias', true);
net.addLayer('conv9', conv9, {'pool_x4'},{'conv_x9'},{'conv_f9','conv_b9'});
net.addLayer('bn9', dagnn.BatchNorm('numChannels', 1024), {'conv_x9'}, {'bn_x9'}, {'bn9f', 'bn9b', 'bn9m'});
relu9 = dagnn.ReLU();
net.addLayer('relu9', relu9, {'bn_x9'}, {'relu_x9'}, {});
% ----------------------------------------------
% Stage 5: 2nd conv block : conv-batchnorm-relu
% ----------------------------------------------
conv10 = dagnn.Conv('size',[3,3,1024,512], 'pad', 1, 'stride', 1, 'hasBias', true);
net.addLayer('conv10', conv10, {'relu_x9'},{'conv_x10'},{'conv_f10','conv_b10'});
net.addLayer('bn10', dagnn.BatchNorm('numChannels', 512), {'conv_x10'}, {'bn_x10'}, {'bn10f', 'bn10b', 'bn10m'});
relu10 = dagnn.ReLU();
net.addLayer('relu10', relu10, {'bn_x10'}, {'relu_x10'}, {});
% ----------------------------------------------
% Stage 5: unpooling -- note: an upsampling (transposed convolution) layer
% ----------------------------------------------
Upsample1 = dagnn.ConvTranspose('size',[3,3,512,512],'hasBias',false,'upsample',[2,2],'crop',[0,1,0,1]);
net.addLayer('unpool1', Upsample1, {'relu_x10'}, {'unpool_x1'}, {'f1'});

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
% UPCONV STAGE IV : upsampling convolution
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
% ----------------------------------------------
% Stage 4: concat block : concatenate the inputs
% ----------------------------------------------
concat1 = dagnn.Concat('dim', 3); % along the channel (depth) dimension
net.addLayer('concat1', concat1, {'relu_x8', 'unpool_x1'}, {'concat_x1'}, {});
% ----------------------------------------------
% Stage 4: 1st conv block : conv-batchnorm-relu
% ----------------------------------------------
conv11 = dagnn.Conv('size',[3,3,1024,512], 'pad', 1, 'stride', 1, 'hasBias', true);
net.addLayer('conv11', conv11, {'concat_x1'}, {'conv_x11'}, {'conv_f11','conv_b11'});
net.addLayer('bn11', dagnn.BatchNorm('numChannels', 512), {'conv_x11'}, {'bn_x11'}, {'bn11f', 'bn11b', 'bn11m'});
relu11 = dagnn.ReLU();
net.addLayer('relu11', relu11, {'bn_x11'}, {'relu_x11'}, {});
% ----------------------------------------------
% Stage 4: 2nd conv block : conv-batchnorm-relu
% ----------------------------------------------
conv12 = dagnn.Conv('size',[3,3,512,256], 'pad', 1, 'stride', 1, 'hasBias', true);
net.addLayer('conv12', conv12, {'relu_x11'}, {'conv_x12'},{'conv_f12','conv_b12'});
net.addLayer('bn12', dagnn.BatchNorm('numChannels', 256), {'conv_x12'}, {'bn_x12'}, {'bn12f', 'bn12b', 'bn12m'});
relu12 = dagnn.ReLU();
net.addLayer('relu12', relu12, {'bn_x12'}, {'relu_x12'}, {});
% ----------------------------------------------
% Stage 4: unpooling : continue upsampling
% ----------------------------------------------
Upsample2 = dagnn.ConvTranspose('size',[3,3,256,256],'hasBias',false,'upsample',[2,2],'crop',[1,0,1,0]);
net.addLayer('unpool2', Upsample2, {'relu_x12'}, {'unpool_x2'}, {'f2'});

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
% UPCONV STAGE III : upsampling convolution
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
% ----------------------------------------------
% Stage 3: concat block : concatenate the inputs
% ----------------------------------------------
concat2 = dagnn.Concat('dim', 3);
net.addLayer('concat2', concat2, {'relu_x6', 'unpool_x2'}, {'concat_x2'}, {});
% ----------------------------------------------
% Stage 3: 1st conv block : conv-batchnorm-relu
% ----------------------------------------------
conv13 = dagnn.Conv('size',[3,3,512,256], 'pad', 1, 'stride', 1, 'hasBias', true);
net.addLayer('conv13', conv13, {'concat_x2'}, {'conv_x13'}, {'conv_f13','conv_b13'});
net.addLayer('bn13', dagnn.BatchNorm('numChannels', 256), {'conv_x13'}, {'bn_x13'}, {'bn13f', 'bn13b', 'bn13m'});
relu13 = dagnn.ReLU();
net.addLayer('relu13', relu13, {'bn_x13'}, {'relu_x13'}, {});
% ----------------------------------------------
% Stage 3: 2nd conv block : conv-batchnorm-relu
% ----------------------------------------------
conv14 = dagnn.Conv('size',[3,3,256,128], 'pad', 1, 'stride', 1, 'hasBias', true);
net.addLayer('conv14', conv14, {'relu_x13'}, {'conv_x14'},{'conv_f14','conv_b14'});
net.addLayer('bn14', dagnn.BatchNorm('numChannels', 128), {'conv_x14'}, {'bn_x14'}, {'bn14f', 'bn14b', 'bn14m'});
relu14 = dagnn.ReLU();
net.addLayer('relu14', relu14, {'bn_x14'}, {'relu_x14'}, {});
% ----------------------------------------------
% Stage 3: unpooling : continue upsampling
% ----------------------------------------------
Upsample3 = dagnn.ConvTranspose('size',[3,3,128,128],'hasBias',false,'upsample',[2,2],'crop',[0,1,0,1]);
net.addLayer('unpool3', Upsample3, {'relu_x14'}, {'unpool_x3'}, {'f3'});

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
% UPCONV STAGE II
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
% ----------------------------------------------
% Stage 2: concat block
% ----------------------------------------------
concat3 = dagnn.Concat('dim', 3);
net.addLayer('concat3', concat3, {'relu_x4', 'unpool_x3'}, {'concat_x3'}, {});
% ----------------------------------------------
% Stage 2: 1st conv block
% ----------------------------------------------
conv15 = dagnn.Conv('size',[3,3,256,128], 'pad', 1, 'stride', 1, 'hasBias', true);
net.addLayer('conv15', conv15, {'concat_x3'}, {'conv_x15'}, {'conv_f15','conv_b15'});
net.addLayer('bn15', dagnn.BatchNorm('numChannels', 128), {'conv_x15'}, {'bn_x15'}, {'bn15f', 'bn15b', 'bn15m'});
relu15 = dagnn.ReLU();
net.addLayer('relu15', relu15, {'bn_x15'}, {'relu_x15'}, {});
% ----------------------------------------------
% Stage 2: 2nd conv block
% ----------------------------------------------
conv16 = dagnn.Conv('size',[3,3,128,64], 'pad', 1, 'stride', 1, 'hasBias', true);
net.addLayer('conv16', conv16, {'relu_x15'}, {'conv_x16'},{'conv_f16','conv_b16'});
net.addLayer('bn16', dagnn.BatchNorm('numChannels', 64), {'conv_x16'}, {'bn_x16'}, {'bn16f', 'bn16b', 'bn16m'});
relu16 = dagnn.ReLU();
net.addLayer('relu16', relu16, {'bn_x16'}, {'relu_x16'}, {});
% ----------------------------------------------
% Stage 2: unpooling
% ----------------------------------------------
Upsample4 = dagnn.ConvTranspose('size',[3,3,64,64],'hasBias',false,'upsample',[2,2],'crop',[0,1,0,1]);
net.addLayer('unpool4', Upsample4, {'relu_x16'}, {'unpool_x4'}, {'f4'});

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
% UPCONV STAGE I
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
% ----------------------------------------------
% Stage 1: concat block
% ----------------------------------------------
concat4 = dagnn.Concat('dim', 3);
net.addLayer('concat4', concat4, {'relu_x2', 'unpool_x4'}, {'concat_x4'}, {});
% ----------------------------------------------
% Stage 1: 1st conv block
% ----------------------------------------------
conv17 = dagnn.Conv('size',[3,3,128,64], 'pad', 1, 'stride', 1, 'hasBias', true);
net.addLayer('conv17', conv17, {'concat_x4'}, {'conv_x17'}, {'conv_f17','conv_b17'});
net.addLayer('bn17', dagnn.BatchNorm('numChannels', 64), {'conv_x17'}, {'bn_x17'}, {'bn17f', 'bn17b', 'bn17m'});
relu17 = dagnn.ReLU();
net.addLayer('relu17', relu17, {'bn_x17'}, {'relu_x17'}, {});
% ----------------------------------------------
% Stage 1: 2nd conv block
% ----------------------------------------------
conv18 = dagnn.Conv('size',[3,3,64,64], 'pad', 1, 'stride', 1, 'hasBias', true);
net.addLayer('conv18', conv18, {'relu_x17'}, {'conv_x18'},{'conv_f18','conv_b18'});
net.addLayer('bn18', dagnn.BatchNorm('numChannels', 64), {'conv_x18'}, {'bn_x18'}, {'bn18f', 'bn18b', 'bn18m'});
relu18 = dagnn.ReLU();
net.addLayer('relu18', relu18, {'bn_x18'}, {'relu_x18'}, {});
% ----------------------------------------------
% Stage 0: Prediction block
% ----------------------------------------------
pred = dagnn.Conv('size',[1,1,64,1], 'pad', 0, 'stride', 1, 'hasBias', true);
net.addLayer('pred', pred, {'relu_x18'}, {'Image_Pre'}, {'pred_f1','pred_b1'});
SumBlock = dagnn.Sum();
net.addLayer('sum', SumBlock, {'Image_Pre','FBP'}, {'Image'}); % residual connection: prediction + input

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
% LOSS
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
SmoothBlock = dagnn.Smooth();   % custom layer
net.addLayer('Smooth', SmoothBlock, {'Image'}, {'loss'}) ;
PrjCompareBlock = dagnn.PrjCompare();   % custom layer
net.addLayer('PrjCompare', PrjCompareBlock, {'Image','data','Hsys','weights'}, {'loss2'}) ;

net.initParams() ;

end
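A note on the upsampling layers: for MatConvNet's transposed convolution the output height is, as far as I know, upsample*(H-1) + kernelH - cropTop - cropBottom (and similarly for width), so a 3x3 kernel with upsample 2 and a total crop of 1 per axis doubles the spatial size exactly, which is what each Concat layer's skip connection requires. A quick check of the arithmetic (a sketch, assuming that formula):

% Output height of dagnn.ConvTranspose / vl_nnconvt (assumed formula):
%   Hout = upsample*(H - 1) + kernelH - cropTop - cropBottom
H = 32 ; up = 2 ; k = 3 ; cropTop = 0 ; cropBottom = 1 ;
Hout = up*(H - 1) + k - cropTop - cropBottom   % = 64 = 2*H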
關(guān)于使用MatConvnetNet中的dagnn.BatchNorm進行批歸一化時,應(yīng)該注意什么?
classdef BatchNorm < dagnn.ElementWise
  properties
    numChannels
    epsilon = 1e-5
    opts = {'NoCuDNN'} % ours seems slightly faster
  end

  properties (Transient)
    moments
  end

  methods
    function outputs = forward(obj, inputs, params)
      if strcmp(obj.net.mode, 'test')
        outputs{1} = vl_nnbnorm(inputs{1}, params{1}, params{2}, ...
                                'moments', params{3}, ...
                                'epsilon', obj.epsilon, ...
                                obj.opts{:}) ;
      else
        [outputs{1},obj.moments] = ...
            vl_nnbnorm(inputs{1}, params{1}, params{2}, ...
                       'epsilon', obj.epsilon, ...
                       obj.opts{:}) ;
      end
    end

    function [derInputs, derParams] = backward(obj, inputs, params, derOutputs)
      [derInputs{1}, derParams{1}, derParams{2}, derParams{3}] = ...
          vl_nnbnorm(inputs{1}, params{1}, params{2}, derOutputs{1}, ...
                     'epsilon', obj.epsilon, ...
                     'moments', obj.moments, ...
                     obj.opts{:}) ;
      obj.moments = [] ;
      % multiply the moments update by the number of images in the batch
      % this is required to make the update additive for subbatches
      % and will eventually be normalized away
      derParams{3} = derParams{3} * size(inputs{1},4) ;
    end

    % ---------------------------------------------------------------------
    function obj = BatchNorm(varargin)
      obj.load(varargin{:}) ;
    end

    function params = initParams(obj)
      params{1} = ones(obj.numChannels,1,'single') ;
      params{2} = zeros(obj.numChannels,1,'single') ;
      params{3} = zeros(obj.numChannels,2,'single') ;
    end

    function attach(obj, net, index)
      attach@dagnn.ElementWise(obj, net, index) ;
      p = net.getParamIndex(net.layers(index).params{3}) ;
      net.params(p).trainMethod = 'average' ;
      net.params(p).learningRate = 0.1 ;
    end
  end
end

In most cases we can simply instantiate the layer directly; the internal parameter-matching mechanism will work out the number of channels for us:
net.addLayer('bn1', dagnn.BatchNorm(), {'input'}, {'output'}, {'bn1f', 'bn1b', 'bn1m'});

For the sake of code readability, you can also specify the 'numChannels' parameter explicitly:
net.addLayer('bn1', dagnn.BatchNorm('numChannels', 512), {'input'}, {'output'}, {'bn1f', 'bn1b', 'bn1m'});
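Finally, note the mode switch in the forward method above: at test time the stored moments are used instead of per-batch statistics, so remember to put the network into test mode before evaluation. A minimal sketch (the variable names 'FBP' and 'Image' refer to the U-Net above; im is an assumed input batch):

% Training runs in the default 'normal' mode, where BatchNorm uses
% per-batch statistics and accumulates the moments parameter ('bn1m').
% For evaluation, switch to test mode so the stored moments are used:
net.mode = 'test' ;
idx = net.getVarIndex('Image') ;
net.vars(idx).precious = true ;      % keep this value after eval
net.eval({'FBP', im}) ;              % im: the input image batch
prediction = net.vars(idx).value ;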
與50位技術(shù)專家面對面20年技術(shù)見證,附贈技術(shù)全景圖總結(jié)
以上是生活随笔為你收集整理的利用MatConvNet进行孪生多分支网络设计的全部內(nèi)容,希望文章能夠幫你解決所遇到的問題。
- 上一篇: 关于Matconvnet中模型发布与共享
- 下一篇: 一种被忽视的构造和整数溢出重现