Parsing the Layers and Parameters of a Convolutional Neural Network
This article walks through a small PyTorch example that shows how to inspect the layers and parameters of a convolutional neural network, shared here for reference.
Reference: https://blog.csdn.net/weixin_41457494/article/details/86238443
import torch
from torch.autograd import Variable  # legacy wrapper; since PyTorch 0.4 plain tensors work directly
import torch.nn as nn
import torch.nn.functional as F

'''
Parsing the parameters of a neural network
'''

class Net(nn.Module):
    def __init__(self):
        super(Net, self).__init__()
        # 1 input image channel, 6 output channels, 2x2 square convolution kernel
        self.conv1 = nn.Conv2d(1, 6, 2)
        # an affine operation: y = Wx + b
        self.fc1 = nn.Linear(6 * 3 * 3, 4)

    def forward(self, x):
        print("x.size:{}".format(x.size()))
        # Max pooling over a (2, 2) window with stride 1
        x = F.max_pool2d(F.relu(self.conv1(x)), (2, 2), stride=1)
        print('================')
        print("x.size:{}".format(x.size()))
        x = x.view(-1, self.num_flat_features(x))
        x = F.relu(self.fc1(x))
        return x

    def num_flat_features(self, x):
        size = x.size()[1:]  # all dimensions except the batch dimension
        num_features = 1
        for s in size:
            num_features *= s
        return num_features

if __name__ == '__main__':
    net = Net()
    # print("net:{}".format(net))
    # print(net)
    params = list(net.parameters())
    print("The network has {} groups of parameters\n".format(len(params)))
    print("Initial convolution kernel weights: {}\n".format(params[0]))
    print("Initial convolution kernel bias: {}\n".format(params[1]))
    print("Initial fully connected layer weights: {}\n".format(params[2]))
    print("Initial fully connected layer bias: {}\n".format(params[3]))
    print("Size of the convolution kernel weights: {}\n".format(params[0].size()))
    print("Size of the convolution kernel bias: {}\n".format(params[1].size()))
    print("Size of the fully connected layer weights: {}\n".format(params[2].size()))
    print("Size of the fully connected layer bias: {}\n".format(params[3].size()))
    input = Variable(torch.randn(1, 1, 5, 5))
    output = net(input)
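The four parameter groups printed above have shapes that follow directly from the layer definitions: conv1 holds a weight of size (6, 1, 2, 2) and a bias of size (6,); fc1 holds a weight of size (4, 54) and a bias of size (4,). As a minimal sketch (not part of the original post), the same inspection can be done with nn.Module's standard named_parameters() method, which pairs each tensor with its attribute name, and the spatial arithmetic behind the 6 * 3 * 3 flattened size can be checked by hand:

# Minimal sketch, assuming the Net class defined above is in scope.
net = Net()
for name, param in net.named_parameters():
    # prints e.g. "conv1.weight torch.Size([6, 1, 2, 2])"
    print(name, param.size())

# Shape check for a 5x5 input:
#   conv1 (kernel 2, stride 1): 5 - 2 + 1 = 4 -> (1, 6, 4, 4)
#   max_pool2d (window 2, stride 1): 4 - 2 + 1 = 3 -> (1, 6, 3, 3)
#   flattened: 6 * 3 * 3 = 54, matching nn.Linear(6 * 3 * 3, 4)
x = torch.randn(1, 1, 5, 5)
print(net(x).size())  # torch.Size([1, 4])

Reading parameters by name this way is easier to follow than indexing into list(net.parameters()) by position, since each tensor is labeled with the layer it belongs to.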