(PyTorch Deep Learning) A PyTorch Implementation of SE-ResNet
A PyTorch implementation of SE-ResNet
Residual block:
import torch
import torch.nn as nn

class Resiual_block(nn.Module):
    def __init__(self, nin, middle_out, nout, kernel_size=3, padding=1):
        super(Resiual_block, self).__init__()
        self.out_channel = middle_out
        # 1x1 convolution that projects the input to the output channel count,
        # so the residual addition in forward() is shape-compatible
        self.shortcut = nn.Sequential(
            nn.Conv2d(nin, nout, kernel_size=1),
            nn.BatchNorm2d(nout)
        )
        # main path: two conv-BN stages with a ReLU in between
        self.block = nn.Sequential(
            nn.Conv2d(nin, middle_out, kernel_size=kernel_size, padding=padding),
            nn.BatchNorm2d(middle_out),
            nn.ReLU(inplace=True),
            nn.Conv2d(middle_out, nout, kernel_size=kernel_size, padding=padding),
            nn.BatchNorm2d(nout)
        )

    def forward(self, input):
        x = self.block(input)
        # adjust the input's channel count via the shortcut before adding
        return nn.ReLU(inplace=True)(x + self.shortcut(input))

Note: before the input is added to the block's output, a shortcut layer is needed here to adjust the input's channel count.
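As a quick sanity check (a minimal sketch; the channel counts and input size below are illustrative, not from the original post), the shortcut's 1x1 convolution lets an input and an output with different channel counts be added:

import torch

# illustrative sizes: 3 input channels projected to 64 output channels
block = Resiual_block(nin=3, middle_out=32, nout=64)
x = torch.randn(4, 3, 56, 56)   # (batch, channels, height, width)
y = block(x)
print(y.shape)                  # torch.Size([4, 64, 56, 56])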
SE-NET:
class SE(nn.Module):
    def __init__(self, nin, middle_out, nout, reduce=16):
        super(SE, self).__init__()
        self.block = Resiual_block(nin, middle_out, nout)
        self.shortcut = nn.Sequential(
            nn.Conv2d(nin, nout, kernel_size=1),
            nn.BatchNorm2d(nout)
        )
        # squeeze-and-excitation: a bottleneck MLP that maps pooled channel
        # statistics to per-channel weights in (0, 1)
        self.se = nn.Sequential(
            nn.Linear(nout, nout // reduce),
            nn.ReLU(inplace=True),
            nn.Linear(nout // reduce, nout),
            nn.Sigmoid()
        )

    def forward(self, input):
        x = self.block(input)
        batch_size, channel, _, _ = x.size()
        # squeeze: average-pool each channel down to a single value;
        # the kernel_size is dynamic, equal to the feature map's spatial size
        y = nn.AvgPool2d(x.size()[2])(x)
        y = y.view(y.shape[0], -1)
        # excitation: compute channel weights and reshape for broadcasting
        y = self.se(y).view(batch_size, channel, 1, 1)
        # scale: reweight the channels, then add the shortcut
        y = x * y.expand_as(x)
        out = y + self.shortcut(input)
        return out

Note: an average pooling layer is used for the squeeze step, and its kernel_size is dynamic: it equals the feature map's spatial size, so it amounts to global average pooling.
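The post stops at the SE block, so here is a minimal sketch of stacking these blocks into a small SE-ResNet classifier. The class name SEResNet, the layer widths, and the input size are illustrative assumptions, not part of the original code:

import torch
import torch.nn as nn

class SEResNet(nn.Module):
    # toy SE-ResNet: a stem conv, three SE blocks, and a classifier head
    def __init__(self, num_classes=10):
        super(SEResNet, self).__init__()
        self.stem = nn.Sequential(
            nn.Conv2d(3, 64, kernel_size=7, stride=2, padding=3),
            nn.BatchNorm2d(64),
            nn.ReLU(inplace=True),
            nn.MaxPool2d(kernel_size=3, stride=2, padding=1),
        )
        self.layers = nn.Sequential(
            SE(64, 64, 128),
            SE(128, 128, 256),
            SE(256, 256, 512),
        )
        # fixed-output global average pooling for the head
        self.pool = nn.AdaptiveAvgPool2d(1)
        self.fc = nn.Linear(512, num_classes)

    def forward(self, x):
        x = self.stem(x)
        x = self.layers(x)
        x = self.pool(x).flatten(1)
        return self.fc(x)

model = SEResNet()
logits = model(torch.randn(2, 3, 224, 224))
print(logits.shape)   # torch.Size([2, 10])

In a full SE-ResNet the stages would also downsample spatially (e.g. stride-2 convolutions between stages); that is omitted here to keep the sketch short.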