PyTorch fails to load a model onto the GPU
Normally, `model = model.to('cuda')` is all it takes.

The problem here was that the model's `forward` method directly called a function defined outside the class (the original post showed this in a screenshot, omitted here). That external function itself constructed layers such as `torch.nn.Conv2d`. Layers created this way are never registered as submodules of the model, so when the model is loaded onto the GPU, `.to()` cannot find these externally created layers or move their parameters, and you end up with a CPU/GPU device mismatch at runtime.
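A minimal sketch of the failure mode (the module names here are hypothetical, made up for illustration): a layer built inside an external helper called from `forward` never shows up in `model.parameters()`, so `.to('cuda')` has nothing to move.

```python
import torch
import torch.nn as nn

# Hypothetical helper defined OUTSIDE the class -- it builds a new layer
# every time it is called.
def conv_block(x):
    conv = nn.Conv2d(3, 8, 3, padding=1)  # created fresh on every forward pass
    return conv(x)

class BrokenNet(nn.Module):
    def forward(self, x):
        # The Conv2d inside conv_block is never registered on this module,
        # so model.to('cuda') cannot see or move its weights.
        return conv_block(x)

class FixedNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.conv = nn.Conv2d(3, 8, 3, padding=1)  # registered submodule

    def forward(self, x):
        return self.conv(x)

broken, fixed = BrokenNet(), FixedNet()
print(len(list(broken.parameters())))  # 0 -- nothing for .to() to move
print(len(list(fixed.parameters())))   # 2 -- weight and bias follow .to()
```

Because `BrokenNet` owns no parameters, calling `broken.to('cuda')` succeeds silently, and the mismatch only surfaces later when the CPU-resident `Conv2d` meets a CUDA input tensor.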
So inside a class's `forward` method, try not to call external functions that create layers. Either call methods of the class itself, or define the blocks in `__init__` (e.g. `self.block = ...`) and then use `self.block` inside `forward`. Try to write it like the following code:
```python
from .cspdarknet import CSP, CBL
import torch.nn as nn
import torch

def make_five_conv(ch_in, ch_out):
    return nn.Sequential(
        CBL(ch_in, ch_out, 1, p=0),
        CBL(ch_out, ch_out * 2, 3),
        CBL(ch_out * 2, ch_out, 1, p=0),
        CBL(ch_out, ch_out * 2, 3),
        CBL(ch_out * 2, ch_out, 1, p=0),
    )

def final_process(ch_in, ch_out):
    return nn.Sequential(
        CBL(ch_in, ch_out, 3),
        nn.Conv2d(ch_out, ch_out, 1, padding=0),
    )

class YOLOV3(nn.Module):
    def __init__(self, nc):
        super(YOLOV3, self).__init__()
        self.nc = nc
        self.bone = CSP()
        # The external helpers are called here, in __init__, so the layers
        # they build are assigned to self.* and registered as submodules.
        self.block1 = nn.Sequential(
            CBL(512, 256, 1, p=0),
            nn.UpsamplingBilinear2d(scale_factor=2),
        )
        self.block2 = nn.Sequential(
            CBL(256, 128, 1, p=0),
            nn.UpsamplingBilinear2d(scale_factor=2),
        )
        self.block3 = make_five_conv(1024, 512)
        self.block4 = final_process(512, (self.nc + 5) * 3)
        self.block5 = make_five_conv(768, 256)
        self.block6 = final_process(256, (self.nc + 5) * 3)
        self.block7 = make_five_conv(384, 128)
        self.block8 = final_process(128, (self.nc + 5) * 3)

    def forward(self, x):
        big_feat, middle_feat, small_feat = self.bone(x)
        # 1. small branch
        small_feat = self.block3(small_feat)
        out_small = self.block4(small_feat)
        # 2. middle branch
        up_small = self.block1(small_feat)
        cat_middle = torch.cat([middle_feat, up_small], dim=1)
        middle_set = self.block5(cat_middle)
        out_middle = self.block6(middle_set)
        # 3. big branch
        up_middle = self.block2(middle_set)
        cat_big = torch.cat([big_feat, up_middle], dim=1)
        big_set = self.block7(cat_big)
        out_big = self.block8(big_set)
        # 4. optionally reshape the outputs:
        # out_small = out_small.view(-1, 3, (5 + self.nc), out_small.shape[-2], out_small.shape[-1])
        # out_small = out_small.permute(0, 1, 3, 4, 2)
        # (same for out_middle and out_big)
        return out_small, out_middle, out_big
```
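Once every block is registered in `__init__`, a single `.to(device)` moves the whole model. A quick sanity check you can run after the move (a sketch; `check_device` is a hypothetical helper, and a stand-in module is used here since the real `YOLOV3` above needs `CSP`/`CBL` from `cspdarknet`):

```python
import torch
import torch.nn as nn

def check_device(model: nn.Module, device: torch.device) -> bool:
    """Return True if every registered parameter and buffer sits on `device`."""
    tensors = list(model.parameters()) + list(model.buffers())
    return all(t.device == device for t in tensors)

# Illustrative stand-in module.
net = nn.Sequential(nn.Conv2d(3, 16, 3), nn.BatchNorm2d(16))
device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
net = net.to(device)
print(check_device(net, device))  # True once everything is registered in __init__
```

If this check ever returns `False`, some layer is being created outside the module tree, which is exactly the bug described above.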