生活随笔
Using the onnx package to convert a .pth file to an .onnx file
This article, collected and organized here, is shared as a reference for anyone doing the same conversion.
This article compares converting the two kinds of .pth file to .onnx, and shows what each file looks like as a graph in Netron.
A .pth file containing only the parameters: cat_dog.pth
A .pth file containing both the parameters and the model structure: cat_dog_model_args.pth
An .onnx file containing both the parameters and the model structure: cat_dog_model_args.onnx
cat_dog_model.pth viewed in Netron (no network architecture)
Because a parameters-only checkpoint carries no network structure, it cannot be converted to ONNX with the code below; the model class must first be rebuilt in code and the weights loaded into it.
cat_dog_model_args.pth viewed in Netron
cat_dog_model_args.onnx viewed in Netron
First, convert cat_dog_model_args.pth to cat_dog_model_args.onnx.
Code:
import torch
import torchvision

dummy_input = torch.randn(1, 3, 224, 224)
model = torch.load('D:\***\swin_transformer_flower\cat_dog_model_args.pth')
model.eval()

input_names = ["input"]
output_names = ["output"]
torch.onnx.export(model, dummy_input, "cat_dog_model_args.onnx", verbose=True,
                  input_names=input_names, output_names=output_names)
Running the code above produces the following output:
graph(%input : Float(1:150528, 3:50176, 224:224, 224:1, requires_grad=0, device=cpu),
      %features.0.weight : Float(64:27, 3:9, 3:3, 3:1, requires_grad=0, device=cpu),
      %features.0.bias : Float(64:1, requires_grad=0, device=cpu),
      %features.2.weight : Float(64:576, 64:9, 3:3, 3:1, requires_grad=0, device=cpu),
      %features.2.bias : Float(64:1, requires_grad=0, device=cpu),
      %features.5.weight : Float(128:576, 64:9, 3:3, 3:1, requires_grad=0, device=cpu),
      %features.5.bias : Float(128:1, requires_grad=0, device=cpu),
      %features.7.weight : Float(128:1152, 128:9, 3:3, 3:1, requires_grad=0, device=cpu),
      %features.7.bias : Float(128:1, requires_grad=0, device=cpu),
      %features.10.weight : Float(256:1152, 128:9, 3:3, 3:1, requires_grad=0, device=cpu),
      %features.10.bias : Float(256:1, requires_grad=0, device=cpu),
      %features.12.weight : Float(256:2304, 256:9, 3:3, 3:1, requires_grad=0, device=cpu),
      %features.12.bias : Float(256:1, requires_grad=0, device=cpu),
      %features.14.weight : Float(256:2304, 256:9, 3:3, 3:1, requires_grad=0, device=cpu),
      %features.14.bias : Float(256:1, requires_grad=0, device=cpu),
      %features.17.weight : Float(512:2304, 256:9, 3:3, 3:1, requires_grad=0, device=cpu),
      %features.17.bias : Float(512:1, requires_grad=0, device=cpu),
      %features.19.weight : Float(512:4608, 512:9, 3:3, 3:1, requires_grad=0, device=cpu),
      %features.19.bias : Float(512:1, requires_grad=0, device=cpu),
      %features.21.weight : Float(512:4608, 512:9, 3:3, 3:1, requires_grad=0, device=cpu),
      %features.21.bias : Float(512:1, requires_grad=0, device=cpu),
      %features.24.weight : Float(512:4608, 512:9, 3:3, 3:1, requires_grad=0, device=cpu),
      %features.24.bias : Float(512:1, requires_grad=0, device=cpu),
      %features.26.weight : Float(512:4608, 512:9, 3:3, 3:1, requires_grad=0, device=cpu),
      %features.26.bias : Float(512:1, requires_grad=0, device=cpu),
      %features.28.weight : Float(512:4608, 512:9, 3:3, 3:1, requires_grad=0, device=cpu),
      %features.28.bias : Float(512:1, requires_grad=0, device=cpu),
      %classifier.0.weight : Float(100:25088, 25088:1, requires_grad=1, device=cpu),
      %classifier.0.bias : Float(100:1, requires_grad=1, device=cpu),
      %classifier.3.weight : Float(2:100, 100:1, requires_grad=1, device=cpu),
      %classifier.3.bias : Float(2:1, requires_grad=1, device=cpu)):
  %31 : Float(1:3211264, 64:50176, 224:224, 224:1, requires_grad=0, device=cpu) = onnx::Conv[dilations=[1, 1], group=1, kernel_shape=[3, 3], pads=[1, 1, 1, 1], strides=[1, 1]](%input, %features.0.weight, %features.0.bias)
  %32 : Float(1:3211264, 64:50176, 224:224, 224:1, requires_grad=0, device=cpu) = onnx::Relu(%31)
  %33 : Float(1:3211264, 64:50176, 224:224, 224:1, requires_grad=0, device=cpu) = onnx::Conv[dilations=[1, 1], group=1, kernel_shape=[3, 3], pads=[1, 1, 1, 1], strides=[1, 1]](%32, %features.2.weight, %features.2.bias)
  %34 : Float(1:3211264, 64:50176, 224:224, 224:1, requires_grad=0, device=cpu) = onnx::Relu(%33)
  %35 : Float(1:802816, 64:12544, 112:112, 112:1, requires_grad=0, device=cpu) = onnx::MaxPool[kernel_shape=[2, 2], pads=[0, 0, 0, 0], strides=[2, 2]](%34)
  %36 : Float(1:1605632, 128:12544, 112:112, 112:1, requires_grad=0, device=cpu) = onnx::Conv[dilations=[1, 1], group=1, kernel_shape=[3, 3], pads=[1, 1, 1, 1], strides=[1, 1]](%35, %features.5.weight, %features.5.bias)
  %37 : Float(1:1605632, 128:12544, 112:112, 112:1, requires_grad=0, device=cpu) = onnx::Relu(%36)
  %38 : Float(1:1605632, 128:12544, 112:112, 112:1, requires_grad=0, device=cpu) = onnx::Conv[dilations=[1, 1], group=1, kernel_shape=[3, 3], pads=[1, 1, 1, 1], strides=[1, 1]](%37, %features.7.weight, %features.7.bias)
  %39 : Float(1:1605632, 128:12544, 112:112, 112:1, requires_grad=0, device=cpu) = onnx::Relu(%38)
  %40 : Float(1:401408, 128:3136, 56:56, 56:1, requires_grad=0, device=cpu) = onnx::MaxPool[kernel_shape=[2, 2], pads=[0, 0, 0, 0], strides=[2, 2]](%39)
  %41 : Float(1:802816, 256:3136, 56:56, 56:1, requires_grad=0, device=cpu) = onnx::Conv[dilations=[1, 1], group=1, kernel_shape=[3, 3], pads=[1, 1, 1, 1], strides=[1, 1]](%40, %features.10.weight, %features.10.bias)
  %42 : Float(1:802816, 256:3136, 56:56, 56:1, requires_grad=0, device=cpu) = onnx::Relu(%41)
  %43 : Float(1:802816, 256:3136, 56:56, 56:1, requires_grad=0, device=cpu) = onnx::Conv[dilations=[1, 1], group=1, kernel_shape=[3, 3], pads=[1, 1, 1, 1], strides=[1, 1]](%42, %features.12.weight, %features.12.bias)
  %44 : Float(1:802816, 256:3136, 56:56, 56:1, requires_grad=0, device=cpu) = onnx::Relu(%43)
  %45 : Float(1:802816, 256:3136, 56:56, 56:1, requires_grad=0, device=cpu) = onnx::Conv[dilations=[1, 1], group=1, kernel_shape=[3, 3], pads=[1, 1, 1, 1], strides=[1, 1]](%44, %features.14.weight, %features.14.bias)
  %46 : Float(1:802816, 256:3136, 56:56, 56:1, requires_grad=0, device=cpu) = onnx::Relu(%45)
  %47 : Float(1:200704, 256:784, 28:28, 28:1, requires_grad=0, device=cpu) = onnx::MaxPool[kernel_shape=[2, 2], pads=[0, 0, 0, 0], strides=[2, 2]](%46)
  %48 : Float(1:401408, 512:784, 28:28, 28:1, requires_grad=0, device=cpu) = onnx::Conv[dilations=[1, 1], group=1, kernel_shape=[3, 3], pads=[1, 1, 1, 1], strides=[1, 1]](%47, %features.17.weight, %features.17.bias)
  %49 : Float(1:401408, 512:784, 28:28, 28:1, requires_grad=0, device=cpu) = onnx::Relu(%48)
  %50 : Float(1:401408, 512:784, 28:28, 28:1, requires_grad=0, device=cpu) = onnx::Conv[dilations=[1, 1], group=1, kernel_shape=[3, 3], pads=[1, 1, 1, 1], strides=[1, 1]](%49, %features.19.weight, %features.19.bias)
  %51 : Float(1:401408, 512:784, 28:28, 28:1, requires_grad=0, device=cpu) = onnx::Relu(%50)
  %52 : Float(1:401408, 512:784, 28:28, 28:1, requires_grad=0, device=cpu) = onnx::Conv[dilations=[1, 1], group=1, kernel_shape=[3, 3], pads=[1, 1, 1, 1], strides=[1, 1]](%51, %features.21.weight, %features.21.bias)
  %53 : Float(1:401408, 512:784, 28:28, 28:1, requires_grad=0, device=cpu) = onnx::Relu(%52)
  %54 : Float(1:100352, 512:196, 14:14, 14:1, requires_grad=0, device=cpu) = onnx::MaxPool[kernel_shape=[2, 2], pads=[0, 0, 0, 0], strides=[2, 2]](%53)
  %55 : Float(1:100352, 512:196, 14:14, 14:1, requires_grad=0, device=cpu) = onnx::Conv[dilations=[1, 1], group=1, kernel_shape=[3, 3], pads=[1, 1, 1, 1], strides=[1, 1]](%54, %features.24.weight, %features.24.bias)
  %56 : Float(1:100352, 512:196, 14:14, 14:1, requires_grad=0, device=cpu) = onnx::Relu(%55)
  %57 : Float(1:100352, 512:196, 14:14, 14:1, requires_grad=0, device=cpu) = onnx::Conv[dilations=[1, 1], group=1, kernel_shape=[3, 3], pads=[1, 1, 1, 1], strides=[1, 1]](%56, %features.26.weight, %features.26.bias)
  %58 : Float(1:100352, 512:196, 14:14, 14:1, requires_grad=0, device=cpu) = onnx::Relu(%57)
  %59 : Float(1:100352, 512:196, 14:14, 14:1, requires_grad=0, device=cpu) = onnx::Conv[dilations=[1, 1], group=1, kernel_shape=[3, 3], pads=[1, 1, 1, 1], strides=[1, 1]](%58, %features.28.weight, %features.28.bias)
  %60 : Float(1:100352, 512:196, 14:14, 14:1, requires_grad=0, device=cpu) = onnx::Relu(%59)
  %61 : Float(1:25088, 512:49, 7:7, 7:1, requires_grad=0, device=cpu) = onnx::MaxPool[kernel_shape=[2, 2], pads=[0, 0, 0, 0], strides=[2, 2]](%60)
  %62 : Float(1:25088, 512:49, 7:7, 7:1, requires_grad=0, device=cpu) = onnx::AveragePool[kernel_shape=[1, 1], strides=[1, 1]](%61)
  %63 : Float(1:25088, 25088:1, requires_grad=0, device=cpu) = onnx::Flatten[axis=1](%62)
  %64 : Float(1:100, 100:1, requires_grad=1, device=cpu) = onnx::Gemm[alpha=1., beta=1., transB=1](%63, %classifier.0.weight, %classifier.0.bias)
  %65 : Float(1:100, 100:1, requires_grad=1, device=cpu) = onnx::Relu(%64)
  %output : Float(1:2, 2:1, requires_grad=1, device=cpu) = onnx::Gemm[alpha=1., beta=1., transB=1](%65, %classifier.3.weight, %classifier.3.bias)
  return (%output)

Process finished with exit code 0
How to center an image:
Reference: centering images in a CSDN blog post
Simply append #pic_center to the end of the image's markup.
Summary
The above is everything 生活随笔 has collected on "Using the onnx package to convert a .pth file to an .onnx file"; hopefully it helps you solve the problems you ran into.