Converting a PyTorch MNIST CNN model to ONNX on a PC and deploying it to a Raspberry Pi 4B with the NCS2 neural compute stick (using the OpenVINO 2021 toolkit)
Prerequisite: OpenVINO is already installed and configured on both the PC and the Raspberry Pi.
Build a CNN in PyTorch on the PC to recognize handwritten digits
The MNIST dataset is downloaded automatically and used for training and evaluation. The code below is adapted from open-source examples:
```python
import torch
import torch.nn as nn
import torch.nn.functional as F
import torch.optim as optim
from torchvision import datasets, transforms

BATCH_SIZE = 256
EPOCHS = 1
DEVICE = 'cpu'

train_loader = torch.utils.data.DataLoader(
    datasets.MNIST('data', train=True, download=True,
                   transform=transforms.Compose([
                       transforms.ToTensor(),
                       transforms.Normalize((0.1307,), (0.3081,))])),
    batch_size=BATCH_SIZE, shuffle=True)

test_loader = torch.utils.data.DataLoader(
    datasets.MNIST('data', train=False,
                   transform=transforms.Compose([
                       transforms.ToTensor(),
                       transforms.Normalize((0.1307,), (0.3081,))])),
    batch_size=BATCH_SIZE, shuffle=True)


class ConvNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.conv1 = nn.Conv2d(1, 10, 5)     # 28x28 -> 24x24
        self.conv2 = nn.Conv2d(10, 20, 3)    # 12x12 -> 10x10
        self.fc1 = nn.Linear(20 * 10 * 10, 500)
        self.fc2 = nn.Linear(500, 10)

    def forward(self, x):
        in_size = x.size(0)
        out = self.conv1(x)
        out = F.relu(out)
        out = F.max_pool2d(out, 2, 2)        # 24x24 -> 12x12
        out = self.conv2(out)
        out = F.relu(out)
        out = out.view(in_size, -1)          # flatten to 20*10*10
        out = self.fc1(out)
        out = F.relu(out)
        out = self.fc2(out)
        out = F.log_softmax(out, dim=1)
        return out


def train(model, device, train_loader, optimizer, epoch):
    model.train()
    for batch_idx, (data, target) in enumerate(train_loader):
        data, target = data.to(device), target.to(device)
        optimizer.zero_grad()
        output = model(data)
        loss = F.nll_loss(output, target)
        loss.backward()
        optimizer.step()
        if (batch_idx + 1) % 30 == 0:
            print('Train Epoch: {} [{}/{} ({:.0f}%)]\tLoss: {:.6f}'.format(
                epoch, batch_idx * len(data), len(train_loader.dataset),
                100. * batch_idx / len(train_loader), loss.item()))


def test(model, device, test_loader):
    model.eval()
    test_loss = 0
    correct = 0
    with torch.no_grad():
        for data, target in test_loader:
            data, target = data.to(device), target.to(device)
            output = model(data)
            test_loss += F.nll_loss(output, target, reduction='sum').item()
            pred = output.max(1, keepdim=True)[1]
            correct += pred.eq(target.view_as(pred)).sum().item()
    test_loss /= len(test_loader.dataset)
    print('\nTest set: Average loss: {:.4f}, Accuracy: {}/{} ({:.0f}%)\n'.format(
        test_loss, correct, len(test_loader.dataset),
        100. * correct / len(test_loader.dataset)))


if __name__ == '__main__':
    model = ConvNet().to(DEVICE)
    optimizer = optim.Adam(model.parameters())
    for epoch in range(1, EPOCHS + 1):
        train(model, DEVICE, train_loader, optimizer, epoch)
        test(model, DEVICE, test_loader)
    torch.save(model, './MNIST.pth')

    # Export to ONNX with a dummy 1x1x28x28 input
    model.eval()
    dummy_input = torch.randn(1, 1, 28, 28)
    torch.onnx.export(model, dummy_input, 'model.onnx')
```
After running the script, a model.onnx file should appear in the working directory.
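If you want to sanity-check the exported model before converting it, a quick test with the onnx and onnxruntime packages (my addition; they are not required anywhere else in this walkthrough) might look like this:

```python
# Optional sanity check for the exported ONNX file.
# Assumes the `onnx` and `onnxruntime` packages are installed.
import numpy as np
import onnx
import onnxruntime as ort

onnx_model = onnx.load('model.onnx')
onnx.checker.check_model(onnx_model)   # raises if the graph is malformed

sess = ort.InferenceSession('model.onnx')
dummy = np.random.randn(1, 1, 28, 28).astype(np.float32)
out = sess.run(None, {sess.get_inputs()[0].name: dummy})
print(out[0].shape)                    # expected: (1, 10) log-probabilities
```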
Export MNIST images on the PC for testing
Note the conversion paths below and adjust them to your own setup:
```python
import os
from skimage import io
import torchvision.datasets.mnist as mnist

root = "D:\\pycharm_project\\MNIST2\\data\\MNIST\\raw\\"

train_set = (
    mnist.read_image_file(os.path.join(root, 'train-images-idx3-ubyte')),
    mnist.read_label_file(os.path.join(root, 'train-labels-idx1-ubyte')))
test_set = (
    mnist.read_image_file(os.path.join(root, 't10k-images-idx3-ubyte')),
    mnist.read_label_file(os.path.join(root, 't10k-labels-idx1-ubyte')))

print("training set :", train_set[0].size())
print("test set :", test_set[0].size())


def convert_to_img(train=True):
    if train:
        f = open(root + 'train.txt', 'w')
        data_path = root + '/train/'
        if not os.path.exists(data_path):
            os.makedirs(data_path)
        for i, (img, label) in enumerate(zip(train_set[0], train_set[1])):
            img_path = data_path + str(i) + '.jpg'
            io.imsave(img_path, img.numpy())
            f.write(img_path + ' ' + str(label) + '\n')
        f.close()
    else:
        f = open(root + 'test.txt', 'w')
        data_path = root + '/test/'
        if not os.path.exists(data_path):
            os.makedirs(data_path)
        for i, (img, label) in enumerate(zip(test_set[0], test_set[1])):
            img_path = data_path + str(i) + '.jpg'
            io.imsave(img_path, img.numpy())
            f.write(img_path + ' ' + str(label) + '\n')
        f.close()


convert_to_img(True)
convert_to_img(False)
```
All the exported images can now be found under the \raw\test folder. Copy a few of them into the main project directory; they will be used for verification later.
Convert the model to .xml and .bin with OpenVINO on the PC
Open a cmd window and cd into the Model Optimizer directory (adjust to your own OpenVINO install location):
C:\Program Files (x86)\Intel\openvino_2021.4.752\deployment_tools\model_optimizer
You should see mo.py (and mo_onnx.py) there. Run the command below, where:
--input_model is the path to the input model (the .onnx file)
--output_dir is the directory where the generated .xml and .bin files are written
--batch 1 fixes the batch size at 1
--data_type=FP16 converts the weights to FP16, the precision expected by the NCS2 on the Raspberry Pi
python mo_onnx.py --input_model D:\pycharm_project\MNIST2\model.onnx --output_dir D:\pycharm_project\MNIST2\ --batch 1 --data_type=FP16
If the .xml and .bin files appear under the --output_dir path, the conversion succeeded.
Run inference with OpenVINO on the PC
Inference uses OpenVINO's IECore engine. A few of the imports in the script below were only used for debugging and do not affect the result; in particular, the PyTorch framework is not actually needed for inference.
Part of the code follows p8 of this video tutorial:
【基于 Python 的 OpenVINO 開發實戰教程】 https://www.bilibili.com/video/BV1Xq4y177Sw?p=8&share_source=copy_web&vd_source=439a535f314073204168dbf828e16e2c
The video uses OpenVINO 2020, whose Python API differs from the 2021.4 release used here; for example, net.inputs can no longer be used. The adapted code is given below.
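For reference, the change amounts to switching from the old attribute to input_info, roughly like this:

```python
# OpenVINO 2020.x style (no longer usable in 2021.4):
# input_blob = next(iter(net.inputs))
# n, c, h, w = net.inputs[input_blob].shape

# OpenVINO 2021.4 style used in this post:
input_blob = next(iter(net.input_info))
n, c, h, w = net.input_info[input_blob].input_data.shape
```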
```python
import cv2
import numpy as np
import torch  # imported only for debugging; not used for inference
from openvino.inference_engine import IECore, IENetwork
from time import time

ie = IECore()
for device in ie.available_devices:
    print(device)

DEVICE = 'CPU'
model_xml = 'model.xml'
model_bin = 'model.bin'
image_file = '7.jpg'

net = ie.read_network(model=model_xml, weights=model_bin)

print("Preparing input blobs")
input_blob = next(iter(net.input_info))
out_blob = next(iter(net.outputs))
net.batch_size = 1

print("Loading IR to the plugin...")
exec_net = ie.load_network(network=net, num_requests=1, device_name=DEVICE)

print('read picture')
n, c, h, w = net.input_info[input_blob].input_data.shape
print(n, c, h, w)
image1 = cv2.imread(image_file, 0)   # read as grayscale
print(type(image1))
image = cv2.resize(image1, (w, h))

start = time()
res = exec_net.infer(inputs={input_blob: image})
end = time()
print("Infer Time:{}s".format(end - start))

print("Processing output blob")
res = res[out_blob]


def softmax(X):
    X_exp = np.exp(X)
    partition = X_exp.sum()
    return X_exp / partition


X_prob = softmax(res)
X_prob = np.squeeze(X_prob, 0)
print(np.where(X_prob.flat == 1)[0])

cv2.imshow("Detection results", image)
cv2.waitKey(0)
cv2.destroyAllWindows()
```
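One caveat worth noting: training normalized the images with transforms.Normalize((0.1307,), (0.3081,)), while the script above feeds the raw resized grayscale image straight into infer(). If the predictions look unreliable, a preprocessing step that mirrors the training transform can be inserted before the inference call; a minimal sketch (my addition, using only variables already defined above):

```python
# Reproduce the training-time ToTensor + Normalize by hand before inference.
blob = image.astype(np.float32) / 255.0      # ToTensor: scale uint8 to [0, 1]
blob = (blob - 0.1307) / 0.3081              # Normalize((0.1307,), (0.3081,))
blob = blob.reshape(1, 1, h, w)              # NCHW layout expected by the IR
res = exec_net.infer(inputs={input_blob: blob})
```

Likewise, np.argmax(res) is a more robust way to read out the predicted class than checking for a probability exactly equal to 1.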
Test the model on the Raspberry Pi with OpenVINO
Copy the test image, the .xml and .bin files, and a loader script to the Raspberry Pi, plug in the Neural Compute Stick, then cd into the folder and run the .py file with python3.
The script is almost identical to the one above; the key difference is DEVICE = 'MYRIAD', so that inference runs on the NCS2.
```python
import cv2
import numpy as np
from openvino.inference_engine import IECore, IENetwork
from time import time

ie = IECore()
for device in ie.available_devices:
    print(device)

DEVICE = 'MYRIAD'
model_xml = 'model.xml'
model_bin = 'model.bin'
image_file = '7.jpg'

net = ie.read_network(model=model_xml, weights=model_bin)

print("Preparing input blobs")
input_blob = next(iter(net.input_info))
out_blob = next(iter(net.outputs))
net.batch_size = 1

print("Loading IR to the plugin...")
exec_net = ie.load_network(network=net, num_requests=1, device_name=DEVICE)

print('read picture')
n, c, h, w = net.input_info[input_blob].input_data.shape
print(n, c, h, w)
image1 = cv2.imread(image_file, 0)
print(type(image1))
image = cv2.resize(image1, (w, h))

start = time()
res = exec_net.infer(inputs={input_blob: image})
end = time()
print("Infer Time:{}s".format(end - start))

print("Processing output blob")
res = res[out_blob]


def softmax(x):
    x = np.exp(x)
    p = x.sum()
    return x / p


x = softmax(res)
x = np.squeeze(x, 0)
print(np.where(x.flat == 1)[0])

cv2.imshow("Detection results", image)
cv2.waitKey(0)
cv2.destroyAllWindows()
```
This prints the predicted result.
Note that the script cannot be run from the Raspberry Pi's built-in Thonny IDE, because the OpenVINO environment configured during installation is not picked up there, so it has to be run from the command line.
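A minimal sketch of the commands on the Pi (the install path and the script name mnist_ncs2.py are assumptions; adjust them to your own setup):

```bash
# Assumed default OpenVINO install location on the Pi; adjust if different.
source /opt/intel/openvino_2021/bin/setupvars.sh
# Hypothetical file name for the inference script shown above.
python3 mnist_ncs2.py
```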
OpenVINO downloads: https://www.intel.com/content/www/us/en/developer/tools/openvino-toolkit/download.html
Summary
The PyTorch MNIST CNN was trained on the PC, exported to ONNX, converted to OpenVINO IR (.xml and .bin) with the Model Optimizer, verified with the CPU plugin on the PC, and finally run on the Raspberry Pi 4B through the NCS2 (MYRIAD) plugin.