Deploying a Saved Keras Model with ONNX + TensorRT
For a recent project I needed to deploy a trained model with TensorRT to get faster inference. I spent a great deal of time searching Zhihu and other pages for someone else's relatively clear, step-by-step write-up, but never found what I wanted (perhaps I just missed it). So I am recording the process here, both as notes for myself and in the hope of sparing others a detour or two.
Since my previous projects have all used TensorFlow and Keras, this post covers migrating a model from Keras to TensorRT. The route is: first convert the HDF5 model saved by Keras to an ONNX model, then convert the ONNX model to a TensorRT engine. Two considerations drove this choice. First, the TensorRT website only offers tutorials for converting TensorFlow models; there is no tutorial for deploying a Keras model directly. Second, I may switch to the PyTorch camp eventually (I have been saying so for half a year now; the TF API makes my eyes glaze over), and then I would only need to learn the PyTorch-to-ONNX step. Without further ado, here is my deployment log.
First, a note: TensorRT is the inference engine NVIDIA built specifically for its own GPUs; for TPUs, CPUs, FPGAs and the like, look at TVM instead. This post assumes TensorRT is already installed. If it is not, the English tutorial on the official site is very detailed, and combined with other users' Chinese installation write-ups you can get it set up quickly.
Keras To Onnx
這一步主要是將keras保存的hdf5模型轉為onnx后綴的模型,主要代碼如下:
```python
import keras2onnx
import onnx
from keras.models import load_model

model = load_model('testonnx.hdf5')
onnx_model = keras2onnx.convert_keras(model, model.name)
temp_model_file = 'result.onnx'
onnx.save_model(onnx_model, temp_model_file)
```

The code is simple and falls into three steps: load the Keras model, convert it to an ONNX model, and save the ONNX model. Note that the loaded HDF5 file must contain both the network architecture and the weights; weights alone are not enough.
In addition, I hit some errors during deployment, mainly custom layers and custom losses that the converter could not recognize. I have not yet dug into how to convert custom functions (I will add that later), so I took a couple of workarounds: replace custom layers with built-in Keras layers wherever possible (a small accuracy hit is acceptable), and for a custom loss used during training, reload the model, recompile it with a built-in Keras loss, and save it again; the error then no longer appears.
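The recompile trick above can be sketched as follows. This is a minimal hedged example, not the original project's code: the file names, the `Dense` stand-in network, and the custom loss `my_custom_loss` are all hypothetical.

```python
# Minimal sketch of the "recompile with a built-in loss" workaround.
# NOTE: file names, the toy network, and the custom loss are hypothetical.
import numpy as np
from keras.models import Sequential, load_model
from keras.layers import Dense
import keras.backend as K

def my_custom_loss(y_true, y_pred):
    # Hypothetical custom loss used during training.
    return K.mean(K.square(y_pred - y_true))

# Build and save a stand-in for the trained model.
model = Sequential([Dense(1, input_shape=(4,))])
model.compile(optimizer='adam', loss=my_custom_loss)
model.save('custom_loss_model.hdf5')

# Loading requires custom_objects because of the custom loss...
model = load_model('custom_loss_model.hdf5',
                   custom_objects={'my_custom_loss': my_custom_loss})
# ...so recompile with a built-in loss and save again. The new file
# can then be loaded (and fed to keras2onnx) without custom_objects.
model.compile(optimizer='adam', loss='mse')
model.save('testonnx.hdf5')
clean = load_model('testonnx.hdf5')  # no custom_objects needed now
```

The re-saved `testonnx.hdf5` contains the same architecture and weights; only the compile-time loss differs, which does not matter for inference-only deployment.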
If nothing goes wrong, a file ending in .onnx is produced and the conversion is complete. The next step is converting the ONNX model into a TensorRT engine.
Onnx To Tensorrt
When you install TensorRT, some official samples come with it. Taking TensorRT 6 as an example, in samples/python/introductory_parser_samples there is a file named onnx_resnet50.py, which demonstrates converting resnet50.onnx to a TensorRT engine. Using that file as a base, here is how to convert our own ONNX model.
The code first.
```python
from PIL import Image
import numpy as np
import pycuda.driver as cuda
import time
import tensorrt as trt
import sys, os
sys.path.insert(1, os.path.join(sys.path[0], ".."))
import common

class ModelData(object):
    MODEL_PATH = "result.onnx"
    INPUT_SHAPE = (1, 512, 512)
    # We can convert TensorRT data types to numpy types with trt.nptype()
    DTYPE = trt.float32

# You can set the logger severity higher to suppress messages (or lower to display more messages).
TRT_LOGGER = trt.Logger(trt.Logger.WARNING)

# Allocate host and device buffers, and create a stream.
def allocate_buffers(engine):
    # Determine dimensions and create page-locked memory buffers
    # (i.e. won't be swapped to disk) to hold host inputs/outputs.
    h_input = cuda.pagelocked_empty(trt.volume(engine.get_binding_shape(0)), dtype=trt.nptype(ModelData.DTYPE))
    h_output = cuda.pagelocked_empty(trt.volume(engine.get_binding_shape(1)), dtype=trt.nptype(ModelData.DTYPE))
    # Allocate device memory for inputs and outputs.
    d_input = cuda.mem_alloc(h_input.nbytes)
    d_output = cuda.mem_alloc(h_output.nbytes)
    # Create a stream in which to copy inputs/outputs and run inference.
    stream = cuda.Stream()
    return h_input, d_input, h_output, d_output, stream

def do_inference(context, h_input, d_input, h_output, d_output, stream):
    # Transfer input data to the GPU.
    cuda.memcpy_htod_async(d_input, h_input, stream)
    # Run inference.
    context.execute_async(bindings=[int(d_input), int(d_output)], stream_handle=stream.handle)
    # Transfer predictions back from the GPU.
    cuda.memcpy_dtoh_async(h_output, d_output, stream)
    # Synchronize the stream.
    stream.synchronize()

# The Onnx path is used for Onnx models.
def build_engine_onnx(model_file):
    with trt.Builder(TRT_LOGGER) as builder, builder.create_network() as network, trt.OnnxParser(network, TRT_LOGGER) as parser:
        builder.max_workspace_size = common.GiB(1)
        # Load the Onnx model and parse it in order to populate the TensorRT network.
        with open(model_file, 'rb') as model:
            parser.parse(model.read())
        # Explicitly mark the last layer's output as the network output.
        last_layer = network.get_layer(network.num_layers - 1)
        network.mark_output(last_layer.get_output(0))
        return builder.build_cuda_engine(network)

def load_normalized_test_case(test_image, pagelocked_buffer):
    # Converts the input image to a CHW Numpy array.
    def normalize_image(image):
        # Resize, antialias and transpose the image to CHW.
        c, h, w = ModelData.INPUT_SHAPE
        image_arr = np.asarray(image.resize((w, h), Image.ANTIALIAS))
        image_arr = np.reshape(image_arr, image_arr.shape + (1,))
        image_arr = image_arr.transpose([2, 0, 1])
        image_arr = image_arr.astype(trt.nptype(ModelData.DTYPE))
        image_arr = image_arr.ravel()
        # This model requires some preprocessing, specifically mean normalization.
        return (image_arr / 255.0 - 0.45) / 0.225
    # Normalize the image and copy to pagelocked memory.
    np.copyto(pagelocked_buffer, normalize_image(Image.open(test_image)))
    return test_image

def main():
    onnx_model_file = 'result.onnx'
    # Build a TensorRT engine.
    with build_engine_onnx(onnx_model_file) as engine:
        # Inference is the same regardless of which parser built the engine.
        # Allocate buffers and create a CUDA stream.
        h_input, d_input, h_output, d_output, stream = allocate_buffers(engine)
        with engine.create_execution_context() as context:
            # Time 100 runs to get the average per-image cost.
            starttime = time.time()
            for i in range(100):
                test_image = 'test.jpg'
                # Load a normalized test case into the host input page-locked buffer.
                test_case = load_normalized_test_case(test_image, h_input)
                # Run the engine.
                do_inference(context, h_input, d_input, h_output, d_output, stream)
            endtime = time.time()
            pertime = (endtime - starttime) / 100
            print('perimg cost ' + str(pertime))

if __name__ == '__main__':
    main()
```

Most of the code above comes straight from onnx_resnet50.py, so you can compare the two side by side. The main changes are:
(1) The initialization section: the model's input size, i.e. the size of the image fed into the network after all preprocessing.
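For reference, the host buffer size that `allocate_buffers` ends up requesting can be worked out by hand from this input shape. The small sketch below (pure NumPy, using the (1, 512, 512) shape from the code above) mirrors what `trt.volume` computes for the input binding:

```python
import numpy as np

# The element count of a binding is just the product of its dimensions,
# which is what trt.volume(engine.get_binding_shape(0)) returns.
INPUT_SHAPE = (1, 512, 512)   # CHW: single-channel 512x512 image
num_elements = int(np.prod(INPUT_SHAPE))

# ModelData.DTYPE = trt.float32 corresponds to np.float32 (4 bytes).
nbytes = num_elements * np.dtype(np.float32).itemsize

# Size of the page-locked host input buffer allocate_buffers creates.
print(num_elements, nbytes)   # 262144 elements -> 1 MiB for float32
```

If you change INPUT_SHAPE for your own model, this is the amount of page-locked host memory and device memory each of the input buffers will occupy.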
(2) The build_engine_onnx function. Below is a comparison of the original function from onnx_resnet50.py and my modified version.
```python
# Modified
def build_engine_onnx(model_file):
    with trt.Builder(TRT_LOGGER) as builder, builder.create_network() as network, trt.OnnxParser(network, TRT_LOGGER) as parser:
        builder.max_workspace_size = common.GiB(1)
        # Load the Onnx model and parse it in order to populate the TensorRT network.
        with open(model_file, 'rb') as model:
            parser.parse(model.read())
        # Explicitly mark the last layer's output as the network output.
        last_layer = network.get_layer(network.num_layers - 1)
        network.mark_output(last_layer.get_output(0))
        return builder.build_cuda_engine(network)

# Original
def build_engine_onnx(model_file):
    with trt.Builder(TRT_LOGGER) as builder, builder.create_network() as network, trt.OnnxParser(network, TRT_LOGGER) as parser:
        builder.max_workspace_size = common.GiB(1)
        # Load the Onnx model and parse it in order to populate the TensorRT network.
        with open(model_file, 'rb') as model:
            parser.parse(model.read())
        return builder.build_cuda_engine(network)
```

The main change is the addition of two lines inside the with block, just before the engine is built:
```python
last_layer = network.get_layer(network.num_layers - 1)
network.mark_output(last_layer.get_output(0))
```

Without this modification, the following error is reported:
```
[TensorRT] ERROR: Network must have at least one output
```

The added lines explicitly mark the model's output tensor.
That is the full record of deploying a Keras model with ONNX + TensorRT; I hope it spares you a detour. As for the speed-up: I only converted the model, without any further optimization, and got roughly a 50% improvement in inference speed. I will keep optimizing and may add a proper before/after comparison later.