Table of contents
- Overview
- Environment setup
- Model preparation
- Exporting the MobileSAM ONNX model
- Exporting the MobileSAM preprocessing ONNX model
- Running
- CMakeLists
- Results
Overview
Compared with deploying the SAM model offline in Python, deployment in C++ is considerably more involved. The process in this post is based mainly on the project https://github.com/dinglufe/segment-anything-cpp-wrapper
Environment setup
Model preparation
The main reason for deploying in C++ is to improve runtime efficiency and cut inference latency. The official SAM release provides three checkpoints of different sizes: vit_h, vit_l, and vit_b. In our tests with vit_h, the end-to-end time for a single frame (loading the image + inference + generating and displaying the mask) reached 6000 ms, so we judged all three SAM checkpoints unsuitable for C++ deployment and ultimately chose MobileSAM as the model to deploy.
The project needs two models: the segmentation model mobile_sam.onnx and the preprocessing model mobile_sam_preprocess.onnx.
Existing projects and blog posts do explain how to obtain these two models, but the instructions are too high-level to be beginner-friendly, and we took plenty of wrong turns the first time through. Detailed steps are given below.
Exporting the MobileSAM ONNX model
Shortcut for the lazy:
https://download.csdn.net/download/qq_43649786/89380411
The official MobileSAM project documents this step: https://github.com/ChaoningZhang/MobileSAM#onnx-export
It is quite detailed; just note that onnx==1.12.0 and onnxruntime==1.13.1 are required (a pip command is given after the setup steps below).
- Create and activate a conda environment
conda create --name mobilesam python=3.8
conda activate mobilesam
- Download the source code and set up the environment (pytorch and torchvision are assumed to be installed already)
pip install git+https://github.com/ChaoningZhang/MobileSAM.git
# Not needed unless you plan to run app.py
pip install gradio
# If Spyder no longer opens after the installs, run:
pip install Spyder
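To pin the ONNX versions noted earlier, install them explicitly (my suggested command, matching the versions from the official instructions):
pip install onnx==1.12.0 onnxruntime==1.13.1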
- Run the ONNX export script
Note that your working directory must be the downloaded source tree:
python scripts/export_onnx_model.py --checkpoint ./weights/mobile_sam.pt --model-type vit_t --output ./mobile_sam.onnx
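To confirm the export worked, a quick check with onnxruntime (my own addition, not part of the official instructions) is to list the model's inputs; a SAM-style decoder export typically exposes the prompt inputs named in the comment below:
import onnxruntime as ort

session = ort.InferenceSession('mobile_sam.onnx')
# Expect names like: image_embeddings, point_coords, point_labels,
# mask_input, has_mask_input, orig_im_size
print([i.name for i in session.get_inputs()])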
If you still can't get it working with steps this detailed, my friend, there is really nothing more I can do for you.
Exporting the MobileSAM preprocessing ONNX model
Shortcut for the lazy:
https://download.csdn.net/download/qq_43649786/89380451
The deployment project provides the export code for the preprocessing model:
https://github.com/dinglufe/segment-anything-cpp-wrapper/blob/main/export_pre_model.py
A few points still need attention. First, in the imports, change import segment_anything as SAM to import mobile_sam as SAM.
Note that this fails if MobileSAM has not been set up in the conda environment, and installing SAM and MobileSAM into the same conda environment can also cause errors; installing them in separate environments is recommended.
# import segment_anything as SAM
import mobile_sam as SAM
This step also needs the MobileSAM .pt checkpoint, which can be downloaded from the official project (place mobile_sam.pt next to the export script, since the code loads it from the working directory):
https://github.com/ChaoningZhang/MobileSAM#onnx-export
Full code:
import torch
import numpy as np
import os

# Changed from segment_anything.utils.transforms so that only MobileSAM
# needs to be installed (see the note above)
from mobile_sam.utils.transforms import ResizeLongestSide
from onnxruntime.quantization import QuantType
from onnxruntime.quantization.quantize import quantize_dynamic

output_names = ['output']

# Generate the image-embedding (preprocessing) model in ONNX format
# Mobile-SAM
# Download the Mobile-SAM checkpoint "mobile_sam.pt" from https://github.com/ChaoningZhang/MobileSAM/blob/master/weights/mobile_sam.pt
import mobile_sam as SAM

checkpoint = 'mobile_sam.pt'
model_type = 'vit_t'
output_path = 'models/mobile_sam_preprocess.onnx'
quantize = False

# Target image size is 1024x720
image_size = (1024, 720)

output_raw_path = output_path
if quantize:
    # The raw directory can be deleted after the quantization is done
    output_name = os.path.basename(output_path).split('.')[0]
    output_raw_path = '{}/{}_raw/{}.onnx'.format(
        os.path.dirname(output_path), output_name, output_name)
os.makedirs(os.path.dirname(output_raw_path), exist_ok=True)

sam = SAM.sam_model_registry[model_type](checkpoint=checkpoint)
sam.to(device='cpu')

transform = ResizeLongestSide(sam.image_encoder.img_size)

# Trace the model with a dummy frame of the target size
image = np.zeros((image_size[1], image_size[0], 3), dtype=np.uint8)
input_image = transform.apply_image(image)
input_image_torch = torch.as_tensor(input_image, device='cpu')
input_image_torch = input_image_torch.permute(
    2, 0, 1).contiguous()[None, :, :, :]


class Model(torch.nn.Module):
    def __init__(self, image_size, checkpoint, model_type):
        super().__init__()
        self.sam = SAM.sam_model_registry[model_type](checkpoint=checkpoint)
        self.sam.to(device='cpu')
        self.predictor = SAM.SamPredictor(self.sam)
        self.image_size = image_size

    def forward(self, x):
        # set_torch_image runs SAM's preprocessing and the image encoder
        self.predictor.set_torch_image(x, (self.image_size))
        if 'interm_embeddings' not in output_names:
            return self.predictor.get_image_embedding()
        else:
            return self.predictor.get_image_embedding(), torch.stack(self.predictor.interm_features, dim=0)


model = Model(image_size, checkpoint, model_type)
model_trace = torch.jit.trace(model, input_image_torch)
torch.onnx.export(model_trace, input_image_torch, output_raw_path,
                  input_names=['input'], output_names=output_names)

if quantize:
    quantize_dynamic(
        model_input=output_raw_path,
        model_output=output_path,
        per_channel=False,
        reduce_range=False,
        weight_type=QuantType.QUInt8,
    )
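Before moving on to the C++ side, it is worth sanity-checking the exported preprocessing model. This snippet is my own addition, not part of the original script; the input shape and dtype follow the 1024x720 export settings above:
import numpy as np
import onnxruntime as ort

# Load the exported preprocessing model on the CPU
session = ort.InferenceSession('models/mobile_sam_preprocess.onnx',
                               providers=['CPUExecutionProvider'])

# The model was traced with a (1, 3, 720, 1024) uint8 tensor
# (the 1024x720 dummy frame after ResizeLongestSide and HWC -> NCHW)
dummy = np.zeros((1, 3, 720, 1024), dtype=np.uint8)
(embedding,) = session.run(None, {'input': dummy})

# The MobileSAM image encoder should produce a (1, 256, 64, 64) embedding
print(embedding.shape)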
Running
CMakeLists
cmake_minimum_required(VERSION 3.21)
set(CMAKE_CXX_STANDARD 17)
project(SamCPP)

find_package(OpenCV CONFIG REQUIRED)
find_package(gflags CONFIG REQUIRED)

# Adjust this to wherever your onnxruntime package is unpacked
set(ONNXRUNTIME_ROOT_DIR /home/ubuntu/onnxruntime-linux-x64-gpu-1.14.1)

add_library(sam_cpp_lib SHARED sam.h sam.cpp click_sample.cpp)
set(onnxruntime_lib ${ONNXRUNTIME_ROOT_DIR}/lib/libonnxruntime.so)
target_include_directories(sam_cpp_lib PRIVATE ${ONNXRUNTIME_ROOT_DIR}/include)
target_link_libraries(sam_cpp_lib PRIVATE
  ${onnxruntime_lib}
  ${OpenCV_LIBS}
)

add_executable(sam_cpp_test test.cpp)
target_link_libraries(sam_cpp_test PRIVATE
  sam_cpp_lib
  ${OpenCV_LIBS}
  gflags
)
If anything is missing, just install it.
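The onnxruntime package referenced by ONNXRUNTIME_ROOT_DIR is a prebuilt release; assuming the Linux x64 GPU build 1.14.1 used above, it can be fetched and unpacked like this:
wget https://github.com/microsoft/onnxruntime/releases/download/v1.14.1/onnxruntime-linux-x64-gpu-1.14.1.tgz
tar -xzf onnxruntime-linux-x64-gpu-1.14.1.tgz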
Update the paths in test.cpp:
DEFINE_string(pre_model, "models/mobile_sam_preprocess.onnx", "Path to the preprocessing model");
DEFINE_string(sam_model, "models/mobile_sam.onnx", "Path to the sam model");
DEFINE_string(image, "images/input.jpg", "Path to the image to segment");
DEFINE_string(pre_device, "cpu", "cpu or cuda:0(1,2,3...)");
DEFINE_string(sam_device, "cpu", "cpu or cuda:0(1,2,3...)");
Make sure all of the paths above are correct and point to files that exist (the flags can also be overridden on the command line; see the example after the build steps).
Open a terminal in the project root directory.
Build:
mkdir build
cd build
cmake ..
make -j2
cd ..
./build/sam_cpp_test
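Because the paths are gflags definitions, they can also be overridden at the command line instead of editing test.cpp; for example (paths here are illustrative):
./build/sam_cpp_test -pre_model models/mobile_sam_preprocess.onnx -sam_model models/mobile_sam.onnx -image images/input.jpg -pre_device cpu -sam_device cpu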
Results
If you have read this far, give the post a like before you go, my friend.