9.7 Building YOLOv10 ONNX Inference in Visual Studio (C++)

1. Environment Setup

Before running ONNX inference, the following environment components need to be configured (a small verification sketch follows the list):

1. OpenCV configuration; see the blog post "9.2 Setting up an OpenCV environment in C++" (CSDN blog).

2. libtorch configuration; see the blog post "9.4 Configuring CUDA and torch in Visual Studio 2022 (C++)" (CSDN blog).

3. CUDA configuration; see the same blog post "9.4 Configuring CUDA and torch in Visual Studio 2022 (C++)" (CSDN blog).

4. ONNX Runtime configuration; see the blog post "Configuring the ONNXRuntime C++ environment in VS2019 (microsoft.ml.onnxruntime)" (CSDN blog).
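
A quick way to verify the setup before building the full project is a minimal version-check program like the sketch below. It is only an illustrative sketch (the file name check_env.cpp is arbitrary, and the libtorch check is omitted): if it compiles, links and prints the versions, the OpenCV and ONNX Runtime paths in the Visual Studio project are wired up correctly.

check_env.cpp

// Minimal environment sanity check (illustrative sketch, not part of the YOLO project).
// Prints the OpenCV and ONNX Runtime versions and the number of CUDA devices
// visible to OpenCV (0 if OpenCV was built without CUDA support).
#include <iostream>
#include <opencv2/opencv.hpp>
#include <opencv2/core/cuda.hpp>        // cv::cuda::getCudaEnabledDeviceCount()
#include "onnxruntime_cxx_api.h"

int main()
{
    std::cout << "OpenCV version       : " << CV_VERSION << std::endl;
    std::cout << "CUDA devices (OpenCV): " << cv::cuda::getCudaEnabledDeviceCount() << std::endl;
    std::cout << "ONNX Runtime version : " << OrtGetApiBase()->GetVersionString() << std::endl;
    return 0;
}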

2. YOLOv10 C++ Code

The code below has been partially modified from the original and debugged until it ran successfully. The full source is as follows:

main.cpp

#include <iostream>
//#include <getopt.h>
#include "yolov5v8_dnn.h"
#include "yolov5v8_ort.h"

using namespace std;
using namespace cv;

int main(int argc, char** argv)
{
    string img_path = "E:\\vs\\daima\\1_8\\Project1\\x64\\Release\\street.jpg";
    string model_path = "E:\\vs\\daima\\1_8\\Project1\\x64\\Release\\yolov8n-seg.onnx";
    string test_cls = "dnn";
    if (test_cls == "dnn") {
        // Input the path of model ("yolov8s.onnx" or "yolov5s.onnx") to run Inference with yolov8/yolov5 (ONNX)
        // Note that in this example the classes are hard-coded and 'classes.txt' is a place holder.
        Inference inf(model_path, cv::Size(640, 640), "classes.txt", true);
        cv::Mat frame = cv::imread(img_path);
        std::vector<Detection> output = inf.runInference(frame);
        if (output.size() != 0) inf.DrawPred(frame, output);
        else cout << "Detect Nothing!" << endl;
    }

    if (test_cls == "ort") {
        DCSP_CORE* yoloDetector = new DCSP_CORE;
#ifdef USE_CUDA
        //DCSP_INIT_PARAM params{ model_path, YOLO_ORIGIN_V5, {640, 640},  0.25, 0.45, 0.5, true }; // GPU FP32 inference
        DCSP_INIT_PARAM params{ model_path, YOLO_ORIGIN_V5_HALF, {640, 640},  0.25, 0.45, 0.5, true }; // GPU FP16 inference
#else
        DCSP_INIT_PARAM params{ model_path, YOLO_ORIGIN_V5, {640, 640},0.25, 0.45, 0.5, false };  // CPU inference
#endif
        yoloDetector->CreateSession(params);
        cv::Mat img = cv::imread(img_path);
        std::vector<DCSP_RESULT> res;
        yoloDetector->RunSession(img, res);
        if (res.size() != 0) yoloDetector->DrawPred(img, res);
        else cout << "Detect Nothing!" << endl;
    }
    return 0;
}

yolov5v8_dnn.cpp

#include "yolov5v8_dnn.h"
using namespace std;

Inference::Inference(const std::string& onnxModelPath, const cv::Size& modelInputShape, const std::string& classesTxtFile, const bool& runWithCuda)
{
    modelPath = onnxModelPath;
    modelShape = modelInputShape;
    classesPath = classesTxtFile;
    cudaEnabled = runWithCuda;

    loadOnnxNetwork();
    // loadClassesFromFile(); The classes are hard-coded for this example
}

std::vector<Detection> Inference::runInference(const cv::Mat& input)
{
    cv::Mat SrcImg = input;
    cv::Mat netInputImg;
    cv::Vec4d params;
    LetterBox(SrcImg, netInputImg, params, cv::Size(modelShape.width, modelShape.height));

    cv::Mat blob;
    cv::dnn::blobFromImage(netInputImg, blob, 1.0 / 255.0, modelShape, cv::Scalar(), true, false);
    net.setInput(blob);
    std::vector<cv::Mat> outputs;
    net.forward(outputs, net.getUnconnectedOutLayersNames());
    if (outputs.size() == 2) RunSegmentation = true;

    int rows = outputs[0].size[1];
    int dimensions = outputs[0].size[2];

    bool yolov8 = false;
    // yolov5 has an output of shape (batchSize, 25200, 85) (Num classes + box[x,y,w,h] + confidence[c])
    // yolov8 has an output of shape (batchSize, 84,  8400) (Num classes + box[x,y,w,h])
    if (dimensions > rows) // Check if the shape[2] is more than shape[1] (yolov8)
    {
        yolov8 = true;
        rows = outputs[0].size[2];
        dimensions = outputs[0].size[1];

        outputs[0] = outputs[0].reshape(1, dimensions);
        cv::transpose(outputs[0], outputs[0]);
    }
    float* data = (float*)outputs[0].data;

    std::vector<int> class_ids;
    std::vector<float> confidences;
    std::vector<cv::Rect> boxes;
    std::vector<vector<float>> picked_proposals;

    for (int i = 0; i < rows; ++i)
    {
        int _segChannels;
        if (yolov8)
        {
            float* classes_scores = data + 4;
            cv::Mat scores(1, classes.size(), CV_32FC1, classes_scores);
            cv::Point class_id;
            double maxClassScore;
            minMaxLoc(scores, 0, &maxClassScore, 0, &class_id);
            if (maxClassScore > modelScoreThreshold)
            {
                if (RunSegmentation) {
                    _segChannels = outputs[1].size[1];
                    vector<float> temp_proto(data + classes.size() + 4, data + classes.size() + 4 + _segChannels);
                    picked_proposals.push_back(temp_proto);
                }
                confidences.push_back(maxClassScore);
                class_ids.push_back(class_id.x);
                float x = (data[0] - params[2]) / params[0];
                float y = (data[1] - params[3]) / params[1];
                float w = data[2] / params[0];
                float h = data[3] / params[1];
                int left = MAX(round(x - 0.5 * w + 0.5), 0);
                int top = MAX(round(y - 0.5 * h + 0.5), 0);
                if ((left + w) > SrcImg.cols) { w = SrcImg.cols - left; }
                if ((top + h) > SrcImg.rows) { h = SrcImg.rows - top; }
                boxes.push_back(cv::Rect(left, top, int(w), int(h)));
            }
        }
        else // yolov5
        {
            float confidence = data[4];
            if (confidence >= modelConfidenceThreshold)
            {
                float* classes_scores = data + 5;
                cv::Mat scores(1, classes.size(), CV_32FC1, classes_scores);
                cv::Point class_id;
                double max_class_score;
                minMaxLoc(scores, 0, &max_class_score, 0, &class_id);
                if (max_class_score > modelScoreThreshold)
                {
                    if (RunSegmentation) {
                        _segChannels = outputs[1].size[1];
                        vector<float> temp_proto(data + classes.size() + 5, data + classes.size() + 5 + _segChannels);
                        picked_proposals.push_back(temp_proto);
                    }
                    confidences.push_back(confidence);
                    class_ids.push_back(class_id.x);
                    float x = (data[0] - params[2]) / params[0];
                    float y = (data[1] - params[3]) / params[1];
                    float w = data[2] / params[0];
                    float h = data[3] / params[1];
                    int left = MAX(round(x - 0.5 * w + 0.5), 0);
                    int top = MAX(round(y - 0.5 * h + 0.5), 0);
                    if ((left + w) > SrcImg.cols) { w = SrcImg.cols - left; }
                    if ((top + h) > SrcImg.rows) { h = SrcImg.rows - top; }
                    boxes.push_back(cv::Rect(left, top, int(w), int(h)));
                }
            }
        }
        data += dimensions;
    }

    std::vector<int> nms_result;
    cv::dnn::NMSBoxes(boxes, confidences, modelScoreThreshold, modelNMSThreshold, nms_result);
    std::vector<Detection> detections{};
    std::vector<vector<float>> temp_mask_proposals;
    for (unsigned long i = 0; i < nms_result.size(); ++i)
    {
        int idx = nms_result[i];
        Detection result;
        result.class_id = class_ids[idx];
        result.confidence = confidences[idx];

        std::random_device rd;
        std::mt19937 gen(rd());
        std::uniform_int_distribution<int> dis(100, 255);
        result.color = cv::Scalar(dis(gen),
            dis(gen),
            dis(gen));

        result.className = classes[result.class_id];
        result.box = boxes[idx];
        if (RunSegmentation) temp_mask_proposals.push_back(picked_proposals[idx]);
        if (result.box.width != 0 && result.box.height != 0) detections.push_back(result);
    }
    if (RunSegmentation) {
        cv::Mat mask_proposals;
        for (int i = 0; i < temp_mask_proposals.size(); ++i)
            mask_proposals.push_back(cv::Mat(temp_mask_proposals[i]).t());
        GetMask(mask_proposals, outputs[1], params, SrcImg.size(), detections);
    }
    return detections;
}

void Inference::loadClassesFromFile()
{
    std::ifstream inputFile(classesPath);
    if (inputFile.is_open())
    {
        std::string classLine;
        while (std::getline(inputFile, classLine))
            classes.push_back(classLine);
        inputFile.close();
    }
}

void Inference::loadOnnxNetwork()
{
    net = cv::dnn::readNetFromONNX(modelPath);

    if (cudaEnabled)
    {
        std::cout << "\nRunning on CUDA" << std::endl;
        net.setPreferableBackend(cv::dnn::DNN_BACKEND_CUDA);
        net.setPreferableTarget(cv::dnn::DNN_TARGET_CUDA);
    }
    else
    {
        std::cout << "\nRunning on CPU" << std::endl;
        net.setPreferableBackend(cv::dnn::DNN_BACKEND_OPENCV);
        net.setPreferableTarget(cv::dnn::DNN_TARGET_CPU);
    }
}

void Inference::LetterBox(const cv::Mat& image, cv::Mat& outImage, cv::Vec4d& params, const cv::Size& newShape,
    bool autoShape, bool scaleFill, bool scaleUp, int stride, const cv::Scalar& color)
{
    if (false) {
        int maxLen = MAX(image.rows, image.cols);
        outImage = cv::Mat::zeros(cv::Size(maxLen, maxLen), CV_8UC3);
        image.copyTo(outImage(cv::Rect(0, 0, image.cols, image.rows)));
        params[0] = 1;
        params[1] = 1;
        params[3] = 0;
        params[2] = 0;
    }

    cv::Size shape = image.size();
    float r = std::min((float)newShape.height / (float)shape.height,
        (float)newShape.width / (float)shape.width);
    if (!scaleUp)
        r = std::min(r, 1.0f);

    float ratio[2]{ r, r };
    int new_un_pad[2] = { (int)std::round((float)shape.width * r),(int)std::round((float)shape.height * r) };

    auto dw = (float)(newShape.width - new_un_pad[0]);
    auto dh = (float)(newShape.height - new_un_pad[1]);

    if (autoShape)
    {
        dw = (float)((int)dw % stride);
        dh = (float)((int)dh % stride);
    }
    else if (scaleFill)
    {
        dw = 0.0f;
        dh = 0.0f;
        new_un_pad[0] = newShape.width;
        new_un_pad[1] = newShape.height;
        ratio[0] = (float)newShape.width / (float)shape.width;
        ratio[1] = (float)newShape.height / (float)shape.height;
    }

    dw /= 2.0f;
    dh /= 2.0f;

    if (shape.width != new_un_pad[0] && shape.height != new_un_pad[1])
    {
        cv::resize(image, outImage, cv::Size(new_un_pad[0], new_un_pad[1]));
    }
    else {
        outImage = image.clone();
    }

    int top = int(std::round(dh - 0.1f));
    int bottom = int(std::round(dh + 0.1f));
    int left = int(std::round(dw - 0.1f));
    int right = int(std::round(dw + 0.1f));
    params[0] = ratio[0];
    params[1] = ratio[1];
    params[2] = left;
    params[3] = top;
    cv::copyMakeBorder(outImage, outImage, top, bottom, left, right, cv::BORDER_CONSTANT, color);
}

void Inference::GetMask(const cv::Mat& maskProposals, const cv::Mat& mask_protos, const cv::Vec4d& params, const cv::Size& srcImgShape, std::vector<Detection>& output) {
    if (output.size() == 0) return;
    int _segChannels = mask_protos.size[1];
    int _segHeight = mask_protos.size[2];
    int _segWidth = mask_protos.size[3];
    cv::Mat protos = mask_protos.reshape(0, { _segChannels,_segWidth * _segHeight });
    cv::Mat matmulRes = (maskProposals * protos).t();
    cv::Mat masks = matmulRes.reshape(output.size(), { _segHeight,_segWidth });
    vector<cv::Mat> maskChannels;
    split(masks, maskChannels);

    for (int i = 0; i < output.size(); ++i) {
        cv::Mat dest, mask;
        //sigmoid
        cv::exp(-maskChannels[i], dest);
        dest = 1.0 / (1.0 + dest);

        cv::Rect roi(int(params[2] / modelShape.width * _segWidth), int(params[3] / modelShape.height * _segHeight), int(_segWidth - params[2] / 2), int(_segHeight - params[3] / 2));
        dest = dest(roi);
        cv::resize(dest, mask, srcImgShape, 0, 0, cv::INTER_NEAREST);

        //crop
        cv::Rect temp_rect = output[i].box;
        mask = mask(temp_rect) > modelScoreThreshold;
        output[i].boxMask = mask;
    }
}

void Inference::DrawPred(cv::Mat& img, vector<Detection>& result) {
    int detections = result.size();
    std::cout << "Number of detections:" << detections << std::endl;
    cv::Mat mask = img.clone();
    for (int i = 0; i < detections; ++i)
    {
        Detection detection = result[i];

        cv::Rect box = detection.box;
        cv::Scalar color = detection.color;

        // Detection box
        cv::rectangle(img, box, color, 2);
        mask(detection.box).setTo(color, detection.boxMask);

        // Detection box text
        std::string classString = detection.className + ' ' + std::to_string(detection.confidence).substr(0, 4);
        cv::Size textSize = cv::getTextSize(classString, cv::FONT_HERSHEY_DUPLEX, 1, 2, 0);
        cv::Rect textBox(box.x, box.y - 40, textSize.width + 10, textSize.height + 20);

        cv::rectangle(img, textBox, color, cv::FILLED);
        cv::putText(img, classString, cv::Point(box.x + 5, box.y - 10), cv::FONT_HERSHEY_DUPLEX, 1, cv::Scalar(0, 0, 0), 2, 0);
    }
    // Detection mask
    if (RunSegmentation) cv::addWeighted(img, 0.5, mask, 0.5, 0, img); // overlay the segmentation mask on the original image
    cv::imshow("Inference", img);
    cv::imwrite("out.bmp", img);
    cv::waitKey();
    cv::destroyWindow("Inference");
}

yolov5v8_ort.cpp

#define _CRT_SECURE_NO_WARNINGS
#include "yolov5v8_ort.h"
#include <regex>
#include <random>
#define benchmark
using namespace std;

DCSP_CORE::DCSP_CORE() {

}


DCSP_CORE::~DCSP_CORE() {
    delete session;
}

#ifdef USE_CUDA
namespace Ort
{
    template<>
    struct TypeToTensorType<half> { static constexpr ONNXTensorElementDataType type = ONNX_TENSOR_ELEMENT_DATA_TYPE_FLOAT16; };
}
#endif


template<typename T>
char* BlobFromImage(cv::Mat& iImg, T& iBlob) {
    int channels = iImg.channels();
    int imgHeight = iImg.rows;
    int imgWidth = iImg.cols;
    for (int c = 0; c < channels; c++) {
        for (int h = 0; h < imgHeight; h++) {
            for (int w = 0; w < imgWidth; w++) {
                iBlob[c * imgWidth * imgHeight + h * imgWidth + w] = typename std::remove_pointer<T>::type(
                    (iImg.at<cv::Vec3b>(h, w)[c]) / 255.0f);
            }
        }
    }
    return RET_OK;
}


char* PreProcess(cv::Mat& iImg, std::vector<int> iImgSize, cv::Mat& oImg) {
    cv::Mat img = iImg.clone();
    cv::resize(iImg, oImg, cv::Size(iImgSize.at(0), iImgSize.at(1)));
    if (img.channels() == 1) {
        cv::cvtColor(oImg, oImg, cv::COLOR_GRAY2BGR);
    }
    cv::cvtColor(oImg, oImg, cv::COLOR_BGR2RGB);
    return RET_OK;
}

void LetterBox(const cv::Mat& image, cv::Mat& outImage, cv::Vec4d& params, const cv::Size& newShape = cv::Size(640, 640),
    bool autoShape = false, bool scaleFill = false, bool scaleUp = true, int stride = 32, const cv::Scalar& color = cv::Scalar(114, 114, 114))
{
    if (false) {
        int maxLen = MAX(image.rows, image.cols);
        outImage = cv::Mat::zeros(cv::Size(maxLen, maxLen), CV_8UC3);
        image.copyTo(outImage(cv::Rect(0, 0, image.cols, image.rows)));
        params[0] = 1;
        params[1] = 1;
        params[3] = 0;
        params[2] = 0;
    }

    cv::Size shape = image.size();
    float r = std::min((float)newShape.height / (float)shape.height,
        (float)newShape.width / (float)shape.width);
    if (!scaleUp)
        r = std::min(r, 1.0f);

    float ratio[2]{ r, r };
    int new_un_pad[2] = { (int)std::round((float)shape.width * r),(int)std::round((float)shape.height * r) };

    auto dw = (float)(newShape.width - new_un_pad[0]);
    auto dh = (float)(newShape.height - new_un_pad[1]);

    if (autoShape)
    {
        dw = (float)((int)dw % stride);
        dh = (float)((int)dh % stride);
    }
    else if (scaleFill)
    {
        dw = 0.0f;
        dh = 0.0f;
        new_un_pad[0] = newShape.width;
        new_un_pad[1] = newShape.height;
        ratio[0] = (float)newShape.width / (float)shape.width;
        ratio[1] = (float)newShape.height / (float)shape.height;
    }

    dw /= 2.0f;
    dh /= 2.0f;

    if (shape.width != new_un_pad[0] && shape.height != new_un_pad[1])
    {
        cv::resize(image, outImage, cv::Size(new_un_pad[0], new_un_pad[1]));
    }
    else {
        outImage = image.clone();
    }

    int top = int(std::round(dh - 0.1f));
    int bottom = int(std::round(dh + 0.1f));
    int left = int(std::round(dw - 0.1f));
    int right = int(std::round(dw + 0.1f));
    params[0] = ratio[0];
    params[1] = ratio[1];
    params[2] = left;
    params[3] = top;
    cv::copyMakeBorder(outImage, outImage, top, bottom, left, right, cv::BORDER_CONSTANT, color);
}

void GetMask(const int* const _seg_params, const float& rectConfidenceThreshold, const cv::Mat& maskProposals, const cv::Mat& mask_protos, const cv::Vec4d& params, const cv::Size& srcImgShape, std::vector<DCSP_RESULT>& output) {
    int _segChannels = *_seg_params;
    int _segHeight = *(_seg_params + 1);
    int _segWidth = *(_seg_params + 2);
    int _netHeight = *(_seg_params + 3);
    int _netWidth = *(_seg_params + 4);

    cv::Mat protos = mask_protos.reshape(0, { _segChannels,_segWidth * _segHeight });
    cv::Mat matmulRes = (maskProposals * protos).t();
    cv::Mat masks = matmulRes.reshape(output.size(), { _segHeight,_segWidth });
    std::vector<cv::Mat> maskChannels;
    split(masks, maskChannels);

    for (int i = 0; i < output.size(); ++i) {
        cv::Mat dest, mask;
        //sigmoid
        cv::exp(-maskChannels[i], dest);
        dest = 1.0 / (1.0 + dest);
        cv::Rect roi(int(params[2] / _netWidth * _segWidth), int(params[3] / _netHeight * _segHeight), int(_segWidth - params[2] / 2), int(_segHeight - params[3] / 2));
        dest = dest(roi);
        cv::resize(dest, mask, srcImgShape, 0, 0, cv::INTER_NEAREST);
        //crop
        cv::Rect temp_rect = output[i].box;
        mask = mask(temp_rect) > rectConfidenceThreshold;
        output[i].boxMask = mask;
    }
}


void DCSP_CORE::DrawPred(cv::Mat& img, std::vector<DCSP_RESULT>& result) {
    int detections = result.size();
    std::cout << "Number of detections:" << detections << std::endl;
    cv::Mat mask = img.clone();
    for (int i = 0; i < detections; ++i)
    {
        DCSP_RESULT detection = result[i];
        cv::Rect box = detection.box;
        cv::Scalar color = detection.color;
        // Detection box
        cv::rectangle(img, box, color, 2);
        mask(detection.box).setTo(color, detection.boxMask);
        // Detection box text
        std::string classString = detection.className + ' ' + std::to_string(detection.confidence).substr(0, 4);
        cv::Size textSize = cv::getTextSize(classString, cv::FONT_HERSHEY_DUPLEX, 1, 2, 0);
        cv::Rect textBox(box.x, box.y - 40, textSize.width + 10, textSize.height + 20);
        cv::rectangle(img, textBox, color, cv::FILLED);
        cv::putText(img, classString, cv::Point(box.x + 5, box.y - 10), cv::FONT_HERSHEY_DUPLEX, 1, cv::Scalar(0, 0, 0), 2, 0);
    }
    // Detection mask
    if (RunSegmentation) cv::addWeighted(img, 0.5, mask, 0.5, 0, img); // overlay the segmentation mask on the original image
    cv::imshow("Inference", img);
    cv::imwrite("out.bmp", img);
    cv::waitKey();
    cv::destroyWindow("Inference");
}

char* DCSP_CORE::CreateSession(DCSP_INIT_PARAM& iParams) {
    char* Ret = RET_OK;
    std::regex pattern("[\u4e00-\u9fa5]");
    bool result = std::regex_search(iParams.ModelPath, pattern);
    if (result) {
        char str_tmp[] = "[DCSP_ONNX]:Model path error.Change your model path without chinese characters.";
        Ret = str_tmp;
        std::cout << Ret << std::endl;
        return Ret;
    }
    try {
        modelConfidenceThreshold = iParams.modelConfidenceThreshold;
        rectConfidenceThreshold = iParams.RectConfidenceThreshold;
        iouThreshold = iParams.iouThreshold;
        imgSize = iParams.imgSize;
        modelType = iParams.ModelType;
        env = Ort::Env(ORT_LOGGING_LEVEL_WARNING, "Yolo");
        Ort::SessionOptions sessionOption;
        if (iParams.CudaEnable) {
            cudaEnable = iParams.CudaEnable;
            OrtCUDAProviderOptions cudaOption;
            cudaOption.device_id = 0;
            sessionOption.AppendExecutionProvider_CUDA(cudaOption);
        }
        sessionOption.SetGraphOptimizationLevel(GraphOptimizationLevel::ORT_ENABLE_ALL);
        sessionOption.SetIntraOpNumThreads(iParams.IntraOpNumThreads);
        sessionOption.SetLogSeverityLevel(iParams.LogSeverityLevel);

#ifdef _WIN32
        int ModelPathSize = MultiByteToWideChar(CP_UTF8, 0, iParams.ModelPath.c_str(), static_cast<int>(iParams.ModelPath.length()), nullptr, 0);
        wchar_t* wide_cstr = new wchar_t[ModelPathSize + 1];
        MultiByteToWideChar(CP_UTF8, 0, iParams.ModelPath.c_str(), static_cast<int>(iParams.ModelPath.length()), wide_cstr, ModelPathSize);
        wide_cstr[ModelPathSize] = L'\0';
        const wchar_t* modelPath = wide_cstr;
#else
        const char* modelPath = iParams.ModelPath.c_str();
#endif // _WIN32

        session = new Ort::Session(env, modelPath, sessionOption);
        Ort::AllocatorWithDefaultOptions allocator;
        size_t inputNodesNum = session->GetInputCount();
        for (size_t i = 0; i < inputNodesNum; i++) {
            Ort::AllocatedStringPtr input_node_name = session->GetInputNameAllocated(i, allocator);
            char* temp_buf = new char[50];
            strcpy(temp_buf, input_node_name.get());
            inputNodeNames.push_back(temp_buf);
        }
        size_t OutputNodesNum = session->GetOutputCount();
        for (size_t i = 0; i < OutputNodesNum; i++) {
            Ort::AllocatedStringPtr output_node_name = session->GetOutputNameAllocated(i, allocator);
            char* temp_buf = new char[10];
            strcpy(temp_buf, output_node_name.get());
            outputNodeNames.push_back(temp_buf);
        }
        if (outputNodeNames.size() == 2) RunSegmentation = true;
        options = Ort::RunOptions{ nullptr };
        WarmUpSession();
        return RET_OK;
    }
    catch (const std::exception& e) {
        const char* str1 = "[DCSP_ONNX]:";
        const char* str2 = e.what();
        std::string result = std::string(str1) + std::string(str2);
        char* merged = new char[result.length() + 1];
        std::strcpy(merged, result.c_str());
        std::cout << merged << std::endl;
        delete[] merged;
        char str_tmps[] = "[DCSP_ONNX]:Create session failed.";
        char* strs = str_tmps;
        return strs;
    }
}


char* DCSP_CORE::RunSession(cv::Mat& iImg, std::vector<DCSP_RESULT>& oResult) {
#ifdef benchmark
    clock_t starttime_1 = clock();
#endif // benchmark
    char* Ret = RET_OK;
    cv::Mat processedImg;
    cv::Vec4d params;
    // Resize the input: PreProcess does a plain resize, while LetterBox pads to preserve the aspect ratio
    //PreProcess(iImg, imgSize, processedImg);
    LetterBox(iImg, processedImg, params, cv::Size(imgSize.at(1), imgSize.at(0)));
    if (modelType < 4) {
        float* blob = new float[processedImg.total() * 3];
        BlobFromImage(processedImg, blob);
        std::vector<int64_t> inputNodeDims = { 1, 3, imgSize.at(0), imgSize.at(1) };
        TensorProcess(starttime_1, params, iImg, blob, inputNodeDims, oResult);
    }
    else {
#ifdef USE_CUDA
        half* blob = new half[processedImg.total() * 3];
        BlobFromImage(processedImg, blob);
        std::vector<int64_t> inputNodeDims = { 1,3,imgSize.at(0),imgSize.at(1) };
        TensorProcess(starttime_1, params, iImg, blob, inputNodeDims, oResult);
#endif
    }
    return Ret;
}


template<typename N>
char* DCSP_CORE::TensorProcess(clock_t& starttime_1, cv::Vec4d& params, cv::Mat& iImg, N* blob, std::vector<int64_t>& inputNodeDims, std::vector<DCSP_RESULT>& oResult) {
    Ort::Value inputTensor = Ort::Value::CreateTensor<typename std::remove_pointer<N>::type>(
        Ort::MemoryInfo::CreateCpu(OrtDeviceAllocator, OrtMemTypeCPU), blob, 3 * imgSize.at(0) * imgSize.at(1), inputNodeDims.data(), inputNodeDims.size());
#ifdef benchmark
    clock_t starttime_2 = clock();
#endif // benchmark
    auto outputTensor = session->Run(options, inputNodeNames.data(), &inputTensor, 1, outputNodeNames.data(), outputNodeNames.size());
#ifdef benchmark
    clock_t starttime_3 = clock();
#endif // benchmark

    std::vector<int64_t> _outputTensorShape;
    _outputTensorShape = outputTensor[0].GetTensorTypeAndShapeInfo().GetShape();
    //auto output = outputTensor[0].GetTensorMutableData<typename std::remove_pointer<N>::type>();
    N* output = outputTensor[0].GetTensorMutableData<N>();
    delete[] blob;   // blob was allocated with new[]
    // yolov5 has an output of shape (batchSize, 25200, 85) (Num classes + box[x,y,w,h] + confidence[c])
    // yolov8 has an output of shape (batchSize, 84,  8400) (Num classes + box[x,y,w,h])
    // yolov5
    int dimensions = _outputTensorShape[1];
    int rows = _outputTensorShape[2];
    cv::Mat rowData;
    if (modelType < 3)
        rowData = cv::Mat(dimensions, rows, CV_32F, output);
    else
        rowData = cv::Mat(dimensions, rows, CV_16S, output);
    // yolov8
    if (rows > dimensions) {
        dimensions = _outputTensorShape[2];
        rows = _outputTensorShape[1];
        rowData = rowData.t();
    }
    std::vector<int> class_ids;
    std::vector<float> confidences;
    std::vector<cv::Rect> boxes;
    std::vector<std::vector<float>> picked_proposals;

    N* data = (N*)rowData.data;
    for (int i = 0; i < dimensions; ++i) {
        switch (modelType) {
        case 0://V5_ORIGIN_FP32
        case 7://V5_ORIGIN_FP16
        {
            N confidence = data[4];
            if (confidence >= modelConfidenceThreshold)
            {
                cv::Mat scores;
                if (modelType < 3) scores = cv::Mat(1, classes.size(), CV_32FC1, data + 5);
                else scores = cv::Mat(1, classes.size(), CV_16SC1, data + 5);
                cv::Point class_id;
                double max_class_score;
                minMaxLoc(scores, 0, &max_class_score, 0, &class_id);
                max_class_score = *(data + 5 + class_id.x) * confidence;
                if (max_class_score > rectConfidenceThreshold)
                {
                    if (RunSegmentation) {
                        int _segChannels = outputTensor[1].GetTensorTypeAndShapeInfo().GetShape()[1];
                        std::vector<float> temp_proto(data + classes.size() + 5, data + classes.size() + 5 + _segChannels);
                        picked_proposals.push_back(temp_proto);
                    }
                    confidences.push_back(confidence);
                    class_ids.push_back(class_id.x);
                    float x = (data[0] - params[2]) / params[0];
                    float y = (data[1] - params[3]) / params[1];
                    float w = data[2] / params[0];
                    float h = data[3] / params[1];
                    int left = MAX(round(x - 0.5 * w + 0.5), 0);
                    int top = MAX(round(y - 0.5 * h + 0.5), 0);
                    if ((left + w) > iImg.cols) { w = iImg.cols - left; }
                    if ((top + h) > iImg.rows) { h = iImg.rows - top; }
                    boxes.emplace_back(cv::Rect(left, top, int(w), int(h)));
                }
            }
            break;
        }
        case 1://V8_ORIGIN_FP32
        case 4://V8_ORIGIN_FP16
        {
            cv::Mat scores;
            if (modelType < 3) scores = cv::Mat(1, classes.size(), CV_32FC1, data + 4);
            else scores = cv::Mat(1, classes.size(), CV_16SC1, data + 4);
            cv::Point class_id;
            double maxClassScore;
            cv::minMaxLoc(scores, 0, &maxClassScore, 0, &class_id);
            maxClassScore = *(data + 4 + class_id.x);
            if (maxClassScore > rectConfidenceThreshold) {
                if (RunSegmentation) {
                    int _segChannels = outputTensor[1].GetTensorTypeAndShapeInfo().GetShape()[1];
                    std::vector<float> temp_proto(data + classes.size() + 4, data + classes.size() + 4 + _segChannels);
                    picked_proposals.push_back(temp_proto);
                }
                confidences.push_back(maxClassScore);
                class_ids.push_back(class_id.x);
                float x = (data[0] - params[2]) / params[0];
                float y = (data[1] - params[3]) / params[1];
                float w = data[2] / params[0];
                float h = data[3] / params[1];
                int left = MAX(round(x - 0.5 * w + 0.5), 0);
                int top = MAX(round(y - 0.5 * h + 0.5), 0);
                if ((left + w) > iImg.cols) { w = iImg.cols - left; }
                if ((top + h) > iImg.rows) { h = iImg.rows - top; }
                boxes.emplace_back(cv::Rect(left, top, int(w), int(h)));
            }
            break;
        }
        }
        data += rows;
    }
    std::vector<int> nmsResult;
    cv::dnn::NMSBoxes(boxes, confidences, rectConfidenceThreshold, iouThreshold, nmsResult);
    std::vector<std::vector<float>> temp_mask_proposals;
    for (int i = 0; i < nmsResult.size(); ++i) {
        int idx = nmsResult[i];
        DCSP_RESULT result;
        result.classId = class_ids[idx];
        result.confidence = confidences[idx];
        result.box = boxes[idx];
        result.className = classes[result.classId];
        std::random_device rd;
        std::mt19937 gen(rd());
        std::uniform_int_distribution<int> dis(100, 255);
        result.color = cv::Scalar(dis(gen), dis(gen), dis(gen));
        if (result.box.width != 0 && result.box.height != 0) oResult.push_back(result);
        if (RunSegmentation) temp_mask_proposals.push_back(picked_proposals[idx]);
    }

    if (RunSegmentation) {
        cv::Mat mask_proposals;
        for (int i = 0; i < temp_mask_proposals.size(); ++i)
            mask_proposals.push_back(cv::Mat(temp_mask_proposals[i]).t());
        std::vector<int64_t> _outputMaskTensorShape;
        _outputMaskTensorShape = outputTensor[1].GetTensorTypeAndShapeInfo().GetShape();
        int _segChannels = _outputMaskTensorShape[1];
        int _segWidth = _outputMaskTensorShape[2];
        int _segHeight = _outputMaskTensorShape[3];
        N* pdata = outputTensor[1].GetTensorMutableData<N>();
        std::vector<float> mask(pdata, pdata + _segChannels * _segWidth * _segHeight);
        int _seg_params[5] = { _segChannels, _segWidth, _segHeight, inputNodeDims[2], inputNodeDims[3] };
        cv::Mat mask_protos = cv::Mat(mask);
        GetMask(_seg_params, rectConfidenceThreshold, mask_proposals, mask_protos, params, iImg.size(), oResult);
    }

#ifdef benchmark
    clock_t starttime_4 = clock();
    double pre_process_time = (double)(starttime_2 - starttime_1) / CLOCKS_PER_SEC * 1000;
    double process_time = (double)(starttime_3 - starttime_2) / CLOCKS_PER_SEC * 1000;
    double post_process_time = (double)(starttime_4 - starttime_3) / CLOCKS_PER_SEC * 1000;
    if (cudaEnable) {
        std::cout << "[DCSP_ONNX(CUDA)]: " << pre_process_time << "ms pre-process, " << process_time
            << "ms inference, " << post_process_time << "ms post-process." << std::endl;
    }
    else {
        std::cout << "[DCSP_ONNX(CPU)]: " << pre_process_time << "ms pre-process, " << process_time
            << "ms inference, " << post_process_time << "ms post-process." << std::endl;
    }
#endif // benchmark

    return RET_OK;
}


char* DCSP_CORE::WarmUpSession() {
    clock_t starttime_1 = clock();
    cv::Mat iImg = cv::Mat(cv::Size(imgSize.at(0), imgSize.at(1)), CV_8UC3);
    cv::Mat processedImg;
    cv::Vec4d params;
    // Resize the input: PreProcess does a plain resize, while LetterBox pads to preserve the aspect ratio
    //PreProcess(iImg, imgSize, processedImg);
    LetterBox(iImg, processedImg, params, cv::Size(imgSize.at(1), imgSize.at(0)));
    if (modelType < 4) {
        float* blob = new float[iImg.total() * 3];
        BlobFromImage(processedImg, blob);
        std::vector<int64_t> YOLO_input_node_dims = { 1, 3, imgSize.at(0), imgSize.at(1) };
        Ort::Value input_tensor = Ort::Value::CreateTensor<float>(
            Ort::MemoryInfo::CreateCpu(OrtDeviceAllocator, OrtMemTypeCPU), blob, 3 * imgSize.at(0) * imgSize.at(1),
            YOLO_input_node_dims.data(), YOLO_input_node_dims.size());
        auto output_tensors = session->Run(options, inputNodeNames.data(), &input_tensor, 1, outputNodeNames.data(), outputNodeNames.size());
        delete[] blob;
        clock_t starttime_4 = clock();
        double post_process_time = (double)(starttime_4 - starttime_1) / CLOCKS_PER_SEC * 1000;
        if (cudaEnable) {
            std::cout << "[DCSP_ONNX(CUDA)]: " << "Cuda warm-up cost " << post_process_time << " ms. " << std::endl;
        }
    }
    else {
#ifdef USE_CUDA
        half* blob = new half[iImg.total() * 3];
        BlobFromImage(processedImg, blob);
        std::vector<int64_t> YOLO_input_node_dims = { 1,3,imgSize.at(0),imgSize.at(1) };
        Ort::Value input_tensor = Ort::Value::CreateTensor<half>(Ort::MemoryInfo::CreateCpu(OrtDeviceAllocator, OrtMemTypeCPU), blob, 3 * imgSize.at(0) * imgSize.at(1), YOLO_input_node_dims.data(), YOLO_input_node_dims.size());
        auto output_tensors = session->Run(options, inputNodeNames.data(), &input_tensor, 1, outputNodeNames.data(), outputNodeNames.size());
        delete[] blob;
        clock_t starttime_4 = clock();
        double post_process_time = (double)(starttime_4 - starttime_1) / CLOCKS_PER_SEC * 1000;
        if (cudaEnable)
        {
            std::cout << "[DCSP_ONNX(CUDA)]: " << "Cuda warm-up cost " << post_process_time << " ms. " << std::endl;
        }
#endif
    }
    return RET_OK;
}

yolov5v8_dnn.h

#ifndef YOLOV5V8_DNN_H
#define YOLOV5V8_DNN_H

// Cpp native
#include <fstream>
#include <vector>
#include <string>
#include <random>

// OpenCV / DNN / Inference
#include <opencv2/imgproc.hpp>
#include <opencv2/opencv.hpp>
#include <opencv2/dnn.hpp>

struct Detection
{
    int class_id{ 0 };
    std::string className{};
    float confidence{ 0.0 };
    cv::Scalar color{};
    cv::Rect box{};
    cv::Mat boxMask;
};

class Inference
{
public:
    Inference(const std::string& onnxModelPath, const cv::Size& modelInputShape = { 640, 640 }, const std::string& classesTxtFile = "", const bool& runWithCuda = true);
    std::vector<Detection> runInference(const cv::Mat& input);
    void DrawPred(cv::Mat& img, std::vector<Detection>& result);

private:
    void loadClassesFromFile();
    void loadOnnxNetwork();
    void LetterBox(const cv::Mat& image, cv::Mat& outImage,
        cv::Vec4d& params, //[ratio_x,ratio_y,dw,dh]
        const cv::Size& newShape = cv::Size(640, 640),
        bool autoShape = false,
        bool scaleFill = false,
        bool scaleUp = true,
        int stride = 32,
        const cv::Scalar& color = cv::Scalar(114, 114, 114));
    void GetMask(const cv::Mat& maskProposals, const cv::Mat& mask_protos, const cv::Vec4d& params, const cv::Size& srcImgShape, std::vector<Detection>& output);

private:
    std::string modelPath{};
    bool cudaEnabled{};
    cv::Size2f modelShape{};
    bool RunSegmentation = false;
    float modelConfidenceThreshold{ 0.25 };
    float modelScoreThreshold{ 0.45 };
    float modelNMSThreshold{ 0.50 };

    bool letterBoxForSquare = true;
    cv::dnn::Net net;
    std::string classesPath{};
    std::vector<std::string> classes{ "person", "bicycle", "car", "motorcycle", "airplane", "bus", "train", "truck", "boat", "traffic light", "fire hydrant",
        "stop sign", "parking meter", "bench", "bird", "cat", "dog", "horse", "sheep", "cow", "elephant", "bear", "zebra", "giraffe", "backpack", "umbrella",
        "handbag", "tie", "suitcase", "frisbee", "skis", "snowboard", "sports ball", "kite", "baseball bat", "baseball glove", "skateboard", "surfboard",
        "tennis racket", "bottle", "wine glass", "cup", "fork", "knife", "spoon", "bowl", "banana", "apple", "sandwich", "orange", "broccoli", "carrot",
        "hot dog", "pizza", "donut", "cake", "chair", "couch", "potted plant", "bed", "dining table", "toilet", "tv", "laptop", "mouse", "remote", "keyboard",
        "cell phone", "microwave", "oven", "toaster", "sink", "refrigerator", "book", "clock", "vase", "scissors", "teddy bear", "hair drier", "toothbrush" };
};

#endif // YOLOV5V8_DNN_H

yolov5v8_ort.h

#pragma once

#define    RET_OK nullptr
#define    USE_CUDA

#ifdef _WIN32
#include <Windows.h>
#include <direct.h>
#include <io.h>
#endif

#include <string>
#include <vector>
#include <cstdio>
#include <opencv2/opencv.hpp>
#include "onnxruntime_cxx_api.h"

#ifdef USE_CUDA
#include <cuda_fp16.h>
#endif


enum MODEL_TYPE {
    //FLOAT32 MODEL
    YOLO_ORIGIN_V5 = 0,//support
    YOLO_ORIGIN_V8 = 1,//support
    YOLO_POSE_V8 = 2,
    YOLO_CLS_V8 = 3,
    YOLO_ORIGIN_V8_HALF = 4,//support
    YOLO_POSE_V8_HALF = 5,
    YOLO_CLS_V8_HALF = 6,
    YOLO_ORIGIN_V5_HALF = 7 //support
};


typedef struct _DCSP_INIT_PARAM {
    std::string ModelPath;
    MODEL_TYPE ModelType = YOLO_ORIGIN_V8;
    std::vector<int> imgSize = { 640, 640 };
    float modelConfidenceThreshold = 0.25;
    float RectConfidenceThreshold = 0.6;
    float iouThreshold = 0.5;
    bool CudaEnable = false;
    int LogSeverityLevel = 3;
    int IntraOpNumThreads = 1;
} DCSP_INIT_PARAM;


typedef struct _DCSP_RESULT {
    int classId;
    std::string className;
    float confidence;
    cv::Rect box;
    cv::Mat boxMask;       // mask within the bounding box
    cv::Scalar color;
} DCSP_RESULT;


class DCSP_CORE {
public:
    DCSP_CORE();
    ~DCSP_CORE();

public:
    void DrawPred(cv::Mat& img, std::vector<DCSP_RESULT>& result);
    char* CreateSession(DCSP_INIT_PARAM& iParams);
    char* RunSession(cv::Mat& iImg, std::vector<DCSP_RESULT>& oResult);
    char* WarmUpSession();
    template<typename N>
    char* TensorProcess(clock_t& starttime_1, cv::Vec4d& params, cv::Mat& iImg, N* blob, std::vector<int64_t>& inputNodeDims, std::vector<DCSP_RESULT>& oResult);
    std::vector<std::string> classes{ "person", "bicycle", "car", "motorcycle", "airplane", "bus", "train", "truck", "boat", "traffic light", "fire hydrant",
    "stop sign", "parking meter", "bench", "bird", "cat", "dog", "horse", "sheep", "cow", "elephant", "bear", "zebra", "giraffe", "backpack", "umbrella",
    "handbag", "tie", "suitcase", "frisbee", "skis", "snowboard", "sports ball", "kite", "baseball bat", "baseball glove", "skateboard", "surfboard",
    "tennis racket", "bottle", "wine glass", "cup", "fork", "knife", "spoon", "bowl", "banana", "apple", "sandwich", "orange", "broccoli", "carrot",
    "hot dog", "pizza", "donut", "cake", "chair", "couch", "potted plant", "bed", "dining table", "toilet", "tv", "laptop", "mouse", "remote", "keyboard",
    "cell phone", "microwave", "oven", "toaster", "sink", "refrigerator", "book", "clock", "vase", "scissors", "teddy bear", "hair drier", "toothbrush" };

private:
    Ort::Env env;
    Ort::Session* session;
    bool cudaEnable;
    Ort::RunOptions options;
    bool RunSegmentation = false;
    std::vector<const char*> inputNodeNames;
    std::vector<const char*> outputNodeNames;

    MODEL_TYPE modelType;
    std::vector<int> imgSize;
    float modelConfidenceThreshold;
    float rectConfidenceThreshold;
    float iouThreshold;


};

3. getopt File Setup

Setting up getopt is straightforward: we only need to add two files to the project. A usage sketch is shown first, followed by the two files:
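
The `#include <getopt.h>` line in main.cpp above is commented out, but if command-line parsing is wanted instead of the hard-coded paths, a sketch like the following could be used. This is an illustrative assumption only: the -m/-i option names, the _tmain entry point, and the file name parse_args.cpp are not part of the original project. When getopt.c is compiled directly into the project, defining STATIC_GETOPT keeps _GETOPT_API empty so no DLL import/export is expected.

parse_args.cpp

// Illustrative sketch: parse the model and image paths with the getopt port below
// instead of hard-coding them in main.cpp. The -m/-i options are assumptions.
#include <stdio.h>
#include <tchar.h>
#include <string>
#include "getopt.h"

int _tmain(int argc, TCHAR* argv[])
{
    std::basic_string<TCHAR> model_path, img_path;
    int opt;
    while ((opt = getopt(argc, argv, _T("m:i:"))) != -1)
    {
        switch (opt)
        {
        case _T('m'): model_path = optarg; break;   // -m <model.onnx>
        case _T('i'): img_path = optarg; break;     // -i <image.jpg>
        default:
            _ftprintf(stderr, _T("Usage: %s -m model.onnx -i image.jpg\n"), argv[0]);
            return 1;
        }
    }
    // model_path / img_path can then be handed to the Inference or DCSP_CORE classes.
    return 0;
}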

getopt.h

# ifndef __GETOPT_H_
# define __GETOPT_H_

# ifdef _GETOPT_API
#     undef _GETOPT_API
# endif
//------------------------------------------------------------------------------
# if defined(EXPORTS_GETOPT) && defined(STATIC_GETOPT)
#     error "The preprocessor definitions of EXPORTS_GETOPT and STATIC_GETOPT \
can only be used individually"
# elif defined(STATIC_GETOPT)
#     pragma message("Warning static builds of getopt violate the Lesser GNU \
Public License")
#     define _GETOPT_API
# elif defined(EXPORTS_GETOPT)
#     pragma message("Exporting getopt library")
#     define _GETOPT_API __declspec(dllexport)
# else
#     pragma message("Importing getopt library")
#     define _GETOPT_API __declspec(dllimport)
# endif

# include <tchar.h>
// Standard GNU options
# define null_argument           0 /*Argument Null*/
# define no_argument             0 /*Argument Switch Only*/
# define required_argument       1 /*Argument Required*/
# define optional_argument       2 /*Argument Optional*/
// Shorter Versions of options
# define ARG_NULL 0 /*Argument Null*/
# define ARG_NONE 0 /*Argument Switch Only*/
# define ARG_REQ  1 /*Argument Required*/
# define ARG_OPT  2 /*Argument Optional*/
// Change behavior for C\C++
# ifdef __cplusplus
#     define _BEGIN_EXTERN_C extern "C" {
#     define _END_EXTERN_C }
#     define _GETOPT_THROW throw()
# else
#     define _BEGIN_EXTERN_C
#     define _END_EXTERN_C
#     define _GETOPT_THROW
# endif
_BEGIN_EXTERN_C
extern _GETOPT_API TCHAR* optarg;
extern _GETOPT_API int    optind;
extern _GETOPT_API int    opterr;
extern _GETOPT_API int    optopt;
struct option
{
    /* The predefined macro __STDC__ is defined for C++, and it has the integer
       value 0 when used in an #if statement, indicating that the C++ language
       is not a proper superset of C and that the compiler does not conform to
       C. In C, __STDC__ has the integer value 1. */
# if defined (__STDC__) && __STDC__
    const TCHAR* name;
# else
    TCHAR* name;
# endif
    int has_arg;
    int* flag;
    TCHAR val;
};
extern _GETOPT_API int getopt(int argc, TCHAR* const* argv
    , const TCHAR* optstring) _GETOPT_THROW;
extern _GETOPT_API int getopt_long
(int ___argc, TCHAR* const* ___argv
    , const TCHAR* __shortopts
    , const struct option* __longopts
    , int* __longind) _GETOPT_THROW;
extern _GETOPT_API int getopt_long_only
(int ___argc, TCHAR* const* ___argv
    , const TCHAR* __shortopts
    , const struct option* __longopts
    , int* __longind) _GETOPT_THROW;
// harly.he add for reentrant 12.09/2013
extern _GETOPT_API void getopt_reset() _GETOPT_THROW;
_END_EXTERN_C
// Undefine so the macros are not included
# undef _BEGIN_EXTERN_C
# undef _END_EXTERN_C
# undef _GETOPT_THROW
# undef _GETOPT_API
# endif  // __GETOPT_H_

getopt.c

# ifndef _CRT_SECURE_NO_WARNINGS
#     define _CRT_SECURE_NO_WARNINGS
# endif

# include <stdlib.h>
# include <stdio.h>
# include <tchar.h>
# include "getopt.h"

# ifdef __cplusplus
#     define _GETOPT_THROW throw()
# else
#     define _GETOPT_THROW
# endif

enum ENUM_ORDERING
{
    REQUIRE_ORDER, PERMUTE, RETURN_IN_ORDER
};

struct _getopt_data
{
    int     optind;
    int     opterr;
    int     optopt;
    TCHAR* optarg;
    int     __initialized;
    TCHAR* __nextchar;
    int     __ordering;
    int     __posixly_correct;
    int     __first_nonopt;
    int     __last_nonopt;
};
static struct _getopt_data  getopt_data = { 0, 0, 0, NULL, 0, NULL, 0, 0, 0, 0 };

TCHAR* optarg = NULL;
int     optind = 1;
int     opterr = 1;
int     optopt = _T('?');

static void exchange(TCHAR** argv, struct _getopt_data* d)
{
    int     bottom = d->__first_nonopt;
    int     middle = d->__last_nonopt;
    int     top = d->optind;
    TCHAR* tem;
    while (top > middle && middle > bottom)
    {
        if (top - middle > middle - bottom)
        {
            int len = middle - bottom;
            register int i;
            for (i = 0; i < len; i++)
            {
                tem = argv[bottom + i];
                argv[bottom + i] = argv[top - (middle - bottom) + i];
                argv[top - (middle - bottom) + i] = tem;
            }
            top -= len;
        }
        else
        {
            int len = top - middle;
            register int i;
            for (i = 0; i < len; i++)
            {
                tem = argv[bottom + i];
                argv[bottom + i] = argv[middle + i];
                argv[middle + i] = tem;
            }
            bottom += len;
        }
    }
    d->__first_nonopt += (d->optind - d->__last_nonopt);
    d->__last_nonopt = d->optind;
}

static const TCHAR* _getopt_initialize(const TCHAR* optstring
    , struct _getopt_data* d
    , int posixly_correct)
{
    d->__first_nonopt = d->__last_nonopt = d->optind;
    d->__nextchar = NULL;
    d->__posixly_correct = posixly_correct
        | !!_tgetenv(_T("POSIXLY_CORRECT"));
    if (optstring[0] == _T('-'))
    {
        d->__ordering = RETURN_IN_ORDER;
        ++optstring;
    }
    else if (optstring[0] == _T('+'))
    {
        d->__ordering = REQUIRE_ORDER;
        ++optstring;
    }
    else if (d->__posixly_correct)
    {
        d->__ordering = REQUIRE_ORDER;
    }
    else
    {
        d->__ordering = PERMUTE;
    }
    return optstring;
}

int _getopt_internal_r(int argc
    , TCHAR* const* argv
    , const TCHAR* optstring
    , const struct option* longopts
    , int* longind
    , int long_only
    , struct _getopt_data* d
    , int posixly_correct)
{
    int print_errors = d->opterr;
    if (argc < 1)
    {
        return -1;
    }
    d->optarg = NULL;
    if (d->optind == 0 || !d->__initialized)
    {
        if (d->optind == 0)
        {
            d->optind = 1;
        }
        optstring = _getopt_initialize(optstring, d, posixly_correct);
        d->__initialized = 1;
    }
    else if (optstring[0] == _T('-') || optstring[0] == _T('+'))
    {
        optstring++;
    }
    if (optstring[0] == _T(':'))
    {
        print_errors = 0;
    }
    if (d->__nextchar == NULL || *d->__nextchar == _T('\0'))
    {
        if (d->__last_nonopt > d->optind)
        {
            d->__last_nonopt = d->optind;
        }
        if (d->__first_nonopt > d->optind)
        {
            d->__first_nonopt = d->optind;
        }
        if (d->__ordering == PERMUTE)
        {
            if (d->__first_nonopt != d->__last_nonopt
                && d->__last_nonopt != d->optind)
            {
                exchange((TCHAR**)argv, d);
            }
            else if (d->__last_nonopt != d->optind)
            {
                d->__first_nonopt = d->optind;
            }
            while (d->optind
                < argc
                && (argv[d->optind][0] != _T('-')
                    || argv[d->optind][1] == _T('\0')))
            {
                d->optind++;
            }
            d->__last_nonopt = d->optind;
        }
        if (d->optind != argc && !_tcscmp(argv[d->optind], _T("--")))
        {
            d->optind++;
            if (d->__first_nonopt != d->__last_nonopt
                && d->__last_nonopt != d->optind)
            {
                exchange((TCHAR**)argv, d);
            }
            else if (d->__first_nonopt == d->__last_nonopt)
            {
                d->__first_nonopt = d->optind;
            }
            d->__last_nonopt = argc;
            d->optind = argc;
        }
        if (d->optind == argc)
        {
            if (d->__first_nonopt != d->__last_nonopt)
            {
                d->optind = d->__first_nonopt;
            }
            return -1;
        }
        if ((argv[d->optind][0] != _T('-')
            || argv[d->optind][1] == _T('\0')))
        {
            if (d->__ordering == REQUIRE_ORDER)
            {
                return -1;
            }
            d->optarg = argv[d->optind++];
            return 1;
        }
        d->__nextchar = (argv[d->optind]
            + 1 + (longopts != NULL
                && argv[d->optind][1] == _T('-')));
    }
    if (longopts != NULL
        && (argv[d->optind][1] == _T('-')
            || (long_only && (argv[d->optind][2]
                || !_tcschr(optstring, argv[d->optind][1]))
                )
            )
        )
    {
        TCHAR* nameend;
        const struct option* p;
        const struct option* pfound = NULL;
        int                     exact = 0;
        int                     ambig = 0;
        int                     indfound = -1;
        int                     option_index;
        for (nameend = d->__nextchar;
            *nameend && *nameend != _T('=');
            nameend++)
            ;
        for (p = longopts, option_index = 0; p->name; p++, option_index++)
        {
            if (!_tcsncmp(p->name, d->__nextchar, nameend - d->__nextchar))
            {
                if ((unsigned int)(nameend - d->__nextchar)
                    == (unsigned int)_tcslen(p->name))
                {
                    pfound = p;
                    indfound = option_index;
                    exact = 1;
                    break;
                }
                else if (pfound == NULL)
                {
                    pfound = p;
                    indfound = option_index;
                }
                else if (long_only
                    || pfound->has_arg != p->has_arg
                    || pfound->flag != p->flag
                    || pfound->val != p->val)
                {
                    ambig = 1;
                }
            }
        }
        if (ambig && !exact)
        {
            if (print_errors)
            {
                _ftprintf(stderr
                    , _T("%s: option '%s' is ambiguous\n")
                    , argv[0]
                    , argv[d->optind]);
            }
            d->__nextchar += _tcslen(d->__nextchar);
            d->optind++;
            d->optopt = 0;
            return _T('?');
        }
        if (pfound != NULL)
        {
            option_index = indfound;
            d->optind++;
            if (*nameend)
            {
                if (pfound->has_arg)
                {
                    d->optarg = nameend + 1;
                }
                else
                {
                    if (print_errors)
                    {
                        if (argv[d->optind - 1][1] == _T('-'))
                        {
                            _ftprintf(stderr
                                , _T("%s: option '--%s' doesn't allow ")
                                _T("an argument\n")
                                , argv[0]
                                , pfound->name);
                        }
                        else
                        {
                            _ftprintf(stderr
                                , _T("%s: option '%c%s' doesn't allow ")
                                _T("an argument\n")
                                , argv[0]
                                , argv[d->optind - 1][0]
                                , pfound->name);
                        }
                    }
                    d->__nextchar += _tcslen(d->__nextchar);
                    d->optopt = pfound->val;
                    return _T('?');
                }
            }
            else if (pfound->has_arg == 1)
            {
                if (d->optind < argc)
                {
                    d->optarg = argv[d->optind++];
                }
                else
                {
                    if (print_errors)
                    {
                        _ftprintf(stderr
                            , _T("%s: option '--%s' requires an ")
                            _T("argument\n")
                            , argv[0]
                            , pfound->name);
                    }
                    d->__nextchar += _tcslen(d->__nextchar);
                    d->optopt = pfound->val;
                    return optstring[0] == _T(':') ? _T(':') : _T('?');
                }
            }
            d->__nextchar += _tcslen(d->__nextchar);
            if (longind != NULL)
            {
                *longind = option_index;
            }
            if (pfound->flag)
            {
                *(pfound->flag) = pfound->val;
                return 0;
            }
            return pfound->val;
        }
        if (!long_only
            || argv[d->optind][1]
            == _T('-')
            || _tcschr(optstring
                , *d->__nextchar)
            == NULL)
        {
            if (print_errors)
            {
                if (argv[d->optind][1] == _T('-'))
                {
                    /* --option */
                    _ftprintf(stderr
                        , _T("%s: unrecognized option '--%s'\n")
                        , argv[0]
                        , d->__nextchar);
                }
                else
                {
                    /* +option or -option */
                    _ftprintf(stderr
                        , _T("%s: unrecognized option '%c%s'\n")
                        , argv[0]
                        , argv[d->optind][0]
                        , d->__nextchar);
                }
            }
            d->__nextchar = (TCHAR*)_T("");
            d->optind++;
            d->optopt = 0;
            return _T('?');
        }
    }
    {
        TCHAR   c = *d->__nextchar++;
        TCHAR* temp = (TCHAR*)_tcschr(optstring, c);
        if (*d->__nextchar == _T('\0'))
        {
            ++d->optind;
        }
        if (temp == NULL || c == _T(':') || c == _T(';'))
        {
            if (print_errors)
            {
                _ftprintf(stderr
                    , _T("%s: invalid option -- '%c'\n")
                    , argv[0]
                    , c);
            }
            d->optopt = c;
            return _T('?');
        }
        if (temp[0] == _T('W') && temp[1] == _T(';'))
        {
            TCHAR* nameend;
            const struct option* p;
            const struct option* pfound = NULL;
            int                     exact = 0;
            int                     ambig = 0;
            int                     indfound = 0;
            int                     option_index;
            if (*d->__nextchar != _T('\0'))
            {
                d->optarg = d->__nextchar;
                d->optind++;
            }
            else if (d->optind == argc)
            {
                if (print_errors)
                {
                    _ftprintf(stderr
                        , _T("%s: option requires an argument -- '%c'\n")
                        , argv[0]
                        , c);
                }
                d->optopt = c;
                if (optstring[0] == _T(':'))
                {
                    c = _T(':');
                }
                else
                {
                    c = _T('?');
                }
                return c;
            }
            else
            {
                d->optarg = argv[d->optind++];
            }
            for (d->__nextchar = nameend = d->optarg;
                *nameend && *nameend != _T('=');
                nameend++)
                ;
            for (p = longopts, option_index = 0;
                p->name;
                p++, option_index++)
            {
                if (!_tcsncmp(p->name
                    , d->__nextchar
                    , nameend - d->__nextchar))
                {
                    if ((unsigned int)(nameend - d->__nextchar)
                        == _tcslen(p->name))
                    {
                        pfound = p;
                        indfound = option_index;
                        exact = 1;
                        break;
                    }
                    else if (pfound == NULL)
                    {
                        pfound = p;
                        indfound = option_index;
                    }
                    else if (long_only
                        || pfound->has_arg != p->has_arg
                        || pfound->flag != p->flag
                        || pfound->val != p->val)
                    {
                        ambig = 1;
                    }
                }
            }
            if (ambig && !exact)
            {
                if (print_errors)
                {
                    _ftprintf(stderr
                        , _T("%s: option '-W %s' is ambiguous\n")
                        , argv[0]
                        , d->optarg);
                }
                d->__nextchar += _tcslen(d->__nextchar);
                d->optind++;
                return _T('?');
            }
            if (pfound != NULL)
            {
                option_index = indfound;
                if (*nameend)
                {
                    if (pfound->has_arg)
                    {
                        d->optarg = nameend + 1;
                    }
                    else
                    {
                        if (print_errors)
                        {
                            _ftprintf(stderr
                                , _T("%s: option '-W %s' doesn't allow ")
                                _T("an argument\n")
                                , argv[0]
                                , pfound->name);
                        }
                        d->__nextchar += _tcslen(d->__nextchar);
                        return _T('?');
                    }
                }
                else if (pfound->has_arg == 1)
                {
                    if (d->optind < argc)
                    {
                        d->optarg = argv[d->optind++];
                    }
                    else
                    {
                        if (print_errors)
                        {
                            _ftprintf(stderr
                                , _T("%s: option '-W %s' requires an ")
                                _T("argument\n")
                                , argv[0]
                                , pfound->name);
                        }
                        d->__nextchar += _tcslen(d->__nextchar);
                        return optstring[0] == _T(':') ? _T(':') : _T('?');
                    }
                }
                else
                {
                    d->optarg = NULL;
                }
                d->__nextchar += _tcslen(d->__nextchar);
                if (longind != NULL)
                {
                    *longind = option_index;
                }
                if (pfound->flag)
                {
                    *(pfound->flag) = pfound->val;
                    return 0;
                }
                return pfound->val;
            }
            d->__nextchar = NULL;
            return _T('W');
        }
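        // Short option that takes an argument: a single ':' after the option letter
        // in optstring means the argument is required, "::" means it is optional
        // (an optional argument is taken only when it is attached to the option).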
        if (temp[1] == _T(':'))
        {
            if (temp[2] == _T(':'))
            {
                if (*d->__nextchar != _T('\0'))
                {
                    d->optarg = d->__nextchar;
                    d->optind++;
                }
                else
                {
                    d->optarg = NULL;
                }
                d->__nextchar = NULL;
            }
            else
            {
                if (*d->__nextchar != _T('\0'))
                {
                    d->optarg = d->__nextchar;
                    d->optind++;
                }
                else if (d->optind == argc)
                {
                    if (print_errors)
                    {
                        _ftprintf(stderr
                            , _T("%s: option requires an ")
                            _T("argument -- '%c'\n")
                            , argv[0]
                            , c);
                    }
                    d->optopt = c;
                    if (optstring[0] == _T(':'))
                    {
                        c = _T(':');
                    }
                    else
                    {
                        c = _T('?');
                    }
                }
                else
                {
                    d->optarg = argv[d->optind++];
                }
                d->__nextchar = NULL;
            }
        }
        return c;
    }
}

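// Non-reentrant wrapper: seeds the shared getopt_data from the public globals,
// runs the reentrant core, then publishes optind/optarg/optopt back to the globals.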
int _getopt_internal(int argc
    , TCHAR* const* argv
    , const TCHAR* optstring
    , const struct option* longopts
    , int* longind
    , int long_only
    , int posixly_correct)
{
    int result;
    getopt_data.optind = optind;
    getopt_data.opterr = opterr;
    result = _getopt_internal_r(argc
        , argv
        , optstring
        , longopts
        , longind
        , long_only
        , &getopt_data
        , posixly_correct);
    optind = getopt_data.optind;
    optarg = getopt_data.optarg;
    optopt = getopt_data.optopt;
    return result;
}

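// Classic getopt: parses short options only (no long-option table).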
int getopt(int argc, TCHAR* const* argv, const TCHAR* optstring) _GETOPT_THROW
{
    return _getopt_internal(argc
        , argv
        , optstring
        , (const struct option*)0
        , (int*)0
        , 0
        , 0);
}

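// getopt_long: also accepts long options written as "--name" or "--name=value".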
int getopt_long(int argc
    , TCHAR* const* argv
    , const TCHAR* options
    , const struct option* long_options
    , int* opt_index) _GETOPT_THROW
{
    return _getopt_internal(argc
        , argv
        , options
        , long_options
        , opt_index
        , 0
        , 0);
}

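// Reentrant variant of getopt_long: the caller supplies its own _getopt_data
// instead of relying on the shared global parser state.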
int _getopt_long_r(int argc
    , TCHAR* const* argv
    , const TCHAR* options
    , const struct option* long_options
    , int* opt_index
    , struct _getopt_data* d)
{
    return _getopt_internal_r(argc
        , argv
        , options
        , long_options
        , opt_index
        , 0
        , d
        , 0);
}

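// getopt_long_only: like getopt_long, but "-name" is first tried as a long option.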
int getopt_long_only(int argc
    , TCHAR* const* argv
    , const TCHAR* options
    , const struct option* long_options
    , int* opt_index) _GETOPT_THROW
{
    return _getopt_internal(argc
        , argv
        , options
        , long_options
        , opt_index
        , 1
        , 0);
}

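// Reentrant variant of getopt_long_only with caller-supplied parser state.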
int _getopt_long_only_r(int argc
    , TCHAR* const* argv
    , const TCHAR* options
    , const struct option* long_options
    , int* opt_index
    , struct _getopt_data* d)
{
    return _getopt_internal_r(argc
        , argv
        , options
        , long_options
        , opt_index
        , 1
        , d
        , 0);
}

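// Resets both the public globals and the internal parser state so a new argv
// can be parsed from the beginning.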
void getopt_reset()
{
    optarg = NULL;
    optind = 1;
    opterr = 1;
    optopt = _T('?');
    //
    getopt_data.optind = 0;
    getopt_data.opterr = 0;
    getopt_data.optopt = 0;
    getopt_data.optarg = NULL;
    getopt_data.__initialized = 0;
    getopt_data.__nextchar = NULL;
    getopt_data.__ordering = 0;
    getopt_data.__posixly_correct = 0;
    getopt_data.__first_nonopt = 0;
    getopt_data.__last_nonopt = 0;
}

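For completeness, here is a minimal sketch of how the getopt port above can be used to pass the image and model paths on the command line instead of hard-coding them. The option names (--model / --image), the header name "getopt.h", the numeric has_arg value and the use of _tmain are illustrative assumptions, not part of the original project:

// Minimal usage sketch for the TCHAR-based getopt port above.
// Assumption: the port is declared in "getopt.h"; --model/--image are example options.
#include <tchar.h>
#include "getopt.h"

int _tmain(int argc, TCHAR* argv[])
{
    const TCHAR* model_path = NULL;
    const TCHAR* img_path = NULL;

    // has_arg == 1 means the option requires an argument
    // (required_argument in the standard GNU headers).
    static const struct option long_opts[] = {
        { _T("model"), 1, NULL, _T('m') },
        { _T("image"), 1, NULL, _T('i') },
        { NULL, 0, NULL, 0 }
    };

    int c;
    while ((c = getopt_long(argc, argv, _T("m:i:"), long_opts, NULL)) != -1)
    {
        switch (c)
        {
        case _T('m'): model_path = optarg; break;
        case _T('i'): img_path = optarg; break;
        default: return 1;   // getopt has already printed a diagnostic
        }
    }

    // model_path and img_path can now be handed to the inference code.
    return 0;
}

Invoked as, for example, Project1.exe --model yolov8n-seg.onnx --image street.jpg, this fills in both paths before inference starts. In a non-Unicode build TCHAR is plain char, so the parsed strings can be passed directly to the existing code.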
Once the steps above are complete, the project can be built and the prediction can be run.

Note: OpenCV 4.8.1 is the recommended version; other OpenCV versions may report errors at build or run time.
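To quickly confirm which OpenCV build the project is actually linking against, a small runtime check can help (a minimal sketch; the 4.8 check simply mirrors the recommendation above):

// Minimal sketch: print and check the linked OpenCV version at runtime.
#include <opencv2/opencv.hpp>
#include <iostream>

int main()
{
    // cv::getVersionString() reports the version of the OpenCV library that is
    // actually linked, which is what matters when several builds are installed.
    std::cout << "OpenCV version: " << cv::getVersionString() << std::endl;

    // Check the version macros from the headers; 4.8.x is the version recommended above.
    if (CV_VERSION_MAJOR != 4 || CV_VERSION_MINOR != 8)
    {
        std::cout << "Warning: headers are not from OpenCV 4.8.x." << std::endl;
    }
    return 0;
}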
