How to convert a YOLOv9 model to blob format for OAK cameras

Editor: OAK China
First published on: oakchina.cn
The content may be updated from time to time; the version at the original link above is always the latest.

Hello everyone, this is OAK China, and I'm Ashely.

Focused on technology, focused on sharing.

Things have been really busy lately, so it has been a while since my last post. This month a reader asked how to deploy YOLOv9 on an OAK camera, so here is a tutorial for everyone.

1. For conversion and usage tutorials for other YOLO versions, please refer to our other tutorials.
2. For detection-type YOLO models we recommend the online converter (link); if the online conversion fails, fall back to the local conversion described in this tutorial.

▌Converting .pt to .onnx

Use the script below (place it in the YOLOv9 root directory) to convert the PyTorch model to an ONNX model; if openvino_dev is installed, it can also convert the result further to an OpenVINO model:

Example usage:

python export_onnx.py -w <path_to_model>.pt -imgsz 640 
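
If you also want the blob produced in the same run, the script additionally accepts -b to enable blob export, -t to choose the conversion tool (blobconverter, docker or local) and -sh to set the shave count, for example:

python export_onnx.py -w <path_to_model>.pt -imgsz 640 -b -t blobconverter -sh 6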

export_onnx.py :

#!/usr/bin/env python3
# -*- coding:utf-8 -*-
import argparse
import json
import logging
import math
import os
import platform
import sys
import time
import warnings
from io import BytesIO
from pathlib import Path

import torch
from torch import nn

warnings.filterwarnings("ignore")

FILE = Path(__file__).resolve()
ROOT = FILE.parents[0]  # YOLO root directory
if str(ROOT) not in sys.path:
    sys.path.append(str(ROOT))  # add ROOT to PATH
if platform.system() != "Windows":
    ROOT = Path(os.path.relpath(ROOT, Path.cwd()))  # relative

from models.experimental import attempt_load
from models.yolo import DDetect, Detect, DualDDetect, DualDetect, TripleDDetect, TripleDetect
from utils.torch_utils import select_device

try:
    from rich import print
    from rich.logging import RichHandler

    logging.basicConfig(
        level="INFO",
        format="%(message)s",
        datefmt="[%X]",
        handlers=[
            RichHandler(
                rich_tracebacks=False,
                show_path=False,
            )
        ],
    )
except ImportError:
    logging.basicConfig(
        level="INFO",
        format="%(asctime)s\t%(levelname)s\t%(message)s",
        datefmt="[%X]",
    )


class DetectV9(nn.Module):
    """YOLOv9 Detect head for detection models"""

    dynamic = False  # force grid reconstruction
    export = False  # export mode
    shape = None
    anchors = torch.empty(0)  # init
    strides = torch.empty(0)  # init

    def __init__(self, old_detect):
        super().__init__()
        self.nc = old_detect.nc  # number of classes
        self.nl = old_detect.nl  # number of detection layers
        self.reg_max = old_detect.reg_max  # DFL channels (ch[0] // 16 to scale 4/8/12/16/20 for n/s/m/l/x)
        self.no = old_detect.no  # number of outputs per anchor
        self.stride = old_detect.stride  # strides computed during build

        self.cv2 = old_detect.cv2
        self.cv3 = old_detect.cv3
        self.dfl = old_detect.dfl
        self.f = old_detect.f
        self.i = old_detect.i

    def forward(self, x):
        shape = x[0].shape  # BCHW

        d1 = [torch.cat((self.cv2[i](x[i]), self.cv3[i](x[i])), 1) for i in range(self.nl)]

        box, cls = torch.cat([xi.view(shape[0], self.no, -1) for xi in d1], 2).split((self.reg_max * 4, self.nc), 1)
        box = self.dfl(box)
        cls_output = cls.sigmoid()
        # Get the max
        conf, _ = cls_output.max(1, keepdim=True)
        # Concat
        y = torch.cat([box, conf, cls_output], dim=1)
        # Split to 3 channels
        outputs = []
        start, end = 0, 0
        for xi in x:
            end += xi.shape[-2] * xi.shape[-1]
            outputs.append(y[:, :, start:end].view(xi.shape[0], -1, xi.shape[-2], xi.shape[-1]))
            start += xi.shape[-2] * xi.shape[-1]

        return outputs

    def bias_init(self):
        # Initialize Detect() biases, WARNING: requires stride availability
        m = self  # self.model[-1]  # Detect() module

        for a, b, s in zip(m.cv2, m.cv3, m.stride):  # from
            a[-1].bias.data[:] = 1.0  # box
            b[-1].bias.data[: m.nc] = math.log(5 / m.nc / (640 / s) ** 2)  # cls (.01 objects, 80 classes, 640 img)


class DualDetectV9(DetectV9):
    def __init__(self, old_detect):
        super().__init__(old_detect)

        self.cv4 = old_detect.cv4
        self.cv5 = old_detect.cv5
        self.dfl2 = old_detect.dfl2

    def forward(self, x):
        shape = x[0].shape  # BCHW

        d2 = [torch.cat((self.cv4[i](x[self.nl + i]), self.cv5[i](x[self.nl + i])), 1) for i in range(self.nl)]

        box2, cls2 = torch.cat([di.view(shape[0], self.no, -1) for di in d2], 2).split((self.reg_max * 4, self.nc), 1)
        box2 = self.dfl2(box2)
        cls_output2 = cls2.sigmoid()
        # Get the max
        conf2, _ = cls_output2.max(1, keepdim=True)
        # Concat
        y2 = torch.cat([box2, conf2, cls_output2], dim=1)

        # Split to 3 channels
        outputs2 = []
        start2, end2 = 0, 0
        for _i, xi in enumerate(x[3:]):
            end2 += xi.shape[-2] * xi.shape[-1]
            outputs2.append(y2[:, :, start2:end2].view(xi.shape[0], -1, xi.shape[-2], xi.shape[-1]))
            start2 += xi.shape[-2] * xi.shape[-1]

        return outputs2

    def bias_init(self):
        # Initialize Detect() biases, WARNING: requires stride availability
        m = self  # self.model[-1]  # Detect() module

        for a, b, s in zip(m.cv2, m.cv3, m.stride):  # from
            a[-1].bias.data[:] = 1.0  # box
            b[-1].bias.data[: m.nc] = math.log(
                5 / m.nc / (640 / s) ** 2
            )  # cls (5 objects and 80 classes per 640 image)
        for a, b, s in zip(m.cv4, m.cv5, m.stride):  # from
            a[-1].bias.data[:] = 1.0  # box
            b[-1].bias.data[: m.nc] = math.log(
                5 / m.nc / (640 / s) ** 2
            )  # cls (5 objects and 80 classes per 640 image)


class TripleDetectV9(DualDetectV9):
    def __init__(self, old_detect):
        super().__init__(old_detect)

        self.cv6 = old_detect.cv6
        self.cv7 = old_detect.cv7
        self.dfl3 = old_detect.dfl3

    def forward(self, x):
        shape = x[0].shape  # BCHW

        d3 = [
            torch.cat(
                (self.cv6[i](x[self.nl * 2 + i]), self.cv7[i](x[self.nl * 2 + i])),
                1,
            )
            for i in range(self.nl)
        ]

        box3, cls3 = torch.cat([di.view(shape[0], self.no, -1) for di in d3], 2).split((self.reg_max * 4, self.nc), 1)
        box3 = self.dfl3(box3)
        cls_output3 = cls3.sigmoid()
        # Get the max
        conf3, _ = cls_output3.max(1, keepdim=True)
        # Concat
        y3 = torch.cat([box3, conf3, cls_output3], dim=1)

        # Split to 3 channels
        outputs3 = []
        start3, end3 = 0, 0
        for _i, xi in enumerate(x[6:]):
            end3 += xi.shape[-2] * xi.shape[-1]
            outputs3.append(y3[:, :, start3:end3].view(xi.shape[0], -1, xi.shape[-2], xi.shape[-1]))
            start3 += xi.shape[-2] * xi.shape[-1]

        return outputs3

    def bias_init(self):
        # Initialize Detect() biases, WARNING: requires stride availability
        m = self  # self.model[-1]  # Detect() module

        for a, b, s in zip(m.cv2, m.cv3, m.stride):  # from
            a[-1].bias.data[:] = 1.0  # box
            b[-1].bias.data[: m.nc] = math.log(
                5 / m.nc / (640 / s) ** 2
            )  # cls (5 objects and 80 classes per 640 image)
        for a, b, s in zip(m.cv4, m.cv5, m.stride):  # from
            a[-1].bias.data[:] = 1.0  # box
            b[-1].bias.data[: m.nc] = math.log(
                5 / m.nc / (640 / s) ** 2
            )  # cls (5 objects and 80 classes per 640 image)
        for a, b, s in zip(m.cv6, m.cv7, m.stride):  # from
            a[-1].bias.data[:] = 1.0  # box
            b[-1].bias.data[: m.nc] = math.log(
                5 / m.nc / (640 / s) ** 2
            )  # cls (5 objects and 80 classes per 640 image)


def parse_args():
    parser = argparse.ArgumentParser(
        description="Tool for converting Yolov9 models to the blob format used by OAK",
        formatter_class=argparse.ArgumentDefaultsHelpFormatter,
    )
    parser.add_argument(
        "-m",
        "-i",
        "-w",
        "--input_model",
        type=Path,
        required=True,
        help="weights path",
    )
    parser.add_argument(
        "-imgsz",
        "--img-size",
        nargs="+",
        type=int,
        default=[640, 640],
        help="image size",
    )  # height, width
    parser.add_argument("-op", "--opset", type=int, default=12, help="opset version")

    parser.add_argument(
        "-n",
        "--name",
        type=str,
        help="The name of the model to be saved, none means using the same name as the input model",
    )
    parser.add_argument(
        "-o",
        "--output_dir",
        type=Path,
        help="Directory for saving files, none means using the same path as the input model",
    )
    parser.add_argument(
        "-b",
        "--blob",
        action="store_true",
        help="OAK Blob export",
    )
    parser.add_argument(
        "-s",
        "--spatial_detection",
        action="store_true",
        help="Inference with depth information",
    )
    parser.add_argument(
        "-sh",
        "--shaves",
        type=int,
        help="Number of SHAVE cores to compile the blob with",
    )
    parser.add_argument(
        "-t",
        "--convert_tool",
        type=str,
        help="Which tool is used to convert, docker: should already have docker (https://docs.docker.com/get-docker/) and docker-py (pip install docker) installed; blobconverter: uses an online server to convert the model and should already have blobconverter (pip install blobconverter); local: use openvino-dev (pip install openvino-dev) and openvino 2022.1 (https://docs.oakchina.cn/en/latest/pages/Advanced/Neural_networks/local_convert_openvino.html#id2) to convert",
        default="blobconverter",
        choices=["docker", "blobconverter", "local"],
    )

    args = parser.parse_args()
    args.input_model = args.input_model.resolve().absolute()
    if args.name is None:
        args.name = args.input_model.stem

    if args.output_dir is None:
        args.output_dir = args.input_model.parent

    args.img_size *= 2 if len(args.img_size) == 1 else 1  # expand

    if args.shaves is None:
        args.shaves = 5 if args.spatial_detection else 6

    return args


def export(input_model, img_size, output_model, opset, **kwargs):
    t = time.time()

    # Load PyTorch model
    device = select_device("cpu")
    # load FP32 model
    model = attempt_load(input_model, device=device, inplace=True, fuse=True)
    labels = model.module.names if hasattr(model, "module") else model.names  # get class names
    labels = labels if isinstance(labels, list) else list(labels.values())

    # check num classes and labels
    assert model.nc == len(labels), f"Model class count {model.nc} != len(names) {len(labels)}"

    # Replace with the custom Detection Head
    if isinstance(model.model[-1], (Detect, DDetect)):
        logging.info("Replacing model.model[-1] with DetectV9")
        model.model[-1] = DetectV9(model.model[-1])
    elif isinstance(model.model[-1], (DualDetect, DualDDetect)):
        logging.info("Replacing model.model[-1] with DualDetectV9")
        model.model[-1] = DualDetectV9(model.model[-1])
    elif isinstance(model.model[-1], (TripleDetect, TripleDDetect)):
        logging.info("Replacing model.model[-1] with TripleDetectV9")
        model.model[-1] = TripleDetectV9(model.model[-1])

    num_branches = model.model[-1].nl

    # Input
    img = torch.zeros(1, 3, *img_size).to(device)  # image size(1,3,320,320) Detection

    model.eval()

    model(img)  # dry runs

    # ONNX export
    try:
        import onnx

        print()
        logging.info(f"Starting ONNX export with onnx {onnx.__version__}...")
        output_list = ["output%s_yolov6r2" % (i + 1) for i in range(num_branches)]
        with BytesIO() as f:
            torch.onnx.export(
                model,
                img,
                f,
                verbose=False,
                opset_version=opset,
                input_names=["images"],
                output_names=output_list,
            )

            # Checks
            onnx_model = onnx.load_from_string(f.getvalue())  # load onnx model
            onnx.checker.check_model(onnx_model)  # check onnx model

        try:
            import onnxsim

            logging.info("Starting to simplify ONNX...")
            onnx_model, check = onnxsim.simplify(onnx_model)
            assert check, "assert check failed"

        except ImportError:
            logging.warning(
                "onnxsim is not found, if you want to simplify the onnx, "
                + "you should install it:\n\t"
                + "pip install -U onnxsim onnxruntime\n"
                + "then use:\n\t"
                + f'python -m onnxsim "{output_model}" "{output_model}"'
            )
        except Exception:
            logging.exception("Simplifier failure")

        onnx.save(onnx_model, output_model)
        logging.info(f"ONNX export success, saved as:\n\t{output_model}")

    except Exception:
        logging.exception("ONNX export failure")

    # generate anchors and sides
    anchors = []

    # generate masks
    masks = {}

    logging.info(f"anchors:\n\t{anchors}")
    logging.info(f"anchor_masks:\n\t{masks}")
    export_json = output_model.with_suffix(".json")
    export_json.write_text(
        json.dumps(
            {
                "nn_config": {
                    "output_format": "detection",
                    "NN_family": "YOLO",
                    "input_size": f"{img_size[0]}x{img_size[1]}",
                    "NN_specific_metadata": {
                        "classes": model.nc,
                        "coordinates": 4,
                        "anchors": anchors,
                        "anchor_masks": masks,
                        "iou_threshold": 0.3,
                        "confidence_threshold": 0.5,
                    },
                },
                "mappings": {"labels": labels},
            },
            indent=4,
        )
    )
    logging.info(f"Anchors data export success, saved as:\n\t{export_json}")

    # Finish
    logging.info("Export complete (%.2fs).\n" % (time.time() - t))


def convert(convert_tool, output_model, shaves, output_dir, name, **kwargs):
    t = time.time()

    export_dir: Path = output_dir.joinpath(name + "_openvino")
    export_dir.mkdir(parents=True, exist_ok=True)

    export_xml = export_dir.joinpath(name + ".xml")
    export_blob = export_dir.joinpath(name + ".blob")

    if convert_tool == "blobconverter":
        import blobconverter

        blobconverter.from_onnx(
            model=str(output_model),
            data_type="FP16",
            shaves=shaves,
            use_cache=False,
            version="2021.4",
            output_dir=export_dir,
            optimizer_params=[
                "--scale=255",
                "--reverse_input_channel",
                # "--use_new_frontend",
            ],
            # download_ir=True,
        )
        """
        with ZipFile(blob_path, "r", ZIP_LZMA) as zip_obj:
            for name in zip_obj.namelist():
                zip_obj.extract(
                    name,
                    export_dir,
                )
        blob_path.unlink()
        """
    elif convert_tool == "docker":
        import docker

        export_dir = Path("/io").joinpath(export_dir.name)
        export_xml = export_dir.joinpath(name + ".xml")
        export_blob = export_dir.joinpath(name + ".blob")

        client = docker.from_env()
        image = client.images.pull("openvino/ubuntu20_dev", tag="2022.3.1")
        docker_output = client.containers.run(
            image=image.tags[0],
            command=f"bash -c \"mo -m {name}.onnx -n {name} -o {export_dir} --static_shape --reverse_input_channels --scale=255 --use_new_frontend && echo 'MYRIAD_ENABLE_MX_BOOT NO' | tee /tmp/myriad.conf >> /dev/null && /opt/intel/openvino/tools/compile_tool/compile_tool -m {export_xml} -o {export_blob} -ip U8 -VPU_NUMBER_OF_SHAVES {shaves} -VPU_NUMBER_OF_CMX_SLICES {shaves} -d MYRIAD -c /tmp/myriad.conf\"",
            remove=True,
            volumes=[
                f"{output_dir}:/io",
            ],
            working_dir="/io",
        )
        logging.info(docker_output.decode("utf8"))
    else:
        import subprocess as sp

        # OpenVINO export
        logging.info("Starting to export OpenVINO...")
        OpenVINO_cmd = f"mo --input_model {output_model} --output_dir {export_dir} --data_type FP16 --scale 255 --reverse_input_channel"
        try:
            sp.check_output(OpenVINO_cmd, shell=True)
            logging.info(f"OpenVINO export success, saved as {export_dir}")
        except sp.CalledProcessError:
            logging.exception("")
            logging.warning("OpenVINO export failure!")
            logging.warning(f"By the way, you can try to export OpenVINO use:\n\t{OpenVINO_cmd}")

        # OAK Blob export
        logging.info("Then you can try to export blob use:")
        blob_cmd = (
            "echo 'MYRIAD_ENABLE_MX_BOOT ON' | tee /tmp/myriad.conf && "
            + f"compile_tool -m {export_xml} -o {export_blob} -ip U8 -d MYRIAD -VPU_NUMBER_OF_SHAVES {shaves} -VPU_NUMBER_OF_CMX_SLICES {shaves} -c /tmp/myriad.conf"
        )
        logging.info(f"{blob_cmd}")

        logging.info(
            "compile_tool maybe in the path: /opt/intel/openvino/tools/compile_tool/compile_tool, if you install openvino 2022.1 with apt"
        )

    logging.info("Convert complete (%.2fs).\n" % (time.time() - t))


if __name__ == "__main__":
    args = parse_args()
    logging.info(args)
    print()
    output_model = args.output_dir / (args.name + ".onnx")

    export(output_model=output_model, **vars(args))
    if args.blob:
        convert(output_model=output_model, **vars(args))

You can use Netron to inspect the exported model structure.
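
Besides Netron, you can run a quick programmatic check with onnxruntime (a minimal sketch; it assumes the exported file is named yolov9-c.onnx and that onnxruntime is installed):

import numpy as np
import onnxruntime as ort

# Each output branch is named output{i}_yolov6r2 by the export script and
# should have 4 + 1 + num_classes channels at strides 8/16/32
# (80x80, 40x40, 20x20 feature maps for a 640x640 input).
sess = ort.InferenceSession("yolov9-c.onnx", providers=["CPUExecutionProvider"])
dummy = np.zeros((1, 3, 640, 640), dtype=np.float32)
for info, out in zip(sess.get_outputs(), sess.run(None, {"images": dummy})):
    print(info.name, out.shape)  # e.g. output1_yolov6r2 (1, 85, 80, 80)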

▌Conversion

Local conversion with OpenVINO

onnx -> openvino

mo is a command-line tool provided by openvino_dev 2022.1; install it with pip install openvino-dev

mo --input_model yolov9-c.onnx --scale=255 --reverse_input_channel

openvino -> blob

compile_tool is a tool shipped with the OpenVINO Runtime:

<path>/compile_tool -m yolov9-c.xml \
    -ip U8 -d MYRIAD \
    -VPU_NUMBER_OF_SHAVES 6 \
    -VPU_NUMBER_OF_CMX_SLICES 6
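
If you prefer to drive both steps from Python, here is a minimal sketch using subprocess (it assumes mo and compile_tool are on PATH and uses example file names):

import subprocess

# ONNX -> OpenVINO IR (FP16), mirroring the mo command above.
subprocess.run(
    ["mo", "--input_model", "yolov9-c.onnx", "--data_type", "FP16",
     "--scale", "255", "--reverse_input_channels"],
    check=True,
)
# OpenVINO IR -> .blob for the OAK's MYRIAD VPU, mirroring compile_tool above.
subprocess.run(
    ["compile_tool", "-m", "yolov9-c.xml", "-ip", "U8", "-d", "MYRIAD",
     "-VPU_NUMBER_OF_SHAVES", "6", "-VPU_NUMBER_OF_CMX_SLICES", "6"],
    check=True,
)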

Online conversion

blobconverter web page: http://blobconverter.luxonis.com/

  • Open the web page and configure the conversion as follows:

  1. Select the onnx model
  2. Set optimizer_params to --data_type=FP16 --scale=255 --reverse_input_channel
  3. Set shaves to 6
  4. Convert

blobconverter Python code:

import blobconverter

blobconverter.from_onnx(
    "yolov9-c.onnx",
    optimizer_params=[
        "--scale=255",
        "--reverse_input_channel",
    ],
    shaves=6,
)

blobconverter CLI

blobconverter --onnx yolov9-c.onnx -sh 6 -o . --optimizer-params "scale=255 --reverse_input_channel"

▌DepthAI example

Correct decoding requires the following configurable, network-specific parameters:

  • setNumClasses – number of YOLO detection classes
  • setIouThreshold – IoU threshold
  • setConfidenceThreshold – confidence threshold; detections below this value are filtered out

# coding=utf-8
import cv2
import depthai as dai
import numpy as np

numClasses = 80
model = dai.OpenVINO.Blob("yolov9-c.blob")
dim = next(iter(model.networkInputs.values())).dims
W, H = dim[:2]

output_name, output_tenser = next(iter(model.networkOutputs.items()))
if "yolov6" in output_name:
    numClasses = output_tenser.dims[2] - 5
else:
    numClasses = output_tenser.dims[2] // 3 - 5

labelMap = [
    # "class_1","class_2","..."
    "class_%s" % i
    for i in range(numClasses)
]

# Create pipeline
pipeline = dai.Pipeline()

# Define sources and outputs
camRgb = pipeline.create(dai.node.ColorCamera)
detectionNetwork = pipeline.create(dai.node.YoloDetectionNetwork)
xoutRgb = pipeline.create(dai.node.XLinkOut)
xoutNN = pipeline.create(dai.node.XLinkOut)

xoutRgb.setStreamName("image")
xoutNN.setStreamName("nn")

# Properties
camRgb.setPreviewSize(W, H)
camRgb.setResolution(dai.ColorCameraProperties.SensorResolution.THE_1080_P)
camRgb.setInterleaved(False)
camRgb.setColorOrder(dai.ColorCameraProperties.ColorOrder.BGR)

# Network specific settings
detectionNetwork.setBlob(model)
detectionNetwork.setConfidenceThreshold(0.5)

# Yolo specific parameters
detectionNetwork.setNumClasses(numClasses)
detectionNetwork.setCoordinateSize(4)
detectionNetwork.setAnchors([])
detectionNetwork.setAnchorMasks({})
detectionNetwork.setIouThreshold(0.5)

# Linking
camRgb.preview.link(detectionNetwork.input)
camRgb.preview.link(xoutRgb.input)
detectionNetwork.out.link(xoutNN.input)

# Connect to device and start pipeline
with dai.Device(pipeline) as device:
    # Output queues will be used to get the rgb frames and nn data from the outputs defined above
    imageQueue = device.getOutputQueue(name="image", maxSize=4, blocking=False)
    detectQueue = device.getOutputQueue(name="nn", maxSize=4, blocking=False)

    frame = None
    detections = []

    # nn data, being the bounding box locations, are in <0..1> range - they need to be normalized with frame width/height
    def frameNorm(frame, bbox):
        normVals = np.full(len(bbox), frame.shape[0])
        normVals[::2] = frame.shape[1]
        return (np.clip(np.array(bbox), 0, 1) * normVals).astype(int)

    def drawText(frame, text, org, color=(255, 255, 255), thickness=1):
        cv2.putText(
            frame, text, org, cv2.FONT_HERSHEY_SIMPLEX, 0.5, (0, 0, 0), thickness + 3, cv2.LINE_AA
        )
        cv2.putText(
            frame, text, org, cv2.FONT_HERSHEY_SIMPLEX, 0.5, color, thickness, cv2.LINE_AA
        )

    def drawRect(frame, topLeft, bottomRight, color=(255, 255, 255), thickness=1):
        cv2.rectangle(frame, topLeft, bottomRight, (0, 0, 0), thickness + 3)
        cv2.rectangle(frame, topLeft, bottomRight, color, thickness)

    def displayFrame(name, frame):
        color = (128, 128, 128)
        for detection in detections:
            bbox = frameNorm(
                frame, (detection.xmin, detection.ymin, detection.xmax, detection.ymax)
            )
            drawText(
                frame=frame,
                text=labelMap[detection.label],
                org=(bbox[0] + 10, bbox[1] + 20),
            )
            drawText(
                frame=frame,
                text=f"{detection.confidence:.2%}",
                org=(bbox[0] + 10, bbox[1] + 35),
            )
            drawRect(
                frame=frame,
                topLeft=(bbox[0], bbox[1]),
                bottomRight=(bbox[2], bbox[3]),
                color=color,
            )
        # Show the frame
        cv2.imshow(name, frame)

    while True:
        imageQueueData = imageQueue.tryGet()
        detectQueueData = detectQueue.tryGet()

        if imageQueueData is not None:
            frame = imageQueueData.getCvFrame()

        if detectQueueData is not None:
            detections = detectQueueData.detections

        if frame is not None:
            displayFrame("rgb", frame)

        if cv2.waitKey(1) == ord("q"):
            break
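
The export script also writes a .json sidecar next to the onnx file containing the decoding parameters and labels. Instead of hardcoding them as above, you can read that file and configure the node from it (a minimal sketch; the file names are examples):

import json
from pathlib import Path

import depthai as dai

# Load the sidecar written by export_onnx.py.
config = json.loads(Path("yolov9-c.json").read_text())
meta = config["nn_config"]["NN_specific_metadata"]
labelMap = config["mappings"]["labels"]  # class names for drawing

pipeline = dai.Pipeline()
detectionNetwork = pipeline.create(dai.node.YoloDetectionNetwork)
detectionNetwork.setBlobPath("yolov9-c.blob")  # example blob path
detectionNetwork.setNumClasses(meta["classes"])
detectionNetwork.setCoordinateSize(meta["coordinates"])
detectionNetwork.setAnchors(meta["anchors"])
detectionNetwork.setAnchorMasks(meta["anchor_masks"])
detectionNetwork.setIouThreshold(meta["iou_threshold"])
detectionNetwork.setConfidenceThreshold(meta["confidence_threshold"])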

▌References

https://docs.oakchina.cn/en/latest/
https://www.oakchina.cn/selection-guide/


OAK China
| Official distributor and technical service provider of OpenCV AI Kit in China
| Tracking the latest AI technology and product news

