YOLOv8 Improvement 009: Optimizing the YOLOv8 Detection Head with ASFF (Suited to Small-Object Detection Tasks)

Paper: Learning Spatial Fusion for Single-Shot Object Detection

Paper link: Paper - ASFF

Official source code: GitHub - GOATmessi8/ASFF

Introduction

Multi-scale feature fusion is a key technique for detecting objects across scales. FPN (Feature Pyramid Network) tackles it with a top-down pathway that simply combines high-level semantic features with low-level detail features, which improves detection. However, because this simple fusion ignores the representational inconsistency between feature maps at different pyramid levels, it can introduce conflicting information and cap the gains from fusion. ASFF (Adaptively Spatial Feature Fusion) instead learns a dynamic weighting that fuses features adaptively across scales and spatial positions, suppressing the conflicts between levels and improving multi-scale detection. This design reflects feature-fusion theory's attention to both inter-level differences and spatial adaptivity.
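To make the mechanism concrete, the fusion rule from the ASFF paper can be written as follows, where $x^{n\rightarrow l}_{ij}$ denotes the feature vector at position $(i, j)$ of the level-$n$ map after resizing it to the resolution of level $l$:

$$
y^{l}_{ij} = \alpha^{l}_{ij}\, x^{1\rightarrow l}_{ij} + \beta^{l}_{ij}\, x^{2\rightarrow l}_{ij} + \gamma^{l}_{ij}\, x^{3\rightarrow l}_{ij},
\qquad \alpha^{l}_{ij} + \beta^{l}_{ij} + \gamma^{l}_{ij} = 1 .
$$

The scalar weights $\alpha$, $\beta$, $\gamma$ are produced by $1 \times 1$ convolutions followed by a softmax across the three levels, so at every spatial position the levels compete for influence; this is exactly what `weight_level_*` and `weight_levels` implement in the code below.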

Core Code

(1) Fusing adjacent and non-adjacent levels:

import torch
import torch.nn as nn
from ultralytics.utils.tal import dist2bbox, make_anchors
import math
import torch.nn.functional as F

__all__ = ['ASFF_Detect']

def autopad(k, p=None, d=1):  # kernel, padding, dilation
    """Pad to 'same' shape outputs."""
    if d > 1:
        k = d * (k - 1) + 1 if isinstance(k, int) else [d * (x - 1) + 1 for x in k]  # actual kernel-size
    if p is None:
        p = k // 2 if isinstance(k, int) else [x // 2 for x in k]  # auto-pad
    return p


class Conv(nn.Module):
    """Standard convolution with args(ch_in, ch_out, kernel, stride, padding, groups, dilation, activation)."""
    default_act = nn.SiLU()  # default activation

    def __init__(self, c1, c2, k=1, s=1, p=None, g=1, d=1, act=True):
        """Initialize Conv layer with given arguments including activation."""
        super().__init__()
        self.conv = nn.Conv2d(c1, c2, k, s, autopad(k, p, d), groups=g, dilation=d, bias=False)
        self.bn = nn.BatchNorm2d(c2)
        self.act = self.default_act if act is True else act if isinstance(act, nn.Module) else nn.Identity()

    def forward(self, x):
        """Apply convolution, batch normalization and activation to input tensor."""
        return self.act(self.bn(self.conv(x)))

    def forward_fuse(self, x):
        """Perform transposed convolution of 2D data."""
        return self.act(self.conv(x))


class DFL(nn.Module):
    """
    Integral module of Distribution Focal Loss (DFL).
    Proposed in Generalized Focal Loss https://ieeexplore.ieee.org/document/9792391
    """

    def __init__(self, c1=16):
        """Initialize a convolutional layer with a given number of input channels."""
        super().__init__()
        self.conv = nn.Conv2d(c1, 1, 1, bias=False).requires_grad_(False)
        x = torch.arange(c1, dtype=torch.float)
        self.conv.weight.data[:] = nn.Parameter(x.view(1, c1, 1, 1))
        self.c1 = c1

    def forward(self, x):
        """Applies a transformer layer on input tensor 'x' and returns a tensor."""
        b, c, a = x.shape  # batch, channels, anchors
        return self.conv(x.view(b, 4, self.c1, a).transpose(2, 1).softmax(1)).view(b, 4, a)
        # return self.conv(x.view(b, self.c1, 4, a).softmax(1)).view(b, 4, a)


class ASFFV5(nn.Module):
    def __init__(self, level, ch, multiplier=1, rfb=False, vis=False, act_cfg=True):
        """
        ASFF version for YoloV5 .
        different than YoloV3
        multiplier should be 1, 0.5 which means, the channel of ASFF can be
        512, 256, 128 -> multiplier=1
        256, 128, 64 -> multiplier=0.5
        For even smaller, you need change code manually.
        """
        super(ASFFV5, self).__init__()
        self.level = level
        self.dim = [int(ch[2] * multiplier), int(ch[1] * multiplier),
                    int(ch[0] * multiplier)]
        # print(self.dim)

        self.inter_dim = self.dim[self.level]
        if level == 0:
            self.stride_level_1 = Conv(int(ch[1] * multiplier), self.inter_dim, 3, 2)

            self.stride_level_2 = Conv(int(ch[0] * multiplier), self.inter_dim, 3, 2)

            self.expand = Conv(self.inter_dim, int(
                ch[2] * multiplier), 3, 1)
        elif level == 1:
            self.compress_level_0 = Conv(
                int(ch[2] * multiplier), self.inter_dim, 1, 1)
            self.stride_level_2 = Conv(
                int(ch[0] * multiplier), self.inter_dim, 3, 2)
            self.expand = Conv(self.inter_dim, int(ch[1] * multiplier), 3, 1)
        elif level == 2:
            self.compress_level_0 = Conv(
                int(ch[2] * multiplier), self.inter_dim, 1, 1)
            self.compress_level_1 = Conv(
                int(ch[1] * multiplier), self.inter_dim, 1, 1)
            self.expand = Conv(self.inter_dim, int(
                ch[0] * multiplier), 3, 1)

        # when using RFB, halve the number of compressed channels to save memory
        compress_c = 8 if rfb else 16
        self.weight_level_0 = Conv(
            self.inter_dim, compress_c, 1, 1)
        self.weight_level_1 = Conv(
            self.inter_dim, compress_c, 1, 1)
        self.weight_level_2 = Conv(
            self.inter_dim, compress_c, 1, 1)

        self.weight_levels = Conv(
            compress_c * 3, 3, 1, 1)
        self.vis = vis

    def forward(self, x):  # l,m,s
        """
        # 128, 256, 512
        512, 256, 128
        from small -> large
        """
        x_level_0 = x[2]  # deepest / lowest resolution (l)
        x_level_1 = x[1]  # middle (m)
        x_level_2 = x[0]  # shallowest / highest resolution (s)
        # print('x_level_0: ', x_level_0.shape)
        # print('x_level_1: ', x_level_1.shape)
        # print('x_level_2: ', x_level_2.shape)
        if self.level == 0:
            level_0_resized = x_level_0
            level_1_resized = self.stride_level_1(x_level_1)
            level_2_downsampled_inter = F.max_pool2d(
                x_level_2, 3, stride=2, padding=1)
            level_2_resized = self.stride_level_2(level_2_downsampled_inter)
        elif self.level == 1:
            level_0_compressed = self.compress_level_0(x_level_0)
            level_0_resized = F.interpolate(
                level_0_compressed, scale_factor=2, mode='nearest')
            level_1_resized = x_level_1
            level_2_resized = self.stride_level_2(x_level_2)
        elif self.level == 2:
            level_0_compressed = self.compress_level_0(x_level_0)
            level_0_resized = F.interpolate(
                level_0_compressed, scale_factor=4, mode='nearest')
            x_level_1_compressed = self.compress_level_1(x_level_1)
            level_1_resized = F.interpolate(
                x_level_1_compressed, scale_factor=2, mode='nearest')
            level_2_resized = x_level_2

        # print('level: {}, l1_resized: {}, l2_resized: {}'.format(self.level,
        #      level_1_resized.shape, level_2_resized.shape))
        level_0_weight_v = self.weight_level_0(level_0_resized)
        level_1_weight_v = self.weight_level_1(level_1_resized)
        level_2_weight_v = self.weight_level_2(level_2_resized)
        # print('level_0_weight_v: ', level_0_weight_v.shape)
        # print('level_1_weight_v: ', level_1_weight_v.shape)
        # print('level_2_weight_v: ', level_2_weight_v.shape)

        levels_weight_v = torch.cat(
            (level_0_weight_v, level_1_weight_v, level_2_weight_v), 1)
        levels_weight = self.weight_levels(levels_weight_v)
        levels_weight = F.softmax(levels_weight, dim=1)
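        # levels_weight: (B, 3, H, W); the three weights sum to 1 at every spatial position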

        fused_out_reduced = level_0_resized * levels_weight[:, 0:1, :, :] + \
                            level_1_resized * levels_weight[:, 1:2, :, :] + \
                            level_2_resized * levels_weight[:, 2:, :, :]

        out = self.expand(fused_out_reduced)

        if self.vis:
            return out, levels_weight, fused_out_reduced.sum(dim=1)
        else:
            return out


class ASFF_Detect(nn.Module):
    """YOLOv8 Detect head for detection models."""
    dynamic = False  # force grid reconstruction
    export = False  # export mode
    format = None  # export format; set by the exporter, read in the export branches below
    shape = None
    anchors = torch.empty(0)  # init
    strides = torch.empty(0)  # init

    def __init__(self, nc=80, ch=(), multiplier=1, rfb=False):
        """Initializes the YOLOv8 detection layer with specified number of classes and channels."""
        super().__init__()
        self.nc = nc  # number of classes
        self.nl = len(ch)  # number of detection layers
        self.reg_max = 16  # DFL channels (ch[0] // 16 to scale 4/8/12/16/20 for n/s/m/l/x)
        self.no = nc + self.reg_max * 4  # number of outputs per anchor
        self.stride = torch.zeros(self.nl)  # strides computed during build
        c2, c3 = max((16, ch[0] // 4, self.reg_max * 4)), max(ch[0], min(self.nc, 100))  # channels
        self.cv2 = nn.ModuleList(
            nn.Sequential(Conv(x, c2, 3), Conv(c2, c2, 3), nn.Conv2d(c2, 4 * self.reg_max, 1)) for x in ch)
        self.cv3 = nn.ModuleList(nn.Sequential(Conv(x, c3, 3), Conv(c3, c3, 3), nn.Conv2d(c3, self.nc, 1)) for x in ch)
        self.dfl = DFL(self.reg_max) if self.reg_max > 1 else nn.Identity()
        self.l0_fusion = ASFFV5(level=0, ch=ch, multiplier=multiplier, rfb=rfb)
        self.l1_fusion = ASFFV5(level=1, ch=ch, multiplier=multiplier, rfb=rfb)
        self.l2_fusion = ASFFV5(level=2, ch=ch, multiplier=multiplier, rfb=rfb)

    def forward(self, x):
        """Concatenates and returns predicted bounding boxes and class probabilities."""
        x1 = self.l0_fusion(x)
        x2 = self.l1_fusion(x)
        x3 = self.l2_fusion(x)
        x = [x3, x2, x1]
        shape = x[0].shape  # BCHW
        for i in range(self.nl):
            x[i] = torch.cat((self.cv2[i](x[i]), self.cv3[i](x[i])), 1)
        if self.training:
            return x
        elif self.dynamic or self.shape != shape:
            self.anchors, self.strides = (x.transpose(0, 1) for x in make_anchors(x, self.stride, 0.5))
            self.shape = shape

        x_cat = torch.cat([xi.view(shape[0], self.no, -1) for xi in x], 2)
        if self.export and self.format in ('saved_model', 'pb', 'tflite', 'edgetpu', 'tfjs'):  # avoid TF FlexSplitV ops
            box = x_cat[:, :self.reg_max * 4]
            cls = x_cat[:, self.reg_max * 4:]
        else:
            box, cls = x_cat.split((self.reg_max * 4, self.nc), 1)
        dbox = dist2bbox(self.dfl(box), self.anchors.unsqueeze(0), xywh=True, dim=1) * self.strides

        if self.export and self.format in ('tflite', 'edgetpu'):
            # Normalize xywh with image size to mitigate quantization error of TFLite integer models as done in YOLOv5:
            # https://github.com/ultralytics/yolov5/blob/0c8de3fca4a702f8ff5c435e67f378d1fce70243/models/tf.py#L307-L309
            # See this PR for details: https://github.com/ultralytics/ultralytics/pull/1695
            img_h = shape[2] * self.stride[0]
            img_w = shape[3] * self.stride[0]
            img_size = torch.tensor([img_w, img_h, img_w, img_h], device=dbox.device).reshape(1, 4, 1)
            dbox /= img_size

        y = torch.cat((dbox, cls.sigmoid()), 1)
        return y if self.export else (y, x)

    def bias_init(self):
        """Initialize Detect() biases, WARNING: requires stride availability."""
        m = self  # self.model[-1]  # Detect() module
        # cf = torch.bincount(torch.tensor(np.concatenate(dataset.labels, 0)[:, 0]).long(), minlength=nc) + 1
        # ncf = math.log(0.6 / (m.nc - 0.999999)) if cf is None else torch.log(cf / cf.sum())  # nominal class frequency
        for a, b, s in zip(m.cv2, m.cv3, m.stride):  # from
            a[-1].bias.data[:] = 1.0  # box
            b[-1].bias.data[:m.nc] = math.log(5 / m.nc / (640 / s) ** 2)  # cls (.01 objects, 80 classes, 640 img)


if __name__ == "__main__":
    image1 = torch.rand(1, 128, 160, 160)
    image2 = torch.rand(1, 256, 80, 80)
    image3 = torch.rand(1, 512, 40, 40)
    image = [image1, image2, image3]
    channel = (128, 256, 512)

    model = ASFF_Detect(nc=80, ch=channel)

    out = model(image)
    print(out[1].shape)
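The test above runs the head in training mode, so it returns the three raw per-level tensors. To exercise the inference path, self.stride must be populated first, because make_anchors consumes it once self.training is False. A minimal sketch, assuming the standard P3/P4/P5 strides of 8/16/32 (in a real model these are computed during build):

    model = ASFF_Detect(nc=80, ch=(128, 256, 512))
    model.stride = torch.tensor([8., 16., 32.])  # assumed strides; set during model build in practice
    model.eval()  # leave training mode so the decoded output is produced

    feats = [torch.rand(1, 128, 160, 160),  # P3
             torch.rand(1, 256, 80, 80),    # P4
             torch.rand(1, 512, 40, 40)]    # P5
    with torch.no_grad():
        y, raw = model(feats)
    print(y.shape)  # (1, 4 + nc, 33600): decoded boxes + class scores over all anchors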

(2) Fusing adjacent levels only:

import torch
import torch.nn as nn
from ultralytics.utils.tal import dist2bbox, make_anchors
import math
import torch.nn.functional as F

__all__ = ['ASFF_Detect']

def autopad(k, p=None, d=1):  # kernel, padding, dilation
    """Pad to 'same' shape outputs."""
    if d > 1:
        k = d * (k - 1) + 1 if isinstance(k, int) else [d * (x - 1) + 1 for x in k]  # actual kernel-size
    if p is None:
        p = k // 2 if isinstance(k, int) else [x // 2 for x in k]  # auto-pad
    return p


class Conv(nn.Module):
    """Standard convolution with args(ch_in, ch_out, kernel, stride, padding, groups, dilation, activation)."""
    default_act = nn.SiLU()  # default activation

    def __init__(self, c1, c2, k=1, s=1, p=None, g=1, d=1, act=True):
        """Initialize Conv layer with given arguments including activation."""
        super().__init__()
        self.conv = nn.Conv2d(c1, c2, k, s, autopad(k, p, d), groups=g, dilation=d, bias=False)
        self.bn = nn.BatchNorm2d(c2)
        self.act = self.default_act if act is True else act if isinstance(act, nn.Module) else nn.Identity()

    def forward(self, x):
        """Apply convolution, batch normalization and activation to input tensor."""
        return self.act(self.bn(self.conv(x)))

    def forward_fuse(self, x):
        """Perform transposed convolution of 2D data."""
        return self.act(self.conv(x))


class DFL(nn.Module):
    """
    Integral module of Distribution Focal Loss (DFL).
    Proposed in Generalized Focal Loss https://ieeexplore.ieee.org/document/9792391
    """

    def __init__(self, c1=16):
        """Initialize a convolutional layer with a given number of input channels."""
        super().__init__()
        self.conv = nn.Conv2d(c1, 1, 1, bias=False).requires_grad_(False)
        x = torch.arange(c1, dtype=torch.float)
        self.conv.weight.data[:] = nn.Parameter(x.view(1, c1, 1, 1))
        self.c1 = c1

    def forward(self, x):
        """Applies a transformer layer on input tensor 'x' and returns a tensor."""
        b, c, a = x.shape  # batch, channels, anchors
        return self.conv(x.view(b, 4, self.c1, a).transpose(2, 1).softmax(1)).view(b, 4, a)
        # return self.conv(x.view(b, self.c1, 4, a).softmax(1)).view(b, 4, a)


class ASFFV5(nn.Module):
    def __init__(self, level, ch, multiplier=1, rfb=False, vis=False, act_cfg=True):
        super(ASFFV5, self).__init__()
        self.level = level
        self.dim = [int(ch[2] * multiplier), int(ch[1] * multiplier), int(ch[0] * multiplier)]

        self.inter_dim = self.dim[self.level]
        if level == 0:
            self.stride_level_1 = Conv(int(ch[1] * multiplier), self.inter_dim, 3, 2)
            self.expand = Conv(self.inter_dim, int(ch[2] * multiplier), 3, 1)
        elif level == 1:
            self.compress_level_0 = Conv(int(ch[2] * multiplier), self.inter_dim, 1, 1)
            self.stride_level_2 = Conv(int(ch[0] * multiplier), self.inter_dim, 3, 2)
            self.expand = Conv(self.inter_dim, int(ch[1] * multiplier), 3, 1)
        elif level == 2:
            self.compress_level_1 = Conv(int(ch[1] * multiplier), self.inter_dim, 1, 1)
            self.expand = Conv(self.inter_dim, int(ch[0] * multiplier), 3, 1)

        compress_c = 8 if rfb else 16
        self.weight_level_0 = Conv(self.inter_dim, compress_c, 1, 1)
        self.weight_level_1 = Conv(self.inter_dim, compress_c, 1, 1)
        self.weight_level_2 = Conv(self.inter_dim, compress_c, 1, 1)
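        # weight_levels maps the concatenated weight features to per-level fusion logits;
        # only level 1 has two adjacent neighbors (fuses 3 maps), while levels 0 and 2 fuse 2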

        if level == 1:
            self.weight_levels = Conv(compress_c * 3, 3, 1, 1)
        else:
            self.weight_levels = Conv(compress_c * 2, 2, 1, 1)

        self.vis = vis

    def forward(self, x):  # l,m,s
        x_level_0 = x[2]  # deepest, e.g. (1, 512, 40, 40)
        x_level_1 = x[1]  # middle, e.g. (1, 256, 80, 80)
        x_level_2 = x[0]  # shallowest, e.g. (1, 128, 160, 160)

        if self.level == 0:
            level_0_resized = x_level_0
            level_1_resized = self.stride_level_1(x_level_1)
            level_0_weight_v = self.weight_level_0(level_0_resized)
            level_1_weight_v = self.weight_level_1(level_1_resized)
            levels_weight_v = torch.cat((level_0_weight_v, level_1_weight_v), 1)
            levels_weight = self.weight_levels(levels_weight_v)
            levels_weight = F.softmax(levels_weight, dim=1)
            fused_out_reduced = level_0_resized * levels_weight[:, 0:1, :, :] + level_1_resized * levels_weight[:, 1:, :, :]
        elif self.level == 1:
            level_0_resized = self.compress_level_0(x_level_0)
            level_0_resized = F.interpolate(level_0_resized, scale_factor=2, mode='nearest')
            level_1_resized = x_level_1
            level_2_resized = self.stride_level_2(x_level_2)
            level_0_weight_v = self.weight_level_0(level_0_resized)
            level_1_weight_v = self.weight_level_1(level_1_resized)
            level_2_weight_v = self.weight_level_2(level_2_resized)
            levels_weight_v = torch.cat((level_0_weight_v, level_1_weight_v, level_2_weight_v), 1)
            levels_weight = self.weight_levels(levels_weight_v)
            levels_weight = F.softmax(levels_weight, dim=1)
            fused_out_reduced = level_0_resized * levels_weight[:, 0:1, :, :] + level_1_resized * levels_weight[:, 1:2, :, :] + level_2_resized * levels_weight[:, 2:, :, :]
        elif self.level == 2:
            level_1_resized = self.compress_level_1(x_level_1)
            level_1_resized = F.interpolate(level_1_resized, scale_factor=2, mode='nearest')
            level_2_resized = x_level_2
            level_1_weight_v = self.weight_level_1(level_1_resized)
            level_2_weight_v = self.weight_level_2(level_2_resized)
            levels_weight_v = torch.cat((level_1_weight_v, level_2_weight_v), 1)
            levels_weight = self.weight_levels(levels_weight_v)
            levels_weight = F.softmax(levels_weight, dim=1)
            fused_out_reduced = level_1_resized * levels_weight[:, 0:1, :, :] + level_2_resized * levels_weight[:, 1:, :, :]

        out = self.expand(fused_out_reduced)

        if self.vis:
            return out, levels_weight, fused_out_reduced.sum(dim=1)
        else:
            return out


class ASFF_Detect(nn.Module):
    """YOLOv8 Detect head for detection models."""
    dynamic = False  # force grid reconstruction
    export = False  # export mode
    format = None  # export format; set by the exporter, read in the export branches below
    shape = None
    anchors = torch.empty(0)  # init
    strides = torch.empty(0)  # init

    def __init__(self, nc=80, ch=(), multiplier=1, rfb=False):
        """Initializes the YOLOv8 detection layer with specified number of classes and channels."""
        super().__init__()
        self.nc = nc  # number of classes
        self.nl = len(ch)  # number of detection layers
        self.reg_max = 16  # DFL channels (ch[0] // 16 to scale 4/8/12/16/20 for n/s/m/l/x)
        self.no = nc + self.reg_max * 4  # number of outputs per anchor
        self.stride = torch.zeros(self.nl)  # strides computed during build
        c2, c3 = max((16, ch[0] // 4, self.reg_max * 4)), max(ch[0], min(self.nc, 100))  # channels
        self.cv2 = nn.ModuleList(
            nn.Sequential(Conv(x, c2, 3), Conv(c2, c2, 3), nn.Conv2d(c2, 4 * self.reg_max, 1)) for x in ch)
        self.cv3 = nn.ModuleList(nn.Sequential(Conv(x, c3, 3), Conv(c3, c3, 3), nn.Conv2d(c3, self.nc, 1)) for x in ch)
        self.dfl = DFL(self.reg_max) if self.reg_max > 1 else nn.Identity()
        self.l0_fusion = ASFFV5(level=0, ch=ch, multiplier=multiplier, rfb=rfb)
        self.l1_fusion = ASFFV5(level=1, ch=ch, multiplier=multiplier, rfb=rfb)
        self.l2_fusion = ASFFV5(level=2, ch=ch, multiplier=multiplier, rfb=rfb)

    def forward(self, x):
        """Concatenates and returns predicted bounding boxes and class probabilities."""
        x1 = self.l0_fusion(x)
        x2 = self.l1_fusion(x)
        x3 = self.l2_fusion(x)
        x = [x3, x2, x1]
        shape = x[0].shape  # BCHW
        for i in range(self.nl):
            x[i] = torch.cat((self.cv2[i](x[i]), self.cv3[i](x[i])), 1)
        if self.training:
            return x
        elif self.dynamic or self.shape != shape:
            self.anchors, self.strides = (x.transpose(0, 1) for x in make_anchors(x, self.stride, 0.5))
            self.shape = shape

        x_cat = torch.cat([xi.view(shape[0], self.no, -1) for xi in x], 2)
        if self.export and self.format in ('saved_model', 'pb', 'tflite', 'edgetpu', 'tfjs'):  # avoid TF FlexSplitV ops
            box = x_cat[:, :self.reg_max * 4]
            cls = x_cat[:, self.reg_max * 4:]
        else:
            box, cls = x_cat.split((self.reg_max * 4, self.nc), 1)
        dbox = dist2bbox(self.dfl(box), self.anchors.unsqueeze(0), xywh=True, dim=1) * self.strides

        if self.export and self.format in ('tflite', 'edgetpu'):
            # Normalize xywh with image size to mitigate quantization error of TFLite integer models as done in YOLOv5:
            # https://github.com/ultralytics/yolov5/blob/0c8de3fca4a702f8ff5c435e67f378d1fce70243/models/tf.py#L307-L309
            # See this PR for details: https://github.com/ultralytics/ultralytics/pull/1695
            img_h = shape[2] * self.stride[0]
            img_w = shape[3] * self.stride[0]
            img_size = torch.tensor([img_w, img_h, img_w, img_h], device=dbox.device).reshape(1, 4, 1)
            dbox /= img_size

        y = torch.cat((dbox, cls.sigmoid()), 1)
        return y if self.export else (y, x)

    def bias_init(self):
        """Initialize Detect() biases, WARNING: requires stride availability."""
        m = self  # self.model[-1]  # Detect() module
        # cf = torch.bincount(torch.tensor(np.concatenate(dataset.labels, 0)[:, 0]).long(), minlength=nc) + 1
        # ncf = math.log(0.6 / (m.nc - 0.999999)) if cf is None else torch.log(cf / cf.sum())  # nominal class frequency
        for a, b, s in zip(m.cv2, m.cv3, m.stride):  # from
            a[-1].bias.data[:] = 1.0  # box
            b[-1].bias.data[:m.nc] = math.log(5 / m.nc / (640 / s) ** 2)  # cls (.01 objects, 80 classes, 640 img)


if __name__ == "__main__":
    image1 = torch.rand(1, 128, 160, 160)
    image2 = torch.rand(1, 256, 80, 80)
    image3 = torch.rand(1, 512, 40, 40)
    image = [image1, image2, image3]
    channel = (128, 256, 512)

    model = ASFF_Detect(nc=80, ch=channel)

    out = model(image)
    print(out[1].shape)
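Because both listings define the same ASFF_Detect class, a quick way to compare the two variants is to save them as separate modules and count parameters; the adjacent-only variant drops the cross-scale resampling convolutions at levels 0 and 2, so it comes out lighter. A minimal sketch (the file names asff_detect_full.py and asff_detect_adjacent.py are assumptions, not part of the original post):

    from asff_detect_full import ASFF_Detect as ASFFDetectFull          # listing (1), hypothetical file name
    from asff_detect_adjacent import ASFF_Detect as ASFFDetectAdjacent  # listing (2), hypothetical file name

    ch = (128, 256, 512)
    count = lambda m: sum(p.numel() for p in m.parameters())
    print(f"full ASFF head:          {count(ASFFDetectFull(nc=80, ch=ch)):,} parameters")
    print(f"adjacent-only ASFF head: {count(ASFFDetectAdjacent(nc=80, ch=ch)):,} parameters")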
