Deep Learning from Beginner to Expert — Image Segmentation in Practice with DeepLabV3

The DeepLabV3 Algorithm

    • Parameter configuration
      • Dataset options
      • Training options
    • Data preprocessing module
      • Dataset construction module
      • Testing the dataset
      • Denormalization
      • Model loading module
      • DeepLabV3+

Parameter Configuration

Dataset Options

    parser = argparse.ArgumentParser()

    # Dataset Options
    parser.add_argument("--data_root", type=str, default=r'D:/',
                        help="path to Dataset")
    parser.add_argument("--dataset", type=str, default='voc',
                        choices=['voc', 'cityscapes'], help='Name of dataset')
    parser.add_argument("--num_classes", type=int, default=None,
                        help="num classes (default: None)")

    # Deeplab Options
    # Choose the model architecture; the feature extractor is either MobileNet or ResNet
    parser.add_argument("--model", type=str, default='deeplabv3plus_resnet50',
                        choices=['deeplabv3_resnet50',  'deeplabv3plus_resnet50',
                                 'deeplabv3_resnet101', 'deeplabv3plus_resnet101',
                                 'deeplabv3_mobilenet', 'deeplabv3plus_mobilenet'], help='model name')
    parser.add_argument("--separable_conv", action='store_true', default=False,
                        help="apply separable conv to decoder and aspp")
    parser.add_argument("--output_stride", type=int, default=16, choices=[8, 16])

Training Options

    # Train Options
    # evaluation-only mode
    parser.add_argument("--test_only", action='store_true', default=False)
    parser.add_argument("--save_val_results", action='store_true', default=False,
                        help="save segmentation results to \"./results\"")
    parser.add_argument("--total_itrs", type=int, default=60e3,
                        help="epoch number (default: 30k)")
                        # 学习率
    parser.add_argument("--lr", type=float, default=0.01,
                        help="learning rate (default: 0.01)")
    parser.add_argument("--lr_policy", type=str, default='poly', choices=['poly', 'step'],
                        help="learning rate scheduler policy")
    parser.add_argument("--step_size", type=int, default=10000)
    parser.add_argument("--crop_val", action='store_true', default=False,
                        help='crop validation (default: False)')
    parser.add_argument("--batch_size", type=int, default=8,
                        help='batch size (default: 16)')
    parser.add_argument("--val_batch_size", type=int, default=4,
                        help='batch size for validation (default: 4)')
    parser.add_argument("--crop_size", type=int, default=513)
    # path to the pretrained weights / checkpoint
    parser.add_argument("--ckpt", default="./checkpoint/best_deeplabv3_resnet50_voc_os16.pth", type=str,
                        help="restore from checkpoint")
    parser.add_argument("--continue_training", action='store_true', default=True)

    parser.add_argument("--loss_type", type=str, default='cross_entropy',
                        choices=['cross_entropy', 'focal_loss'], help="loss type (default: False)")
    parser.add_argument("--gpu_id", type=str, default='0',
                        help="GPU ID")
    # regularization (weight decay)
    parser.add_argument("--weight_decay", type=float, default=1e-4,
                        help='weight decay (default: 1e-4)')
    parser.add_argument("--random_seed", type=int, default=1,
                        help="random seed (default: 1)")
    parser.add_argument("--print_interval", type=int, default=10,
                        help="print interval of loss (default: 10)")
    parser.add_argument("--val_interval", type=int, default=100,
                        help="epoch interval for eval (default: 100)")
    parser.add_argument("--download", action='store_true', default=False,
                        help="download datasets")

Data Preprocessing Module

Different data augmentation pipelines are built for the training and validation sets: random scaling, cropping, and horizontal flipping for training, and only tensor conversion plus normalization (with an optional center crop) for validation.

def get_dataset(opts):
    """ Dataset And Augmentation
    """
    if opts.dataset == 'voc':
        train_transform = et.ExtCompose([
            #et.ExtResize(size=opts.crop_size),
            et.ExtRandomScale((0.5, 2.0)),
            et.ExtRandomCrop(size=(opts.crop_size, opts.crop_size), pad_if_needed=True),
            et.ExtRandomHorizontalFlip(),
            et.ExtToTensor(),
            et.ExtNormalize(mean=[0.485, 0.456, 0.406],
                            std=[0.229, 0.224, 0.225]),
        ])
        if opts.crop_val:
            val_transform = et.ExtCompose([
                et.ExtResize(opts.crop_size),
                et.ExtCenterCrop(opts.crop_size),
                et.ExtToTensor(),
                et.ExtNormalize(mean=[0.485, 0.456, 0.406],
                                std=[0.229, 0.224, 0.225]),
            ])
        else:
            val_transform = et.ExtCompose([
                et.ExtToTensor(),
                et.ExtNormalize(mean=[0.485, 0.456, 0.406],
                                std=[0.229, 0.224, 0.225]),
            ])
        train_dst = VOCSegmentation(root=opts.data_root, year=opts.year,
                                    image_set='train', download=opts.download, transform=train_transform)
        val_dst = VOCSegmentation(root=opts.data_root, year=opts.year,
                                  image_set='val', download=False, transform=val_transform)

    if opts.dataset == 'cityscapes':
        train_transform = et.ExtCompose([
            #et.ExtResize( 512 ),
            et.ExtRandomCrop(size=(opts.crop_size, opts.crop_size)),
            et.ExtColorJitter( brightness=0.5, contrast=0.5, saturation=0.5 ),
            et.ExtRandomHorizontalFlip(),
            et.ExtToTensor(),
            et.ExtNormalize(mean=[0.485, 0.456, 0.406],
                            std=[0.229, 0.224, 0.225]),
        ])

        val_transform = et.ExtCompose([
            #et.ExtResize( 512 ),
            et.ExtToTensor(),
            et.ExtNormalize(mean=[0.485, 0.456, 0.406],
                            std=[0.229, 0.224, 0.225]),
        ])

        train_dst = Cityscapes(root=opts.data_root,
                               split='train', transform=train_transform)
        val_dst = Cityscapes(root=opts.data_root,
                             split='val', transform=val_transform)
    return train_dst, val_dst
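
The two datasets are then wrapped in standard PyTorch DataLoaders. A minimal sketch, assuming `from torch.utils import data`:

train_loader = data.DataLoader(
    train_dst, batch_size=opts.batch_size, shuffle=True, num_workers=2, drop_last=True)
val_loader = data.DataLoader(
    val_dst, batch_size=opts.val_batch_size, shuffle=False, num_workers=2)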

Dataset Construction Module


import os

import numpy as np
from PIL import Image
from torch.utils import data

# DATASET_YEAR_DICT and download_extract are provided elsewhere in the
# reference repository's datasets module (URLs, filenames, and md5 sums).

def voc_cmap(N=256, normalized=False):
    """Build the PASCAL VOC color map: the bits of each class index are
    interleaved across the R, G, and B channels."""
    def bitget(byteval, idx):
        return ((byteval & (1 << idx)) != 0)
    dtype = 'float32' if normalized else 'uint8'
    cmap = np.zeros((N, 3), dtype=dtype)
    for i in range(N):
        r = g = b = 0
        c = i
        for j in range(8):
            r = r | (bitget(c, 0) << 7-j)
            g = g | (bitget(c, 1) << 7-j)
            b = b | (bitget(c, 2) << 7-j)
            c = c >> 3

        cmap[i] = np.array([r, g, b])
    cmap = cmap/255 if normalized else cmap
    return cmap
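
For instance, the first few palette entries reproduce the familiar VOC colors (black background, dark-red aeroplane, and so on):

print(voc_cmap()[:4])
# [[  0   0   0]   -> class 0: background
#  [128   0   0]   -> class 1: aeroplane
#  [  0 128   0]   -> class 2: bicycle
#  [128 128   0]]  -> class 3: bird
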
class VOCSegmentation(data.Dataset):
    """`Pascal VOC <http://host.robots.ox.ac.uk/pascal/VOC/>`_ Segmentation Dataset.
    Args:
        root (string): Root directory of the VOC Dataset.
        year (string, optional): The dataset year, supports years 2007 to 2012.
        image_set (string, optional): Select the image_set to use, ``train``, ``trainval`` or ``val``
        download (bool, optional): If true, downloads the dataset from the internet and
            puts it in root directory. If dataset is already downloaded, it is not
            downloaded again.
        transform (callable, optional): A function/transform that  takes in an PIL image
            and returns a transformed version. E.g, ``transforms.RandomCrop``
    """
    cmap = voc_cmap()
    def __init__(self,
                 root,
                 year='2012',
                 image_set='train',
                 download=False,
                 transform=None):

        is_aug=False
        if year=='2012_aug':
            is_aug = True
            year = '2012'
        
        self.root = os.path.expanduser(root)
        self.year = year
        self.url = DATASET_YEAR_DICT[year]['url']
        self.filename = DATASET_YEAR_DICT[year]['filename']
        self.md5 = DATASET_YEAR_DICT[year]['md5']
        self.transform = transform
        
        self.image_set = image_set
        base_dir = DATASET_YEAR_DICT[year]['base_dir']
        voc_root = os.path.join(self.root, base_dir)
        image_dir = os.path.join(voc_root, 'JPEGImages')

        if download:
            download_extract(self.url, self.root, self.filename, self.md5)

        if not os.path.isdir(voc_root):
            raise RuntimeError('Dataset not found or corrupted.' +
                               ' You can use download=True to download it')
        
        if is_aug and image_set=='train':
            mask_dir = os.path.join(voc_root, 'SegmentationClassAug')
            assert os.path.exists(mask_dir), "SegmentationClassAug not found, please refer to README.md and prepare it manually"
            split_f = os.path.join( self.root, 'train_aug.txt')#'./datasets/data/train_aug.txt'
        else:
            mask_dir = os.path.join(voc_root, 'SegmentationClass')
            splits_dir = os.path.join(voc_root, 'ImageSets/Segmentation')
            split_f = os.path.join(splits_dir, image_set.rstrip('\n') + '.txt')

        if not os.path.exists(split_f):
            raise ValueError(
                'Wrong image_set entered! Please use image_set="train" '
                'or image_set="trainval" or image_set="val"')

        with open(os.path.join(split_f), "r") as f:
            file_names = [x.strip() for x in f.readlines()]
        
        self.images = [os.path.join(image_dir, x + ".jpg") for x in file_names]
        self.masks = [os.path.join(mask_dir, x + ".png") for x in file_names]
        assert (len(self.images) == len(self.masks))

    def __getitem__(self, index):
        """
        Args:
            index (int): Index
        Returns:
            tuple: (image, target) where target is the image segmentation.
        """
        img = Image.open(self.images[index]).convert('RGB')
        target = Image.open(self.masks[index])
        if self.transform is not None:
            img, target = self.transform(img, target)

        return img, target


    def __len__(self):
        return len(self.images)

    @classmethod
    def decode_target(cls, mask):
        """decode semantic mask to RGB image"""
        return cls.cmap[mask]

Testing the Dataset

import numpy as np

from datasets import VOCSegmentation
from utils import ext_transforms as et
import cv2
train_transform = et.ExtCompose([
    # et.ExtResize(size=opts.crop_size),
    et.ExtRandomScale((0.5, 2.0)),
    et.ExtRandomCrop(size=(224, 224), pad_if_needed=True),
    et.ExtRandomHorizontalFlip(),
    et.ExtToTensor(),
    et.ExtNormalize(mean=[0.485, 0.456, 0.406],
                    std=[0.229, 0.224, 0.225]),
])
data = VOCSegmentation(root=r"D:/", year="2012", image_set='train', download=False, transform=train_transform)

if __name__ == '__main__':
    print(data[0][0].shape)
    print(data[0][1].shape)
    res = data.decode_target(data[0][1])
    cv2.imshow("Res",np.array(res))
    cv2.waitKey(0)
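
One caveat: decode_target returns an RGB image while cv2.imshow expects BGR, so the red and blue channels appear swapped above. A hedged one-line fix for the display call:

    cv2.imshow("Res", cv2.cvtColor(np.array(res, dtype=np.uint8), cv2.COLOR_RGB2BGR))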

(Figure: the decoded segmentation mask displayed in an OpenCV window.)

Denormalization

class Denormalize(object):
    """Invert torchvision-style normalization: x_orig = x_norm * std + mean.
    Expressed as another Normalize with mean' = -mean/std and std' = 1/std."""
    def __init__(self, mean, std):
        mean = np.array(mean)
        std = np.array(std)
        self._mean = -mean/std
        self._std = 1/std

    def __call__(self, tensor):
        if isinstance(tensor, np.ndarray):
            return (tensor - self._mean.reshape(-1,1,1)) / self._std.reshape(-1,1,1)
        # normalize is torchvision.transforms.functional.normalize
        return normalize(tensor, self._mean, self._std)
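
A typical use is recovering a displayable image from a normalized sample; a minimal sketch reusing the ImageNet statistics and the `data` dataset from the test snippet above:

denorm = Denormalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225])
img, target = data[0]                      # normalized CHW tensor
img = (denorm(img.numpy()) * 255).transpose(1, 2, 0).astype('uint8')  # back to HWC uint8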
   

Model Loading Module

    # Set up model
    model_map = {
        'deeplabv3_resnet50': network.deeplabv3_resnet50,
        'deeplabv3plus_resnet50': network.deeplabv3plus_resnet50,
        'deeplabv3_resnet101': network.deeplabv3_resnet101,
        'deeplabv3plus_resnet101': network.deeplabv3plus_resnet101,
        'deeplabv3_mobilenet': network.deeplabv3_mobilenet,
        'deeplabv3plus_mobilenet': network.deeplabv3plus_mobilenet
    }

    model = model_map[opts.model](num_classes=opts.num_classes, output_stride=opts.output_stride)
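
Because --ckpt points at a pretrained checkpoint and --continue_training defaults to True, the weights would then be restored. A hedged sketch (the "model_state" key is an assumption about the checkpoint layout; a plain state_dict loads directly):

    import torch

    if opts.ckpt is not None and os.path.isfile(opts.ckpt):
        checkpoint = torch.load(opts.ckpt, map_location='cpu')
        model.load_state_dict(checkpoint["model_state"])  # assumed key in the saved checkpoint

Each entry in model_map is a small factory function: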
def deeplabv3_resnet50(num_classes=21, output_stride=8, pretrained_backbone=True):
    """Constructs a DeepLabV3 model with a ResNet-50 backbone.

    Args:
        num_classes (int): number of classes.
        output_stride (int): output stride for deeplab.
        pretrained_backbone (bool): If True, use the pretrained backbone.
    """
    return _load_model('deeplabv3', 'resnet50', num_classes, output_stride=output_stride, pretrained_backbone=pretrained_backbone)

def deeplabv3_resnet101(num_classes=21, output_stride=8, pretrained_backbone=True):
    """Constructs a DeepLabV3 model with a ResNet-101 backbone.

    Args:
        num_classes (int): number of classes.
        output_stride (int): output stride for deeplab.
        pretrained_backbone (bool): If True, use the pretrained backbone.
    """
    return _load_model('deeplabv3', 'resnet101', num_classes, output_stride=output_stride, pretrained_backbone=pretrained_backbone)

def deeplabv3_mobilenet(num_classes=21, output_stride=8, pretrained_backbone=True, **kwargs):
    """Constructs a DeepLabV3 model with a MobileNetv2 backbone.

    Args:
        num_classes (int): number of classes.
        output_stride (int): output stride for deeplab.
        pretrained_backbone (bool): If True, use the pretrained backbone.
    """
    return _load_model('deeplabv3', 'mobilenetv2', num_classes, output_stride=output_stride, pretrained_backbone=pretrained_backbone)


# Deeplab v3+

def deeplabv3plus_resnet50(num_classes=21, output_stride=8, pretrained_backbone=True):
    """Constructs a DeepLabV3 model with a ResNet-50 backbone.

    Args:
        num_classes (int): number of classes.
        output_stride (int): output stride for deeplab.
        pretrained_backbone (bool): If True, use the pretrained backbone.
    """
    return _load_model('deeplabv3plus', 'resnet50', num_classes, output_stride=output_stride, pretrained_backbone=pretrained_backbone)

Loading Helper

def _load_model(arch_type, backbone, num_classes, output_stride, pretrained_backbone):

    if backbone=='mobilenetv2':
        model = _segm_mobilenet(arch_type, backbone, num_classes, output_stride=output_stride, pretrained_backbone=pretrained_backbone)
    elif backbone.startswith('resnet'):
        model = _segm_resnet(arch_type, backbone, num_classes, output_stride=output_stride, pretrained_backbone=pretrained_backbone)
    else:
        raise NotImplementedError
    return model

def _segm_resnet(name, backbone_name, num_classes, output_stride, pretrained_backbone):

    # output_stride controls how far the backbone downsamples its input:
    # replacing strides with dilation keeps the feature maps larger, and the
    # ASPP dilation rates are doubled to compensate at output_stride 8
    if output_stride==8:
        replace_stride_with_dilation=[False, True, True]
        aspp_dilate = [12, 24, 36]
    else:
        replace_stride_with_dilation=[False, False, True]
        aspp_dilate = [6, 12, 18]

    backbone = resnet.__dict__[backbone_name](
        pretrained=pretrained_backbone,
        replace_stride_with_dilation=replace_stride_with_dilation)
    
    inplanes = 2048
    low_level_planes = 256

    if name=='deeplabv3plus':
        return_layers = {'layer4': 'out', 'layer1': 'low_level'}
        classifier = DeepLabHeadV3Plus(inplanes, low_level_planes, num_classes, aspp_dilate)
    elif name=='deeplabv3':
        return_layers = {'layer4': 'out'}
        classifier = DeepLabHead(inplanes, num_classes, aspp_dilate)
    # extract the outputs of the chosen backbone layers and expose them under the given aliases
    backbone = IntermediateLayerGetter(backbone, return_layers=return_layers)

    model = DeepLabV3(backbone, classifier)
    return model
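
To see what IntermediateLayerGetter returns, here is a small sketch against a plain torchvision ResNet-50 (the torchvision >= 0.13 `weights=None` API is assumed; the article's code uses its own bundled resnet and IntermediateLayerGetter, but the behavior is the same). With a 513x513 input at output_stride 16:

import torch
from torchvision.models import resnet50
from torchvision.models._utils import IntermediateLayerGetter

backbone = resnet50(weights=None, replace_stride_with_dilation=[False, False, True])
body = IntermediateLayerGetter(backbone, return_layers={'layer4': 'out', 'layer1': 'low_level'})
feats = body(torch.randn(1, 3, 513, 513))
print(feats['low_level'].shape)  # torch.Size([1, 256, 129, 129])  -> stride 4
print(feats['out'].shape)        # torch.Size([1, 2048, 33, 33])   -> stride 16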

DeepLabV3+

class DeepLabHeadV3Plus(nn.Module):
    def __init__(self, in_channels, low_level_channels, num_classes, aspp_dilate=[12, 24, 36]):
        super(DeepLabHeadV3Plus, self).__init__()
        self.project = nn.Sequential( 
            nn.Conv2d(low_level_channels, 48, 1, bias=False),
            nn.BatchNorm2d(48),
            nn.ReLU(inplace=True),
        )

        self.aspp = ASPP(in_channels, aspp_dilate)

        self.classifier = nn.Sequential(
            # 304 = 256 (ASPP output) + 48 (projected low-level features)
            nn.Conv2d(304, 256, 3, padding=1, bias=False),
            nn.BatchNorm2d(256),
            nn.ReLU(inplace=True),
            nn.Conv2d(256, num_classes, 1)
        )
        self._init_weight()

    def forward(self, feature):
        # feature is the dict produced by IntermediateLayerGetter:
        # {'low_level': layer1 output, 'out': layer4 output}
        low_level_feature = self.project( feature['low_level'] )
        output_feature = self.aspp(feature['out'])
        # upsample the ASPP output to the low-level resolution before fusing
        output_feature = F.interpolate(output_feature, size=low_level_feature.shape[2:], mode='bilinear', align_corners=False)
        return self.classifier( torch.cat( [ low_level_feature, output_feature ], dim=1 ) )
    
    def _init_weight(self):
        for m in self.modules():
            if isinstance(m, nn.Conv2d):
                nn.init.kaiming_normal_(m.weight)
            elif isinstance(m, (nn.BatchNorm2d, nn.GroupNorm)):
                nn.init.constant_(m.weight, 1)
                nn.init.constant_(m.bias, 0)

Inside the head, the ASPP (Atrous Spatial Pyramid Pooling) module fuses multi-scale context:

class ASPP(nn.Module):
    def __init__(self, in_channels, atrous_rates):
        super(ASPP, self).__init__()
        out_channels = 256
        modules = []
        modules.append(nn.Sequential(
            nn.Conv2d(in_channels, out_channels, 1, bias=False),
            nn.BatchNorm2d(out_channels),
            nn.ReLU(inplace=True)))

        rate1, rate2, rate3 = tuple(atrous_rates)
        modules.append(ASPPConv(in_channels, out_channels, rate1))
        modules.append(ASPPConv(in_channels, out_channels, rate2))
        modules.append(ASPPConv(in_channels, out_channels, rate3))
        modules.append(ASPPPooling(in_channels, out_channels))

        self.convs = nn.ModuleList(modules)

        # five parallel branches (a 1x1 conv, three atrous convs, and image
        # pooling) are concatenated, then projected back down to out_channels
        self.project = nn.Sequential(
            nn.Conv2d(5 * out_channels, out_channels, 1, bias=False),
            nn.BatchNorm2d(out_channels),
            nn.ReLU(inplace=True),
            nn.Dropout(0.1),)

    def forward(self, x):
        res = []
        for conv in self.convs:
            res.append(conv(x))
        res = torch.cat(res, dim=1)
        return self.project(res)
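
ASPPConv and ASPPPooling are assumed from the same repository (an atrous 3x3-conv branch and a global-average-pooling branch, respectively). A quick shape sanity check, matching the output_stride-16 feature sizes shown earlier:

import torch

aspp = ASPP(in_channels=2048, atrous_rates=[6, 12, 18])
print(aspp(torch.randn(1, 2048, 33, 33)).shape)   # torch.Size([1, 256, 33, 33])

head = DeepLabHeadV3Plus(2048, 256, num_classes=21, aspp_dilate=[6, 12, 18])
feature = {'low_level': torch.randn(1, 256, 129, 129), 'out': torch.randn(1, 2048, 33, 33)}
print(head(feature).shape)                        # torch.Size([1, 21, 129, 129])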

Converting Convolutions to Depthwise Separable Convolutions

def convert_to_separable_conv(module):
    """Recursively replace every kxk (k>1) Conv2d with an atrous separable convolution."""
    new_module = module
    if isinstance(module, nn.Conv2d) and module.kernel_size[0]>1:
        new_module = AtrousSeparableConvolution(module.in_channels,
                                      module.out_channels,
                                      module.kernel_size,
                                      module.stride,
                                      module.padding,
                                      module.dilation,
                                      module.bias is not None)  # pass a bool, not the bias tensor
    for name, child in module.named_children():
        new_module.add_module(name, convert_to_separable_conv(child))
    return new_module
class AtrousSeparableConvolution(nn.Module):
    """ Atrous Separable Convolution
    """
    def __init__(self, in_channels, out_channels, kernel_size,
                            stride=1, padding=0, dilation=1, bias=True):
        super(AtrousSeparableConvolution, self).__init__()
        self.body = nn.Sequential(
            # depthwise conv: each input channel is filtered independently (groups=in_channels)
            nn.Conv2d( in_channels, in_channels, kernel_size=kernel_size, stride=stride, padding=padding, dilation=dilation, bias=bias, groups=in_channels ),
            # pointwise 1x1 conv: mixes information across channels
            nn.Conv2d( in_channels, out_channels, kernel_size=1, stride=1, padding=0, bias=bias),
        )
        
        self._init_weight()

    def forward(self, x):
        return self.body(x)

    def _init_weight(self):
        for m in self.modules():
            if isinstance(m, nn.Conv2d):
                nn.init.kaiming_normal_(m.weight)
            elif isinstance(m, (nn.BatchNorm2d, nn.GroupNorm)):
                nn.init.constant_(m.weight, 1)
                nn.init.constant_(m.bias, 0)
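
This is what the --separable_conv flag hooks into: after the model is built, the decoder head is converted in place, roughly as follows (a sketch following the reference training script):

if opts.separable_conv and 'plus' in opts.model:
    network.convert_to_separable_conv(model.classifier)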

Reference code
Link: https://pan.baidu.com/s/1E6CD5NQjbKeITacH9VjxAw?pwd=jqgk  Extraction code: jqgk
