【Image Classification】【Deep Learning】【PyTorch】Inception-ResNet Model Explained in Detail


Table of Contents

  • 【Image Classification】【Deep Learning】【PyTorch】Inception-ResNet Model Explained in Detail
  • Preface
  • Inception-ResNet Explained
    • Inception-ResNet-V1
    • Inception-ResNet-V2
    • Scaling of the Residuals
    • Overall Model Structure of Inception-ResNet
  • GoogLeNet (Inception-ResNet) PyTorch Code
    • Inception-ResNet-V1
    • Inception-ResNet-V2
  • Complete Code
    • Inception-ResNet-V1
    • Inception-ResNet-V2
  • Summary


Preface

GoogLeNet (Inception-ResNet) was proposed by Szegedy, Christian et al. at Google in "Inception-v4, Inception-ResNet and the Impact of Residual Connections on Learning"【AAAI-2017】【paper link】. Inspired by the strong performance of ResNet【reference】on very deep networks, the paper adds residual connections to the Inception architecture, producing two Inception-ResNet variants: the shortcut connection takes the place of the pooling branch in the original Inception block, and the channel-wise concatenation is replaced by element-wise addition, which speeds up the training of Inception.

Because InceptionV4, Inception-ResNet-v1 and Inception-ResNet-v2 all come from the same paper, many readers misunderstand InceptionV4 and assume it combines Inception modules with residual learning. In fact, InceptionV4 does not use residual learning at all; it essentially continues the Inception v2/v3 design. Only Inception-ResNet-v1 and Inception-ResNet-v2 combine Inception modules with residual learning.


Inception-ResNet Explained

The core idea of Inception-ResNet is to fuse the Inception module with the ResNet residual connection so as to exploit the strengths of both. The Inception module captures multi-scale features by running several convolutions with different kernel sizes in parallel, while the residual connection alleviates the vanishing/exploding-gradient problems of deep networks and makes deep models easier to train. Inception-ResNet uses Inception modules similar to those of InceptionV4【reference】and adds ResNet-style residual connections to them: every Inception-ResNet block consists of two paths, a main branch with the usual parallel Inception structure and a shortcut branch, whose outputs are summed. This design lets the model learn richer feature representations and propagates gradients more effectively during training.
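As a minimal illustration of the difference (a toy sketch rather than any exact block from the paper; the two branches below are placeholders), a plain Inception block merges its branches by concatenation, whereas an Inception-ResNet block projects the concatenation back to the input width with a 1×1 convolution and adds it to the shortcut:

import torch
import torch.nn as nn

# toy block: parallel branches -> concat -> 1x1 projection -> add to shortcut
class ToyInceptionResidual(nn.Module):
    def __init__(self, channels):
        super().__init__()
        self.branch1 = nn.Conv2d(channels, channels // 2, kernel_size=1)
        self.branch2 = nn.Conv2d(channels, channels // 2, kernel_size=3, padding=1)
        # the 1x1 projection restores the channel count so the sum with x is well defined
        self.project = nn.Conv2d(channels, channels, kernel_size=1)
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x):
        merged = torch.cat([self.branch1(x), self.branch2(x)], dim=1)  # Inception: concatenate
        return self.relu(x + self.project(merged))                     # ResNet: add to shortcut

x = torch.randn(1, 64, 35, 35)
print(ToyInceptionResidual(64)(x).shape)  # torch.Size([1, 64, 35, 35])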

Inception-ResNet-V1

Inception-ResNet-v1: a structure whose computational cost is roughly the same as that of InceptionV3【reference】.

  1. Stem structure: the Stem of Inception-ResNet-V1 is similar to the layers that precede the first Inception block group in the earlier InceptionV3 network.

    Convolutions not marked with a "V" use "SAME" padding, so (at stride 1) the output spatial size equals the input size; convolutions marked with "V" use "VALID" padding, so the output size depends on the kernel size and stride (see the padding sketch after this list).

  2. Inception-resnet-A structure: a variant of the Inception-A structure in InceptionV4; the 1×1 convolution is there to make the feature map of the main branch exactly match the shape of the shortcut branch.

    In an Inception-resnet block, the residual (shortcut) connection replaces the pooling branch of the original Inception block, and element-wise addition of the shortcut replaces the original concatenation.

  3. Inception-resnet-B structure: a variant of the Inception-B structure in InceptionV4; the 1×1 convolution is there to make the feature map of the main branch exactly match the shape of the shortcut branch.

  4. Inception-resnet-C structure: a variant of the Inception-C structure in InceptionV4; the 1×1 convolution is there to make the feature map of the main branch exactly match the shape of the shortcut branch.

  5. Reduction-A structure: identical to the Reduction-A structure in InceptionV4, differing only in the number of convolution kernels.

    k and l denote numbers of convolution kernels; different networks use different values of k and l in their Reduction-A blocks.

  6. Reduction-B structure:
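The "SAME"/"VALID" distinction is easy to verify in PyTorch (a small sketch; the 35×35 input below is just an illustrative size, not a specific layer from the paper):

import torch
import torch.nn as nn

x = torch.randn(1, 32, 35, 35)
same = nn.Conv2d(32, 32, kernel_size=3, stride=1, padding=1)      # "SAME": 35x35 -> 35x35
valid = nn.Conv2d(32, 32, kernel_size=3, stride=1, padding=0)     # "VALID": 35x35 -> 33x33
valid_s2 = nn.Conv2d(32, 32, kernel_size=3, stride=2, padding=0)  # "VALID", stride 2: 35x35 -> 17x17
print(same(x).shape, valid(x).shape, valid_s2(x).shape)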

Inception-ResNet-V2

Inception-ResNet-v2: a structure whose computational cost roughly matches that of InceptionV4, but which trains faster than pure Inception-v4.
The overall framework of Inception-ResNet-v2 is the same as that of Inception-ResNet-v1. Its Stem is identical to that of InceptionV4, while the remaining blocks are similar to those of Inception-ResNet-v1, only with more convolution kernels.

  1. Stem structure: the Stem of Inception-ResNet-v2 is identical to that of InceptionV4.
  2. Inception-resnet-A structure: a variant of the Inception-A structure in InceptionV4; the 1×1 convolution is there to make the feature map of the main branch exactly match the shape of the shortcut branch.
  3. Inception-resnet-B structure: a variant of the Inception-B structure in InceptionV4; the 1×1 convolution is there to make the feature map of the main branch exactly match the shape of the shortcut branch.
  4. Inception-resnet-C structure: a variant of the Inception-C structure in InceptionV4; the 1×1 convolution is there to make the feature map of the main branch exactly match the shape of the shortcut branch.
  5. Reduction-A structure: identical to the Reduction-A structure in InceptionV4, differing only in the number of convolution kernels.

    k and l denote numbers of convolution kernels; different networks use different values of k and l in their Reduction-A blocks.

  6. Reduction-B structure:

Scaling of the Residuals

If the number of filters in a single layer becomes very large (above roughly 1000), the residual variants start to become unstable and the network "dies" early in training: after a few tens of thousands of iterations, the layers before the average pooling begin to output only zeros. Lowering the learning rate or adding extra BN layers does not prevent this. Scaling down the residual-branch output before it is added to the shortcut branch stabilizes training.

Typically, a scaling factor between 0.1 and 0.3 is used to scale the residual-branch output. Even though the scaling is not strictly necessary and does not appear to affect the final accuracy, it helps stabilize training.
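In code this amounts to a single multiplication before the element-wise addition. A minimal sketch (assuming x is the shortcut tensor and x_res is the residual-branch output after its 1×1 projection, both of the same shape):

import torch

def scaled_residual_add(x, x_res, scale=0.17):
    # a scale in [0.1, 0.3] keeps the residual branch from overwhelming the shortcut
    return torch.relu(x + scale * x_res)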

Overall Model Structure of Inception-ResNet

The figure below, taken from the original paper, shows the detailed structure of the Inception-ResNet-V1 model:

The figure below, taken from the original paper, shows the detailed structure of the Inception-ResNet-V2 model:

Note for readers: some of the channel counts annotated in the original paper for Inception-ResNet-V2 are wrong, and they will not add up when you write the code.

The two versions share the same overall structure; only the specific Stem, Inception and Reduction blocks differ slightly.
For image classification, Inception-ResNet-V1 and Inception-ResNet-V2 consist of two parts: a backbone made up of the Stem module, the Inception-resnet blocks and the pooling (reduction) layers, and a classifier made up of fully connected layers.


GoogLeNet (Inception-ResNet) PyTorch Code

Inception-ResNet-V1

Convolution layer group: convolution layer + BN layer + activation function

# Convolution group: Conv2d + BN + ReLU
class BasicConv2d(nn.Module):
    def __init__(self, in_channels, out_channels, kernel_size, stride=1, padding=0):
        super(BasicConv2d, self).__init__()
        self.conv = nn.Conv2d(in_channels, out_channels, kernel_size, stride, padding)
        self.bn = nn.BatchNorm2d(out_channels)
        self.relu = nn.ReLU(inplace=True)
    def forward(self, x):
        x = self.conv(x)
        x = self.bn(x)
        x = self.relu(x)
        return x
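A quick sanity check of this block (a usage sketch; it assumes torch/torch.nn are imported and uses an arbitrary 35×35 feature map):

x = torch.randn(1, 192, 35, 35)
block = BasicConv2d(192, 320, kernel_size=3, padding=1)
print(block(x).shape)  # torch.Size([1, 320, 35, 35])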

Stem module: convolution layer groups + pooling layer

# Stem:BasicConv2d+MaxPool2d
class Stem(nn.Module):
    def __init__(self, in_channels):
        super(Stem, self).__init__()

        # conv3x3(32 stride2 valid)
        self.conv1 = BasicConv2d(in_channels, 32, kernel_size=3, stride=2)
        # conv3*3(32 valid)
        self.conv2 = BasicConv2d(32, 32, kernel_size=3)
        # conv3*3(64)
        self.conv3 = BasicConv2d(32, 64, kernel_size=3, padding=1)

        # maxpool3*3(stride2 valid)
        self.maxpool4 = nn.MaxPool2d(kernel_size=3, stride=2)

        # conv1*1(80)
        self.conv5 = BasicConv2d(64, 80, kernel_size=1)
        # conv3*3(192 valid)
        self.conv6 = BasicConv2d(80, 192, kernel_size=3)

        # conv3*3(256 stride2 valid)
        self.conv7 = BasicConv2d(192, 256, kernel_size=3, stride=2)

    def forward(self, x):
        x = self.maxpool4(self.conv3(self.conv2(self.conv1(x))))
        x = self.conv7(self.conv6(self.conv5(x)))
        return x
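With a 299×299 input, this Stem should produce the 35×35×256 feature map described in the paper (a quick check, assuming torch is imported and the classes above are defined):

x = torch.randn(1, 3, 299, 299)
stem = Stem(3)
print(stem(x).shape)  # expected: torch.Size([1, 256, 35, 35])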

Inception_ResNet-A module: convolution layer groups + residual addition

# Inception_ResNet_A: BasicConv2d branches + residual add
class Inception_ResNet_A(nn.Module):
    def __init__(self, in_channels, ch1x1, ch3x3red, ch3x3, ch3x3redX2, ch3x3X2_1, ch3x3X2_2, ch1x1ext, scale=1.0):
        super(Inception_ResNet_A, self).__init__()
        # residual scaling factor
        self.scale = scale
        # conv1*1(32)
        self.branch_0 = BasicConv2d(in_channels, ch1x1, 1)
        # conv1*1(32)+conv3*3(32)
        self.branch_1 = nn.Sequential(
            BasicConv2d(in_channels, ch3x3red, 1),
            BasicConv2d(ch3x3red, ch3x3, 3, stride=1, padding=1)
        )
        # conv1*1(32)+conv3*3(32)+conv3*3(32)
        self.branch_2 = nn.Sequential(
            BasicConv2d(in_channels, ch3x3redX2, 1),
            BasicConv2d(ch3x3redX2, ch3x3X2_1, 3, stride=1, padding=1),
            BasicConv2d(ch3x3X2_1, ch3x3X2_2, 3, stride=1, padding=1)
        )
        # conv1*1(256)
        self.conv = BasicConv2d(ch1x1+ch3x3+ch3x3X2_2, ch1x1ext, 1)
        self.relu = nn.ReLU(inplace=True)
    def forward(self, x):
        x0 = self.branch_0(x)
        x1 = self.branch_1(x)
        x2 = self.branch_2(x)
        # concatenate branch outputs
        x_res = torch.cat((x0, x1, x2), dim=1)
        x_res = self.conv(x_res)
        return self.relu(x + self.scale * x_res)
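Because the 1×1 projection restores the input channel count, the block preserves the feature-map shape. A usage sketch with the Inception-ResNet-V1 parameters used later in this article:

x = torch.randn(1, 256, 35, 35)
block = Inception_ResNet_A(256, 32, 32, 32, 32, 32, 32, 256, scale=0.17)
print(block(x).shape)  # torch.Size([1, 256, 35, 35])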

Inception_ResNet-B module: convolution layer groups + residual addition

# Inception_ResNet_B: BasicConv2d branches + residual add
class Inception_ResNet_B(nn.Module):
    def __init__(self, in_channels, ch1x1, ch_red, ch_1, ch_2, ch1x1ext, scale=1.0):
        super(Inception_ResNet_B, self).__init__()
        # residual scaling factor
        self.scale = scale
        # conv1*1(128)
        self.branch_0 = BasicConv2d(in_channels, ch1x1, 1)
        # conv1*1(128)+conv1*7(128)+conv7*1(128)
        self.branch_1 = nn.Sequential(
            BasicConv2d(in_channels, ch_red, 1),
            BasicConv2d(ch_red, ch_1, (1, 7), stride=1, padding=(0, 3)),
            BasicConv2d(ch_1, ch_2, (7, 1), stride=1, padding=(3, 0))
        )
        # conv1*1(896)
        self.conv = BasicConv2d(ch1x1+ch_2, ch1x1ext, 1)
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x):
        x0 = self.branch_0(x)
        x1 = self.branch_1(x)
        # concatenate branch outputs
        x_res = torch.cat((x0, x1), dim=1)
        x_res = self.conv(x_res)
        return self.relu(x + self.scale * x_res)
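In Inception-ResNet-V1 the B blocks operate on the 17×17×896 grid produced by Reduction-A. A usage sketch with the V1 parameters used later:

x = torch.randn(1, 896, 17, 17)
block = Inception_ResNet_B(896, 128, 128, 128, 128, 896, scale=0.10)
print(block(x).shape)  # torch.Size([1, 896, 17, 17])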

Inception_ResNet-C module: convolution layer groups + residual addition

# Inception_ResNet_C: BasicConv2d branches + residual add
class Inception_ResNet_C(nn.Module):
    def __init__(self, in_channels, ch1x1, ch3x3redX2, ch3x3X2_1, ch3x3X2_2, ch1x1ext,  scale=1.0, activation=True):
        super(Inception_ResNet_C, self).__init__()
        # residual scaling factor
        self.scale = scale
        # whether to apply the final ReLU
        self.activation = activation
        # conv1*1(192)
        self.branch_0 = BasicConv2d(in_channels, ch1x1, 1)
        # conv1*1(192)+conv1*3(192)+conv3*1(192)
        self.branch_1 = nn.Sequential(
            BasicConv2d(in_channels, ch3x3redX2, 1),
            BasicConv2d(ch3x3redX2, ch3x3X2_1, (1, 3), stride=1, padding=(0, 1)),
            BasicConv2d(ch3x3X2_1, ch3x3X2_2, (3, 1), stride=1, padding=(1, 0))
        )
        # conv1*1(1792)
        self.conv = BasicConv2d(ch1x1+ch3x3X2_2, ch1x1ext, 1)
        self.relu = nn.ReLU(inplace=True)
    def forward(self, x):
        x0 = self.branch_0(x)
        x1 = self.branch_1(x)
        # concatenate branch outputs
        x_res = torch.cat((x0, x1), dim=1)
        x_res = self.conv(x_res)
        if self.activation:
            return self.relu(x + self.scale * x_res)
        return x + self.scale * x_res

redutionA module: convolution layer groups + pooling layer

# redutionA:BasicConv2d+MaxPool2d
class redutionA(nn.Module):
    def __init__(self, in_channels, k, l, m, n):
        super(redutionA, self).__init__()
        # conv3*3(n stride2 valid)
        self.branch1 = nn.Sequential(
            BasicConv2d(in_channels, n, kernel_size=3, stride=2),
        )
        # conv1*1(k)+conv3*3(l)+conv3*3(m stride2 valid)
        self.branch2 = nn.Sequential(
            BasicConv2d(in_channels, k, kernel_size=1),
            BasicConv2d(k, l, kernel_size=3, padding=1),
            BasicConv2d(l, m, kernel_size=3, stride=2)
        )
        # maxpool3*3(stride2 valid)
        self.branch3 = nn.Sequential(nn.MaxPool2d(kernel_size=3, stride=2))

    def forward(self, x):
        branch1 = self.branch1(x)
        branch2 = self.branch2(x)
        branch3 = self.branch3(x)
        # concatenate branch outputs
        outputs = [branch1, branch2, branch3]
        return torch.cat(outputs, 1)
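Reduction-A halves the spatial resolution, and its output width is n + m + in_channels (the max-pooling branch keeps the input channels). With the Inception-ResNet-V1 values k=192, l=192, m=256, n=384 used later, that is 384 + 256 + 256 = 896 channels, which matches the input width of the Inception-ResNet-B blocks above. A quick check:

x = torch.randn(1, 256, 35, 35)
block = redutionA(256, k=192, l=192, m=256, n=384)
print(block(x).shape)  # torch.Size([1, 896, 17, 17])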

redutionB module: convolution layer groups + pooling layer

# redutionB:BasicConv2d+MaxPool2d
class redutionB(nn.Module):
    def __init__(self, in_channels, ch1x1, ch3x3_1, ch3x3_2, ch3x3_3, ch3x3_4):
        super(redutionB, self).__init__()
        # conv1*1(256)+conv3x3(384 stride2 valid)
        self.branch_0 = nn.Sequential(
            BasicConv2d(in_channels, ch1x1, 1),
            BasicConv2d(ch1x1, ch3x3_1, 3, stride=2, padding=0)
        )
        # conv1*1(256)+conv3x3(256 stride2 valid)
        self.branch_1 = nn.Sequential(
            BasicConv2d(in_channels, ch1x1, 1),
            BasicConv2d(ch1x1, ch3x3_2, 3, stride=2, padding=0),
        )
        # conv1*1(256)+conv3x3(256)+conv3x3(256 stride2 valid)
        self.branch_2 = nn.Sequential(
            BasicConv2d(in_channels, ch1x1, 1),
            BasicConv2d(ch1x1, ch3x3_3, 3, stride=1, padding=1),
            BasicConv2d(ch3x3_3, ch3x3_4, 3, stride=2, padding=0)
        )
        # maxpool3*3(stride2 valid)
        self.branch_3 = nn.MaxPool2d(3, stride=2, padding=0)

    def forward(self, x):
        x0 = self.branch_0(x)
        x1 = self.branch_1(x)
        x2 = self.branch_2(x)
        x3 = self.branch_3(x)
        return torch.cat((x0, x1, x2, x3), dim=1)
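Likewise, Reduction-B outputs ch3x3_1 + ch3x3_2 + ch3x3_4 + in_channels channels. With the V1 configuration used later, redutionB(896, 256, 384, 256, 256, 256), that is 384 + 256 + 256 + 896 = 1792, the width expected by the Inception-ResNet-C blocks:

x = torch.randn(1, 896, 17, 17)
block = redutionB(896, 256, 384, 256, 256, 256)
print(block(x).shape)  # torch.Size([1, 1792, 8, 8])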

Inception-ResNet-V2

Apart from the Stem, the modules of Inception-ResNet-V2 have the same structure as those of Inception-ResNet-V1; only the channel counts differ.
Convolution layer group: convolution layer + BN layer + activation function

# Convolution group: Conv2d + BN + ReLU
class BasicConv2d(nn.Module):
    def __init__(self, in_channels, out_channels, kernel_size, stride=1, padding=0):
        super(BasicConv2d, self).__init__()
        self.conv = nn.Conv2d(in_channels, out_channels, kernel_size, stride, padding)
        self.bn = nn.BatchNorm2d(out_channels)
        self.relu = nn.ReLU(inplace=True)
    def forward(self, x):
        x = self.conv(x)
        x = self.bn(x)
        x = self.relu(x)
        return x

Stem module: convolution layer groups + pooling layer

# Stem:BasicConv2d+MaxPool2d
class Stem(nn.Module):
    def __init__(self, in_channels):
        super(Stem, self).__init__()
        # conv3*3(32 stride2 valid)
        self.conv1 = BasicConv2d(in_channels, 32, kernel_size=3, stride=2)
        # conv3*3(32 valid)
        self.conv2 = BasicConv2d(32, 32, kernel_size=3)
        # conv3*3(64)
        self.conv3 = BasicConv2d(32, 64, kernel_size=3, padding=1)
        # maxpool3*3(stride2 valid) & conv3*3(96 stride2 valid)
        self.maxpool4 = nn.MaxPool2d(kernel_size=3, stride=2)
        self.conv4 = BasicConv2d(64, 96, kernel_size=3, stride=2)

        # conv1*1(64)+conv3*3(96 valid)
        self.conv5_1_1 = BasicConv2d(160, 64, kernel_size=1)
        self.conv5_1_2 = BasicConv2d(64, 96, kernel_size=3)
        # conv1*1(64)+conv7*1(64)+conv1*7(64)+conv3*3(96 valid)
        self.conv5_2_1 = BasicConv2d(160, 64, kernel_size=1)
        self.conv5_2_2 = BasicConv2d(64, 64, kernel_size=(7, 1), padding=(3, 0))
        self.conv5_2_3 = BasicConv2d(64, 64, kernel_size=(1, 7), padding=(0, 3))
        self.conv5_2_4 = BasicConv2d(64, 96, kernel_size=3)

        # conv3*3(192 valid) & maxpool3*3(stride2 valid)
        self.conv6 = BasicConv2d(192, 192, kernel_size=3, stride=2)
        self.maxpool6 = nn.MaxPool2d(kernel_size=3, stride=2)

    def forward(self, x):
        # compute the shared convolution stack once, then branch into max-pool / strided conv
        y = self.conv3(self.conv2(self.conv1(x)))
        x1_1 = self.maxpool4(y)
        x1_2 = self.conv4(y)
        x1 = torch.cat([x1_1, x1_2], 1)

        x2_1 = self.conv5_1_2(self.conv5_1_1(x1))
        x2_2 = self.conv5_2_4(self.conv5_2_3(self.conv5_2_2(self.conv5_2_1(x1))))
        x2 = torch.cat([x2_1, x2_2], 1)

        x3_1 = self.conv6(x2)
        x3_2 = self.maxpool6(x2)
        x3 = torch.cat([x3_1, x3_2], 1)
        return x3
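For a 299×299 input this Stem should produce the 35×35×384 feature map expected by the V2 Inception-ResNet-A blocks (a quick check, assuming torch is imported and the classes above are defined):

x = torch.randn(1, 3, 299, 299)
stem = Stem(3)
print(stem(x).shape)  # expected: torch.Size([1, 384, 35, 35])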

Inception_ResNet-A module: convolution layer groups + residual addition

# Inception_ResNet_A: BasicConv2d branches + residual add
class Inception_ResNet_A(nn.Module):
    def __init__(self, in_channels, ch1x1, ch3x3red, ch3x3, ch3x3redX2, ch3x3X2_1, ch3x3X2_2, ch1x1ext, scale=1.0):
        super(Inception_ResNet_A, self).__init__()
        # residual scaling factor
        self.scale = scale
        # conv1*1(32)
        self.branch_0 = BasicConv2d(in_channels, ch1x1, 1)
        # conv1*1(32)+conv3*3(32)
        self.branch_1 = nn.Sequential(
            BasicConv2d(in_channels, ch3x3red, 1),
            BasicConv2d(ch3x3red, ch3x3, 3, stride=1, padding=1)
        )
        # conv1*1(32)+conv3*3(48)+conv3*3(64)
        self.branch_2 = nn.Sequential(
            BasicConv2d(in_channels, ch3x3redX2, 1),
            BasicConv2d(ch3x3redX2, ch3x3X2_1, 3, stride=1, padding=1),
            BasicConv2d(ch3x3X2_1, ch3x3X2_2, 3, stride=1, padding=1)
        )
        # conv1*1(384)
        self.conv = BasicConv2d(ch1x1+ch3x3+ch3x3X2_2, ch1x1ext, 1)
        self.relu = nn.ReLU(inplace=True)
    def forward(self, x):
        x0 = self.branch_0(x)
        x1 = self.branch_1(x)
        x2 = self.branch_2(x)
        # concatenate branch outputs
        x_res = torch.cat((x0, x1, x2), dim=1)
        x_res = self.conv(x_res)
        return self.relu(x + self.scale * x_res)

Inception_ResNet-B module: convolution layer groups + residual addition

# Inception_ResNet_B: BasicConv2d branches + residual add
class Inception_ResNet_B(nn.Module):
    def __init__(self, in_channels, ch1x1, ch_red, ch_1, ch_2, ch1x1ext, scale=1.0):
        super(Inception_ResNet_B, self).__init__()
        # residual scaling factor
        self.scale = scale
        # conv1*1(192)
        self.branch_0 = BasicConv2d(in_channels, ch1x1, 1)
        # conv1*1(128)+conv1*7(160)+conv7*1(192)
        self.branch_1 = nn.Sequential(
            BasicConv2d(in_channels, ch_red, 1),
            BasicConv2d(ch_red, ch_1, (1, 7), stride=1, padding=(0, 3)),
            BasicConv2d(ch_1, ch_2, (7, 1), stride=1, padding=(3, 0))
        )
        # conv1*1(1154)
        self.conv = BasicConv2d(ch1x1+ch_2, ch1x1ext, 1)
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x):
        x0 = self.branch_0(x)
        x1 = self.branch_1(x)
        # concatenate branch outputs
        x_res = torch.cat((x0, x1), dim=1)
        x_res = self.conv(x_res)
        return self.relu(x + self.scale * x_res)

Inception_ResNet-C module: convolution layer groups + residual addition

# Inception_ResNet_C: BasicConv2d branches + residual add
class Inception_ResNet_C(nn.Module):
    def __init__(self, in_channels, ch1x1, ch3x3redX2, ch3x3X2_1, ch3x3X2_2, ch1x1ext,  scale=1.0, activation=True):
        super(Inception_ResNet_C, self).__init__()
        # residual scaling factor
        self.scale = scale
        # whether to apply the final ReLU
        self.activation = activation
        # conv1*1(192)
        self.branch_0 = BasicConv2d(in_channels, ch1x1, 1)
        # conv1*1(192)+conv1*3(224)+conv3*1(256)
        self.branch_1 = nn.Sequential(
            BasicConv2d(in_channels, ch3x3redX2, 1),
            BasicConv2d(ch3x3redX2, ch3x3X2_1, (1, 3), stride=1, padding=(0, 1)),
            BasicConv2d(ch3x3X2_1, ch3x3X2_2, (3, 1), stride=1, padding=(1, 0))
        )
        # conv1*1(2048)
        self.conv = BasicConv2d(ch1x1+ch3x3X2_2, ch1x1ext, 1)
        self.relu = nn.ReLU(inplace=True)
    def forward(self, x):
        x0 = self.branch_0(x)
        x1 = self.branch_1(x)
        # concatenate branch outputs
        x_res = torch.cat((x0, x1), dim=1)
        x_res = self.conv(x_res)
        if self.activation:
            return self.relu(x + self.scale * x_res)
        return x + self.scale * x_res

redutionA module: convolution layer groups + pooling layer

# redutionA:BasicConv2d+MaxPool2d
class redutionA(nn.Module):
    def __init__(self, in_channels, k, l, m, n):
        super(redutionA, self).__init__()
        # conv3*3(n stride2 valid)
        self.branch1 = nn.Sequential(
            BasicConv2d(in_channels, n, kernel_size=3, stride=2),
        )
        # conv1*1(k)+conv3*3(l)+conv3*3(m stride2 valid)
        self.branch2 = nn.Sequential(
            BasicConv2d(in_channels, k, kernel_size=1),
            BasicConv2d(k, l, kernel_size=3, padding=1),
            BasicConv2d(l, m, kernel_size=3, stride=2)
        )
        # maxpool3*3(stride2 valid)
        self.branch3 = nn.Sequential(nn.MaxPool2d(kernel_size=3, stride=2))

    def forward(self, x):
        branch1 = self.branch1(x)
        branch2 = self.branch2(x)
        branch3 = self.branch3(x)
        # concatenate branch outputs
        outputs = [branch1, branch2, branch3]
        return torch.cat(outputs, 1)

redutionB module: convolution layer groups + pooling layer

# redutionB:BasicConv2d+MaxPool2d
class redutionB(nn.Module):
    def __init__(self, in_channels, ch1x1, ch3x3_1, ch3x3_2, ch3x3_3, ch3x3_4):
        super(redutionB, self).__init__()
        # conv1*1(256)+conv3x3(384 stride2 valid)
        self.branch_0 = nn.Sequential(
            BasicConv2d(in_channels, ch1x1, 1),
            BasicConv2d(ch1x1, ch3x3_1, 3, stride=2, padding=0)
        )
        # conv1*1(256)+conv3x3(288 stride2 valid)
        self.branch_1 = nn.Sequential(
            BasicConv2d(in_channels, ch1x1, 1),
            BasicConv2d(ch1x1, ch3x3_2, 3, stride=2, padding=0),
        )
        # conv1*1(256)+conv3x3(288)+conv3x3(320 stride2 valid)
        self.branch_2 = nn.Sequential(
            BasicConv2d(in_channels, ch1x1, 1),
            BasicConv2d(ch1x1, ch3x3_3, 3, stride=1, padding=1),
            BasicConv2d(ch3x3_3, ch3x3_4, 3, stride=2, padding=0)
        )
        # maxpool3*3(stride2 valid)
        self.branch_3 = nn.MaxPool2d(3, stride=2, padding=0)

    def forward(self, x):
        x0 = self.branch_0(x)
        x1 = self.branch_1(x)
        x2 = self.branch_2(x)
        x3 = self.branch_3(x)
        return torch.cat((x0, x1, x2, x3), dim=1)

Complete Code

Inception-ResNet takes 299×299 input images.
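For reference, an input pipeline that produces the 299×299 tensors these models expect could look like the sketch below (using torchvision; the normalization statistics are the usual ImageNet values and "example.jpg" is a hypothetical path, neither is specified in this article):

import torch
from torchvision import transforms
from PIL import Image

preprocess = transforms.Compose([
    transforms.Resize(320),
    transforms.CenterCrop(299),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])

img = Image.open("example.jpg").convert("RGB")  # hypothetical image path
x = preprocess(img).unsqueeze(0)                # shape: (1, 3, 299, 299)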

Inception-ResNet-V1

import torch
import torch.nn as nn
from torchsummary import summary

# Convolution group: Conv2d + BN + ReLU
class BasicConv2d(nn.Module):
    def __init__(self, in_channels, out_channels, kernel_size, stride=1, padding=0):
        super(BasicConv2d, self).__init__()
        self.conv = nn.Conv2d(in_channels, out_channels, kernel_size, stride, padding)
        self.bn = nn.BatchNorm2d(out_channels)
        self.relu = nn.ReLU(inplace=True)
    def forward(self, x):
        x = self.conv(x)
        x = self.bn(x)
        x = self.relu(x)
        return x

# Stem:BasicConv2d+MaxPool2d
class Stem(nn.Module):
    def __init__(self, in_channels):
        super(Stem, self).__init__()

        # conv3x3(32 stride2 valid)
        self.conv1 = BasicConv2d(in_channels, 32, kernel_size=3, stride=2)
        # conv3*3(32 valid)
        self.conv2 = BasicConv2d(32, 32, kernel_size=3)
        # conv3*3(64)
        self.conv3 = BasicConv2d(32, 64, kernel_size=3, padding=1)

        # maxpool3*3(stride2 valid)
        self.maxpool4 = nn.MaxPool2d(kernel_size=3, stride=2)

        # conv1*1(80)
        self.conv5 = BasicConv2d(64, 80, kernel_size=1)
        # conv3*3(192 valid)
        self.conv6 = BasicConv2d(80, 192, kernel_size=3)

        # conv3*3(256 stride2 valid)
        self.conv7 = BasicConv2d(192, 256, kernel_size=3, stride=2)

    def forward(self, x):
        x = self.maxpool4(self.conv3(self.conv2(self.conv1(x))))
        x = self.conv7(self.conv6(self.conv5(x)))
        return x

# Inception_ResNet_A: BasicConv2d branches + residual add
class Inception_ResNet_A(nn.Module):
    def __init__(self, in_channels, ch1x1, ch3x3red, ch3x3, ch3x3redX2, ch3x3X2_1, ch3x3X2_2, ch1x1ext, scale=1.0):
        super(Inception_ResNet_A, self).__init__()
        # residual scaling factor
        self.scale = scale
        # conv1*1(32)
        self.branch_0 = BasicConv2d(in_channels, ch1x1, 1)
        # conv1*1(32)+conv3*3(32)
        self.branch_1 = nn.Sequential(
            BasicConv2d(in_channels, ch3x3red, 1),
            BasicConv2d(ch3x3red, ch3x3, 3, stride=1, padding=1)
        )
        # conv1*1(32)+conv3*3(32)+conv3*3(32)
        self.branch_2 = nn.Sequential(
            BasicConv2d(in_channels, ch3x3redX2, 1),
            BasicConv2d(ch3x3redX2, ch3x3X2_1, 3, stride=1, padding=1),
            BasicConv2d(ch3x3X2_1, ch3x3X2_2, 3, stride=1, padding=1)
        )
        # conv1*1(256)
        self.conv = BasicConv2d(ch1x1+ch3x3+ch3x3X2_2, ch1x1ext, 1)
        self.relu = nn.ReLU(inplace=True)
    def forward(self, x):
        x0 = self.branch_0(x)
        x1 = self.branch_1(x)
        x2 = self.branch_2(x)
        # concatenate branch outputs
        x_res = torch.cat((x0, x1, x2), dim=1)
        x_res = self.conv(x_res)
        return self.relu(x + self.scale * x_res)

# Inception_ResNet_B: BasicConv2d branches + residual add
class Inception_ResNet_B(nn.Module):
    def __init__(self, in_channels, ch1x1, ch_red, ch_1, ch_2, ch1x1ext, scale=1.0):
        super(Inception_ResNet_B, self).__init__()
        # residual scaling factor
        self.scale = scale
        # conv1*1(128)
        self.branch_0 = BasicConv2d(in_channels, ch1x1, 1)
        # conv1*1(128)+conv1*7(128)+conv7*1(128)
        self.branch_1 = nn.Sequential(
            BasicConv2d(in_channels, ch_red, 1),
            BasicConv2d(ch_red, ch_1, (1, 7), stride=1, padding=(0, 3)),
            BasicConv2d(ch_1, ch_2, (7, 1), stride=1, padding=(3, 0))
        )
        # conv1*1(896)
        self.conv = BasicConv2d(ch1x1+ch_2, ch1x1ext, 1)
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x):
        x0 = self.branch_0(x)
        x1 = self.branch_1(x)
        # concatenate branch outputs
        x_res = torch.cat((x0, x1), dim=1)
        x_res = self.conv(x_res)
        return self.relu(x + self.scale * x_res)

# Inception_ResNet_C: BasicConv2d branches + residual add
class Inception_ResNet_C(nn.Module):
    def __init__(self, in_channels, ch1x1, ch3x3redX2, ch3x3X2_1, ch3x3X2_2, ch1x1ext,  scale=1.0, activation=True):
        super(Inception_ResNet_C, self).__init__()
        # residual scaling factor
        self.scale = scale
        # whether to apply the final ReLU
        self.activation = activation
        # conv1*1(192)
        self.branch_0 = BasicConv2d(in_channels, ch1x1, 1)
        # conv1*1(192)+conv1*3(192)+conv3*1(192)
        self.branch_1 = nn.Sequential(
            BasicConv2d(in_channels, ch3x3redX2, 1),
            BasicConv2d(ch3x3redX2, ch3x3X2_1, (1, 3), stride=1, padding=(0, 1)),
            BasicConv2d(ch3x3X2_1, ch3x3X2_2, (3, 1), stride=1, padding=(1, 0))
        )
        # conv1*1(1792)
        self.conv = BasicConv2d(ch1x1+ch3x3X2_2, ch1x1ext, 1)
        self.relu = nn.ReLU(inplace=True)
    def forward(self, x):
        x0 = self.branch_0(x)
        x1 = self.branch_1(x)
        # concatenate branch outputs
        x_res = torch.cat((x0, x1), dim=1)
        x_res = self.conv(x_res)
        if self.activation:
            return self.relu(x + self.scale * x_res)
        return x + self.scale * x_res

# redutionA:BasicConv2d+MaxPool2d
class redutionA(nn.Module):
    def __init__(self, in_channels, k, l, m, n):
        super(redutionA, self).__init__()
        # conv3*3(n stride2 valid)
        self.branch1 = nn.Sequential(
            BasicConv2d(in_channels, n, kernel_size=3, stride=2),
        )
        # conv1*1(k)+conv3*3(l)+conv3*3(m stride2 valid)
        self.branch2 = nn.Sequential(
            BasicConv2d(in_channels, k, kernel_size=1),
            BasicConv2d(k, l, kernel_size=3, padding=1),
            BasicConv2d(l, m, kernel_size=3, stride=2)
        )
        # maxpool3*3(stride2 valid)
        self.branch3 = nn.Sequential(nn.MaxPool2d(kernel_size=3, stride=2))

    def forward(self, x):
        branch1 = self.branch1(x)
        branch2 = self.branch2(x)
        branch3 = self.branch3(x)
        # concatenate branch outputs
        outputs = [branch1, branch2, branch3]
        return torch.cat(outputs, 1)

# redutionB:BasicConv2d+MaxPool2d
class redutionB(nn.Module):
    def __init__(self, in_channels, ch1x1, ch3x3_1, ch3x3_2, ch3x3_3, ch3x3_4):
        super(redutionB, self).__init__()
        # conv1*1(256)+conv3x3(384 stride2 valid)
        self.branch_0 = nn.Sequential(
            BasicConv2d(in_channels, ch1x1, 1),
            BasicConv2d(ch1x1, ch3x3_1, 3, stride=2, padding=0)
        )
        # conv1*1(256)+conv3x3(256 stride2 valid)
        self.branch_1 = nn.Sequential(
            BasicConv2d(in_channels, ch1x1, 1),
            BasicConv2d(ch1x1, ch3x3_2, 3, stride=2, padding=0),
        )
        # conv1*1(256)+conv3x3(256)+conv3x3(256 stride2 valid)
        self.branch_2 = nn.Sequential(
            BasicConv2d(in_channels, ch1x1, 1),
            BasicConv2d(ch1x1, ch3x3_3, 3, stride=1, padding=1),
            BasicConv2d(ch3x3_3, ch3x3_4, 3, stride=2, padding=0)
        )
        # maxpool3*3(stride2 valid)
        self.branch_3 = nn.MaxPool2d(3, stride=2, padding=0)

    def forward(self, x):
        x0 = self.branch_0(x)
        x1 = self.branch_1(x)
        x2 = self.branch_2(x)
        x3 = self.branch_3(x)
        return torch.cat((x0, x1, x2, x3), dim=1)

class Inception_ResNetv1(nn.Module):
    def __init__(self, num_classes = 1000, k=192, l=192, m=256, n=384):
        super(Inception_ResNetv1, self).__init__()
        blocks = []
        blocks.append(Stem(3))
        for i in range(5):
            blocks.append(Inception_ResNet_A(256,32, 32, 32, 32, 32, 32, 256, 0.17))
        blocks.append(redutionA(256, k, l, m, n))
        for i in range(10):
            blocks.append(Inception_ResNet_B(896, 128, 128, 128, 128, 896, 0.10))
        blocks.append(redutionB(896,256, 384, 256, 256, 256))
        for i in range(4):
            blocks.append(Inception_ResNet_C(1792,192, 192, 192, 192, 1792, 0.20))
        blocks.append(Inception_ResNet_C(1792, 192, 192, 192, 192, 1792, activation=False))
        self.features = nn.Sequential(*blocks)
        self.conv = BasicConv2d(1792, 1536, 1)
        self.global_average_pooling = nn.AdaptiveAvgPool2d((1, 1))
        self.dropout = nn.Dropout(0.2)  # the paper uses dropout with keep probability 0.8, i.e. drop probability 0.2
        self.linear = nn.Linear(1536, num_classes)

    def forward(self, x):
        x = self.features(x)
        x = self.conv(x)
        x = self.global_average_pooling(x)
        x = x.view(x.size(0), -1)
        x = self.dropout(x)
        x = self.linear(x)
        return x

if __name__ == '__main__':
    device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")
    model = Inception_ResNetv1().to(device)
    summary(model, input_size=(3, 299, 299))

summary prints the network structure and parameter counts, which makes it convenient to inspect the assembled network.
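If torchsummary is not available, a plain forward pass gives a similar sanity check (a sketch that reuses the model class defined above):

model = Inception_ResNetv1()
model.eval()
x = torch.randn(2, 3, 299, 299)
with torch.no_grad():
    out = model(x)
print(out.shape)  # torch.Size([2, 1000])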

Inception-ResNet-V2

import torch
import torch.nn as nn
from torchsummary import summary

# Convolution group: Conv2d + BN + ReLU
class BasicConv2d(nn.Module):
    def __init__(self, in_channels, out_channels, kernel_size, stride=1, padding=0):
        super(BasicConv2d, self).__init__()
        self.conv = nn.Conv2d(in_channels, out_channels, kernel_size, stride, padding)
        self.bn = nn.BatchNorm2d(out_channels)
        self.relu = nn.ReLU(inplace=True)
    def forward(self, x):
        x = self.conv(x)
        x = self.bn(x)
        x = self.relu(x)
        return x

# Stem:BasicConv2d+MaxPool2d
class Stem(nn.Module):
    def __init__(self, in_channels):
        super(Stem, self).__init__()
        # conv3*3(32 stride2 valid)
        self.conv1 = BasicConv2d(in_channels, 32, kernel_size=3, stride=2)
        # conv3*3(32 valid)
        self.conv2 = BasicConv2d(32, 32, kernel_size=3)
        # conv3*3(64)
        self.conv3 = BasicConv2d(32, 64, kernel_size=3, padding=1)
        # maxpool3*3(stride2 valid) & conv3*3(96 stride2 valid)
        self.maxpool4 = nn.MaxPool2d(kernel_size=3, stride=2)
        self.conv4 = BasicConv2d(64, 96, kernel_size=3, stride=2)

        # conv1*1(64)+conv3*3(96 valid)
        self.conv5_1_1 = BasicConv2d(160, 64, kernel_size=1)
        self.conv5_1_2 = BasicConv2d(64, 96, kernel_size=3)
        # conv1*1(64)+conv7*1(64)+conv1*7(64)+conv3*3(96 valid)
        self.conv5_2_1 = BasicConv2d(160, 64, kernel_size=1)
        self.conv5_2_2 = BasicConv2d(64, 64, kernel_size=(7, 1), padding=(3, 0))
        self.conv5_2_3 = BasicConv2d(64, 64, kernel_size=(1, 7), padding=(0, 3))
        self.conv5_2_4 = BasicConv2d(64, 96, kernel_size=3)

        # conv3*3(192 valid) & maxpool3*3(stride2 valid)
        self.conv6 = BasicConv2d(192, 192, kernel_size=3, stride=2)
        self.maxpool6 = nn.MaxPool2d(kernel_size=3, stride=2)

    def forward(self, x):
        # compute the shared convolution stack once, then branch into max-pool / strided conv
        y = self.conv3(self.conv2(self.conv1(x)))
        x1_1 = self.maxpool4(y)
        x1_2 = self.conv4(y)
        x1 = torch.cat([x1_1, x1_2], 1)

        x2_1 = self.conv5_1_2(self.conv5_1_1(x1))
        x2_2 = self.conv5_2_4(self.conv5_2_3(self.conv5_2_2(self.conv5_2_1(x1))))
        x2 = torch.cat([x2_1, x2_2], 1)

        x3_1 = self.conv6(x2)
        x3_2 = self.maxpool6(x2)
        x3 = torch.cat([x3_1, x3_2], 1)
        return x3

# Inception_ResNet_A: BasicConv2d branches + residual add
class Inception_ResNet_A(nn.Module):
    def __init__(self, in_channels, ch1x1, ch3x3red, ch3x3, ch3x3redX2, ch3x3X2_1, ch3x3X2_2, ch1x1ext, scale=1.0):
        super(Inception_ResNet_A, self).__init__()
        # residual scaling factor
        self.scale = scale
        # conv1*1(32)
        self.branch_0 = BasicConv2d(in_channels, ch1x1, 1)
        # conv1*1(32)+conv3*3(32)
        self.branch_1 = nn.Sequential(
            BasicConv2d(in_channels, ch3x3red, 1),
            BasicConv2d(ch3x3red, ch3x3, 3, stride=1, padding=1)
        )
        # conv1*1(32)+conv3*3(48)+conv3*3(64)
        self.branch_2 = nn.Sequential(
            BasicConv2d(in_channels, ch3x3redX2, 1),
            BasicConv2d(ch3x3redX2, ch3x3X2_1, 3, stride=1, padding=1),
            BasicConv2d(ch3x3X2_1, ch3x3X2_2, 3, stride=1, padding=1)
        )
        # conv1*1(384)
        self.conv = BasicConv2d(ch1x1+ch3x3+ch3x3X2_2, ch1x1ext, 1)
        self.relu = nn.ReLU(inplace=True)
    def forward(self, x):
        x0 = self.branch_0(x)
        x1 = self.branch_1(x)
        x2 = self.branch_2(x)
        # concatenate branch outputs
        x_res = torch.cat((x0, x1, x2), dim=1)
        x_res = self.conv(x_res)
        return self.relu(x + self.scale * x_res)

# Inception_ResNet_B: BasicConv2d branches + residual add
class Inception_ResNet_B(nn.Module):
    def __init__(self, in_channels, ch1x1, ch_red, ch_1, ch_2, ch1x1ext, scale=1.0):
        super(Inception_ResNet_B, self).__init__()
        # residual scaling factor
        self.scale = scale
        # conv1*1(192)
        self.branch_0 = BasicConv2d(in_channels, ch1x1, 1)
        # conv1*1(128)+conv1*7(160)+conv7*1(192)
        self.branch_1 = nn.Sequential(
            BasicConv2d(in_channels, ch_red, 1),
            BasicConv2d(ch_red, ch_1, (1, 7), stride=1, padding=(0, 3)),
            BasicConv2d(ch_1, ch_2, (7, 1), stride=1, padding=(3, 0))
        )
        # conv1*1(1154)
        self.conv = BasicConv2d(ch1x1+ch_2, ch1x1ext, 1)
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x):
        x0 = self.branch_0(x)
        x1 = self.branch_1(x)
        # concatenate branch outputs
        x_res = torch.cat((x0, x1), dim=1)
        x_res = self.conv(x_res)
        return self.relu(x + self.scale * x_res)

# Inception_ResNet_C: BasicConv2d branches + residual add
class Inception_ResNet_C(nn.Module):
    def __init__(self, in_channels, ch1x1, ch3x3redX2, ch3x3X2_1, ch3x3X2_2, ch1x1ext,  scale=1.0, activation=True):
        super(Inception_ResNet_C, self).__init__()
        # residual scaling factor
        self.scale = scale
        # whether to apply the final ReLU
        self.activation = activation
        # conv1*1(192)
        self.branch_0 = BasicConv2d(in_channels, ch1x1, 1)
        # conv1*1(192)+conv1*3(224)+conv3*1(256)
        self.branch_1 = nn.Sequential(
            BasicConv2d(in_channels, ch3x3redX2, 1),
            BasicConv2d(ch3x3redX2, ch3x3X2_1, (1, 3), stride=1, padding=(0, 1)),
            BasicConv2d(ch3x3X2_1, ch3x3X2_2, (3, 1), stride=1, padding=(1, 0))
        )
        # conv1*1(2048)
        self.conv = BasicConv2d(ch1x1+ch3x3X2_2, ch1x1ext, 1)
        self.relu = nn.ReLU(inplace=True)
    def forward(self, x):
        x0 = self.branch_0(x)
        x1 = self.branch_1(x)
        # concatenate branch outputs
        x_res = torch.cat((x0, x1), dim=1)
        x_res = self.conv(x_res)
        if self.activation:
            return self.relu(x + self.scale * x_res)
        return x + self.scale * x_res

# redutionA:BasicConv2d+MaxPool2d
class redutionA(nn.Module):
    def __init__(self, in_channels, k, l, m, n):
        super(redutionA, self).__init__()
        # conv3*3(n stride2 valid)
        self.branch1 = nn.Sequential(
            BasicConv2d(in_channels, n, kernel_size=3, stride=2),
        )
        # conv1*1(k)+conv3*3(l)+conv3*3(m stride2 valid)
        self.branch2 = nn.Sequential(
            BasicConv2d(in_channels, k, kernel_size=1),
            BasicConv2d(k, l, kernel_size=3, padding=1),
            BasicConv2d(l, m, kernel_size=3, stride=2)
        )
        # maxpool3*3(stride2 valid)
        self.branch3 = nn.Sequential(nn.MaxPool2d(kernel_size=3, stride=2))

    def forward(self, x):
        branch1 = self.branch1(x)
        branch2 = self.branch2(x)
        branch3 = self.branch3(x)
        # concatenate branch outputs
        outputs = [branch1, branch2, branch3]
        return torch.cat(outputs, 1)

# redutionB:BasicConv2d+MaxPool2d
class redutionB(nn.Module):
    def __init__(self, in_channels, ch1x1, ch3x3_1, ch3x3_2, ch3x3_3, ch3x3_4):
        super(redutionB, self).__init__()
        # conv1*1(256)+conv3x3(384 stride2 valid)
        self.branch_0 = nn.Sequential(
            BasicConv2d(in_channels, ch1x1, 1),
            BasicConv2d(ch1x1, ch3x3_1, 3, stride=2, padding=0)
        )
        # conv1*1(256)+conv3x3(288 stride2 valid)
        self.branch_1 = nn.Sequential(
            BasicConv2d(in_channels, ch1x1, 1),
            BasicConv2d(ch1x1, ch3x3_2, 3, stride=2, padding=0),
        )
        # conv1*1(256)+conv3x3(288)+conv3x3(320 stride2 valid)
        self.branch_2 = nn.Sequential(
            BasicConv2d(in_channels, ch1x1, 1),
            BasicConv2d(ch1x1, ch3x3_3, 3, stride=1, padding=1),
            BasicConv2d(ch3x3_3, ch3x3_4, 3, stride=2, padding=0)
        )
        # maxpool3*3(stride2 valid)
        self.branch_3 = nn.MaxPool2d(3, stride=2, padding=0)

    def forward(self, x):
        x0 = self.branch_0(x)
        x1 = self.branch_1(x)
        x2 = self.branch_2(x)
        x3 = self.branch_3(x)
        return torch.cat((x0, x1, x2, x3), dim=1)

class Inception_ResNetv2(nn.Module):
    def __init__(self, num_classes = 1000, k=256, l=256, m=384, n=384):
        super(Inception_ResNetv2, self).__init__()
        blocks = []
        blocks.append(Stem(3))
        for i in range(5):
            blocks.append(Inception_ResNet_A(384,32, 32, 32, 32, 48, 64, 384, 0.17))
        blocks.append(redutionA(384, k, l, m, n))
        for i in range(10):
            blocks.append(Inception_ResNet_B(1152, 192, 128, 160, 192, 1152, 0.10))
        blocks.append(redutionB(1152, 256, 384, 288, 288, 320))
        for i in range(4):
            blocks.append(Inception_ResNet_C(2144,192, 192, 224, 256, 2144, 0.20))
        blocks.append(Inception_ResNet_C(2144, 192, 192, 224, 256, 2144, activation=False))
        self.features = nn.Sequential(*blocks)
        self.conv = BasicConv2d(2144, 1536, 1)
        self.global_average_pooling = nn.AdaptiveAvgPool2d((1, 1))
        self.dropout = nn.Dropout(0.2)  # the paper uses dropout with keep probability 0.8, i.e. drop probability 0.2
        self.linear = nn.Linear(1536, num_classes)

    def forward(self, x):
        x = self.features(x)
        x = self.conv(x)
        x = self.global_average_pooling(x)
        x = x.view(x.size(0), -1)
        x = self.dropout(x)
        x = self.linear(x)
        return x

if __name__ == '__main__':
    device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")
    model = Inception_ResNetv2().to(device)
    summary(model, input_size=(3, 299, 299))

summary prints the network structure and parameter counts, which makes it convenient to inspect the assembled network.


Summary

This article has introduced, as simply and thoroughly as possible, how and why Inception-ResNet combines Inception with ResNet, and has walked through the structure of the Inception-ResNet models and their PyTorch code.
