EfficientNet and Compound Scaling Theory Explained (MATLAB)

1. EfficientNet and Compound Model Scaling

1.1 Overview of EfficientNet

1.1.1 Background, Motivation, and Development

EfficientNet is a family of efficient convolutional neural networks (CNNs) proposed in 2019 by Tan et al. of Google. Its design goal is to improve performance while reducing the consumption of computational resources, which is where the name "EfficientNet" comes from.

To achieve higher accuracy with fewer parameters and lower computational cost, Tan et al. first obtained the model's baseline network via Neural Architecture Search (NAS), naming it EfficientNet-b0.

Neural Architecture Search (NAS): the idea behind NAS resembles hyperparameter optimization for machine learning models (e.g., grid search or particle swarm optimization). Both define a search space of parameters to explore and are, at heart, optimization algorithms.

  • The NAS search space typically contains architectural parameters: layer types, layer counts, connection patterns, and so on.
  • The search space of hyperparameter optimization is instead the set of model hyperparameters, which govern the model's behavior and training process.

The NAS method the authors applied is "multi-objective neural architecture search"; the original paper is: MnasNet: Platform-Aware Neural Architecture Search for Mobile

Starting from EfficientNet-b0, they ran a grid search over combinations of model-scaling parameters to find the "best ratio" between input image resolution (γ), network depth (α), and width (β), and from this proposed a scaling method that compounds the three factors, called Compound Scaling. Applying compound scaling to the baseline (EfficientNet-b0) yields EfficientNet-b1 through b7, giving eight EfficientNet networks of different sizes in total. The resulting models reached state-of-the-art results on a range of datasets at the time.

Note: in Tan et al.'s study, the scaling coefficients (α, β, γ) were determined by grid search (Grid Search), not by NAS. Many bloggers get this wrong.
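The grid-searched base coefficients reported in the paper (α = 1.2 for depth, β = 1.1 for width, γ = 1.15 for resolution, under the constraint α·β²·γ² ≈ 2) combine with a compound coefficient φ as d = α^φ, w = β^φ, r = γ^φ. A minimal numeric sketch of that relationship:

```python
# Compound scaling: the base coefficients found by grid search in the
# EfficientNet paper, raised to the compound coefficient phi.
alpha, beta, gamma = 1.2, 1.1, 1.15  # depth, width, resolution

def compound_scale(phi):
    """Return (depth, width, resolution) multipliers for compound coefficient phi."""
    return alpha ** phi, beta ** phi, gamma ** phi

# The constraint alpha * beta^2 * gamma^2 ~ 2 means each unit increase of phi
# roughly doubles the model's FLOPS.
flops_factor = alpha * beta**2 * gamma**2
print(f"FLOPS growth per unit phi: {flops_factor:.3f}")  # ~1.92, close to 2

d, w, r = compound_scale(2)
print(f"phi=2 -> depth x{d:.2f}, width x{w:.2f}, resolution x{r:.2f}")
```

Larger b-variants correspond to larger φ, which is why depth, width, and resolution all grow together rather than one at a time.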

Original paper: Tan M, Le Q. EfficientNet: Rethinking model scaling for convolutional neural networks[C]//International Conference on Machine Learning. PMLR, 2019: 6105-6114.

Paper link: [1905.11946] EfficientNet: Rethinking Model Scaling for Convolutional Neural Networks

1.1.2 Model Performance

The network performs remarkably well on the ImageNet dataset, striking a good balance between accuracy and computational efficiency. EfficientNet uses the Mobile Inverted Bottleneck Convolution (MBConv) block as its basic building block and scales it along multiple dimensions, giving the network faster inference and a smaller model size.

On the benchmarks, the lightweight EfficientNet-B0 (76.3%, 5.3M) beats ResNet-50 (75.3%, 25M), MobileNet V3-Large (75.2%, 5.4M), and ShuffleNet V2 (75.4%), while trailing MobileViT V3-XS (76.7%, 2.5M) and MobileViT-S (78.4%, 5.6M). (It seems hybrid architectures are the way things are heading.)

The mid-sized EfficientNet-B4 (82.4%, 9.2M) beats ResNet-200 (81.8%), SENet-101 (81.4%, 49.2M), and DeepViT-L (82.2%, 55M).

The larger EfficientNet-B7 (84.4%, 66M) is on par with ResNeSt-269 (84.5%, 111M) and AMD (ViT-B/16) (84.6%, 87M).

Judging by these results, swapping the baseline of many traditional models and methods for an EfficientNet baseline generally brings an improvement. The overall verdict: weaker than hybrid models, stronger than most ConvNets.

Comparison chart

ImageNet model leaderboard: ImageNet Benchmark (Image Classification) | Papers With Code

1.1.3 Follow-up Research on EfficientNet

Following EfficientNet's success, researchers continued to refine the model and the compound-scaling idea. Representative follow-up work includes:

2019: NoisyStudent (EfficientNet)

Paper: Self-training with Noisy Student improves ImageNet classification

NoisyStudent is a semi-supervised learning method that extends the ideas of self-training and distillation; its core is to exploit unlabeled data to boost model performance. Xie et al. combined the NoisyStudent method with the EfficientNet model, pushing its ImageNet accuracy a step further. NoisyStudent (EfficientNet-L2) ultimately scored (88.4%, 480M).

The EfficientNet-L2 used here was further scaled up from EfficientNet-b0 by Xie et al. and is far larger than b7: relative to B0 its scaling ratios are width 4.3, depth 5.3, and a training resolution of 800×800, versus 2.0, 3.1, and 600×600 for EfficientNet-B7.

2020: FixEfficientNet

Paper: Fixing the train-test resolution discrepancy: FixEfficientNet

Touvron et al. proposed FixRes, which jointly optimizes the resolution and scale choices at training and test time so that the same region of classification (RoC) sampling is preserved. Combined with EfficientNet this yields FixEfficientNet; FixEfficientNet-L2 ultimately scored 88.5% at 480M.

2020: Meta Pseudo Labels (EfficientNet-L2)

Paper: Meta Pseudo Labels

Pham et al. proposed Meta Pseudo Labels, a semi-supervised learning method. Combined with EfficientNet-L2, it reached 90.2% on ImageNet at 480M. Before ViT took off it was unbeatable, and it still sits near the top of the ImageNet leaderboard.

2021: EfficientNet-V2

Paper: EfficientNetV2: Smaller Models and Faster Training

Building on EfficientNet's success, Tan et al. proposed EfficientNetV2, a new family of smaller, faster convolutional networks. Using training-aware NAS and scaling, EfficientNetV2 significantly outperforms earlier models in training speed and parameter efficiency.

1.2 EfficientNet Architecture

1.2.1 EfficientNet's Basic Unit: the MBConv Block

EfficientNet is something of a patchwork of its era: because the network was found by NAS, its structure blends design elements of MobileNet, ResNet, and other models. Its basic building block is the MBConv block (Mobile Inverted Bottleneck Convolution), which can be seen as an evolution of MobileNet's Inverted Residual Block. It has the following traits:

  1. Inverted bottleneck: an MBConv block first expands the channel dimension of the input feature map with a 1x1 convolution, then extracts spatial features with a depthwise convolution (Depthwise Convolution), and finally projects back down with another 1x1 convolution to (roughly) the original channel count. This forms an inverted bottleneck: channels are expanded first, then compressed.
  2. Residual connection: the MBConv block contains a residual connection (Residual Connection) that adds the block's input to the output of the operations above, promoting gradient flow, speeding up training, and helping to counter the degradation problem in deep networks.
  3. Squeeze-and-Excitation (SE) module: the classic EfficientNet variants include an SE module, which first applies global average pooling (squeeze) and then two fully connected layers to recalibrate the channel weights, strengthening important features and suppressing unimportant ones.
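The SE recalibration in point 3 can be sketched in a few lines of NumPy. This is a simplified illustration (plain matrices stand in for the 1x1 convolutions; `se_block` and its weight names are hypothetical), not the official implementation:

```python
import numpy as np

def se_block(x, w1, b1, w2, b2):
    """Squeeze-and-Excitation on a feature map x of shape (H, W, C).

    Squeeze: global average pool over H, W -> (C,).
    Excitation: two FC layers with a bottleneck, a sigmoid gate,
    then channel-wise rescaling of x. EfficientNet uses Swish
    between the two FC layers, as this sketch does.
    """
    sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))
    s = x.mean(axis=(0, 1))        # squeeze: one scalar per channel
    z = s @ w1 + b1                # reduce to the bottleneck width
    z = z * sigmoid(z)             # Swish activation
    gate = sigmoid(z @ w2 + b2)    # expand back to C, gate in (0, 1)
    return x * gate                # reweight each channel

# Toy example: C=8 channels, reduction ratio 4 -> bottleneck width 2.
rng = np.random.default_rng(0)
C, r = 8, 4
x = rng.standard_normal((5, 5, C))
out = se_block(x,
               rng.standard_normal((C, C // r)), np.zeros(C // r),
               rng.standard_normal((C // r, C)), np.zeros(C))
print(out.shape)  # (5, 5, 8): same shape, channels rescaled
```

Because the gate lies in (0, 1), each channel is attenuated in proportion to how "important" the squeeze-excite path judges it to be; the spatial layout is untouched.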

The MBConv block comes in two main variants: MBConv1 and MBConv6.

Structure of MBConv1

The diagrams are all my own, so it is normal for them to differ from other people's; where they do, I suggest treating mine as the reference. Indeed, many bloggers draw or explain this wrongly.

MBConv1 is the structure used in EfficientNet's shallow layers:

MBConv1 consists of an ordinary convolution, a depthwise convolution (a form of grouped convolution), batch normalization (BN) layers, Swish activations, an SE module, an addition layer, and a multiplication layer. Suppose the input is of size M*M*N (spatial-spatial-channel, S-S-C). After the first Conv the channel count is expanded, typically to six times that of the input; once MBConv1 has finished, the spatial size is downsampled to 1/2 of the input while the output channel count is 1/2 of the original.

Notes:

  • In EfficientNet, the activation inside the SE module is Swish, not ReLU or anything else.
  • In MBConv1, the convolution after the Multiplication Layer downsamples the channel dimension, compressing it to 1/4 of the original; the SE module also has a reduction step, to 1/4 of its input channels.
  • The depthwise-conv kernel is k*k and varies; 3*3 and 5*5 are common. See the architecture table for the exact values.
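The point of the depthwise convolution inside MBConv is parameter (and FLOP) economy. A back-of-the-envelope count, with illustrative channel numbers and biases ignored, makes the saving concrete:

```python
def conv_params(k, c_in, c_out):
    """Weights of a standard k x k convolution (bias ignored)."""
    return k * k * c_in * c_out

def depthwise_separable_params(k, c_in, c_out):
    """Depthwise k x k (one filter per input channel) + pointwise 1 x 1."""
    return k * k * c_in + c_in * c_out

# Example: a 3x3 conv mapping 96 -> 96 channels (roughly an expanded
# width seen in EfficientNet-b0's early blocks).
std = conv_params(3, 96, 96)                 # 82944 weights
sep = depthwise_separable_params(3, 96, 96)  # 864 + 9216 = 10080 weights
print(std, sep, round(std / sep, 1))         # ~8.2x fewer parameters
```

For a k x k kernel the saving approaches a factor of k², which is why mobile-oriented blocks lean so heavily on this factorization.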
Structure of MBConv6

MBConv6 is the structure of EfficientNet's deeper layers. The figure below shows MBConv6 with 1, 2, and 3 layers; the 4- and 5-layer versions follow the same pattern.

It mostly nests the MBConv1 structure, wired together with skip connections. Relative to the input (M*M*C), it downsamples the spatial size to 1/2 and upsamples the channel count to 3/2. When an MBConv6 group has a single layer (Layers = 1), its structure resembles MBConv1 but with an extra 1*1 Conv in front that upsamples the channel dimension 6x. In two-, three-, and deeper multi-layer groups each MBConv6 layer is itself a single-layer MBConv6, not a recursive nesting. Note in particular that MBConv6 does not always downsample the spatial dimensions; it depends on the design details of the model.

Notes:

  • Importantly, within a stage, the second and subsequent MBConv6 layers use a depthwise-convolution stride of 1, not 2, and the number of filters in each convolution matches the first MBConv6 layer so that the output dimensions line up.
  • Unlike a standalone MBConv1, the Conv after the Multiplication Layer inside MBConv6's MBConv1 downsamples the channels to 1/4 rather than 1/2.
  • In the official source code, the dropout for MBConv is implemented during training via "Stochastic Depth", which regularizes the model (i.e., combats overfitting); there is no dropout layer inserted into the MBConv structure itself.
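The stochastic-depth behavior described in that last note can be sketched as follows. This is a simplified illustration of the idea (inverted scaling by the survival probability, so inference needs no rescaling), not the official TensorFlow implementation; `stochastic_depth_block` is a hypothetical name:

```python
import random

def stochastic_depth_block(x, branch, survival_prob, training):
    """Residual block with stochastic depth.

    Training: with probability (1 - survival_prob) the residual branch
    is skipped entirely; otherwise its output is scaled by
    1/survival_prob so expectations match at inference time.
    Inference: always x + branch(x).
    """
    if not training:
        return x + branch(x)
    if random.random() < survival_prob:
        return x + branch(x) / survival_prob
    return x  # branch dropped: identity only

branch = lambda v: 0.5 * v  # toy residual branch
print(stochastic_depth_block(2.0, branch, 0.8, training=False))  # 3.0
```

During training the network is effectively an ensemble of shallower sub-networks, which is what regularizes it; no explicit dropout layer appears in the block diagram.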

Building EfficientNet-b0 in MATLAB

Given the structures of MBConv1 and MBConv6 and the parameter table provided by Tan et al., it is straightforward to build an EfficientNet-style network.

Take EfficientNet-b0:

EfficientNet-b0 was found by NAS, whose optimization objective is ACC(m) × [FLOPS(m)/T]^w, i.e., accuracy times a FLOPS penalty, where:

  • m denotes the candidate model.
  • T is a target FLOPS value (here FLOPS means the model's floating-point operation count, a measure of computational cost, not operations per second). It is a human-set constant used to trade off computational complexity against accuracy.
  • w = -0.07 is a hyperparameter that balances accuracy against FLOPS: during the search it adjusts the relative importance of accuracy gains versus added computation. With a negative w (such as -0.07), the search favors models with lower FLOPS at comparable accuracy.
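The objective above can be evaluated directly. A small numeric check (the accuracy values are made-up; 400M is the FLOPS target reported for the b0 search, used here for illustration):

```python
def nas_objective(acc, flops, target_flops, w=-0.07):
    """Multi-objective reward ACC(m) * (FLOPS(m) / T)^w."""
    return acc * (flops / target_flops) ** w

T = 400e6  # target FLOPS threshold

# A model exactly at the target is scored by accuracy alone:
print(nas_objective(0.76, 400e6, T))  # 0.76

# Doubling FLOPS at equal accuracy is penalized only mildly,
# by a factor of 2^-0.07 (roughly 0.95):
print(nas_objective(0.76, 800e6, T))
```

The soft exponent (rather than a hard FLOPS cutoff) lets the search trade a small accuracy gain against a moderate computation increase instead of rejecting over-budget models outright.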

EfficientNet consists of 9 "Stages". Stage 2 is MBConv1; Stages 3-7 are multi-layer MBConv6 ("#Layers" gives the layer count); Stage 8 is a single-layer MBConv6. In each stage the spatial size is downsampled to 1/2 and the channels upsampled to 3/2; note, however, that the four-layer MBConv6 group in Stage 7 deviates from this spatial-downsampling pattern, which deserves attention.

Also, in EfficientNet-b0 the channel upsampling does not follow 3/2 strictly (see Stage 4 and Stage 5, for example). Because EfficientNet's parameters come from NAS rather than manual design, they can feel counter-intuitive, but the upsampling ratios mostly fall between 3/2 and 2. This is not a strict requirement, and a little more or less has no obvious effect on model performance.

OK, with that, here is the construction of EfficientNet-b0 (this reproduces the structure-generation code of the pretrained EfficientNet-b0 model provided by MATLAB):


net = dlnetwork;

tempNet = [
    imageInputLayer([224 224 3],"Name","ImageInput","Normalization","zscore")
    convolution2dLayer([3 3],32,"Name","efficientnet-b0|model|stem|conv2d|Conv2D","Padding",[0 1 0 1],"Stride",[2 2])
    batchNormalizationLayer("Name","efficientnet-b0|model|stem|tpu_batch_normalization|FusedBatchNorm","Epsilon",0.00100000004749745)];
net = addLayers(net,tempNet);

tempNet = sigmoidLayer("Name","efficientnet-b0|model|stem|SigmoidLayer");
net = addLayers(net,tempNet);

tempNet = [
    multiplicationLayer(2,"Name","efficientnet-b0|model|stem|MulLayer")
    groupedConvolution2dLayer([3 3],1,32,"Name","efficientnet-b0|model|blocks_0|depthwise_conv2d|depthwise","Padding",[1 1 1 1])
    batchNormalizationLayer("Name","efficientnet-b0|model|blocks_0|tpu_batch_normalization|FusedBatchNorm","Epsilon",0.00100000004749745)];
net = addLayers(net,tempNet);

tempNet = sigmoidLayer("Name","efficientnet-b0|model|blocks_0|SigmoidLayer");
net = addLayers(net,tempNet);

tempNet = multiplicationLayer(2,"Name","efficientnet-b0|model|blocks_0|MulLayer");
net = addLayers(net,tempNet);

tempNet = [
    globalAveragePooling2dLayer("Name","efficientnet-b0|model|blocks_0|se|GlobAvgPool")
    convolution2dLayer([1 1],8,"Name","Conv__301")];
net = addLayers(net,tempNet);

tempNet = sigmoidLayer("Name","efficientnet-b0|model|blocks_0|se|SigmoidLayer");
net = addLayers(net,tempNet);

tempNet = [
    multiplicationLayer(2,"Name","efficientnet-b0|model|blocks_0|se|MulLayer")
    convolution2dLayer([1 1],32,"Name","Conv__304")
    sigmoidLayer("Name","efficientnet-b0|model|blocks_0|se|SigmoidLayer_1")];
net = addLayers(net,tempNet);

tempNet = [
    multiplicationLayer(2,"Name","efficientnet-b0|model|blocks_0|se|MulLayer_1")
    convolution2dLayer([1 1],16,"Name","efficientnet-b0|model|blocks_0|conv2d|Conv2D")
    batchNormalizationLayer("Name","efficientnet-b0|model|blocks_0|tpu_batch_normalization_1|FusedBatchNorm","Epsilon",0.00100000004749745)
    convolution2dLayer([1 1],96,"Name","efficientnet-b0|model|blocks_1|conv2d|Conv2D")
    batchNormalizationLayer("Name","efficientnet-b0|model|blocks_1|tpu_batch_normalization|FusedBatchNorm","Epsilon",0.00100000004749745)];
net = addLayers(net,tempNet);

tempNet = sigmoidLayer("Name","efficientnet-b0|model|blocks_1|SigmoidLayer");
net = addLayers(net,tempNet);

tempNet = [
    multiplicationLayer(2,"Name","efficientnet-b0|model|blocks_1|MulLayer")
    groupedConvolution2dLayer([3 3],1,96,"Name","efficientnet-b0|model|blocks_1|depthwise_conv2d|depthwise","Padding",[0 1 0 1],"Stride",[2 2])
    batchNormalizationLayer("Name","efficientnet-b0|model|blocks_1|tpu_batch_normalization_1|FusedBatchNorm","Epsilon",0.00100000004749745)];
net = addLayers(net,tempNet);

tempNet = sigmoidLayer("Name","efficientnet-b0|model|blocks_1|SigmoidLayer_1");
net = addLayers(net,tempNet);

tempNet = multiplicationLayer(2,"Name","efficientnet-b0|model|blocks_1|MulLayer_1");
net = addLayers(net,tempNet);

tempNet = [
    globalAveragePooling2dLayer("Name","efficientnet-b0|model|blocks_1|se|GlobAvgPool")
    convolution2dLayer([1 1],4,"Name","Conv__309")];
net = addLayers(net,tempNet);

tempNet = sigmoidLayer("Name","efficientnet-b0|model|blocks_1|se|SigmoidLayer");
net = addLayers(net,tempNet);

tempNet = [
    multiplicationLayer(2,"Name","efficientnet-b0|model|blocks_1|se|MulLayer")
    convolution2dLayer([1 1],96,"Name","Conv__312")
    sigmoidLayer("Name","efficientnet-b0|model|blocks_1|se|SigmoidLayer_1")];
net = addLayers(net,tempNet);

tempNet = [
    multiplicationLayer(2,"Name","efficientnet-b0|model|blocks_1|se|MulLayer_1")
    convolution2dLayer([1 1],24,"Name","efficientnet-b0|model|blocks_1|conv2d_1|Conv2D")
    batchNormalizationLayer("Name","efficientnet-b0|model|blocks_1|tpu_batch_normalization_2|FusedBatchNorm","Epsilon",0.00100000004749745)];
net = addLayers(net,tempNet);

tempNet = [
    convolution2dLayer([1 1],144,"Name","efficientnet-b0|model|blocks_2|conv2d|Conv2D")
    batchNormalizationLayer("Name","efficientnet-b0|model|blocks_2|tpu_batch_normalization|FusedBatchNorm","Epsilon",0.00100000004749745)];
net = addLayers(net,tempNet);

tempNet = sigmoidLayer("Name","efficientnet-b0|model|blocks_2|SigmoidLayer");
net = addLayers(net,tempNet);

tempNet = [
    multiplicationLayer(2,"Name","efficientnet-b0|model|blocks_2|MulLayer")
    groupedConvolution2dLayer([3 3],1,144,"Name","efficientnet-b0|model|blocks_2|depthwise_conv2d|depthwise","Padding",[1 1 1 1])
    batchNormalizationLayer("Name","efficientnet-b0|model|blocks_2|tpu_batch_normalization_1|FusedBatchNorm","Epsilon",0.00100000004749745)];
net = addLayers(net,tempNet);

tempNet = sigmoidLayer("Name","efficientnet-b0|model|blocks_2|SigmoidLayer_1");
net = addLayers(net,tempNet);

tempNet = multiplicationLayer(2,"Name","efficientnet-b0|model|blocks_2|MulLayer_1");
net = addLayers(net,tempNet);

tempNet = [
    globalAveragePooling2dLayer("Name","efficientnet-b0|model|blocks_2|se|GlobAvgPool")
    convolution2dLayer([1 1],6,"Name","Conv__319")];
net = addLayers(net,tempNet);

tempNet = sigmoidLayer("Name","efficientnet-b0|model|blocks_2|se|SigmoidLayer");
net = addLayers(net,tempNet);

tempNet = [
    multiplicationLayer(2,"Name","efficientnet-b0|model|blocks_2|se|MulLayer")
    convolution2dLayer([1 1],144,"Name","Conv__322")
    sigmoidLayer("Name","efficientnet-b0|model|blocks_2|se|SigmoidLayer_1")];
net = addLayers(net,tempNet);

tempNet = [
    multiplicationLayer(2,"Name","efficientnet-b0|model|blocks_2|se|MulLayer_1")
    convolution2dLayer([1 1],24,"Name","efficientnet-b0|model|blocks_2|conv2d_1|Conv2D")
    batchNormalizationLayer("Name","efficientnet-b0|model|blocks_2|tpu_batch_normalization_2|FusedBatchNorm","Epsilon",0.00100000004749745)];
net = addLayers(net,tempNet);

tempNet = [
    additionLayer(2,"Name","efficientnet-b0|model|blocks_2|Add")
    convolution2dLayer([1 1],144,"Name","efficientnet-b0|model|blocks_3|conv2d|Conv2D")
    batchNormalizationLayer("Name","efficientnet-b0|model|blocks_3|tpu_batch_normalization|FusedBatchNorm","Epsilon",0.00100000004749745)];
net = addLayers(net,tempNet);

tempNet = sigmoidLayer("Name","efficientnet-b0|model|blocks_3|SigmoidLayer");
net = addLayers(net,tempNet);

tempNet = [
    multiplicationLayer(2,"Name","efficientnet-b0|model|blocks_3|MulLayer")
    groupedConvolution2dLayer([5 5],1,144,"Name","efficientnet-b0|model|blocks_3|depthwise_conv2d|depthwise","Padding",[1 2 1 2],"Stride",[2 2])
    batchNormalizationLayer("Name","efficientnet-b0|model|blocks_3|tpu_batch_normalization_1|FusedBatchNorm","Epsilon",0.00100000004749745)];
net = addLayers(net,tempNet);

tempNet = sigmoidLayer("Name","efficientnet-b0|model|blocks_3|SigmoidLayer_1");
net = addLayers(net,tempNet);

tempNet = multiplicationLayer(2,"Name","efficientnet-b0|model|blocks_3|MulLayer_1");
net = addLayers(net,tempNet);

tempNet = [
    globalAveragePooling2dLayer("Name","efficientnet-b0|model|blocks_3|se|GlobAvgPool")
    convolution2dLayer([1 1],6,"Name","Conv__327")];
net = addLayers(net,tempNet);

tempNet = sigmoidLayer("Name","efficientnet-b0|model|blocks_3|se|SigmoidLayer");
net = addLayers(net,tempNet);

tempNet = [
    multiplicationLayer(2,"Name","efficientnet-b0|model|blocks_3|se|MulLayer")
    convolution2dLayer([1 1],144,"Name","Conv__330")
    sigmoidLayer("Name","efficientnet-b0|model|blocks_3|se|SigmoidLayer_1")];
net = addLayers(net,tempNet);

tempNet = [
    multiplicationLayer(2,"Name","efficientnet-b0|model|blocks_3|se|MulLayer_1")
    convolution2dLayer([1 1],40,"Name","efficientnet-b0|model|blocks_3|conv2d_1|Conv2D")
    batchNormalizationLayer("Name","efficientnet-b0|model|blocks_3|tpu_batch_normalization_2|FusedBatchNorm","Epsilon",0.00100000004749745)];
net = addLayers(net,tempNet);

tempNet = [
    convolution2dLayer([1 1],240,"Name","efficientnet-b0|model|blocks_4|conv2d|Conv2D")
    batchNormalizationLayer("Name","efficientnet-b0|model|blocks_4|tpu_batch_normalization|FusedBatchNorm","Epsilon",0.00100000004749745)];
net = addLayers(net,tempNet);

tempNet = sigmoidLayer("Name","efficientnet-b0|model|blocks_4|SigmoidLayer");
net = addLayers(net,tempNet);

tempNet = [
    multiplicationLayer(2,"Name","efficientnet-b0|model|blocks_4|MulLayer")
    groupedConvolution2dLayer([5 5],1,240,"Name","efficientnet-b0|model|blocks_4|depthwise_conv2d|depthwise","Padding",[2 2 2 2])
    batchNormalizationLayer("Name","efficientnet-b0|model|blocks_4|tpu_batch_normalization_1|FusedBatchNorm","Epsilon",0.00100000004749745)];
net = addLayers(net,tempNet);

tempNet = sigmoidLayer("Name","efficientnet-b0|model|blocks_4|SigmoidLayer_1");
net = addLayers(net,tempNet);

tempNet = multiplicationLayer(2,"Name","efficientnet-b0|model|blocks_4|MulLayer_1");
net = addLayers(net,tempNet);

tempNet = [
    globalAveragePooling2dLayer("Name","efficientnet-b0|model|blocks_4|se|GlobAvgPool")
    convolution2dLayer([1 1],10,"Name","Conv__337")];
net = addLayers(net,tempNet);

tempNet = sigmoidLayer("Name","efficientnet-b0|model|blocks_4|se|SigmoidLayer");
net = addLayers(net,tempNet);

tempNet = [
    multiplicationLayer(2,"Name","efficientnet-b0|model|blocks_4|se|MulLayer")
    convolution2dLayer([1 1],240,"Name","Conv__340")
    sigmoidLayer("Name","efficientnet-b0|model|blocks_4|se|SigmoidLayer_1")];
net = addLayers(net,tempNet);

tempNet = [
    multiplicationLayer(2,"Name","efficientnet-b0|model|blocks_4|se|MulLayer_1")
    convolution2dLayer([1 1],40,"Name","efficientnet-b0|model|blocks_4|conv2d_1|Conv2D")
    batchNormalizationLayer("Name","efficientnet-b0|model|blocks_4|tpu_batch_normalization_2|FusedBatchNorm","Epsilon",0.00100000004749745)];
net = addLayers(net,tempNet);

tempNet = [
    additionLayer(2,"Name","efficientnet-b0|model|blocks_4|Add")
    convolution2dLayer([1 1],240,"Name","efficientnet-b0|model|blocks_5|conv2d|Conv2D")
    batchNormalizationLayer("Name","efficientnet-b0|model|blocks_5|tpu_batch_normalization|FusedBatchNorm","Epsilon",0.00100000004749745)];
net = addLayers(net,tempNet);

tempNet = sigmoidLayer("Name","efficientnet-b0|model|blocks_5|SigmoidLayer");
net = addLayers(net,tempNet);

tempNet = [
    multiplicationLayer(2,"Name","efficientnet-b0|model|blocks_5|MulLayer")
    groupedConvolution2dLayer([3 3],1,240,"Name","efficientnet-b0|model|blocks_5|depthwise_conv2d|depthwise","Padding",[0 1 0 1],"Stride",[2 2])
    batchNormalizationLayer("Name","efficientnet-b0|model|blocks_5|tpu_batch_normalization_1|FusedBatchNorm","Epsilon",0.00100000004749745)];
net = addLayers(net,tempNet);

tempNet = sigmoidLayer("Name","efficientnet-b0|model|blocks_5|SigmoidLayer_1");
net = addLayers(net,tempNet);

tempNet = multiplicationLayer(2,"Name","efficientnet-b0|model|blocks_5|MulLayer_1");
net = addLayers(net,tempNet);

tempNet = [
    globalAveragePooling2dLayer("Name","efficientnet-b0|model|blocks_5|se|GlobAvgPool")
    convolution2dLayer([1 1],10,"Name","Conv__345")];
net = addLayers(net,tempNet);

tempNet = sigmoidLayer("Name","efficientnet-b0|model|blocks_5|se|SigmoidLayer");
net = addLayers(net,tempNet);

tempNet = [
    multiplicationLayer(2,"Name","efficientnet-b0|model|blocks_5|se|MulLayer")
    convolution2dLayer([1 1],240,"Name","Conv__348")
    sigmoidLayer("Name","efficientnet-b0|model|blocks_5|se|SigmoidLayer_1")];
net = addLayers(net,tempNet);

tempNet = [
    multiplicationLayer(2,"Name","efficientnet-b0|model|blocks_5|se|MulLayer_1")
    convolution2dLayer([1 1],80,"Name","efficientnet-b0|model|blocks_5|conv2d_1|Conv2D")
    batchNormalizationLayer("Name","efficientnet-b0|model|blocks_5|tpu_batch_normalization_2|FusedBatchNorm","Epsilon",0.00100000004749745)];
net = addLayers(net,tempNet);

tempNet = [
    convolution2dLayer([1 1],480,"Name","efficientnet-b0|model|blocks_6|conv2d|Conv2D")
    batchNormalizationLayer("Name","efficientnet-b0|model|blocks_6|tpu_batch_normalization|FusedBatchNorm","Epsilon",0.00100000004749745)];
net = addLayers(net,tempNet);

tempNet = sigmoidLayer("Name","efficientnet-b0|model|blocks_6|SigmoidLayer");
net = addLayers(net,tempNet);

tempNet = [
    multiplicationLayer(2,"Name","efficientnet-b0|model|blocks_6|MulLayer")
    groupedConvolution2dLayer([3 3],1,480,"Name","efficientnet-b0|model|blocks_6|depthwise_conv2d|depthwise","Padding",[1 1 1 1])
    batchNormalizationLayer("Name","efficientnet-b0|model|blocks_6|tpu_batch_normalization_1|FusedBatchNorm","Epsilon",0.00100000004749745)];
net = addLayers(net,tempNet);

tempNet = sigmoidLayer("Name","efficientnet-b0|model|blocks_6|SigmoidLayer_1");
net = addLayers(net,tempNet);

tempNet = multiplicationLayer(2,"Name","efficientnet-b0|model|blocks_6|MulLayer_1");
net = addLayers(net,tempNet);

tempNet = [
    globalAveragePooling2dLayer("Name","efficientnet-b0|model|blocks_6|se|GlobAvgPool")
    convolution2dLayer([1 1],20,"Name","Conv__355")];
net = addLayers(net,tempNet);

tempNet = sigmoidLayer("Name","efficientnet-b0|model|blocks_6|se|SigmoidLayer");
net = addLayers(net,tempNet);

tempNet = [
    multiplicationLayer(2,"Name","efficientnet-b0|model|blocks_6|se|MulLayer")
    convolution2dLayer([1 1],480,"Name","Conv__358")
    sigmoidLayer("Name","efficientnet-b0|model|blocks_6|se|SigmoidLayer_1")];
net = addLayers(net,tempNet);

tempNet = [
    multiplicationLayer(2,"Name","efficientnet-b0|model|blocks_6|se|MulLayer_1")
    convolution2dLayer([1 1],80,"Name","efficientnet-b0|model|blocks_6|conv2d_1|Conv2D")
    batchNormalizationLayer("Name","efficientnet-b0|model|blocks_6|tpu_batch_normalization_2|FusedBatchNorm","Epsilon",0.00100000004749745)];
net = addLayers(net,tempNet);

tempNet = additionLayer(2,"Name","efficientnet-b0|model|blocks_6|Add");
net = addLayers(net,tempNet);

tempNet = [
    convolution2dLayer([1 1],480,"Name","efficientnet-b0|model|blocks_7|conv2d|Conv2D")
    batchNormalizationLayer("Name","efficientnet-b0|model|blocks_7|tpu_batch_normalization|FusedBatchNorm","Epsilon",0.00100000004749745)];
net = addLayers(net,tempNet);

tempNet = sigmoidLayer("Name","efficientnet-b0|model|blocks_7|SigmoidLayer");
net = addLayers(net,tempNet);

tempNet = [
    multiplicationLayer(2,"Name","efficientnet-b0|model|blocks_7|MulLayer")
    groupedConvolution2dLayer([3 3],1,480,"Name","efficientnet-b0|model|blocks_7|depthwise_conv2d|depthwise","Padding",[1 1 1 1])
    batchNormalizationLayer("Name","efficientnet-b0|model|blocks_7|tpu_batch_normalization_1|FusedBatchNorm","Epsilon",0.00100000004749745)];
net = addLayers(net,tempNet);

tempNet = sigmoidLayer("Name","efficientnet-b0|model|blocks_7|SigmoidLayer_1");
net = addLayers(net,tempNet);

tempNet = multiplicationLayer(2,"Name","efficientnet-b0|model|blocks_7|MulLayer_1");
net = addLayers(net,tempNet);

tempNet = [
    globalAveragePooling2dLayer("Name","efficientnet-b0|model|blocks_7|se|GlobAvgPool")
    convolution2dLayer([1 1],20,"Name","Conv__365")];
net = addLayers(net,tempNet);

tempNet = sigmoidLayer("Name","efficientnet-b0|model|blocks_7|se|SigmoidLayer");
net = addLayers(net,tempNet);

tempNet = [
    multiplicationLayer(2,"Name","efficientnet-b0|model|blocks_7|se|MulLayer")
    convolution2dLayer([1 1],480,"Name","Conv__368")
    sigmoidLayer("Name","efficientnet-b0|model|blocks_7|se|SigmoidLayer_1")];
net = addLayers(net,tempNet);

tempNet = [
    multiplicationLayer(2,"Name","efficientnet-b0|model|blocks_7|se|MulLayer_1")
    convolution2dLayer([1 1],80,"Name","efficientnet-b0|model|blocks_7|conv2d_1|Conv2D")
    batchNormalizationLayer("Name","efficientnet-b0|model|blocks_7|tpu_batch_normalization_2|FusedBatchNorm","Epsilon",0.00100000004749745)];
net = addLayers(net,tempNet);

tempNet = [
    additionLayer(2,"Name","efficientnet-b0|model|blocks_7|Add")
    convolution2dLayer([1 1],480,"Name","efficientnet-b0|model|blocks_8|conv2d|Conv2D")
    batchNormalizationLayer("Name","efficientnet-b0|model|blocks_8|tpu_batch_normalization|FusedBatchNorm","Epsilon",0.00100000004749745)];
net = addLayers(net,tempNet);

tempNet = sigmoidLayer("Name","efficientnet-b0|model|blocks_8|SigmoidLayer");
net = addLayers(net,tempNet);

tempNet = [
    multiplicationLayer(2,"Name","efficientnet-b0|model|blocks_8|MulLayer")
    groupedConvolution2dLayer([5 5],1,480,"Name","efficientnet-b0|model|blocks_8|depthwise_conv2d|depthwise","Padding",[2 2 2 2])
    batchNormalizationLayer("Name","efficientnet-b0|model|blocks_8|tpu_batch_normalization_1|FusedBatchNorm","Epsilon",0.00100000004749745)];
net = addLayers(net,tempNet);

tempNet = sigmoidLayer("Name","efficientnet-b0|model|blocks_8|SigmoidLayer_1");
net = addLayers(net,tempNet);

tempNet = multiplicationLayer(2,"Name","efficientnet-b0|model|blocks_8|MulLayer_1");
net = addLayers(net,tempNet);

tempNet = [
    globalAveragePooling2dLayer("Name","efficientnet-b0|model|blocks_8|se|GlobAvgPool")
    convolution2dLayer([1 1],20,"Name","Conv__373")];
net = addLayers(net,tempNet);

tempNet = sigmoidLayer("Name","efficientnet-b0|model|blocks_8|se|SigmoidLayer");
net = addLayers(net,tempNet);

tempNet = [
    multiplicationLayer(2,"Name","efficientnet-b0|model|blocks_8|se|MulLayer")
    convolution2dLayer([1 1],480,"Name","Conv__376")
    sigmoidLayer("Name","efficientnet-b0|model|blocks_8|se|SigmoidLayer_1")];
net = addLayers(net,tempNet);

tempNet = [
    multiplicationLayer(2,"Name","efficientnet-b0|model|blocks_8|se|MulLayer_1")
    convolution2dLayer([1 1],112,"Name","efficientnet-b0|model|blocks_8|conv2d_1|Conv2D")
    batchNormalizationLayer("Name","efficientnet-b0|model|blocks_8|tpu_batch_normalization_2|FusedBatchNorm","Epsilon",0.00100000004749745)];
net = addLayers(net,tempNet);

tempNet = [
    convolution2dLayer([1 1],672,"Name","efficientnet-b0|model|blocks_9|conv2d|Conv2D")
    batchNormalizationLayer("Name","efficientnet-b0|model|blocks_9|tpu_batch_normalization|FusedBatchNorm","Epsilon",0.00100000004749745)];
net = addLayers(net,tempNet);

tempNet = sigmoidLayer("Name","efficientnet-b0|model|blocks_9|SigmoidLayer");
net = addLayers(net,tempNet);

tempNet = [
    multiplicationLayer(2,"Name","efficientnet-b0|model|blocks_9|MulLayer")
    groupedConvolution2dLayer([5 5],1,672,"Name","efficientnet-b0|model|blocks_9|depthwise_conv2d|depthwise","Padding",[2 2 2 2])
    batchNormalizationLayer("Name","efficientnet-b0|model|blocks_9|tpu_batch_normalization_1|FusedBatchNorm","Epsilon",0.00100000004749745)];
net = addLayers(net,tempNet);

tempNet = sigmoidLayer("Name","efficientnet-b0|model|blocks_9|SigmoidLayer_1");
net = addLayers(net,tempNet);

tempNet = multiplicationLayer(2,"Name","efficientnet-b0|model|blocks_9|MulLayer_1");
net = addLayers(net,tempNet);

tempNet = [
    globalAveragePooling2dLayer("Name","efficientnet-b0|model|blocks_9|se|GlobAvgPool")
    convolution2dLayer([1 1],28,"Name","Conv__383")];
net = addLayers(net,tempNet);

tempNet = sigmoidLayer("Name","efficientnet-b0|model|blocks_9|se|SigmoidLayer");
net = addLayers(net,tempNet);

tempNet = [
    multiplicationLayer(2,"Name","efficientnet-b0|model|blocks_9|se|MulLayer")
    convolution2dLayer([1 1],672,"Name","Conv__386")
    sigmoidLayer("Name","efficientnet-b0|model|blocks_9|se|SigmoidLayer_1")];
net = addLayers(net,tempNet);

tempNet = [
    multiplicationLayer(2,"Name","efficientnet-b0|model|blocks_9|se|MulLayer_1")
    convolution2dLayer([1 1],112,"Name","efficientnet-b0|model|blocks_9|conv2d_1|Conv2D")
    batchNormalizationLayer("Name","efficientnet-b0|model|blocks_9|tpu_batch_normalization_2|FusedBatchNorm","Epsilon",0.00100000004749745)];
net = addLayers(net,tempNet);

tempNet = additionLayer(2,"Name","efficientnet-b0|model|blocks_9|Add");
net = addLayers(net,tempNet);

tempNet = [
    convolution2dLayer([1 1],672,"Name","efficientnet-b0|model|blocks_10|conv2d|Conv2D")
    batchNormalizationLayer("Name","efficientnet-b0|model|blocks_10|tpu_batch_normalization|FusedBatchNorm","Epsilon",0.00100000004749745)];
net = addLayers(net,tempNet);

tempNet = sigmoidLayer("Name","efficientnet-b0|model|blocks_10|SigmoidLayer");
net = addLayers(net,tempNet);

tempNet = [
    multiplicationLayer(2,"Name","efficientnet-b0|model|blocks_10|MulLayer")
    groupedConvolution2dLayer([5 5],1,672,"Name","efficientnet-b0|model|blocks_10|depthwise_conv2d|depthwise","Padding",[2 2 2 2])
    batchNormalizationLayer("Name","efficientnet-b0|model|blocks_10|tpu_batch_normalization_1|FusedBatchNorm","Epsilon",0.00100000004749745)];
net = addLayers(net,tempNet);

tempNet = sigmoidLayer("Name","efficientnet-b0|model|blocks_10|SigmoidLayer_1");
net = addLayers(net,tempNet);

tempNet = multiplicationLayer(2,"Name","efficientnet-b0|model|blocks_10|MulLayer_1");
net = addLayers(net,tempNet);

tempNet = [
    globalAveragePooling2dLayer("Name","efficientnet-b0|model|blocks_10|se|GlobAvgPool")
    convolution2dLayer([1 1],28,"Name","Conv__393")];
net = addLayers(net,tempNet);

tempNet = sigmoidLayer("Name","efficientnet-b0|model|blocks_10|se|SigmoidLayer");
net = addLayers(net,tempNet);

tempNet = [
    multiplicationLayer(2,"Name","efficientnet-b0|model|blocks_10|se|MulLayer")
    convolution2dLayer([1 1],672,"Name","Conv__396")
    sigmoidLayer("Name","efficientnet-b0|model|blocks_10|se|SigmoidLayer_1")];
net = addLayers(net,tempNet);

tempNet = [
    multiplicationLayer(2,"Name","efficientnet-b0|model|blocks_10|se|MulLayer_1")
    convolution2dLayer([1 1],112,"Name","efficientnet-b0|model|blocks_10|conv2d_1|Conv2D")
    batchNormalizationLayer("Name","efficientnet-b0|model|blocks_10|tpu_batch_normalization_2|FusedBatchNorm","Epsilon",0.00100000004749745)];
net = addLayers(net,tempNet);

tempNet = [
    additionLayer(2,"Name","efficientnet-b0|model|blocks_10|Add")
    convolution2dLayer([1 1],672,"Name","efficientnet-b0|model|blocks_11|conv2d|Conv2D")
    batchNormalizationLayer("Name","efficientnet-b0|model|blocks_11|tpu_batch_normalization|FusedBatchNorm","Epsilon",0.00100000004749745)];
net = addLayers(net,tempNet);

tempNet = sigmoidLayer("Name","efficientnet-b0|model|blocks_11|SigmoidLayer");
net = addLayers(net,tempNet);

tempNet = [
    multiplicationLayer(2,"Name","efficientnet-b0|model|blocks_11|MulLayer")
    groupedConvolution2dLayer([5 5],1,672,"Name","efficientnet-b0|model|blocks_11|depthwise_conv2d|depthwise","Padding",[1 2 1 2],"Stride",[2 2])
    batchNormalizationLayer("Name","efficientnet-b0|model|blocks_11|tpu_batch_normalization_1|FusedBatchNorm","Epsilon",0.00100000004749745)];
net = addLayers(net,tempNet);

tempNet = sigmoidLayer("Name","efficientnet-b0|model|blocks_11|SigmoidLayer_1");
net = addLayers(net,tempNet);

tempNet = multiplicationLayer(2,"Name","efficientnet-b0|model|blocks_11|MulLayer_1");
net = addLayers(net,tempNet);

tempNet = [
    globalAveragePooling2dLayer("Name","efficientnet-b0|model|blocks_11|se|GlobAvgPool")
    convolution2dLayer([1 1],28,"Name","Conv__401")];
net = addLayers(net,tempNet);

tempNet = sigmoidLayer("Name","efficientnet-b0|model|blocks_11|se|SigmoidLayer");
net = addLayers(net,tempNet);

tempNet = [
    multiplicationLayer(2,"Name","efficientnet-b0|model|blocks_11|se|MulLayer")
    convolution2dLayer([1 1],672,"Name","Conv__404")
    sigmoidLayer("Name","efficientnet-b0|model|blocks_11|se|SigmoidLayer_1")];
net = addLayers(net,tempNet);

tempNet = [
    multiplicationLayer(2,"Name","efficientnet-b0|model|blocks_11|se|MulLayer_1")
    convolution2dLayer([1 1],192,"Name","efficientnet-b0|model|blocks_11|conv2d_1|Conv2D")
    batchNormalizationLayer("Name","efficientnet-b0|model|blocks_11|tpu_batch_normalization_2|FusedBatchNorm","Epsilon",0.00100000004749745)];
net = addLayers(net,tempNet);

tempNet = [
    convolution2dLayer([1 1],1152,"Name","efficientnet-b0|model|blocks_12|conv2d|Conv2D")
    batchNormalizationLayer("Name","efficientnet-b0|model|blocks_12|tpu_batch_normalization|FusedBatchNorm","Epsilon",0.00100000004749745)];
net = addLayers(net,tempNet);

tempNet = sigmoidLayer("Name","efficientnet-b0|model|blocks_12|SigmoidLayer");
net = addLayers(net,tempNet);

tempNet = [
    multiplicationLayer(2,"Name","efficientnet-b0|model|blocks_12|MulLayer")
    groupedConvolution2dLayer([5 5],1,1152,"Name","efficientnet-b0|model|blocks_12|depthwise_conv2d|depthwise","Padding",[2 2 2 2])
    batchNormalizationLayer("Name","efficientnet-b0|model|blocks_12|tpu_batch_normalization_1|FusedBatchNorm","Epsilon",0.00100000004749745)];
net = addLayers(net,tempNet);

tempNet = sigmoidLayer("Name","efficientnet-b0|model|blocks_12|SigmoidLayer_1");
net = addLayers(net,tempNet);

tempNet = multiplicationLayer(2,"Name","efficientnet-b0|model|blocks_12|MulLayer_1");
net = addLayers(net,tempNet);

tempNet = [
    globalAveragePooling2dLayer("Name","efficientnet-b0|model|blocks_12|se|GlobAvgPool")
    convolution2dLayer([1 1],48,"Name","Conv__411")];
net = addLayers(net,tempNet);

tempNet = sigmoidLayer("Name","efficientnet-b0|model|blocks_12|se|SigmoidLayer");
net = addLayers(net,tempNet);

tempNet = [
    multiplicationLayer(2,"Name","efficientnet-b0|model|blocks_12|se|MulLayer")
    convolution2dLayer([1 1],1152,"Name","Conv__414")
    sigmoidLayer("Name","efficientnet-b0|model|blocks_12|se|SigmoidLayer_1")];
net = addLayers(net,tempNet);

tempNet = [
    multiplicationLayer(2,"Name","efficientnet-b0|model|blocks_12|se|MulLayer_1")
    convolution2dLayer([1 1],192,"Name","efficientnet-b0|model|blocks_12|conv2d_1|Conv2D")
    batchNormalizationLayer("Name","efficientnet-b0|model|blocks_12|tpu_batch_normalization_2|FusedBatchNorm","Epsilon",0.00100000004749745)];
net = addLayers(net,tempNet);

tempNet = additionLayer(2,"Name","efficientnet-b0|model|blocks_12|Add");
net = addLayers(net,tempNet);

tempNet = [
    convolution2dLayer([1 1],1152,"Name","efficientnet-b0|model|blocks_13|conv2d|Conv2D")
    batchNormalizationLayer("Name","efficientnet-b0|model|blocks_13|tpu_batch_normalization|FusedBatchNorm","Epsilon",0.00100000004749745)];
net = addLayers(net,tempNet);

tempNet = sigmoidLayer("Name","efficientnet-b0|model|blocks_13|SigmoidLayer");
net = addLayers(net,tempNet);

tempNet = [
    multiplicationLayer(2,"Name","efficientnet-b0|model|blocks_13|MulLayer")
    groupedConvolution2dLayer([5 5],1,1152,"Name","efficientnet-b0|model|blocks_13|depthwise_conv2d|depthwise","Padding",[2 2 2 2])
    batchNormalizationLayer("Name","efficientnet-b0|model|blocks_13|tpu_batch_normalization_1|FusedBatchNorm","Epsilon",0.00100000004749745)];
net = addLayers(net,tempNet);

tempNet = sigmoidLayer("Name","efficientnet-b0|model|blocks_13|SigmoidLayer_1");
net = addLayers(net,tempNet);

tempNet = multiplicationLayer(2,"Name","efficientnet-b0|model|blocks_13|MulLayer_1");
net = addLayers(net,tempNet);

tempNet = [
    globalAveragePooling2dLayer("Name","efficientnet-b0|model|blocks_13|se|GlobAvgPool")
    convolution2dLayer([1 1],48,"Name","Conv__421")];
net = addLayers(net,tempNet);

tempNet = sigmoidLayer("Name","efficientnet-b0|model|blocks_13|se|SigmoidLayer");
net = addLayers(net,tempNet);

tempNet = [
    multiplicationLayer(2,"Name","efficientnet-b0|model|blocks_13|se|MulLayer")
    convolution2dLayer([1 1],1152,"Name","Conv__424")
    sigmoidLayer("Name","efficientnet-b0|model|blocks_13|se|SigmoidLayer_1")];
net = addLayers(net,tempNet);

tempNet = [
    multiplicationLayer(2,"Name","efficientnet-b0|model|blocks_13|se|MulLayer_1")
    convolution2dLayer([1 1],192,"Name","efficientnet-b0|model|blocks_13|conv2d_1|Conv2D")
    batchNormalizationLayer("Name","efficientnet-b0|model|blocks_13|tpu_batch_normalization_2|FusedBatchNorm","Epsilon",0.00100000004749745)];
net = addLayers(net,tempNet);

tempNet = additionLayer(2,"Name","efficientnet-b0|model|blocks_13|Add");
net = addLayers(net,tempNet);

tempNet = [
    convolution2dLayer([1 1],1152,"Name","efficientnet-b0|model|blocks_14|conv2d|Conv2D")
    batchNormalizationLayer("Name","efficientnet-b0|model|blocks_14|tpu_batch_normalization|FusedBatchNorm","Epsilon",0.00100000004749745)];
net = addLayers(net,tempNet);

tempNet = sigmoidLayer("Name","efficientnet-b0|model|blocks_14|SigmoidLayer");
net = addLayers(net,tempNet);

tempNet = [
    multiplicationLayer(2,"Name","efficientnet-b0|model|blocks_14|MulLayer")
    groupedConvolution2dLayer([5 5],1,1152,"Name","efficientnet-b0|model|blocks_14|depthwise_conv2d|depthwise","Padding",[2 2 2 2])
    batchNormalizationLayer("Name","efficientnet-b0|model|blocks_14|tpu_batch_normalization_1|FusedBatchNorm","Epsilon",0.00100000004749745)];
net = addLayers(net,tempNet);

tempNet = sigmoidLayer("Name","efficientnet-b0|model|blocks_14|SigmoidLayer_1");
net = addLayers(net,tempNet);

tempNet = multiplicationLayer(2,"Name","efficientnet-b0|model|blocks_14|MulLayer_1");
net = addLayers(net,tempNet);

tempNet = [
    globalAveragePooling2dLayer("Name","efficientnet-b0|model|blocks_14|se|GlobAvgPool")
    convolution2dLayer([1 1],48,"Name","Conv__431")];
net = addLayers(net,tempNet);

tempNet = sigmoidLayer("Name","efficientnet-b0|model|blocks_14|se|SigmoidLayer");
net = addLayers(net,tempNet);

tempNet = [
    multiplicationLayer(2,"Name","efficientnet-b0|model|blocks_14|se|MulLayer")
    convolution2dLayer([1 1],1152,"Name","Conv__434")
    sigmoidLayer("Name","efficientnet-b0|model|blocks_14|se|SigmoidLayer_1")];
net = addLayers(net,tempNet);

tempNet = [
    multiplicationLayer(2,"Name","efficientnet-b0|model|blocks_14|se|MulLayer_1")
    convolution2dLayer([1 1],192,"Name","efficientnet-b0|model|blocks_14|conv2d_1|Conv2D")
    batchNormalizationLayer("Name","efficientnet-b0|model|blocks_14|tpu_batch_normalization_2|FusedBatchNorm","Epsilon",0.00100000004749745)];
net = addLayers(net,tempNet);

tempNet = [
    additionLayer(2,"Name","efficientnet-b0|model|blocks_14|Add")
    convolution2dLayer([1 1],1152,"Name","efficientnet-b0|model|blocks_15|conv2d|Conv2D")
    batchNormalizationLayer("Name","efficientnet-b0|model|blocks_15|tpu_batch_normalization|FusedBatchNorm","Epsilon",0.00100000004749745)];
net = addLayers(net,tempNet);

tempNet = sigmoidLayer("Name","efficientnet-b0|model|blocks_15|SigmoidLayer");
net = addLayers(net,tempNet);

tempNet = [
    multiplicationLayer(2,"Name","efficientnet-b0|model|blocks_15|MulLayer")
    groupedConvolution2dLayer([3 3],1,1152,"Name","efficientnet-b0|model|blocks_15|depthwise_conv2d|depthwise","Padding",[1 1 1 1])
    batchNormalizationLayer("Name","efficientnet-b0|model|blocks_15|tpu_batch_normalization_1|FusedBatchNorm","Epsilon",0.00100000004749745)];
net = addLayers(net,tempNet);

tempNet = sigmoidLayer("Name","efficientnet-b0|model|blocks_15|SigmoidLayer_1");
net = addLayers(net,tempNet);

tempNet = multiplicationLayer(2,"Name","efficientnet-b0|model|blocks_15|MulLayer_1");
net = addLayers(net,tempNet);

tempNet = [
    globalAveragePooling2dLayer("Name","efficientnet-b0|model|blocks_15|se|GlobAvgPool")
    convolution2dLayer([1 1],48,"Name","Conv__439")];
net = addLayers(net,tempNet);

tempNet = sigmoidLayer("Name","efficientnet-b0|model|blocks_15|se|SigmoidLayer");
net = addLayers(net,tempNet);

tempNet = [
    multiplicationLayer(2,"Name","efficientnet-b0|model|blocks_15|se|MulLayer")
    convolution2dLayer([1 1],1152,"Name","Conv__442")
    sigmoidLayer("Name","efficientnet-b0|model|blocks_15|se|SigmoidLayer_1")];
net = addLayers(net,tempNet);

tempNet = [
    multiplicationLayer(2,"Name","efficientnet-b0|model|blocks_15|se|MulLayer_1")
    convolution2dLayer([1 1],320,"Name","efficientnet-b0|model|blocks_15|conv2d_1|Conv2D")
    batchNormalizationLayer("Name","efficientnet-b0|model|blocks_15|tpu_batch_normalization_2|FusedBatchNorm","Epsilon",0.00100000004749745)
    convolution2dLayer([1 1],1280,"Name","efficientnet-b0|model|head|conv2d|Conv2D")
    batchNormalizationLayer("Name","efficientnet-b0|model|head|tpu_batch_normalization|FusedBatchNorm","Epsilon",0.00100000004749745)];
net = addLayers(net,tempNet);

tempNet = sigmoidLayer("Name","efficientnet-b0|model|head|SigmoidLayer");
net = addLayers(net,tempNet);

tempNet = [
    multiplicationLayer(2,"Name","efficientnet-b0|model|head|MulLayer")
    globalAveragePooling2dLayer("Name","efficientnet-b0|model|head|global_average_pooling2d|GlobAvgPool")
    fullyConnectedLayer(1000,"Name","efficientnet-b0|model|head|dense|MatMul")
    softmaxLayer("Name","Softmax")];
net = addLayers(net,tempNet);

% clean up helper variable
clear tempNet;

%% Connect Layer Branches
% Connect all the branches of the network to create the network graph.
net = connectLayers(net,"efficientnet-b0|model|stem|tpu_batch_normalization|FusedBatchNorm","efficientnet-b0|model|stem|SigmoidLayer");
net = connectLayers(net,"efficientnet-b0|model|stem|tpu_batch_normalization|FusedBatchNorm","efficientnet-b0|model|stem|MulLayer/in1");
net = connectLayers(net,"efficientnet-b0|model|stem|SigmoidLayer","efficientnet-b0|model|stem|MulLayer/in2");
net = connectLayers(net,"efficientnet-b0|model|blocks_0|tpu_batch_normalization|FusedBatchNorm","efficientnet-b0|model|blocks_0|SigmoidLayer");
net = connectLayers(net,"efficientnet-b0|model|blocks_0|tpu_batch_normalization|FusedBatchNorm","efficientnet-b0|model|blocks_0|MulLayer/in1");
net = connectLayers(net,"efficientnet-b0|model|blocks_0|SigmoidLayer","efficientnet-b0|model|blocks_0|MulLayer/in2");
net = connectLayers(net,"efficientnet-b0|model|blocks_0|MulLayer","efficientnet-b0|model|blocks_0|se|GlobAvgPool");
net = connectLayers(net,"efficientnet-b0|model|blocks_0|MulLayer","efficientnet-b0|model|blocks_0|se|MulLayer_1/in2");
net = connectLayers(net,"Conv__301","efficientnet-b0|model|blocks_0|se|SigmoidLayer");
net = connectLayers(net,"Conv__301","efficientnet-b0|model|blocks_0|se|MulLayer/in1");
net = connectLayers(net,"efficientnet-b0|model|blocks_0|se|SigmoidLayer","efficientnet-b0|model|blocks_0|se|MulLayer/in2");
net = connectLayers(net,"efficientnet-b0|model|blocks_0|se|SigmoidLayer_1","efficientnet-b0|model|blocks_0|se|MulLayer_1/in1");
net = connectLayers(net,"efficientnet-b0|model|blocks_1|tpu_batch_normalization|FusedBatchNorm","efficientnet-b0|model|blocks_1|SigmoidLayer");
net = connectLayers(net,"efficientnet-b0|model|blocks_1|tpu_batch_normalization|FusedBatchNorm","efficientnet-b0|model|blocks_1|MulLayer/in1");
net = connectLayers(net,"efficientnet-b0|model|blocks_1|SigmoidLayer","efficientnet-b0|model|blocks_1|MulLayer/in2");
net = connectLayers(net,"efficientnet-b0|model|blocks_1|tpu_batch_normalization_1|FusedBatchNorm","efficientnet-b0|model|blocks_1|SigmoidLayer_1");
net = connectLayers(net,"efficientnet-b0|model|blocks_1|tpu_batch_normalization_1|FusedBatchNorm","efficientnet-b0|model|blocks_1|MulLayer_1/in1");
net = connectLayers(net,"efficientnet-b0|model|blocks_1|SigmoidLayer_1","efficientnet-b0|model|blocks_1|MulLayer_1/in2");
net = connectLayers(net,"efficientnet-b0|model|blocks_1|MulLayer_1","efficientnet-b0|model|blocks_1|se|GlobAvgPool");
net = connectLayers(net,"efficientnet-b0|model|blocks_1|MulLayer_1","efficientnet-b0|model|blocks_1|se|MulLayer_1/in2");
net = connectLayers(net,"Conv__309","efficientnet-b0|model|blocks_1|se|SigmoidLayer");
net = connectLayers(net,"Conv__309","efficientnet-b0|model|blocks_1|se|MulLayer/in1");
net = connectLayers(net,"efficientnet-b0|model|blocks_1|se|SigmoidLayer","efficientnet-b0|model|blocks_1|se|MulLayer/in2");
net = connectLayers(net,"efficientnet-b0|model|blocks_1|se|SigmoidLayer_1","efficientnet-b0|model|blocks_1|se|MulLayer_1/in1");
net = connectLayers(net,"efficientnet-b0|model|blocks_1|tpu_batch_normalization_2|FusedBatchNorm","efficientnet-b0|model|blocks_2|conv2d|Conv2D");
net = connectLayers(net,"efficientnet-b0|model|blocks_1|tpu_batch_normalization_2|FusedBatchNorm","efficientnet-b0|model|blocks_2|Add/in2");
net = connectLayers(net,"efficientnet-b0|model|blocks_2|tpu_batch_normalization|FusedBatchNorm","efficientnet-b0|model|blocks_2|SigmoidLayer");
net = connectLayers(net,"efficientnet-b0|model|blocks_2|tpu_batch_normalization|FusedBatchNorm","efficientnet-b0|model|blocks_2|MulLayer/in1");
net = connectLayers(net,"efficientnet-b0|model|blocks_2|SigmoidLayer","efficientnet-b0|model|blocks_2|MulLayer/in2");
net = connectLayers(net,"efficientnet-b0|model|blocks_2|tpu_batch_normalization_1|FusedBatchNorm","efficientnet-b0|model|blocks_2|SigmoidLayer_1");
net = connectLayers(net,"efficientnet-b0|model|blocks_2|tpu_batch_normalization_1|FusedBatchNorm","efficientnet-b0|model|blocks_2|MulLayer_1/in1");
net = connectLayers(net,"efficientnet-b0|model|blocks_2|SigmoidLayer_1","efficientnet-b0|model|blocks_2|MulLayer_1/in2");
net = connectLayers(net,"efficientnet-b0|model|blocks_2|MulLayer_1","efficientnet-b0|model|blocks_2|se|GlobAvgPool");
net = connectLayers(net,"efficientnet-b0|model|blocks_2|MulLayer_1","efficientnet-b0|model|blocks_2|se|MulLayer_1/in2");
net = connectLayers(net,"Conv__319","efficientnet-b0|model|blocks_2|se|SigmoidLayer");
net = connectLayers(net,"Conv__319","efficientnet-b0|model|blocks_2|se|MulLayer/in1");
net = connectLayers(net,"efficientnet-b0|model|blocks_2|se|SigmoidLayer","efficientnet-b0|model|blocks_2|se|MulLayer/in2");
net = connectLayers(net,"efficientnet-b0|model|blocks_2|se|SigmoidLayer_1","efficientnet-b0|model|blocks_2|se|MulLayer_1/in1");
net = connectLayers(net,"efficientnet-b0|model|blocks_2|tpu_batch_normalization_2|FusedBatchNorm","efficientnet-b0|model|blocks_2|Add/in1");
net = connectLayers(net,"efficientnet-b0|model|blocks_3|tpu_batch_normalization|FusedBatchNorm","efficientnet-b0|model|blocks_3|SigmoidLayer");
net = connectLayers(net,"efficientnet-b0|model|blocks_3|tpu_batch_normalization|FusedBatchNorm","efficientnet-b0|model|blocks_3|MulLayer/in1");
net = connectLayers(net,"efficientnet-b0|model|blocks_3|SigmoidLayer","efficientnet-b0|model|blocks_3|MulLayer/in2");
net = connectLayers(net,"efficientnet-b0|model|blocks_3|tpu_batch_normalization_1|FusedBatchNorm","efficientnet-b0|model|blocks_3|SigmoidLayer_1");
net = connectLayers(net,"efficientnet-b0|model|blocks_3|tpu_batch_normalization_1|FusedBatchNorm","efficientnet-b0|model|blocks_3|MulLayer_1/in1");
net = connectLayers(net,"efficientnet-b0|model|blocks_3|SigmoidLayer_1","efficientnet-b0|model|blocks_3|MulLayer_1/in2");
net = connectLayers(net,"efficientnet-b0|model|blocks_3|MulLayer_1","efficientnet-b0|model|blocks_3|se|GlobAvgPool");
net = connectLayers(net,"efficientnet-b0|model|blocks_3|MulLayer_1","efficientnet-b0|model|blocks_3|se|MulLayer_1/in2");
net = connectLayers(net,"Conv__327","efficientnet-b0|model|blocks_3|se|SigmoidLayer");
net = connectLayers(net,"Conv__327","efficientnet-b0|model|blocks_3|se|MulLayer/in1");
net = connectLayers(net,"efficientnet-b0|model|blocks_3|se|SigmoidLayer","efficientnet-b0|model|blocks_3|se|MulLayer/in2");
net = connectLayers(net,"efficientnet-b0|model|blocks_3|se|SigmoidLayer_1","efficientnet-b0|model|blocks_3|se|MulLayer_1/in1");
net = connectLayers(net,"efficientnet-b0|model|blocks_3|tpu_batch_normalization_2|FusedBatchNorm","efficientnet-b0|model|blocks_4|conv2d|Conv2D");
net = connectLayers(net,"efficientnet-b0|model|blocks_3|tpu_batch_normalization_2|FusedBatchNorm","efficientnet-b0|model|blocks_4|Add/in2");
net = connectLayers(net,"efficientnet-b0|model|blocks_4|tpu_batch_normalization|FusedBatchNorm","efficientnet-b0|model|blocks_4|SigmoidLayer");
net = connectLayers(net,"efficientnet-b0|model|blocks_4|tpu_batch_normalization|FusedBatchNorm","efficientnet-b0|model|blocks_4|MulLayer/in1");
net = connectLayers(net,"efficientnet-b0|model|blocks_4|SigmoidLayer","efficientnet-b0|model|blocks_4|MulLayer/in2");
net = connectLayers(net,"efficientnet-b0|model|blocks_4|tpu_batch_normalization_1|FusedBatchNorm","efficientnet-b0|model|blocks_4|SigmoidLayer_1");
net = connectLayers(net,"efficientnet-b0|model|blocks_4|tpu_batch_normalization_1|FusedBatchNorm","efficientnet-b0|model|blocks_4|MulLayer_1/in1");
net = connectLayers(net,"efficientnet-b0|model|blocks_4|SigmoidLayer_1","efficientnet-b0|model|blocks_4|MulLayer_1/in2");
net = connectLayers(net,"efficientnet-b0|model|blocks_4|MulLayer_1","efficientnet-b0|model|blocks_4|se|GlobAvgPool");
net = connectLayers(net,"efficientnet-b0|model|blocks_4|MulLayer_1","efficientnet-b0|model|blocks_4|se|MulLayer_1/in2");
net = connectLayers(net,"Conv__337","efficientnet-b0|model|blocks_4|se|SigmoidLayer");
net = connectLayers(net,"Conv__337","efficientnet-b0|model|blocks_4|se|MulLayer/in1");
net = connectLayers(net,"efficientnet-b0|model|blocks_4|se|SigmoidLayer","efficientnet-b0|model|blocks_4|se|MulLayer/in2");
net = connectLayers(net,"efficientnet-b0|model|blocks_4|se|SigmoidLayer_1","efficientnet-b0|model|blocks_4|se|MulLayer_1/in1");
net = connectLayers(net,"efficientnet-b0|model|blocks_4|tpu_batch_normalization_2|FusedBatchNorm","efficientnet-b0|model|blocks_4|Add/in1");
net = connectLayers(net,"efficientnet-b0|model|blocks_5|tpu_batch_normalization|FusedBatchNorm","efficientnet-b0|model|blocks_5|SigmoidLayer");
net = connectLayers(net,"efficientnet-b0|model|blocks_5|tpu_batch_normalization|FusedBatchNorm","efficientnet-b0|model|blocks_5|MulLayer/in1");
net = connectLayers(net,"efficientnet-b0|model|blocks_5|SigmoidLayer","efficientnet-b0|model|blocks_5|MulLayer/in2");
net = connectLayers(net,"efficientnet-b0|model|blocks_5|tpu_batch_normalization_1|FusedBatchNorm","efficientnet-b0|model|blocks_5|SigmoidLayer_1");
net = connectLayers(net,"efficientnet-b0|model|blocks_5|tpu_batch_normalization_1|FusedBatchNorm","efficientnet-b0|model|blocks_5|MulLayer_1/in1");
net = connectLayers(net,"efficientnet-b0|model|blocks_5|SigmoidLayer_1","efficientnet-b0|model|blocks_5|MulLayer_1/in2");
net = connectLayers(net,"efficientnet-b0|model|blocks_5|MulLayer_1","efficientnet-b0|model|blocks_5|se|GlobAvgPool");
net = connectLayers(net,"efficientnet-b0|model|blocks_5|MulLayer_1","efficientnet-b0|model|blocks_5|se|MulLayer_1/in2");
net = connectLayers(net,"Conv__345","efficientnet-b0|model|blocks_5|se|SigmoidLayer");
net = connectLayers(net,"Conv__345","efficientnet-b0|model|blocks_5|se|MulLayer/in1");
net = connectLayers(net,"efficientnet-b0|model|blocks_5|se|SigmoidLayer","efficientnet-b0|model|blocks_5|se|MulLayer/in2");
net = connectLayers(net,"efficientnet-b0|model|blocks_5|se|SigmoidLayer_1","efficientnet-b0|model|blocks_5|se|MulLayer_1/in1");
net = connectLayers(net,"efficientnet-b0|model|blocks_5|tpu_batch_normalization_2|FusedBatchNorm","efficientnet-b0|model|blocks_6|conv2d|Conv2D");
net = connectLayers(net,"efficientnet-b0|model|blocks_5|tpu_batch_normalization_2|FusedBatchNorm","efficientnet-b0|model|blocks_6|Add/in2");
net = connectLayers(net,"efficientnet-b0|model|blocks_6|tpu_batch_normalization|FusedBatchNorm","efficientnet-b0|model|blocks_6|SigmoidLayer");
net = connectLayers(net,"efficientnet-b0|model|blocks_6|tpu_batch_normalization|FusedBatchNorm","efficientnet-b0|model|blocks_6|MulLayer/in1");
net = connectLayers(net,"efficientnet-b0|model|blocks_6|SigmoidLayer","efficientnet-b0|model|blocks_6|MulLayer/in2");
net = connectLayers(net,"efficientnet-b0|model|blocks_6|tpu_batch_normalization_1|FusedBatchNorm","efficientnet-b0|model|blocks_6|SigmoidLayer_1");
net = connectLayers(net,"efficientnet-b0|model|blocks_6|tpu_batch_normalization_1|FusedBatchNorm","efficientnet-b0|model|blocks_6|MulLayer_1/in1");
net = connectLayers(net,"efficientnet-b0|model|blocks_6|SigmoidLayer_1","efficientnet-b0|model|blocks_6|MulLayer_1/in2");
net = connectLayers(net,"efficientnet-b0|model|blocks_6|MulLayer_1","efficientnet-b0|model|blocks_6|se|GlobAvgPool");
net = connectLayers(net,"efficientnet-b0|model|blocks_6|MulLayer_1","efficientnet-b0|model|blocks_6|se|MulLayer_1/in2");
net = connectLayers(net,"Conv__355","efficientnet-b0|model|blocks_6|se|SigmoidLayer");
net = connectLayers(net,"Conv__355","efficientnet-b0|model|blocks_6|se|MulLayer/in1");
net = connectLayers(net,"efficientnet-b0|model|blocks_6|se|SigmoidLayer","efficientnet-b0|model|blocks_6|se|MulLayer/in2");
net = connectLayers(net,"efficientnet-b0|model|blocks_6|se|SigmoidLayer_1","efficientnet-b0|model|blocks_6|se|MulLayer_1/in1");
net = connectLayers(net,"efficientnet-b0|model|blocks_6|tpu_batch_normalization_2|FusedBatchNorm","efficientnet-b0|model|blocks_6|Add/in1");
net = connectLayers(net,"efficientnet-b0|model|blocks_6|Add","efficientnet-b0|model|blocks_7|conv2d|Conv2D");
net = connectLayers(net,"efficientnet-b0|model|blocks_6|Add","efficientnet-b0|model|blocks_7|Add/in2");
net = connectLayers(net,"efficientnet-b0|model|blocks_7|tpu_batch_normalization|FusedBatchNorm","efficientnet-b0|model|blocks_7|SigmoidLayer");
net = connectLayers(net,"efficientnet-b0|model|blocks_7|tpu_batch_normalization|FusedBatchNorm","efficientnet-b0|model|blocks_7|MulLayer/in1");
net = connectLayers(net,"efficientnet-b0|model|blocks_7|SigmoidLayer","efficientnet-b0|model|blocks_7|MulLayer/in2");
net = connectLayers(net,"efficientnet-b0|model|blocks_7|tpu_batch_normalization_1|FusedBatchNorm","efficientnet-b0|model|blocks_7|SigmoidLayer_1");
net = connectLayers(net,"efficientnet-b0|model|blocks_7|tpu_batch_normalization_1|FusedBatchNorm","efficientnet-b0|model|blocks_7|MulLayer_1/in1");
net = connectLayers(net,"efficientnet-b0|model|blocks_7|SigmoidLayer_1","efficientnet-b0|model|blocks_7|MulLayer_1/in2");
net = connectLayers(net,"efficientnet-b0|model|blocks_7|MulLayer_1","efficientnet-b0|model|blocks_7|se|GlobAvgPool");
net = connectLayers(net,"efficientnet-b0|model|blocks_7|MulLayer_1","efficientnet-b0|model|blocks_7|se|MulLayer_1/in2");
net = connectLayers(net,"Conv__365","efficientnet-b0|model|blocks_7|se|SigmoidLayer");
net = connectLayers(net,"Conv__365","efficientnet-b0|model|blocks_7|se|MulLayer/in1");
net = connectLayers(net,"efficientnet-b0|model|blocks_7|se|SigmoidLayer","efficientnet-b0|model|blocks_7|se|MulLayer/in2");
net = connectLayers(net,"efficientnet-b0|model|blocks_7|se|SigmoidLayer_1","efficientnet-b0|model|blocks_7|se|MulLayer_1/in1");
net = connectLayers(net,"efficientnet-b0|model|blocks_7|tpu_batch_normalization_2|FusedBatchNorm","efficientnet-b0|model|blocks_7|Add/in1");
net = connectLayers(net,"efficientnet-b0|model|blocks_8|tpu_batch_normalization|FusedBatchNorm","efficientnet-b0|model|blocks_8|SigmoidLayer");
net = connectLayers(net,"efficientnet-b0|model|blocks_8|tpu_batch_normalization|FusedBatchNorm","efficientnet-b0|model|blocks_8|MulLayer/in1");
net = connectLayers(net,"efficientnet-b0|model|blocks_8|SigmoidLayer","efficientnet-b0|model|blocks_8|MulLayer/in2");
net = connectLayers(net,"efficientnet-b0|model|blocks_8|tpu_batch_normalization_1|FusedBatchNorm","efficientnet-b0|model|blocks_8|SigmoidLayer_1");
net = connectLayers(net,"efficientnet-b0|model|blocks_8|tpu_batch_normalization_1|FusedBatchNorm","efficientnet-b0|model|blocks_8|MulLayer_1/in1");
net = connectLayers(net,"efficientnet-b0|model|blocks_8|SigmoidLayer_1","efficientnet-b0|model|blocks_8|MulLayer_1/in2");
net = connectLayers(net,"efficientnet-b0|model|blocks_8|MulLayer_1","efficientnet-b0|model|blocks_8|se|GlobAvgPool");
net = connectLayers(net,"efficientnet-b0|model|blocks_8|MulLayer_1","efficientnet-b0|model|blocks_8|se|MulLayer_1/in2");
net = connectLayers(net,"Conv__373","efficientnet-b0|model|blocks_8|se|SigmoidLayer");
net = connectLayers(net,"Conv__373","efficientnet-b0|model|blocks_8|se|MulLayer/in1");
net = connectLayers(net,"efficientnet-b0|model|blocks_8|se|SigmoidLayer","efficientnet-b0|model|blocks_8|se|MulLayer/in2");
net = connectLayers(net,"efficientnet-b0|model|blocks_8|se|SigmoidLayer_1","efficientnet-b0|model|blocks_8|se|MulLayer_1/in1");
net = connectLayers(net,"efficientnet-b0|model|blocks_8|tpu_batch_normalization_2|FusedBatchNorm","efficientnet-b0|model|blocks_9|conv2d|Conv2D");
net = connectLayers(net,"efficientnet-b0|model|blocks_8|tpu_batch_normalization_2|FusedBatchNorm","efficientnet-b0|model|blocks_9|Add/in2");
net = connectLayers(net,"efficientnet-b0|model|blocks_9|tpu_batch_normalization|FusedBatchNorm","efficientnet-b0|model|blocks_9|SigmoidLayer");
net = connectLayers(net,"efficientnet-b0|model|blocks_9|tpu_batch_normalization|FusedBatchNorm","efficientnet-b0|model|blocks_9|MulLayer/in1");
net = connectLayers(net,"efficientnet-b0|model|blocks_9|SigmoidLayer","efficientnet-b0|model|blocks_9|MulLayer/in2");
net = connectLayers(net,"efficientnet-b0|model|blocks_9|tpu_batch_normalization_1|FusedBatchNorm","efficientnet-b0|model|blocks_9|SigmoidLayer_1");
net = connectLayers(net,"efficientnet-b0|model|blocks_9|tpu_batch_normalization_1|FusedBatchNorm","efficientnet-b0|model|blocks_9|MulLayer_1/in1");
net = connectLayers(net,"efficientnet-b0|model|blocks_9|SigmoidLayer_1","efficientnet-b0|model|blocks_9|MulLayer_1/in2");
net = connectLayers(net,"efficientnet-b0|model|blocks_9|MulLayer_1","efficientnet-b0|model|blocks_9|se|GlobAvgPool");
net = connectLayers(net,"efficientnet-b0|model|blocks_9|MulLayer_1","efficientnet-b0|model|blocks_9|se|MulLayer_1/in2");
net = connectLayers(net,"Conv__383","efficientnet-b0|model|blocks_9|se|SigmoidLayer");
net = connectLayers(net,"Conv__383","efficientnet-b0|model|blocks_9|se|MulLayer/in1");
net = connectLayers(net,"efficientnet-b0|model|blocks_9|se|SigmoidLayer","efficientnet-b0|model|blocks_9|se|MulLayer/in2");
net = connectLayers(net,"efficientnet-b0|model|blocks_9|se|SigmoidLayer_1","efficientnet-b0|model|blocks_9|se|MulLayer_1/in1");
net = connectLayers(net,"efficientnet-b0|model|blocks_9|tpu_batch_normalization_2|FusedBatchNorm","efficientnet-b0|model|blocks_9|Add/in1");
net = connectLayers(net,"efficientnet-b0|model|blocks_9|Add","efficientnet-b0|model|blocks_10|conv2d|Conv2D");
net = connectLayers(net,"efficientnet-b0|model|blocks_9|Add","efficientnet-b0|model|blocks_10|Add/in2");
net = connectLayers(net,"efficientnet-b0|model|blocks_10|tpu_batch_normalization|FusedBatchNorm","efficientnet-b0|model|blocks_10|SigmoidLayer");
net = connectLayers(net,"efficientnet-b0|model|blocks_10|tpu_batch_normalization|FusedBatchNorm","efficientnet-b0|model|blocks_10|MulLayer/in1");
net = connectLayers(net,"efficientnet-b0|model|blocks_10|SigmoidLayer","efficientnet-b0|model|blocks_10|MulLayer/in2");
net = connectLayers(net,"efficientnet-b0|model|blocks_10|tpu_batch_normalization_1|FusedBatchNorm","efficientnet-b0|model|blocks_10|SigmoidLayer_1");
net = connectLayers(net,"efficientnet-b0|model|blocks_10|tpu_batch_normalization_1|FusedBatchNorm","efficientnet-b0|model|blocks_10|MulLayer_1/in1");
net = connectLayers(net,"efficientnet-b0|model|blocks_10|SigmoidLayer_1","efficientnet-b0|model|blocks_10|MulLayer_1/in2");
net = connectLayers(net,"efficientnet-b0|model|blocks_10|MulLayer_1","efficientnet-b0|model|blocks_10|se|GlobAvgPool");
net = connectLayers(net,"efficientnet-b0|model|blocks_10|MulLayer_1","efficientnet-b0|model|blocks_10|se|MulLayer_1/in2");
net = connectLayers(net,"Conv__393","efficientnet-b0|model|blocks_10|se|SigmoidLayer");
net = connectLayers(net,"Conv__393","efficientnet-b0|model|blocks_10|se|MulLayer/in1");
net = connectLayers(net,"efficientnet-b0|model|blocks_10|se|SigmoidLayer","efficientnet-b0|model|blocks_10|se|MulLayer/in2");
net = connectLayers(net,"efficientnet-b0|model|blocks_10|se|SigmoidLayer_1","efficientnet-b0|model|blocks_10|se|MulLayer_1/in1");
net = connectLayers(net,"efficientnet-b0|model|blocks_10|tpu_batch_normalization_2|FusedBatchNorm","efficientnet-b0|model|blocks_10|Add/in1");
net = connectLayers(net,"efficientnet-b0|model|blocks_11|tpu_batch_normalization|FusedBatchNorm","efficientnet-b0|model|blocks_11|SigmoidLayer");
net = connectLayers(net,"efficientnet-b0|model|blocks_11|tpu_batch_normalization|FusedBatchNorm","efficientnet-b0|model|blocks_11|MulLayer/in1");
net = connectLayers(net,"efficientnet-b0|model|blocks_11|SigmoidLayer","efficientnet-b0|model|blocks_11|MulLayer/in2");
net = connectLayers(net,"efficientnet-b0|model|blocks_11|tpu_batch_normalization_1|FusedBatchNorm","efficientnet-b0|model|blocks_11|SigmoidLayer_1");
net = connectLayers(net,"efficientnet-b0|model|blocks_11|tpu_batch_normalization_1|FusedBatchNorm","efficientnet-b0|model|blocks_11|MulLayer_1/in1");
net = connectLayers(net,"efficientnet-b0|model|blocks_11|SigmoidLayer_1","efficientnet-b0|model|blocks_11|MulLayer_1/in2");
net = connectLayers(net,"efficientnet-b0|model|blocks_11|MulLayer_1","efficientnet-b0|model|blocks_11|se|GlobAvgPool");
net = connectLayers(net,"efficientnet-b0|model|blocks_11|MulLayer_1","efficientnet-b0|model|blocks_11|se|MulLayer_1/in2");
net = connectLayers(net,"Conv__401","efficientnet-b0|model|blocks_11|se|SigmoidLayer");
net = connectLayers(net,"Conv__401","efficientnet-b0|model|blocks_11|se|MulLayer/in1");
net = connectLayers(net,"efficientnet-b0|model|blocks_11|se|SigmoidLayer","efficientnet-b0|model|blocks_11|se|MulLayer/in2");
net = connectLayers(net,"efficientnet-b0|model|blocks_11|se|SigmoidLayer_1","efficientnet-b0|model|blocks_11|se|MulLayer_1/in1");
net = connectLayers(net,"efficientnet-b0|model|blocks_11|tpu_batch_normalization_2|FusedBatchNorm","efficientnet-b0|model|blocks_12|conv2d|Conv2D");
net = connectLayers(net,"efficientnet-b0|model|blocks_11|tpu_batch_normalization_2|FusedBatchNorm","efficientnet-b0|model|blocks_12|Add/in2");
net = connectLayers(net,"efficientnet-b0|model|blocks_12|tpu_batch_normalization|FusedBatchNorm","efficientnet-b0|model|blocks_12|SigmoidLayer");
net = connectLayers(net,"efficientnet-b0|model|blocks_12|tpu_batch_normalization|FusedBatchNorm","efficientnet-b0|model|blocks_12|MulLayer/in1");
net = connectLayers(net,"efficientnet-b0|model|blocks_12|SigmoidLayer","efficientnet-b0|model|blocks_12|MulLayer/in2");
net = connectLayers(net,"efficientnet-b0|model|blocks_12|tpu_batch_normalization_1|FusedBatchNorm","efficientnet-b0|model|blocks_12|SigmoidLayer_1");
net = connectLayers(net,"efficientnet-b0|model|blocks_12|tpu_batch_normalization_1|FusedBatchNorm","efficientnet-b0|model|blocks_12|MulLayer_1/in1");
net = connectLayers(net,"efficientnet-b0|model|blocks_12|SigmoidLayer_1","efficientnet-b0|model|blocks_12|MulLayer_1/in2");
net = connectLayers(net,"efficientnet-b0|model|blocks_12|MulLayer_1","efficientnet-b0|model|blocks_12|se|GlobAvgPool");
net = connectLayers(net,"efficientnet-b0|model|blocks_12|MulLayer_1","efficientnet-b0|model|blocks_12|se|MulLayer_1/in2");
net = connectLayers(net,"Conv__411","efficientnet-b0|model|blocks_12|se|SigmoidLayer");
net = connectLayers(net,"Conv__411","efficientnet-b0|model|blocks_12|se|MulLayer/in1");
net = connectLayers(net,"efficientnet-b0|model|blocks_12|se|SigmoidLayer","efficientnet-b0|model|blocks_12|se|MulLayer/in2");
net = connectLayers(net,"efficientnet-b0|model|blocks_12|se|SigmoidLayer_1","efficientnet-b0|model|blocks_12|se|MulLayer_1/in1");
net = connectLayers(net,"efficientnet-b0|model|blocks_12|tpu_batch_normalization_2|FusedBatchNorm","efficientnet-b0|model|blocks_12|Add/in1");
net = connectLayers(net,"efficientnet-b0|model|blocks_12|Add","efficientnet-b0|model|blocks_13|conv2d|Conv2D");
net = connectLayers(net,"efficientnet-b0|model|blocks_12|Add","efficientnet-b0|model|blocks_13|Add/in2");
net = connectLayers(net,"efficientnet-b0|model|blocks_13|tpu_batch_normalization|FusedBatchNorm","efficientnet-b0|model|blocks_13|SigmoidLayer");
net = connectLayers(net,"efficientnet-b0|model|blocks_13|tpu_batch_normalization|FusedBatchNorm","efficientnet-b0|model|blocks_13|MulLayer/in1");
net = connectLayers(net,"efficientnet-b0|model|blocks_13|SigmoidLayer","efficientnet-b0|model|blocks_13|MulLayer/in2");
net = connectLayers(net,"efficientnet-b0|model|blocks_13|tpu_batch_normalization_1|FusedBatchNorm","efficientnet-b0|model|blocks_13|SigmoidLayer_1");
net = connectLayers(net,"efficientnet-b0|model|blocks_13|tpu_batch_normalization_1|FusedBatchNorm","efficientnet-b0|model|blocks_13|MulLayer_1/in1");
net = connectLayers(net,"efficientnet-b0|model|blocks_13|SigmoidLayer_1","efficientnet-b0|model|blocks_13|MulLayer_1/in2");
net = connectLayers(net,"efficientnet-b0|model|blocks_13|MulLayer_1","efficientnet-b0|model|blocks_13|se|GlobAvgPool");
net = connectLayers(net,"efficientnet-b0|model|blocks_13|MulLayer_1","efficientnet-b0|model|blocks_13|se|MulLayer_1/in2");
net = connectLayers(net,"Conv__421","efficientnet-b0|model|blocks_13|se|SigmoidLayer");
net = connectLayers(net,"Conv__421","efficientnet-b0|model|blocks_13|se|MulLayer/in1");
net = connectLayers(net,"efficientnet-b0|model|blocks_13|se|SigmoidLayer","efficientnet-b0|model|blocks_13|se|MulLayer/in2");
net = connectLayers(net,"efficientnet-b0|model|blocks_13|se|SigmoidLayer_1","efficientnet-b0|model|blocks_13|se|MulLayer_1/in1");
net = connectLayers(net,"efficientnet-b0|model|blocks_13|tpu_batch_normalization_2|FusedBatchNorm","efficientnet-b0|model|blocks_13|Add/in1");
net = connectLayers(net,"efficientnet-b0|model|blocks_13|Add","efficientnet-b0|model|blocks_14|conv2d|Conv2D");
net = connectLayers(net,"efficientnet-b0|model|blocks_13|Add","efficientnet-b0|model|blocks_14|Add/in2");
net = connectLayers(net,"efficientnet-b0|model|blocks_14|tpu_batch_normalization|FusedBatchNorm","efficientnet-b0|model|blocks_14|SigmoidLayer");
net = connectLayers(net,"efficientnet-b0|model|blocks_14|tpu_batch_normalization|FusedBatchNorm","efficientnet-b0|model|blocks_14|MulLayer/in1");
net = connectLayers(net,"efficientnet-b0|model|blocks_14|SigmoidLayer","efficientnet-b0|model|blocks_14|MulLayer/in2");
net = connectLayers(net,"efficientnet-b0|model|blocks_14|tpu_batch_normalization_1|FusedBatchNorm","efficientnet-b0|model|blocks_14|SigmoidLayer_1");
net = connectLayers(net,"efficientnet-b0|model|blocks_14|tpu_batch_normalization_1|FusedBatchNorm","efficientnet-b0|model|blocks_14|MulLayer_1/in1");
net = connectLayers(net,"efficientnet-b0|model|blocks_14|SigmoidLayer_1","efficientnet-b0|model|blocks_14|MulLayer_1/in2");
net = connectLayers(net,"efficientnet-b0|model|blocks_14|MulLayer_1","efficientnet-b0|model|blocks_14|se|GlobAvgPool");
net = connectLayers(net,"efficientnet-b0|model|blocks_14|MulLayer_1","efficientnet-b0|model|blocks_14|se|MulLayer_1/in2");
net = connectLayers(net,"Conv__431","efficientnet-b0|model|blocks_14|se|SigmoidLayer");
net = connectLayers(net,"Conv__431","efficientnet-b0|model|blocks_14|se|MulLayer/in1");
net = connectLayers(net,"efficientnet-b0|model|blocks_14|se|SigmoidLayer","efficientnet-b0|model|blocks_14|se|MulLayer/in2");
net = connectLayers(net,"efficientnet-b0|model|blocks_14|se|SigmoidLayer_1","efficientnet-b0|model|blocks_14|se|MulLayer_1/in1");
net = connectLayers(net,"efficientnet-b0|model|blocks_14|tpu_batch_normalization_2|FusedBatchNorm","efficientnet-b0|model|blocks_14|Add/in1");
net = connectLayers(net,"efficientnet-b0|model|blocks_15|tpu_batch_normalization|FusedBatchNorm","efficientnet-b0|model|blocks_15|SigmoidLayer");
net = connectLayers(net,"efficientnet-b0|model|blocks_15|tpu_batch_normalization|FusedBatchNorm","efficientnet-b0|model|blocks_15|MulLayer/in1");
net = connectLayers(net,"efficientnet-b0|model|blocks_15|SigmoidLayer","efficientnet-b0|model|blocks_15|MulLayer/in2");
net = connectLayers(net,"efficientnet-b0|model|blocks_15|tpu_batch_normalization_1|FusedBatchNorm","efficientnet-b0|model|blocks_15|SigmoidLayer_1");
net = connectLayers(net,"efficientnet-b0|model|blocks_15|tpu_batch_normalization_1|FusedBatchNorm","efficientnet-b0|model|blocks_15|MulLayer_1/in1");
net = connectLayers(net,"efficientnet-b0|model|blocks_15|SigmoidLayer_1","efficientnet-b0|model|blocks_15|MulLayer_1/in2");
net = connectLayers(net,"efficientnet-b0|model|blocks_15|MulLayer_1","efficientnet-b0|model|blocks_15|se|GlobAvgPool");
net = connectLayers(net,"efficientnet-b0|model|blocks_15|MulLayer_1","efficientnet-b0|model|blocks_15|se|MulLayer_1/in2");
net = connectLayers(net,"Conv__439","efficientnet-b0|model|blocks_15|se|SigmoidLayer");
net = connectLayers(net,"Conv__439","efficientnet-b0|model|blocks_15|se|MulLayer/in1");
net = connectLayers(net,"efficientnet-b0|model|blocks_15|se|SigmoidLayer","efficientnet-b0|model|blocks_15|se|MulLayer/in2");
net = connectLayers(net,"efficientnet-b0|model|blocks_15|se|SigmoidLayer_1","efficientnet-b0|model|blocks_15|se|MulLayer_1/in1");
net = connectLayers(net,"efficientnet-b0|model|head|tpu_batch_normalization|FusedBatchNorm","efficientnet-b0|model|head|SigmoidLayer");
net = connectLayers(net,"efficientnet-b0|model|head|tpu_batch_normalization|FusedBatchNorm","efficientnet-b0|model|head|MulLayer/in1");
net = connectLayers(net,"efficientnet-b0|model|head|SigmoidLayer","efficientnet-b0|model|head|MulLayer/in2");
net = initialize(net);
plot(net);

 1.3 复合缩放理论与EfficientNet的扩展模型(B1~B7)

1.3.1 单一维度扩展模型存在的问题

        有许多方法可以根据不同的资源约束来缩放ConvNet:ResNet可以通过调整网络深度(#layers)来缩小(例如ResNet-18)或放大(例如ResNet-200),而WideResNet和MobileNets则可以通过网络宽度(#channels)来缩放。此外,人们也普遍认识到,更大的输入图像尺寸有助于提高准确率,但会增加FLOPS开销。尽管先前的研究已经表明网络深度和宽度对于ConvNet的表达能力都很重要,但如何有效地缩放ConvNet以实现更好的效率和准确率,仍然是一个开放问题。

        直观地,对于更高分辨率的图像,我们应该增加网络深度,以便更大的感受野可以帮助捕获图像中覆盖更多像素的相似特征;相应地,当分辨率更高时,我们也应该增加网络宽度,以便在更多的像素中捕获更细粒度的模式。

图:Tan等人论文中的示意图,复合缩放需要在模型的多个规模维度(宽度、深度和分辨率)之间进行权衡

所以问题的主要困难在于:最优的d、w、r相互依赖,并且在不同的资源约束下取值会发生变化。由于这个困难,传统方法大多只在单一维度上缩放ConvNet:深度、宽度或分辨率。相关研究表明,扩展网络宽度、深度或分辨率中的任何一个维度都可以提高准确率,但模型越大,准确率增益越小。也就是说,单独对某一维度进行不断扩展,对计算资源来说并不高效,我们需要一种更科学的方法来指导我们合理地扩展模型。

1.3.2 复合缩放方法的构建 

我们用数学建模的思路来描述上述的优化问题:

在固定的计算资源限制下(计算量提高一倍),网络的最优深度、宽度和输入分辨率与计算量的关系可写为(式1):

Depth:\ d=\alpha ^\phi

Width:\ w=\beta ^\phi

Resolution:\ r=\gamma ^\phi

subject\ to:\ \alpha \cdot \beta ^2 \cdot \gamma^2 \approx 2 \quad (\alpha \geq 1,\ \beta \geq 1,\ \gamma \geq 1)

其中复合系数 \phi 用于控制可用资源的倍数(此处设定为1,对应计算量约增加一倍),而 \alpha、\beta、\gamma 则是决定如何把这些额外资源分别分配给网络深度、宽度和分辨率的常数。
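作为一个简单的数值验证,可以检查搜索得到的 α=1.2、β=1.1、γ=1.15 是否近似满足上述约束。由于卷积网络的FLOPS大致与 d·w²·r² 成正比,该约束意味着 φ 每增加1,总FLOPS约翻一倍。下面用一小段Python脚本示意(本文主体为MATLAB,此处仅作演示):

```python
alpha, beta, gamma = 1.2, 1.1, 1.15  # Tan等人针对EfficientNet-b0搜索到的常数

# 约束:alpha * beta^2 * gamma^2 ≈ 2(FLOPS大致正比于 d * w^2 * r^2)
flops_ratio = alpha * beta**2 * gamma**2
print(f"alpha*beta^2*gamma^2 = {flops_ratio:.4f}")  # 约1.92,接近2

# phi 每增加 1,FLOPS 约乘以该比值(即约乘以2)
for phi in (1, 2, 3):
    d, w, r = alpha**phi, beta**phi, gamma**phi
    print(f"phi={phi}: FLOPS倍率 ≈ {d * w**2 * r**2:.3f}, 2^phi = {2**phi}")
```

可以看到约束左端约为1.92,与2十分接近,这正是"每级缩放约消耗两倍计算量"的来源。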

这一问题的描述方程可写为(式2):

\max_{d,w,r}\ Accuracy(Net(d,w,r))

subject\ to:\ Net(d,w,r)=\bigodot _{i=1...s}\widehat{F}_{i}^{\,d\cdot \widehat{L}_i}\big(X_{\langle r\cdot \widehat{H}_i,\; r\cdot \widehat{W}_i,\; w\cdot \widehat{C}_i\rangle}\big)

Memory(Net)\leq target\ memory

FLOPS(Net)\leq target\ FLOPS

\bigodot _{i=1...s}\widehat{F}_{i} 表示EfficientNet-b0中一系列层(stage)的串联组合操作;d、w、r分别是用于缩放网络深度、宽度和分辨率的倍率;F_i、L_i、H_i、W_i、C_i是基线网络(EfficientNet-b0)中预定义的参数,符号上方的尖帽(\widehat)特指这些参数取自基线网络EfficientNet-b0。

其中:

  • F_i表示基线网络中第i个阶段(stage)的操作符(operator),即其对应的层结构;
  • L_i表示基线网络中第i个阶段中该层结构的重复次数,与深度放大倍率d关联;
  • H_i表示基线网络中第i层输入张量的高度(height),与分辨率放大倍率r关联;
  • W_i表示基线网络中第i层输入张量的宽度(width),与分辨率放大倍率r关联;
  • C_i表示基线网络中第i层输入张量的通道数(channel dimension),与宽度放大倍率w关联。

从基线模型EfficientNet-B0出发,基于复合缩放方法数学思路,通过两步来对其进行扩展:

第一步:首先设定\phi=1(假设可用资源增加约一倍),并根据式1和式2对α(深度)、β(宽度)、γ(分辨率)进行小范围的网格搜索。特别地,在式1的约束条件下,Tan等人发现对于EfficientNet-B0而言,最佳的α、β、γ值分别为1.2、1.1和1.15。

第二步:随后,将α、β、γ固定为上述常数(1.2、1.1和1.15),并使用式1,通过取不同的\phi值来扩展基线网络,从而得到EfficientNet-B1至B7。

注意:

  • α、β、γ取值1.2、1.1、1.15是作者针对EfficientNet-b0网格搜索得到的优化值,并不适用于所有模型;对于其他模型,α、β、γ的最优值并不相同。
  • 此处的α、β、γ是在ImageNet数据集上迭代得到的经验最优值,而非理论推导的最优值,因此需要注意其在不同数据集或使用场景下的普适性,换用场景时这些值可能需要重新搜索。
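原论文并未公开网格搜索的具体实现。下面给出一个示意性的Python小框架:其中候选网格与 evaluate_accuracy 均为假设的占位(真实流程需对每个可行组合以 φ=1 缩放 b0,并在 ImageNet 上训练、评估,取准确率最高者):

```python
import itertools

# 候选值网格(示意,原文未公开具体网格)
alphas = [1.0, 1.1, 1.2, 1.3, 1.4]
betas  = [1.0, 1.05, 1.1, 1.15, 1.2]
gammas = [1.0, 1.05, 1.1, 1.15, 1.2]

def feasible(a, b, g, tol=0.1):
    # 约束:alpha * beta^2 * gamma^2 ≈ 2,且各系数 >= 1
    return abs(a * b**2 * g**2 - 2) <= tol

candidates = [(a, b, g)
              for a, b, g in itertools.product(alphas, betas, gammas)
              if feasible(a, b, g)]

# evaluate_accuracy 仅为占位的假设函数:
# 真实实现需以 phi=1 缩放 b0 并在 ImageNet 上完整训练/评估
def evaluate_accuracy(a, b, g):
    return 0.0

best = max(candidates, key=lambda abg: evaluate_accuracy(*abg))
print(len(candidates), (1.2, 1.1, 1.15) in candidates)
```

可以验证,论文报告的 (1.2, 1.1, 1.15) 确实落在该约束的可行集内。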

1.3.3 将Efficient-b0扩展到b1~b7

基于EfficientNet-b0和搜索到的优化缩放参数α(深度)、β(宽度)、γ(分辨率),通过复合系数\phi的指数放大(如\alpha^\phi)即可计算出EfficientNet各扩展模型(b1~b7)的缩放系数。

对于EfficientNet-b1~b3,\phi的取值分别为0.5、1、2,三组缩放倍率的计算公式为\alpha^\phi、\beta^\phi、\gamma^\phi。

如EfficientNet-b1,其分辨率为224\times 1.15^{0.5}\approx 240.21,舍入得240;网络宽度缩放倍率为1.1^{0.5}\approx 1.049,舍入得1.0;网络深度缩放倍率为1.2^{0.5}\approx 1.095,舍入得1.1。故EfficientNet-b1的输入分辨率、宽度和深度缩放倍率分别为240\times 240、1.0和1.1。

对于EfficientNet-b4~b7,\phi的取值依次为4、5、6、7;但深度缩放倍率的计算公式修正为\alpha^{\phi-(\phi +1)\times 0.1},而分辨率的缩放则取\gamma^\phi附近的某个合适数值。

如EfficientNet-b7,其分辨率为224\times 1.15^{7}\approx 595.84,取600;网络宽度缩放倍率为1.1^{7}\approx 1.949,舍入得2.0;网络深度缩放倍率为1.2^{7-(7+1)\times 0.1}\approx 3.097,舍入得3.1。故EfficientNet-b7的输入分辨率、宽度和深度缩放倍率分别为600\times 600、2.0和3.1。
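上述 b1 与 b7 的推算过程可按本文给出的公式写成一小段Python示意脚本(注意:b4~b7 的深度指数修正项以及"取附近合适分辨率"均为对官方系数表的经验拟合,并非严格的舍入规则):

```python
alpha, beta, gamma = 1.2, 1.1, 1.15  # 针对EfficientNet-b0搜索到的常数

def scale_coeffs(phi):
    """按本文归纳的公式,推算分辨率、宽度、深度三个缩放量。"""
    resolution = 224 * gamma**phi   # b4~b7 实际取该值附近的合适分辨率(如b4取380)
    width = beta**phi
    if phi >= 4:                    # b4~b7 使用修正后的深度指数
        depth = alpha**(phi - (phi + 1) * 0.1)
    else:
        depth = alpha**phi
    return resolution, width, depth

for name, phi in [("b1", 0.5), ("b2", 1), ("b3", 2),
                  ("b4", 4), ("b5", 5), ("b6", 6), ("b7", 7)]:
    r, w, d = scale_coeffs(phi)
    print(f"{name}: resolution≈{r:.1f}, width≈{w:.3f}, depth≈{d:.3f}")
```

运行后各行数值经舍入即与下表中的官方系数基本吻合。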

下列是b0~b7的放大系数表:

Model  Input resolution  width coefficient  depth coefficient
b0     224               1.0                1.0
b1     240               1.0                1.1
b2     260               1.1                1.2
b3     300               1.2                1.4
b4     380               1.4                1.8
b5     456               1.6                2.2
b6     528               1.8                2.6
b7     600               2.0                3.1

Input resolution代表训练网络时输入网络的图像大小,如224就代表输入分辨率大小为224*224

width coefficient和depth coefficient分别代表channel维度上的倍率因子(即各层卷积核数量,filters)和网络深度上的倍率因子(即各stage中层的重复次数)。

如b2处的width coefficient为1.1,即卷积核数量扩大到原来的1.1倍:原先为32,则缩放后为32\times 1.1=35.2,再取整到最接近的8的倍数,得到32;又如b6的width coefficient为1.8,32\times 1.8=57.6,取整到最接近的8的倍数即56。

depth coefficient负责加深网络:如b4的depth coefficient为1.8,那么相较于b0,对应Stage中MBConv模块的重复层数将增加到原先的1.8倍,并向上取整。
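通道数与层数的取整并非简单的四舍五入:参考官方TensorFlow实现中 round_filters / round_repeats 的逻辑,通道数会对齐到8的倍数(且不低于缩放值的90%),层数则向上取整。下面用Python复刻这一逻辑作为示意:

```python
import math

def round_filters(filters, width_coefficient, divisor=8):
    """通道数乘以宽度倍率后,对齐到最接近的 divisor(默认8)的倍数。"""
    f = filters * width_coefficient
    new_f = max(divisor, int(f + divisor / 2) // divisor * divisor)
    if new_f < 0.9 * f:   # 不允许低于缩放值的90%,否则再补一个 divisor
        new_f += divisor
    return int(new_f)

def round_repeats(repeats, depth_coefficient):
    """某个 stage 中 MBConv 的重复次数乘以深度倍率后向上取整。"""
    return int(math.ceil(depth_coefficient * repeats))

print(round_filters(32, 1.1))   # b2: 32*1.1=35.2 -> 最近的8的倍数为32
print(round_filters(32, 1.8))   # b6: 32*1.8=57.6 -> 56
print(round_repeats(4, 1.8))    # b4: ceil(4*1.8)=ceil(7.2) -> 8
```

这也解释了为何部分缩放后的通道数与"倍率直接相乘"的结果略有出入。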
