Swin Transformer Study Notes (with Code)

Paper: https://arxiv.org/pdf/2103.14030.pdf

Code: https://github.com/microsoft/Swin-Transformer (official implementation of "Swin Transformer: Hierarchical Vision Transformer using Shifted Windows")

1. What Is It?

Swin Transformer is a deep learning backbone for vision tasks such as image classification, proposed in 2021, and it achieved excellent results. Its network structure is based on the Transformer architecture, but unlike the conventional Transformer it introduces a new hierarchical mechanism for handling large images.

The core idea of Swin Transformer is to split the image into small patches and apply Transformer blocks over these patches. This partitioning allows Swin Transformer to process large images without being constrained by the computational complexity of attention in the conventional Transformer.

The Swin Transformer network consists of multiple stages, each containing several window-based layers. Within each layer, patches interact through self-attention inside local windows, and global information is propagated through shifted (cross-window) attention. This combination of window partitioning and cross-window interaction lets Swin Transformer capture features at different scales.

Experiments on large-scale image classification show that Swin Transformer performs well in both accuracy and computational efficiency. Its largest variant (Swin-L) reaches 87.3% top-1 accuracy on ImageNet-1K, while having lower computational complexity than comparable image classification models.

In summary, Swin Transformer is a Transformer-based image classification model that, through window partitioning and shifted-window attention, can handle large images while achieving good accuracy and computational efficiency.

2. Why?

The original ViT model has several clear problems:

  • Modeling capacity: forcibly splitting the image into patches breaks the original neighborhood structure, and the model no longer has the translation invariance that convolutions provide.
  • Complexity: the original ViT performs global self-attention at every layer. If the patch size is kept fixed, the number of patches grows with the image size; the number of patches equals the number of tokens fed into the Transformer, and the Transformer's time complexity is O(n^2) in the number of tokens.
  • Usability: because the embedding (a fully connected layer) is tied to the image size, pre-training, fine-tuning and inference must all use images of exactly the same size.

Compared with Vision Transformer, Swin Transformer makes several improvements:

  • Hierarchical construction: it uses hierarchical feature maps similar to a convolutional network, with feature maps downsampled by 4×, 8× and 16× relative to the input image. Such a backbone is convenient for building downstream tasks such as object detection and instance segmentation. In contrast, Vision Transformer downsamples by 16× right away and keeps that downsampling rate for all later feature maps.
  • Window partitioning: Swin Transformer introduces Windows Multi-Head Self-Attention (W-MSA). At the 4× and 8× downsampling stages, for example, the feature map is partitioned into multiple non-overlapping windows, and Multi-Head Self-Attention is computed only within each window. Compared with applying Multi-Head Self-Attention over the entire feature map as in Vision Transformer, this greatly reduces computation, especially when the early feature maps are large.
  • Shifted windows: window partitioning reduces computation but blocks information exchange between different windows, so the paper further proposes Shifted Windows Multi-Head Self-Attention (SW-MSA), whose purpose is to pass information between neighboring windows.

3. How Does It Work?

3.1 Network Architecture

The network consists of several stages:

  1. Embedding stage (stage 1). The image is split into 4×4 patches, and a linear projection turns each patch into an embedding vector; this step is the same as in ViT. Note, however, that these patches are much smaller than the 16×16 patches typically used in ViT.
  2. Several stages built from Swin Transformer blocks (stages 2-4). Mimicking the structure of classic convolutional backbones, each stage reduces the feature map (in ViT terms, the number of patches/tokens) to a quarter of its previous size, simply by merging each 2×2 group of patches into one. At the same time, the standard Transformer block is replaced by the Swin Transformer block, whose main changes are listed below (see the shape sketch after this list):
    1. Window self-attention over M×M windows replaces global self-attention. Since self-attention has O(n^2) time complexity, restricting which elements attend to each other turns the computation from quadratic in the number of patches into linear in the number of patches.
    2. A diagonal shift makes the window layout differ between consecutive layers, so a patch gets the chance to interact with patches from different windows. A mask trick is used to keep this shifted self-attention computation efficient.
    3. A relative position bias is added; comparisons show this works much better than adding an absolute position embedding.
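A quick shape walkthrough may help. This is a minimal sketch, assuming a 224×224 input and the default Swin-T widths (embedding dimension 96, doubled at each stage); it only prints shapes and is not part of the official code.

# Token counts and channel widths through the four stages of Swin-T for a 224x224 input
img_size = 224
patch_size = 4
embed_dim = 96
H = W = img_size // patch_size              # 56x56 patches after the 4x4 patch embedding
for stage in range(4):
    tokens = H * W
    channels = embed_dim * (2 ** stage)
    print(f"stage {stage + 1}: {H}x{W} patches ({tokens} tokens), dim = {channels}")
    if stage < 3:
        H, W = H // 2, W // 2               # Patch Merging halves each spatial dimension

This prints 56×56/3136 tokens at dim 96, then 28×28/784 at 192, 14×14/196 at 384, and 7×7/49 at 768, matching the 4×, 8×, 16× and 32× downsampling hierarchy.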

3.2 Patch Merging Module

The Patch Merging module takes an H×W×C feature map, groups each 2×2 neighborhood of patches and concatenates them along the channel dimension to form an H/2×W/2×4C tensor; it then applies Layer Normalization and a Linear layer to produce an H/2×W/2×2C output, completing the downsampling of the feature map. The spatial size shrinks to 1/2 of the original while the channel count doubles.
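A minimal PyTorch sketch of this merging step is shown below; it is illustrative only, with made-up tensor sizes, and the official PatchMerging module appears in section 3.7.

import torch
import torch.nn as nn

B, H, W, C = 1, 8, 8, 96
x = torch.randn(B, H, W, C)
# take the four interleaved pixels of every 2x2 neighborhood and concatenate along channels
x0 = x[:, 0::2, 0::2, :]
x1 = x[:, 1::2, 0::2, :]
x2 = x[:, 0::2, 1::2, :]
x3 = x[:, 1::2, 1::2, :]
merged = torch.cat([x0, x1, x2, x3], dim=-1)        # B, H/2, W/2, 4C
merged = nn.LayerNorm(4 * C)(merged)                # normalize over the channel dimension
out = nn.Linear(4 * C, 2 * C, bias=False)(merged)   # B, H/2, W/2, 2C
print(out.shape)                                    # torch.Size([1, 4, 4, 192])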


3.3 W-MSA Module

In the ViT network, MSA lets every token compute attention (inner products) with all other tokens, so each token gathers rich global information. However, having every token exchange information with every other token is extremely expensive, which makes the network inefficient. Swin Transformer therefore splits MSA into multiple fixed windows to form W-MSA: tokens inside a window attend only to the other tokens in the same window, which greatly reduces the amount of computation and improves efficiency. The computational costs of MSA and W-MSA (from the paper) are:

Ω(MSA) = 4hwC^2 + 2(hw)^2·C
Ω(W-MSA) = 4hwC^2 + 2M^2·hw·C

where h, w and C are the height, width and channel dimension of the feature map, and M is the window size. Assuming h = w = 112, M = 7 and C = 128, one can compute that W-MSA saves 40,124,743,680 FLOPs.
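The quoted saving can be reproduced with a few lines; this is a sketch based only on the two formulas above, not on profiling real kernels.

# Compare the MSA and W-MSA costs for h = w = 112, M = 7, C = 128
h = w = 112
M = 7
C = 128
msa = 4 * h * w * C ** 2 + 2 * (h * w) ** 2 * C
wmsa = 4 * h * w * C ** 2 + 2 * (M ** 2) * h * w * C
print(msa - wmsa)   # 40124743680 FLOPs saved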

3.4 SW-MSA Module

Although W-MSA reduces computation by partitioning the feature map into windows, the windows cannot exchange information with each other, so the effective "receptive field" shrinks; the model cannot obtain more global information, which hurts accuracy.

To enable information exchange between windows, we can shift the windows so that they cover different sets of tokens and then run W-MSA again; combining the results of the two W-MSA passes mixes information from tokens that belonged to different windows, achieving communication across windows.

The shifted-window W-MSA forms the SW-MSA module. Its windows are shifted toward the bottom-right by ⌊M/2⌋ patches relative to W-MSA, which produces 9 regions of unequal size. A cyclic shift then translates and re-assembles these 9 regions into 4 windows of the same size as in W-MSA; a masked MSA computes attention on these 4 stitched windows with the appropriate masks, and finally a reverse cyclic shift moves the patches back to their original positions. Through SW-MSA, the shifted windows perform MSA while exchanging information across window boundaries, which indirectly enlarges the network's "receptive field" and improves the use of information.
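A minimal sketch of the cyclic shift is given below; the window_partition helper and the mask construction are shown in full in section 3.7, so here only the roll/unroll idea is illustrated, with made-up tensor sizes.

import torch

B, H, W, C = 1, 8, 8, 96
window_size, shift_size = 4, 2
x = torch.randn(B, H, W, C)
# roll the whole feature map up and to the left, so the new windows
# straddle what used to be different windows
shifted = torch.roll(x, shifts=(-shift_size, -shift_size), dims=(1, 2))
# ... window_partition + masked W-MSA would run here ...
# afterwards the shift is undone so every token returns to its original position
restored = torch.roll(shifted, shifts=(shift_size, shift_size), dims=(1, 2))
print(torch.equal(x, restored))   # True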

3.5 Relative Position Bias Mechanism

Swin Transformer also introduces a relative position bias into the attention computation to improve overall accuracy; with this mechanism, reported accuracy gains range roughly from 1.2% to 2.3% depending on the task. Taking a 2×2 feature map as an example, we first number the positions to obtain each patch's absolute position index. Then, for each patch, we compute its relative position with respect to every other patch by subtracting the other patch's absolute index from its own, which gives each patch a relative position index matrix. Flattening and concatenating these per-patch matrices yields the relative position index matrix of the whole feature map.

Swin Transformer does not use the relative position index matrix in its 2D-tuple form directly; instead, it maps each 2D relative index to a one-dimensional relative position bias, forming the bias matrix. The mapping is: 1. add M-1 to both the relative row index and the relative column index; 2. multiply the row index (only) by 2M-1; 3. add the row and column indices together, and use the result to look up the relative position bias table, which gives the final relative position bias B.
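The mapping can be checked with a tiny 2×2 window (M = 2); this sketch mirrors the index computation used later in WindowAttention.

import torch

M = 2  # window size
coords = torch.stack(torch.meshgrid([torch.arange(M), torch.arange(M)]))  # 2, M, M
coords_flatten = torch.flatten(coords, 1)                                 # 2, M*M
relative = coords_flatten[:, :, None] - coords_flatten[:, None, :]        # 2, M*M, M*M
relative = relative.permute(1, 2, 0).contiguous()                         # M*M, M*M, 2
relative[:, :, 0] += M - 1          # step 1: shift the row index to start from 0
relative[:, :, 1] += M - 1          #         shift the column index to start from 0
relative[:, :, 0] *= 2 * M - 1      # step 2: scale the row index only
index = relative.sum(-1)            # step 3: add row and column indices
print(index)   # values in [0, (2M-1)^2 - 1], used to index the bias table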

With the relative position bias, the attention is computed as

Attention(Q, K, V) = SoftMax(QK^T / sqrt(d) + B) V

where d is the query/key dimension and B is the relative position bias obtained above.

3.6 Detailed Swin Transformer Configurations

The paper defines four variants of the Swin Transformer, each with a different number of stacked blocks and a different channel width, to match tasks of different scale and complexity. In general, a larger variant has more parameters, captures more information and predicts more accurately, but its computational cost and training time also grow, so the variant should be chosen according to the task requirements and available hardware. The configurations of the four variants are listed below.
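The standard variants as defined in the paper are:

  • Swin-T: embedding dim C = 96, block depths (2, 2, 6, 2), attention heads (3, 6, 12, 24)
  • Swin-S: C = 96, depths (2, 2, 18, 2), heads (3, 6, 12, 24)
  • Swin-B: C = 128, depths (2, 2, 18, 2), heads (4, 8, 16, 32)
  • Swin-L: C = 192, depths (2, 2, 18, 2), heads (6, 12, 24, 48)

All four use window size M = 7 and MLP ratio 4 by default.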

3.7 Code Implementation

# --------------------------------------------------------
# Swin Transformer
# Copyright (c) 2021 Microsoft
# Licensed under The MIT License [see LICENSE for details]
# Written by Ze Liu
# --------------------------------------------------------

import torch
import torch.nn as nn
import torch.utils.checkpoint as checkpoint
from timm.models.layers import DropPath, to_2tuple, trunc_normal_

try:
    import os, sys

    kernel_path = os.path.abspath(os.path.join('..'))
    sys.path.append(kernel_path)
    from kernels.window_process.window_process import WindowProcess, WindowProcessReverse

except:
    WindowProcess = None
    WindowProcessReverse = None
    print("[Warning] Fused window process have not been installed. Please refer to get_started.md for installation.")


class Mlp(nn.Module):
    def __init__(self, in_features, hidden_features=None, out_features=None, act_layer=nn.GELU, drop=0.):
        super().__init__()
        out_features = out_features or in_features
        hidden_features = hidden_features or in_features
        self.fc1 = nn.Linear(in_features, hidden_features)
        self.act = act_layer()
        self.fc2 = nn.Linear(hidden_features, out_features)
        self.drop = nn.Dropout(drop)

    def forward(self, x):
        x = self.fc1(x)
        x = self.act(x)
        x = self.drop(x)
        x = self.fc2(x)
        x = self.drop(x)
        return x


def window_partition(x, window_size):
    """
    Args:
        x: (B, H, W, C)
        window_size (int): window size

    Returns:
        windows: (num_windows*B, window_size, window_size, C)
    """
    B, H, W, C = x.shape
    x = x.view(B, H // window_size, window_size, W // window_size, window_size, C)
    windows = x.permute(0, 1, 3, 2, 4, 5).contiguous().view(-1, window_size, window_size, C)
    return windows


def window_reverse(windows, window_size, H, W):
    """
    Args:
        windows: (num_windows*B, window_size, window_size, C)
        window_size (int): Window size
        H (int): Height of image
        W (int): Width of image

    Returns:
        x: (B, H, W, C)
    """
    B = int(windows.shape[0] / (H * W / window_size / window_size))
    x = windows.view(B, H // window_size, W // window_size, window_size, window_size, -1)
    x = x.permute(0, 1, 3, 2, 4, 5).contiguous().view(B, H, W, -1)
    return x


class WindowAttention(nn.Module):
    r""" Window based multi-head self attention (W-MSA) module with relative position bias.
    It supports both of shifted and non-shifted window.

    Args:
        dim (int): Number of input channels.
        window_size (tuple[int]): The height and width of the window.
        num_heads (int): Number of attention heads.
        qkv_bias (bool, optional):  If True, add a learnable bias to query, key, value. Default: True
        qk_scale (float | None, optional): Override default qk scale of head_dim ** -0.5 if set
        attn_drop (float, optional): Dropout ratio of attention weight. Default: 0.0
        proj_drop (float, optional): Dropout ratio of output. Default: 0.0
    """

    def __init__(self, dim, window_size, num_heads, qkv_bias=True, qk_scale=None, attn_drop=0., proj_drop=0.):

        super().__init__()
        self.dim = dim
        self.window_size = window_size  # Wh, Ww
        self.num_heads = num_heads
        head_dim = dim // num_heads
        self.scale = qk_scale or head_dim ** -0.5

        # define a parameter table of relative position bias
        self.relative_position_bias_table = nn.Parameter(
            torch.zeros((2 * window_size[0] - 1) * (2 * window_size[1] - 1), num_heads))  # 2*Wh-1 * 2*Ww-1, nH

        # get pair-wise relative position index for each token inside the window
        coords_h = torch.arange(self.window_size[0])
        coords_w = torch.arange(self.window_size[1])
        coords = torch.stack(torch.meshgrid([coords_h, coords_w]))  # 2, Wh, Ww
        coords_flatten = torch.flatten(coords, 1)  # 2, Wh*Ww
        relative_coords = coords_flatten[:, :, None] - coords_flatten[:, None, :]  # 2, Wh*Ww, Wh*Ww
        relative_coords = relative_coords.permute(1, 2, 0).contiguous()  # Wh*Ww, Wh*Ww, 2
        relative_coords[:, :, 0] += self.window_size[0] - 1  # shift to start from 0
        relative_coords[:, :, 1] += self.window_size[1] - 1
        relative_coords[:, :, 0] *= 2 * self.window_size[1] - 1
        relative_position_index = relative_coords.sum(-1)  # Wh*Ww, Wh*Ww
        self.register_buffer("relative_position_index", relative_position_index)

        self.qkv = nn.Linear(dim, dim * 3, bias=qkv_bias)
        self.attn_drop = nn.Dropout(attn_drop)
        self.proj = nn.Linear(dim, dim)
        self.proj_drop = nn.Dropout(proj_drop)

        trunc_normal_(self.relative_position_bias_table, std=.02)
        self.softmax = nn.Softmax(dim=-1)

    def forward(self, x, mask=None):
        """
        Args:
            x: input features with shape of (num_windows*B, N, C)
            mask: (0/-inf) mask with shape of (num_windows, Wh*Ww, Wh*Ww) or None
        """
        B_, N, C = x.shape
        qkv = self.qkv(x).reshape(B_, N, 3, self.num_heads, C // self.num_heads).permute(2, 0, 3, 1, 4)
        q, k, v = qkv[0], qkv[1], qkv[2]  # make torchscript happy (cannot use tensor as tuple)

        q = q * self.scale
        attn = (q @ k.transpose(-2, -1))

        relative_position_bias = self.relative_position_bias_table[self.relative_position_index.view(-1)].view(
            self.window_size[0] * self.window_size[1], self.window_size[0] * self.window_size[1], -1)  # Wh*Ww,Wh*Ww,nH
        relative_position_bias = relative_position_bias.permute(2, 0, 1).contiguous()  # nH, Wh*Ww, Wh*Ww
        attn = attn + relative_position_bias.unsqueeze(0)

        if mask is not None:
            nW = mask.shape[0]
            attn = attn.view(B_ // nW, nW, self.num_heads, N, N) + mask.unsqueeze(1).unsqueeze(0)
            attn = attn.view(-1, self.num_heads, N, N)
            attn = self.softmax(attn)
        else:
            attn = self.softmax(attn)

        attn = self.attn_drop(attn)

        x = (attn @ v).transpose(1, 2).reshape(B_, N, C)
        x = self.proj(x)
        x = self.proj_drop(x)
        return x

    def extra_repr(self) -> str:
        return f'dim={self.dim}, window_size={self.window_size}, num_heads={self.num_heads}'

    def flops(self, N):
        # calculate flops for 1 window with token length of N
        flops = 0
        # qkv = self.qkv(x)
        flops += N * self.dim * 3 * self.dim
        # attn = (q @ k.transpose(-2, -1))
        flops += self.num_heads * N * (self.dim // self.num_heads) * N
        #  x = (attn @ v)
        flops += self.num_heads * N * N * (self.dim // self.num_heads)
        # x = self.proj(x)
        flops += N * self.dim * self.dim
        return flops


class SwinTransformerBlock(nn.Module):
    r""" Swin Transformer Block.

    Args:
        dim (int): Number of input channels.
        input_resolution (tuple[int]): Input resulotion.
        num_heads (int): Number of attention heads.
        window_size (int): Window size.
        shift_size (int): Shift size for SW-MSA.
        mlp_ratio (float): Ratio of mlp hidden dim to embedding dim.
        qkv_bias (bool, optional): If True, add a learnable bias to query, key, value. Default: True
        qk_scale (float | None, optional): Override default qk scale of head_dim ** -0.5 if set.
        drop (float, optional): Dropout rate. Default: 0.0
        attn_drop (float, optional): Attention dropout rate. Default: 0.0
        drop_path (float, optional): Stochastic depth rate. Default: 0.0
        act_layer (nn.Module, optional): Activation layer. Default: nn.GELU
        norm_layer (nn.Module, optional): Normalization layer.  Default: nn.LayerNorm
        fused_window_process (bool, optional): If True, use one kernel to fused window shift & window partition for acceleration, similar for the reversed part. Default: False
    """

    def __init__(self, dim, input_resolution, num_heads, window_size=7, shift_size=0,
                 mlp_ratio=4., qkv_bias=True, qk_scale=None, drop=0., attn_drop=0., drop_path=0.,
                 act_layer=nn.GELU, norm_layer=nn.LayerNorm,
                 fused_window_process=False):
        super().__init__()
        self.dim = dim
        self.input_resolution = input_resolution
        self.num_heads = num_heads
        self.window_size = window_size
        self.shift_size = shift_size
        self.mlp_ratio = mlp_ratio
        if min(self.input_resolution) <= self.window_size:
            # if window size is larger than input resolution, we don't partition windows
            self.shift_size = 0
            self.window_size = min(self.input_resolution)
        assert 0 <= self.shift_size < self.window_size, "shift_size must in 0-window_size"

        self.norm1 = norm_layer(dim)
        self.attn = WindowAttention(
            dim, window_size=to_2tuple(self.window_size), num_heads=num_heads,
            qkv_bias=qkv_bias, qk_scale=qk_scale, attn_drop=attn_drop, proj_drop=drop)

        self.drop_path = DropPath(drop_path) if drop_path > 0. else nn.Identity()
        self.norm2 = norm_layer(dim)
        mlp_hidden_dim = int(dim * mlp_ratio)
        self.mlp = Mlp(in_features=dim, hidden_features=mlp_hidden_dim, act_layer=act_layer, drop=drop)

        if self.shift_size > 0:
            # calculate attention mask for SW-MSA
            H, W = self.input_resolution
            img_mask = torch.zeros((1, H, W, 1))  # 1 H W 1
            h_slices = (slice(0, -self.window_size),
                        slice(-self.window_size, -self.shift_size),
                        slice(-self.shift_size, None))
            w_slices = (slice(0, -self.window_size),
                        slice(-self.window_size, -self.shift_size),
                        slice(-self.shift_size, None))
            cnt = 0
            for h in h_slices:
                for w in w_slices:
                    img_mask[:, h, w, :] = cnt
                    cnt += 1

            mask_windows = window_partition(img_mask, self.window_size)  # nW, window_size, window_size, 1
            mask_windows = mask_windows.view(-1, self.window_size * self.window_size)
            attn_mask = mask_windows.unsqueeze(1) - mask_windows.unsqueeze(2)
            attn_mask = attn_mask.masked_fill(attn_mask != 0, float(-100.0)).masked_fill(attn_mask == 0, float(0.0))
        else:
            attn_mask = None

        self.register_buffer("attn_mask", attn_mask)
        self.fused_window_process = fused_window_process

    def forward(self, x):
        H, W = self.input_resolution
        B, L, C = x.shape
        assert L == H * W, "input feature has wrong size"

        shortcut = x
        x = self.norm1(x)
        x = x.view(B, H, W, C)

        # cyclic shift
        if self.shift_size > 0:
            if not self.fused_window_process:
                shifted_x = torch.roll(x, shifts=(-self.shift_size, -self.shift_size), dims=(1, 2))
                # partition windows
                x_windows = window_partition(shifted_x, self.window_size)  # nW*B, window_size, window_size, C
            else:
                x_windows = WindowProcess.apply(x, B, H, W, C, -self.shift_size, self.window_size)
        else:
            shifted_x = x
            # partition windows
            x_windows = window_partition(shifted_x, self.window_size)  # nW*B, window_size, window_size, C

        x_windows = x_windows.view(-1, self.window_size * self.window_size, C)  # nW*B, window_size*window_size, C

        # W-MSA/SW-MSA
        attn_windows = self.attn(x_windows, mask=self.attn_mask)  # nW*B, window_size*window_size, C

        # merge windows
        attn_windows = attn_windows.view(-1, self.window_size, self.window_size, C)

        # reverse cyclic shift
        if self.shift_size > 0:
            if not self.fused_window_process:
                shifted_x = window_reverse(attn_windows, self.window_size, H, W)  # B H' W' C
                x = torch.roll(shifted_x, shifts=(self.shift_size, self.shift_size), dims=(1, 2))
            else:
                x = WindowProcessReverse.apply(attn_windows, B, H, W, C, self.shift_size, self.window_size)
        else:
            shifted_x = window_reverse(attn_windows, self.window_size, H, W)  # B H' W' C
            x = shifted_x
        x = x.view(B, H * W, C)
        x = shortcut + self.drop_path(x)

        # FFN
        x = x + self.drop_path(self.mlp(self.norm2(x)))

        return x

    def extra_repr(self) -> str:
        return f"dim={self.dim}, input_resolution={self.input_resolution}, num_heads={self.num_heads}, " \
               f"window_size={self.window_size}, shift_size={self.shift_size}, mlp_ratio={self.mlp_ratio}"

    def flops(self):
        flops = 0
        H, W = self.input_resolution
        # norm1
        flops += self.dim * H * W
        # W-MSA/SW-MSA
        nW = H * W / self.window_size / self.window_size
        flops += nW * self.attn.flops(self.window_size * self.window_size)
        # mlp
        flops += 2 * H * W * self.dim * self.dim * self.mlp_ratio
        # norm2
        flops += self.dim * H * W
        return flops


class PatchMerging(nn.Module):
    r""" Patch Merging Layer.

    Args:
        input_resolution (tuple[int]): Resolution of input feature.
        dim (int): Number of input channels.
        norm_layer (nn.Module, optional): Normalization layer.  Default: nn.LayerNorm
    """

    def __init__(self, input_resolution, dim, norm_layer=nn.LayerNorm):
        super().__init__()
        self.input_resolution = input_resolution
        self.dim = dim
        self.reduction = nn.Linear(4 * dim, 2 * dim, bias=False)
        self.norm = norm_layer(4 * dim)

    def forward(self, x):
        """
        x: B, H*W, C
        """
        H, W = self.input_resolution
        B, L, C = x.shape
        assert L == H * W, "input feature has wrong size"
        assert H % 2 == 0 and W % 2 == 0, f"x size ({H}*{W}) are not even."

        x = x.view(B, H, W, C)

        x0 = x[:, 0::2, 0::2, :]  # B H/2 W/2 C
        x1 = x[:, 1::2, 0::2, :]  # B H/2 W/2 C
        x2 = x[:, 0::2, 1::2, :]  # B H/2 W/2 C
        x3 = x[:, 1::2, 1::2, :]  # B H/2 W/2 C
        x = torch.cat([x0, x1, x2, x3], -1)  # B H/2 W/2 4*C
        x = x.view(B, -1, 4 * C)  # B H/2*W/2 4*C

        x = self.norm(x)
        x = self.reduction(x)

        return x

    def extra_repr(self) -> str:
        return f"input_resolution={self.input_resolution}, dim={self.dim}"

    def flops(self):
        H, W = self.input_resolution
        flops = H * W * self.dim
        flops += (H // 2) * (W // 2) * 4 * self.dim * 2 * self.dim
        return flops


class BasicLayer(nn.Module):
    """ A basic Swin Transformer layer for one stage.

    Args:
        dim (int): Number of input channels.
        input_resolution (tuple[int]): Input resolution.
        depth (int): Number of blocks.
        num_heads (int): Number of attention heads.
        window_size (int): Local window size.
        mlp_ratio (float): Ratio of mlp hidden dim to embedding dim.
        qkv_bias (bool, optional): If True, add a learnable bias to query, key, value. Default: True
        qk_scale (float | None, optional): Override default qk scale of head_dim ** -0.5 if set.
        drop (float, optional): Dropout rate. Default: 0.0
        attn_drop (float, optional): Attention dropout rate. Default: 0.0
        drop_path (float | tuple[float], optional): Stochastic depth rate. Default: 0.0
        norm_layer (nn.Module, optional): Normalization layer. Default: nn.LayerNorm
        downsample (nn.Module | None, optional): Downsample layer at the end of the layer. Default: None
        use_checkpoint (bool): Whether to use checkpointing to save memory. Default: False.
        fused_window_process (bool, optional): If True, use one kernel to fused window shift & window partition for acceleration, similar for the reversed part. Default: False
    """

    def __init__(self, dim, input_resolution, depth, num_heads, window_size,
                 mlp_ratio=4., qkv_bias=True, qk_scale=None, drop=0., attn_drop=0.,
                 drop_path=0., norm_layer=nn.LayerNorm, downsample=None, use_checkpoint=False,
                 fused_window_process=False):

        super().__init__()
        self.dim = dim
        self.input_resolution = input_resolution
        self.depth = depth
        self.use_checkpoint = use_checkpoint

        # build blocks
        self.blocks = nn.ModuleList([
            SwinTransformerBlock(dim=dim, input_resolution=input_resolution,
                                 num_heads=num_heads, window_size=window_size,
                                 shift_size=0 if (i % 2 == 0) else window_size // 2,
                                 mlp_ratio=mlp_ratio,
                                 qkv_bias=qkv_bias, qk_scale=qk_scale,
                                 drop=drop, attn_drop=attn_drop,
                                 drop_path=drop_path[i] if isinstance(drop_path, list) else drop_path,
                                 norm_layer=norm_layer,
                                 fused_window_process=fused_window_process)
            for i in range(depth)])

        # patch merging layer
        if downsample is not None:
            self.downsample = downsample(input_resolution, dim=dim, norm_layer=norm_layer)
        else:
            self.downsample = None

    def forward(self, x):
        for blk in self.blocks:
            if self.use_checkpoint:
                x = checkpoint.checkpoint(blk, x)
            else:
                x = blk(x)
        if self.downsample is not None:
            x = self.downsample(x)
        return x

    def extra_repr(self) -> str:
        return f"dim={self.dim}, input_resolution={self.input_resolution}, depth={self.depth}"

    def flops(self):
        flops = 0
        for blk in self.blocks:
            flops += blk.flops()
        if self.downsample is not None:
            flops += self.downsample.flops()
        return flops


class PatchEmbed(nn.Module):
    r""" Image to Patch Embedding

    Args:
        img_size (int): Image size.  Default: 224.
        patch_size (int): Patch token size. Default: 4.
        in_chans (int): Number of input image channels. Default: 3.
        embed_dim (int): Number of linear projection output channels. Default: 96.
        norm_layer (nn.Module, optional): Normalization layer. Default: None
    """

    def __init__(self, img_size=224, patch_size=4, in_chans=3, embed_dim=96, norm_layer=None):
        super().__init__()
        img_size = to_2tuple(img_size)
        patch_size = to_2tuple(patch_size)
        patches_resolution = [img_size[0] // patch_size[0], img_size[1] // patch_size[1]]
        self.img_size = img_size
        self.patch_size = patch_size
        self.patches_resolution = patches_resolution
        self.num_patches = patches_resolution[0] * patches_resolution[1]

        self.in_chans = in_chans
        self.embed_dim = embed_dim

        self.proj = nn.Conv2d(in_chans, embed_dim, kernel_size=patch_size, stride=patch_size)
        if norm_layer is not None:
            self.norm = norm_layer(embed_dim)
        else:
            self.norm = None

    def forward(self, x):
        B, C, H, W = x.shape
        # FIXME look at relaxing size constraints
        assert H == self.img_size[0] and W == self.img_size[1], \
            f"Input image size ({H}*{W}) doesn't match model ({self.img_size[0]}*{self.img_size[1]})."
        x = self.proj(x).flatten(2).transpose(1, 2)  # B Ph*Pw C
        if self.norm is not None:
            x = self.norm(x)
        return x

    def flops(self):
        Ho, Wo = self.patches_resolution
        flops = Ho * Wo * self.embed_dim * self.in_chans * (self.patch_size[0] * self.patch_size[1])
        if self.norm is not None:
            flops += Ho * Wo * self.embed_dim
        return flops


class SwinTransformer(nn.Module):
    r""" Swin Transformer
        A PyTorch impl of : `Swin Transformer: Hierarchical Vision Transformer using Shifted Windows`  -
          https://arxiv.org/pdf/2103.14030

    Args:
        img_size (int | tuple(int)): Input image size. Default 224
        patch_size (int | tuple(int)): Patch size. Default: 4
        in_chans (int): Number of input image channels. Default: 3
        num_classes (int): Number of classes for classification head. Default: 1000
        embed_dim (int): Patch embedding dimension. Default: 96
        depths (tuple(int)): Depth of each Swin Transformer layer.
        num_heads (tuple(int)): Number of attention heads in different layers.
        window_size (int): Window size. Default: 7
        mlp_ratio (float): Ratio of mlp hidden dim to embedding dim. Default: 4
        qkv_bias (bool): If True, add a learnable bias to query, key, value. Default: True
        qk_scale (float): Override default qk scale of head_dim ** -0.5 if set. Default: None
        drop_rate (float): Dropout rate. Default: 0
        attn_drop_rate (float): Attention dropout rate. Default: 0
        drop_path_rate (float): Stochastic depth rate. Default: 0.1
        norm_layer (nn.Module): Normalization layer. Default: nn.LayerNorm.
        ape (bool): If True, add absolute position embedding to the patch embedding. Default: False
        patch_norm (bool): If True, add normalization after patch embedding. Default: True
        use_checkpoint (bool): Whether to use checkpointing to save memory. Default: False
        fused_window_process (bool, optional): If True, use one kernel to fused window shift & window partition for acceleration, similar for the reversed part. Default: False
    """

    def __init__(self, img_size=224, patch_size=4, in_chans=3, num_classes=1000,
                 embed_dim=96, depths=[2, 2, 6, 2], num_heads=[3, 6, 12, 24],
                 window_size=7, mlp_ratio=4., qkv_bias=True, qk_scale=None,
                 drop_rate=0., attn_drop_rate=0., drop_path_rate=0.1,
                 norm_layer=nn.LayerNorm, ape=False, patch_norm=True,
                 use_checkpoint=False, fused_window_process=False, **kwargs):
        super().__init__()

        self.num_classes = num_classes
        self.num_layers = len(depths)
        self.embed_dim = embed_dim
        self.ape = ape
        self.patch_norm = patch_norm
        self.num_features = int(embed_dim * 2 ** (self.num_layers - 1))
        self.mlp_ratio = mlp_ratio

        # split image into non-overlapping patches
        self.patch_embed = PatchEmbed(
            img_size=img_size, patch_size=patch_size, in_chans=in_chans, embed_dim=embed_dim,
            norm_layer=norm_layer if self.patch_norm else None)
        num_patches = self.patch_embed.num_patches
        patches_resolution = self.patch_embed.patches_resolution
        self.patches_resolution = patches_resolution

        # absolute position embedding
        if self.ape:
            self.absolute_pos_embed = nn.Parameter(torch.zeros(1, num_patches, embed_dim))
            trunc_normal_(self.absolute_pos_embed, std=.02)

        self.pos_drop = nn.Dropout(p=drop_rate)

        # stochastic depth
        dpr = [x.item() for x in torch.linspace(0, drop_path_rate, sum(depths))]  # stochastic depth decay rule

        # build layers
        self.layers = nn.ModuleList()
        for i_layer in range(self.num_layers):
            layer = BasicLayer(dim=int(embed_dim * 2 ** i_layer),
                               input_resolution=(patches_resolution[0] // (2 ** i_layer),
                                                 patches_resolution[1] // (2 ** i_layer)),
                               depth=depths[i_layer],
                               num_heads=num_heads[i_layer],
                               window_size=window_size,
                               mlp_ratio=self.mlp_ratio,
                               qkv_bias=qkv_bias, qk_scale=qk_scale,
                               drop=drop_rate, attn_drop=attn_drop_rate,
                               drop_path=dpr[sum(depths[:i_layer]):sum(depths[:i_layer + 1])],
                               norm_layer=norm_layer,
                               downsample=PatchMerging if (i_layer < self.num_layers - 1) else None,
                               use_checkpoint=use_checkpoint,
                               fused_window_process=fused_window_process)
            self.layers.append(layer)

        self.norm = norm_layer(self.num_features)
        self.avgpool = nn.AdaptiveAvgPool1d(1)
        self.head = nn.Linear(self.num_features, num_classes) if num_classes > 0 else nn.Identity()

        self.apply(self._init_weights)

    def _init_weights(self, m):
        if isinstance(m, nn.Linear):
            trunc_normal_(m.weight, std=.02)
            if isinstance(m, nn.Linear) and m.bias is not None:
                nn.init.constant_(m.bias, 0)
        elif isinstance(m, nn.LayerNorm):
            nn.init.constant_(m.bias, 0)
            nn.init.constant_(m.weight, 1.0)

    @torch.jit.ignore
    def no_weight_decay(self):
        return {'absolute_pos_embed'}

    @torch.jit.ignore
    def no_weight_decay_keywords(self):
        return {'relative_position_bias_table'}

    def forward_features(self, x):
        x = self.patch_embed(x)
        if self.ape:
            x = x + self.absolute_pos_embed
        x = self.pos_drop(x)

        for layer in self.layers:
            x = layer(x)

        x = self.norm(x)  # B L C
        x = self.avgpool(x.transpose(1, 2))  # B C 1
        x = torch.flatten(x, 1)
        return x

    def forward(self, x):
        x = self.forward_features(x)
        x = self.head(x)
        return x

    def flops(self):
        flops = 0
        flops += self.patch_embed.flops()
        for i, layer in enumerate(self.layers):
            flops += layer.flops()
        flops += self.num_features * self.patches_resolution[0] * self.patches_resolution[1] // (2 ** self.num_layers)
        flops += self.num_features * self.num_classes
        return flops
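
A minimal usage sketch of the model defined above (the constructor defaults correspond to the Swin-T configuration):

model = SwinTransformer(img_size=224, patch_size=4, in_chans=3, num_classes=1000,
                        embed_dim=96, depths=[2, 2, 6, 2], num_heads=[3, 6, 12, 24],
                        window_size=7)
x = torch.randn(1, 3, 224, 224)   # a dummy image batch
logits = model(x)                 # classification logits
print(logits.shape)               # torch.Size([1, 1000])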

