karpathy Let's build GPT

1 Introduction

Following Karpathy's tutorial, we build a transformer step by step and deepen our understanding of its design along the way.
Karpathy recommends working with a Jupyter notebook for quick experiments alongside a plain Python file for the main model code.


2 Network implementation

2.1 Data preparation

  • Read the text
text = open("input.txt", "r", encoding='utf-8').read()
words = sorted(set(''.join(text)))
vocab_size = len(words)
print(f'vocab_size is: {vocab_size}')
print(''.join(words))
print(text[:1000])

vocab_size is: 65
!$&',-.3:;?ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz
First Citizen:
Before we proceed any further, hear me speak.
All:
Speak, speak.
First Citizen:
You are all resolved rather to die than to famish?

  • Map each character to an integer
stoi = {ch : i for i, ch in enumerate(words)}
itos = {i : ch for i, ch in enumerate(words)}

encode = lambda s: [stoi[ch] for ch in s]
decode = lambda l: ''.join([itos[i] for i in l])
print(encode("hii")) 
print(decode(encode("hii")))

[46, 47, 47]
hii

  • Build the dataset
import torch
# build the full dataset as a tensor of token ids
data = torch.tensor(encode(text), dtype=torch.long)
print(len(data))
n = int(len(data) * 0.9)
train_data = data[:n]
val_data = data[n:]
print(train_data[:1000])

1115394
tensor([18, 47, 56, 57, 58, 1, 15, 47, 58, 47, 64, 43, 52, 10, 0, 14, 43, 44,
53, 56, 43, 1, 61, 43, 1, 54, 56, 53, 41, 43, 43, 42, 1, 39, 52, 63,
1, 44, 59, 56, 58, 46, 43, 56, 6, 1, 46, 43, 39, 56, 1, 51, 43, 1,
57, 54, 43, 39, 49, 8, 0, 0, 13, 50, 50, 10, 0, 31, 54, 43, 39, 49,

  • Build the dataloader
import torch
batch_size = 4
block_size = 8  # maximum context length for predictions
torch.manual_seed(1337)
def get_batch(split):
    datasets = {
        'train': train_data,
        'val': val_data,
    }[split]
    ix = torch.randint(0, len(datasets) - block_size, (batch_size,))
    x = torch.stack([datasets[i:i+block_size] for i in ix])
    y = torch.stack([datasets[1+i:i+block_size+1] for i in ix])
    return x, y

xb, yb = get_batch('train')
print(f'x shape is: {xb.shape}, y shape is: {yb.shape}')
print(f'x is {xb}')
print(f'y is {yb}')

x shape is: torch.Size([4, 8]), y shape is: torch.Size([4, 8])
x is tensor([[24, 43, 58, 5, 57, 1, 46, 43],
[44, 53, 56, 1, 58, 46, 39, 58],
[52, 58, 1, 58, 46, 39, 58, 1],
[25, 17, 27, 10, 0, 21, 1, 54]])
y is tensor([[43, 58, 5, 57, 1, 46, 43, 39],
[53, 56, 1, 58, 46, 39, 58, 1],
[58, 1, 58, 46, 39, 58, 1, 46],
[17, 27, 10, 0, 21, 1, 54, 39]])

2.2 Building the pipeline

  • Define the simplest possible network
import torch.nn as nn
import torch.nn.functional as F
torch.manual_seed(1337)
class BigramLanguageModel(nn.Module):

    def __init__(self, vocab_size):
        super().__init__()
        self.token_embedding_table = nn.Embedding(vocab_size, vocab_size)

    def forward(self, idx, targets=None):
        self.out = self.token_embedding_table(idx)
        return self.out
    
xb, yb = get_batch('train')
model = BigramLanguageModel(vocab_size)
out = model(xb)
print(f'x shape is: {xb.shape}')
print(f'out shape is: {out.shape}')

x shape is: torch.Size([4, 8])
out shape is: torch.Size([4, 8, 65])

  • The complete pipeline, including the loss and generation, is:
import torch.nn as nn
import torch.nn.functional as F
torch.manual_seed(1337)
class BigramLanguageModel(nn.Module):

    def __init__(self, vocab_size):
        super().__init__()
        self.token_embedding_table = nn.Embedding(vocab_size, vocab_size)

    def forward(self, idx, targets=None):
        logits = self.token_embedding_table(idx)  # B, T, C
        if targets is None:
            loss = None
        else:
            B, T, C = logits.shape
            logits = logits.view(B*T, C) # cross_entropy expects (N, C) logits
            targets = targets.view(B*T) # and a matching (N,) vector of class indices
            loss = F.cross_entropy(logits, targets)

        return logits, loss

    def generate(self, idx, max_new_tokens):
        for _ in range(max_new_tokens):
            logits, loss = self(idx)    
            logits = logits[:, -1, :]  # B, C    
            prob = F.softmax(logits, dim=-1)  # softmax over the vocabulary dimension
            ix = torch.multinomial(prob, num_samples=1) # (B, 1) sampled next token
            print(idx)
            idx = torch.cat((idx, ix), dim=1)   # B,T+1
            print(idx)
        return idx

xb, yb = get_batch('train')
model = BigramLanguageModel(vocab_size)
out, loss = model(xb)
print(f'x shape is: {xb.shape}')
print(f'out shape is: {out.shape}')

idx = torch.zeros((1, 1), dtype=torch.long)
print(decode(model.generate(idx, max_new_tokens=10)[0].tolist()))
# print(f'idx is {idx}')

x shape is: torch.Size([4, 8])
out shape is: torch.Size([4, 8, 65])
tensor([[0]])
tensor([[ 0, 50]])
tensor([[ 0, 50]])
tensor([[ 0, 50, 7]])
tensor([[ 0, 50, 7]])
tensor([[ 0, 50, 7, 29]])
tensor([[ 0, 50, 7, 29]])
tensor([[ 0, 50, 7, 29, 37]])
tensor([[ 0, 50, 7, 29, 37]])
tensor([[ 0, 50, 7, 29, 37, 48]])
tensor([[ 0, 50, 7, 29, 37, 48]])
tensor([[ 0, 50, 7, 29, 37, 48, 58]])
tensor([[ 0, 50, 7, 29, 37, 48, 58]])
tensor([[ 0, 50, 7, 29, 37, 48, 58, 5]])
tensor([[ 0, 50, 7, 29, 37, 48, 58, 5]])
tensor([[ 0, 50, 7, 29, 37, 48, 58, 5, 15]])
tensor([[ 0, 50, 7, 29, 37, 48, 58, 5, 15]])
tensor([[ 0, 50, 7, 29, 37, 48, 58, 5, 15, 24]])
tensor([[ 0, 50, 7, 29, 37, 48, 58, 5, 15, 24]])
tensor([[ 0, 50, 7, 29, 37, 48, 58, 5, 15, 24, 12]])
l-QYjt’CL?

There are a few things to note here. First, the inputs and targets are:

x is tensor([[24, 43, 58, 5, 57, 1, 46, 43],
[44, 53, 56, 1, 58, 46, 39, 58],
[52, 58, 1, 58, 46, 39, 58, 1],
[25, 17, 27, 10, 0, 21, 1, 54]])
y is tensor([[43, 58, 5, 57, 1, 46, 43, 39],
[53, 56, 1, 58, 46, 39, 58, 1],
[58, 1, 58, 46, 39, 58, 1, 46],
[17, 27, 10, 0, 21, 1, 54, 39]])

Also note that in this pipeline the network places no restriction on the length of the input sequence.
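
To make the relationship between x and y concrete, the (context, target) pairs can be listed out the way the tutorial does: at each position t the model sees x[:t+1] and must predict y[t]. A small illustration using the batch above:

for b in range(batch_size):       # iterate over the batch dimension
    for t in range(block_size):   # iterate over the time dimension
        context = xb[b, :t+1]
        target = yb[b, t]
        print(f'when input is {context.tolist()} the target: {target}')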

  • Start training
    At this point we need a complete training script. If we keep working in the Jupyter notebook, every change to one part of the network forces us to re-run many cells, which is tedious, so we move the code into a .py file.
import torch
import torch.nn as nn
import torch.nn.functional as F

# hyperparameters
batch_size = 32
block_size = 8
max_iter = 3000
eval_interval = 300
learning_rate = 1e-2
device = 'cuda' if torch.cuda.is_available() else 'cpu'
eval_iters = 200
# ---------------------

torch.manual_seed(1337)

text = open("input.txt", "r", encoding='utf-8').read()
chars = sorted(list(set(text)))
vocab_size = len(chars)

stoi = {ch : i for i, ch in enumerate(chars)}
itos = {i : ch for i, ch in enumerate(chars)}
encode = lambda s: [stoi[ch] for ch in s]
decode = lambda l: ''.join([itos[i] for i in l])

# build the full dataset as a tensor of token ids
data = torch.tensor(encode(text), dtype=torch.long)
n = int(len(data) * 0.9)
train_data = data[:n]
val_data = data[n:]

def get_batch(split):
    datasets = {
        'train': train_data,
        'val': val_data,
    }[split]
    ix = torch.randint(0, len(datasets) - block_size, (batch_size,))
    x = torch.stack([datasets[i:i+block_size] for i in ix])
    y = torch.stack([datasets[1+i:i+block_size+1] for i in ix])
    x, y = x.to(device), y.to(device)
    return x, y

@torch.no_grad()
def estimate_loss():
    out = {}
    model.eval()
    for split in ['train', 'val']:
        losses = torch.zeros(eval_iters)
        for k in range(eval_iters):
            X, Y = get_batch(split)
            logits, loss = model(X, Y)
            losses[k] = loss.item()
        out[split] = losses.mean()
    model.train()
    return out

class BigramLanguageModel(nn.Module):

    def __init__(self, vocab_size):
        super().__init__()
        self.token_embedding_table = nn.Embedding(vocab_size, vocab_size)

    def forward(self, idx, targets=None):
        # import pdb; pdb.set_trace()
        logits = self.token_embedding_table(idx)  # B, T, C
        if targets is None:
            loss = None
        else:
            B, T, C = logits.shape
            logits = logits.view(B*T, C) # cross_entropy expects (N, C) logits
            targets = targets.view(B*T) # and a matching (N,) vector of class indices
            loss = F.cross_entropy(logits, targets)

        return logits, loss

    def generate(self, idx, max_new_tokens):
        for _ in range(max_new_tokens):
            logits, loss = self(idx)    
            logits = logits[:, -1, :]  # B, C    
            prob = F.softmax(logits, dim=-1)  # softmax over the vocabulary dimension
            ix = torch.multinomial(prob, num_samples=1) # B, 1
            # print(idx)
            idx = torch.cat((idx, ix), dim=1)   # B,T+1
            # print(idx)
        return idx

model = BigramLanguageModel(vocab_size)
m = model.to(device)
optimizer = torch.optim.AdamW(model.parameters(), lr=learning_rate)

lossi = []
for iter in range(max_iter):

    if iter % eval_interval == 0:
        losses = estimate_loss()
        print(f'step {iter}: train loss {losses["train"]:.4f}, val loss {losses["val"]:.4f}')
    xb, yb = get_batch('train')
    out, loss = m(xb, yb)
    optimizer.zero_grad(set_to_none=True)
    loss.backward()
    optimizer.step()

# generate from the model
context = torch.zeros((1,1), dtype=torch.long, device=device)
print(decode(m.generate(context, max_new_tokens=500)[0].tolist()))



The output is:

step 0: train loss 4.7305, val loss 4.7241
step 300: train loss 2.8110, val loss 2.8249
step 600: train loss 2.5434, val loss 2.5682
step 900: train loss 2.4932, val loss 2.5088
step 1200: train loss 2.4863, val loss 2.5035
step 1500: train loss 2.4665, val loss 2.4921
step 1800: train loss 2.4683, val loss 2.4936
step 2100: train loss 2.4696, val loss 2.4846
step 2400: train loss 2.4638, val loss 2.4879
step 2700: train loss 2.4738, val loss 2.4911
CEThik brid owindakis b, bth
HAPet bobe d e.
S:
O:3 my d?
LUCous:
Wanthar u qur, t.
War dXENDoate awice my.
Hastarom oroup
Yowhthetof isth ble mil ndill, ath iree sengmin lat Heriliovets, and Win nghir.
Swanousel lind me l.
HAshe ce hiry:
Supr aisspllw y.
Hentofu n Boopetelaves
MPOLI s, d mothakleo Windo whth eisbyo the m dourive we higend t so mower; te
AN ad nterupt f s ar igr t m:
Thin maleronth,
Mad
RD:
WISo myrangoube!
KENob&y, wardsal thes ghesthinin couk ay aney IOUSts I&fr y ce.
J

2.3 self-attention

When processing the current character we need to communicate with the characters that came before it. Those history characters can be viewed as a kind of feature, and the simplest way to aggregate them is to take their mean.

# the simplest form of communication: average the current character with the preceding ones
# the average can be viewed as a feature of the history information
a = torch.tril(torch.ones(3, 3))
print(a)
a = torch.tril(a) / torch.sum(a, 1, keepdim=True)
print(a)

tensor([[1., 0., 0.],
[1., 1., 0.],
[1., 1., 1.]])
tensor([[1.0000, 0.0000, 0.0000],
[0.5000, 0.5000, 0.0000],
[0.3333, 0.3333, 0.3333]])

The same masking can be implemented with softmax:

import torch.nn.functional as F
T = 8  # block size
tril = torch.tril(torch.ones(T, T))  # lower-triangular mask
wei = torch.zeros(T, T)  # attention weights before masking
wei = wei.masked_fill(tril == 0, float('-inf'))
print(wei)
wei = F.softmax(wei, dim=-1)
print(wei)

tensor([[0., -inf, -inf, -inf, -inf, -inf, -inf, -inf],
[0., 0., -inf, -inf, -inf, -inf, -inf, -inf],
[0., 0., 0., -inf, -inf, -inf, -inf, -inf],
[0., 0., 0., 0., -inf, -inf, -inf, -inf],
[0., 0., 0., 0., 0., -inf, -inf, -inf],
[0., 0., 0., 0., 0., 0., -inf, -inf],
[0., 0., 0., 0., 0., 0., 0., -inf],
[0., 0., 0., 0., 0., 0., 0., 0.]])
tensor([[1.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000],
[0.5000, 0.5000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000],
[0.3333, 0.3333, 0.3333, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000],
[0.2500, 0.2500, 0.2500, 0.2500, 0.0000, 0.0000, 0.0000, 0.0000],
[0.2000, 0.2000, 0.2000, 0.2000, 0.2000, 0.0000, 0.0000, 0.0000],
[0.1667, 0.1667, 0.1667, 0.1667, 0.1667, 0.1667, 0.0000, 0.0000],
[0.1429, 0.1429, 0.1429, 0.1429, 0.1429, 0.1429, 0.1429, 0.0000],
[0.1250, 0.1250, 0.1250, 0.1250, 0.1250, 0.1250, 0.1250, 0.1250]])

The result of the feature aggregation:

B, T, C = 4, 8, 2            # toy sizes used in the tutorial
x = torch.randn(B, T, C)     # x plays the role of the values (v)
xbow2 = wei @ x # (T, T) @ (B, T, C) --> (B, T, C)
print(xbow2.shape)

torch.Size([4, 8, 2])

The forward pass, now with pos_emb added:

def forward(self, idx, targets=None):
        B, T = idx.shape
        tok_emb = self.token_embedding_table(idx)  # (B, T, C) with C = n_embd
        pos_emb = self.position_embedding_table(torch.arange(T, device=device)) # (T, C)
        # positional encoding
        x = tok_emb + pos_emb   # (B, T, C), broadcasting over the batch
        logits = self.lm_head(x)  # (B, T, vocab_size)
        if targets is None:
            loss = None
        else:
            B, T, C = logits.shape
            logits = logits.view(B*T, C) # cross_entropy expects (N, C) logits
            targets = targets.view(B*T) # and a matching (N,) vector of class indices
            loss = F.cross_entropy(logits, targets)

        return logits, loss
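
The __init__ for this intermediate version is not shown in the post; presumably it already uses a separate embedding width n_embd plus a position table and an lm_head, matching the model definition shown later (minus the attention head). A minimal sketch:

# assumed __init__ for this intermediate version (n_embd = embedding width)
def __init__(self, vocab_size):
        super().__init__()
        self.token_embedding_table = nn.Embedding(vocab_size, n_embd)
        self.position_embedding_table = nn.Embedding(block_size, n_embd)
        self.lm_head = nn.Linear(n_embd, vocab_size)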

Some takeaways Karpathy points out:

  • Attention is a communication mechanism. Can be seen as nodes in a directed graph looking at each other and aggregating information with a weighted sum from all nodes that point to them, with data-dependent weights.
  • There is no notion of space. Attention simply acts over a set of vectors. This is why we need to positionally encode tokens.
  • Each example across batch dimension is of course processed completely independently and never “talk” to each other
  • In an “encoder” attention block just delete the single line that does masking with tril, allowing all tokens to communicate. This block here is called a “decoder” attention block because it has triangular masking, and is usually used in autoregressive settings, like language modeling.
  • “self-attention” just means that the keys and values are produced from the same source as queries. In “cross-attention”, the queries still get produced from x, but the keys and values come from some other, external source (e.g. an encoder module)
  • “Scaled” attention additionally divides wei by sqrt(head_size). This makes it so that when the inputs Q, K are unit variance, wei will be unit variance too, and softmax will stay diffuse and not saturate too much. Illustration below.

In the attention formula, the scaling factor is there so that the variance does not blow up when the two distributions are multiplied together.

Attention(Q, K, V) = softmax(QK^T / sqrt(d_k)) V

B, T, head_size = 4, 8, 16   # toy sizes (the unscaled wei.var() below comes out ≈ head_size)
k = torch.randn(B, T, head_size)
q = torch.randn(B, T, head_size)
wei = q @ k.transpose(-2, -1)
wei_scale = wei / head_size**0.5
print(k.var())
print(q.var())
print(wei.var())
print(wei_scale.var())

The output:

tensor(1.0278)
tensor(0.9802)
tensor(15.9041)
tensor(0.9940)

Initialization matters a great deal here: we want the input to softmax to be a distribution with fairly small variance. If we do not scale, the values feeding into softmax grow and it saturates towards a one-hot distribution:

torch.softmax(torch.tensor([0.1, -0.2, 0.3, -0.2, 0.5]) * 8, dim=-1)

tensor([0.0326, 0.0030, 0.1615, 0.0030, 0.8000])
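
For comparison, the same logits without the factor of 8 stay diffuse:

torch.softmax(torch.tensor([0.1, -0.2, 0.3, -0.2, 0.5]), dim=-1)
# roughly tensor([0.1925, 0.1426, 0.2351, 0.1426, 0.2872])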

Now make some modifications to the original .py file: a new hyperparameter n_embd (the embedding width) is introduced, and a single self-attention head is added.

class Head(nn.Module):
    def __init__(self, head_size):
        super().__init__()
        self.query = nn.Linear(n_embd, head_size, bias=False)
        self.key = nn.Linear(n_embd, head_size, bias=False)
        self.value = nn.Linear(n_embd, head_size, bias=False)
        self.register_buffer('tril', torch.tril(torch.ones(block_size, block_size)))

    def forward(self, x):
        # import pdb; pdb.set_trace()
        B, T, C = x.shape    
        q = self.query(x)      #(B, T, C)
        k = self.key(x)        #(B, T, C)
        v = self.value(x)      #(B, T, C)
        wei = q @ k.transpose(-2, -1) * C**-0.5 # (B,T,C)@(B,C,T) --> (B, T, T)
        wei = wei.masked_fill(self.tril[:T, :T] == 0, float('-inf')) 
        wei = F.softmax(wei, dim=-1)   # (B, T, T)
        out = wei @ v   #(B, T, T) @ (B, T, C) --> (B, T, C)
        return out

Modify the model:

class BigramLanguageModel(nn.Module):
    def __init__(self, vocab_size):
        super().__init__()
        self.token_embedding_table = nn.Embedding(vocab_size, n_embd)
        self.position_embedding_table = nn.Embedding(block_size, n_embd)
        self.sa_head = Head(n_embd)  # a single head whose size equals n_embd
        self.lm_head = nn.Linear(n_embd, vocab_size)

    def forward(self, idx, targets=None):
        # import pdb; pdb.set_trace()
        B, T = idx.shape
        tok_emb = self.token_embedding_table(idx)  # B, T, C(n_emb)
        pos_emb = self.position_embedding_table(torch.arange(T, device=device)) # T,C 
        # positional encoding
        x = tok_emb + pos_emb   # (B, T, C) broadcasting
        x = self.sa_head(x)
        logits = self.lm_head(x)  # B, T, C(vocab_size)
        if targets is None:
            loss = None
        else:
            B, T, C = logits.shape
            logits = logits.view(B*T, C) # cross_entropy expects (N, C) logits
            targets = targets.view(B*T) # and a matching (N,) vector of class indices
            loss = F.cross_entropy(logits, targets)

        return logits, loss

    def generate(self, idx, max_new_tokens):
        for _ in range(max_new_tokens):
            idx_cond = idx[:, -block_size:]   # crop the context to the last block_size tokens
            logits, loss = self(idx_cond)
            logits = logits[:, -1, :]  # (B, C)
            prob = F.softmax(logits, dim=-1)  # softmax over the vocabulary dimension
            ix = torch.multinomial(prob, num_samples=1) # B, 1
            # print(idx)
            idx = torch.cat((idx, ix), dim=1)   # B,T+1
            # print(idx)
        return idx

The result after adding self-attention:
step 4500: train loss 2.3976, val loss 2.4041

2.4 multi-head attention


This borrows the idea of grouped convolutions: several smaller heads run in parallel and their outputs are concatenated.

class MultiHeadAttention(nn.Module):
    """ multiple head of self attention in parallel """
    def __init__(self, num_heads, head_size):
        super().__init__()
        self.heads = nn.ModuleList([Head(head_size) for _ in range(num_heads)])

    def forward(self, x):
        return torch.cat([h(x) for h in self.heads], dim=-1)

When using it:

self.sa_head = MultiHeadAttention(4, n_embd//4)  # 4 heads of size n_embd//4, so the concatenated output is still n_embd

The training result:

step 4500: train loss 2.2679, val loss 2.2789

2.5 feedforward network

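The FeedForward module itself is not shown in the post; a minimal sketch consistent with the names used in the later sections (a single per-token linear layer followed by ReLU — the 4x expansion and dropout are added in section 2.8):

class FeedForward(nn.Module):
    """ a simple per-token computation: linear layer followed by a non-linearity """
    def __init__(self, n_embd):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(n_embd, n_embd),
            nn.ReLU(),
        )

    def forward(self, x):
        return self.net(x)

In the model this would be created as self.ffwd = FeedForward(n_embd) in __init__ and applied as x = self.ffwd(x) right after the self-attention head in forward.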

The result after adding the feedforward layer:
step 4500: train loss 2.2337, val loss 2.2476

We also wrap this unit into a block:
a transformer block can be understood as a communication part (self-attention) plus a computation part (feedforward).

class Block(nn.Module):
    def __init__(self, n_embd, n_head):
        super().__init__()
        head_size = n_embd // n_head
        self.sa = MultiHeadAttention(n_head, head_size)
        self.ffwd = FeedForward(n_embd)
    
    def forward(self, x):
        x = self.sa(x)
        x = self.ffwd(x)
        return x

Modify the model definition:

class BigramLanguageModel(nn.Module):
    def __init__(self, vocab_size):
        super().__init__()
        self.token_embedding_table = nn.Embedding(vocab_size, n_embd)
        self.position_embedding_table = nn.Embedding(block_size, n_embd)
        self.blocks = nn.Sequential(
            Block(n_embd, n_head=4),
            Block(n_embd, n_head=4),
            Block(n_embd, n_head=4),
        )
        self.lm_head = nn.Linear(n_embd, vocab_size)

    def forward(self, idx, targets=None):
        # import pdb; pdb.set_trace()
        B, T = idx.shape
        tok_emb = self.token_embedding_table(idx)  # B, T, C(n_emb)
        pos_emb = self.position_embedding_table(torch.arange(T, device=device)) # T,C 
        # positional encoding
        x = tok_emb + pos_emb   # (B, T, C) broadcasting
        x = self.blocks(x)
        logits = self.lm_head(x)  # B, T, C(vocab_size)
        if targets is None:
            loss = None
        else:
            B, T, C = logits.shape
            logits = logits.view(B*T, C) # cross_entropy expects (N, C) logits
            targets = targets.view(B*T) # and a matching (N,) vector of class indices
            loss = F.cross_entropy(logits, targets)

        return logits, loss

2.6 Residual network

The model is now quite deep, and training it directly may no longer converge well, so we need another important tool: residual connections.

class Block(nn.Module):
    def __init__(self, n_embd, n_head):
        super().__init__()
        head_size = n_embd // n_head
        self.sa = MultiHeadAttention(n_head, head_size)
        self.ffwd = FeedForward(n_embd)
    
    def forward(self, x):
        x = x + self.sa(x)
        x = x + self.ffwd(x)
        return x

With the increased depth the model starts to overfit more easily:

step 4500: train loss 2.0031, val loss 2.1067

2.7 Layer normalization

Let's first look at the underlying math. Suppose X and Y are two independent distributions, each with mean 0 and variance 1.
Then Var(XY) = E(X)^2 · Var(Y) + E(Y)^2 · Var(X) + Var(X) · Var(Y) = 1.
Now look at a matrix product: each output entry becomes a sum of T2 such independent, identically distributed products. By the central limit theorem the sum is approximately normal, with mean equal to the sum of the means and variance equal to the sum of the variances.
In other words, each entry of the matrix product has mean 0 and variance T2 * 1.
So scaling the matrix product by sqrt(T2) normalizes it back to unit variance (we divide by the square root because it is the standard deviation, not the variance, that we rescale).
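
A quick numerical check of this argument (toy sizes; T2 = 8 summed terms per output entry):

import torch
T2 = 8
a = torch.randn(1000, T2)        # zero-mean, unit-variance entries
b = torch.randn(T2, 1000)
prod = a @ b                     # each entry is a sum of T2 independent products
print(prod.var())                # ≈ T2
print((prod / T2**0.5).var())    # ≈ 1 after scaling by sqrt(T2)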

x = torch.ones(5,5)
x = torch.tril(x)
print(x)
print(x.mean(dim=0))
print(x.mean(dim=1))

Observe how the mean differs depending on which dimension it is taken over:

tensor([[1., 0., 0., 0., 0.],
[1., 1., 0., 0., 0.],
[1., 1., 1., 0., 0.],
[1., 1., 1., 1., 0.],
[1., 1., 1., 1., 1.]])
tensor([1.0000, 0.8000, 0.6000, 0.4000, 0.2000])
tensor([0.2000, 0.4000, 0.6000, 0.8000, 1.0000])

class BatchNorm1d:
  
  def __init__(self, dim, eps=1e-5, momentum=0.1):
    self.eps = eps
    self.momentum = momentum
    self.training = True
    # parameters (trained with backprop)
    self.gamma = torch.ones(dim)
    self.beta = torch.zeros(dim)
    # buffers (trained with a running 'momentum update')
    self.running_mean = torch.zeros(dim)
    self.running_var = torch.ones(dim)
  
  def __call__(self, x):
    # calculate the forward pass
    if self.training:
      if x.ndim == 2:
        dim = 0
      elif x.ndim == 3:
        dim = (0,1)
      xmean = x.mean(dim, keepdim=True) # batch mean
      xvar = x.var(dim, keepdim=True) # batch variance
    else:
      xmean = self.running_mean
      xvar = self.running_var
    xhat = (x - xmean) / torch.sqrt(xvar + self.eps) # normalize to unit variance
    self.out = self.gamma * xhat + self.beta
    # update the buffers
    if self.training:
      with torch.no_grad():
        self.running_mean = (1 - self.momentum) * self.running_mean + self.momentum * xmean
        self.running_var = (1 - self.momentum) * self.running_var + self.momentum * xvar
    return self.out
  
  def parameters(self):
    return [self.gamma, self.beta]

BatchNorm normalizes each column (across the batch dimension), while LayerNorm normalizes each row (across the feature dimension).

class LayerNorm1d:  # the BatchNorm1d code above with the normalization dimension switched to 1

  def __init__(self, dim, eps=1e-5):
    self.eps = eps
    # parameters (trained with backprop)
    self.gamma = torch.ones(dim)
    self.beta = torch.zeros(dim)
    # no running buffers are needed: every example is normalized on its own,
    # so nothing changes between training and evaluation

  def __call__(self, x):
    # calculate the forward pass
    xmean = x.mean(1, keepdim=True) # per-example mean over the features
    xvar = x.var(1, keepdim=True)   # per-example variance over the features
    xhat = (x - xmean) / torch.sqrt(xvar + self.eps) # normalize to unit variance
    self.out = self.gamma * xhat + self.beta
    return self.out

  def parameters(self):
    return [self.gamma, self.beta]

The pattern most commonly used nowadays is pre-norm: the LayerNorm is applied before each sub-layer rather than after it.

The corresponding code:
class Block(nn.Module):
    def __init__(self, n_embd, n_head):
        super().__init__()
        head_size = n_embd // n_head
        self.sa = MultiHeadAttention(n_head, head_size)
        self.ffwd = FeedForward(n_embd)
        self.ln1 = nn.LayerNorm(n_embd)
        self.ln2 = nn.LayerNorm(n_embd)

    def forward(self, x):
        x = x + self.sa(self.ln1(x))
        x = x + self.ffwd(self.ln2(x))
        return x

A LayerNorm is also usually added after the stack of decoder blocks:

class BigramLanguageModel(nn.Module):
    def __init__(self, vocab_size):
        super().__init__()
        self.token_embedding_table = nn.Embedding(vocab_size, n_embd)
        self.position_embedding_table = nn.Embedding(block_size, n_embd)
        self.blocks = nn.Sequential(
            Block(n_embd, n_head=4),
            Block(n_embd, n_head=4),
            Block(n_embd, n_head=4),
            nn.LayerNorm(n_embd),
        )
        self.lm_head = nn.Linear(n_embd, vocab_size)

After adding layer normalization the loss improves a little more:

step 4500: train loss 1.9931, val loss 2.0892
The gap between the training and validation losses is now fairly large, so we need to do something about it.

2.8 Dropout

  • Apply dropout inside the attention head, so the model is not overly influenced by any particular feature and becomes more robust (the matching __init__ changes are sketched after this list):
    def forward(self, x):
        # import pdb; pdb.set_trace()
        B, T, C = x.shape    
        q = self.query(x)      #(B, T, C)
        k = self.key(x)        #(B, T, C)
        v = self.value(x)      #(B, T, C)
        wei = q @ k.transpose(-2, -1) * C**-0.5 # (B,T,C)@(B,C,T) --> (B, T, T)
        wei = wei.masked_fill(self.tril[:T, :T] == 0, float('-inf')) 
        wei = F.softmax(wei, dim=-1)   # (B, T, T)
        wei = self.dropout(wei)
        out = wei @ v   #(B, T, T) @ (B, T, C) --> (B, T, C)
        return out
  • Apply dropout to the multi-head output as well, for the same reason: keep particular features from dominating the model.
    def forward(self, x):
        out =  torch.cat([h(x) for h in self.heads], dim=-1)
        out = self.dropout(self.proj(out))
        return out
  • Apply dropout right before the output of the computation unit (the feedforward layer):
class FeedForward(nn.Module):
    def __init__(self, n_embd):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(n_embd, 4 * n_embd),
            nn.ReLU(),
            nn.Linear(4 * n_embd, n_embd),
            nn.Dropout(dropout),
        )
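
The snippets above only show the forward passes; the matching __init__ additions would roughly be (a sketch, reusing the dropout hyperparameter defined below):

# in Head.__init__, alongside query/key/value and the tril buffer:
self.dropout = nn.Dropout(dropout)

# in MultiHeadAttention.__init__, a projection back to n_embd plus dropout:
self.heads = nn.ModuleList([Head(head_size) for _ in range(num_heads)])
self.proj = nn.Linear(n_embd, n_embd)
self.dropout = nn.Dropout(dropout)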

Update the hyperparameters:

# hyperparameters
batch_size = 64
block_size = 256
max_iter = 5000
eval_interval = 500
learning_rate = 3e-4    # self-attention can't tolerate a very high learning rate
device = 'cuda' if torch.cuda.is_available() else 'cpu'
eval_iters = 200
n_embd = 384
n_layer = 6
n_head = 6
dropout = 0.2
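
With n_layer and n_head now defined as hyperparameters, the hard-coded stack of three blocks is presumably rebuilt from them; a minimal sketch of the change in the model's __init__:

# assumed change: build the block stack from n_layer / n_head instead of hard-coding it
self.blocks = nn.Sequential(
    *[Block(n_embd, n_head=n_head) for _ in range(n_layer)],
    nn.LayerNorm(n_embd),
)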

step 4500: train loss 1.1112, val loss 1.4791

References

[1] https://www.youtube.com/watch?v=kCc8FmEb1nY
