LLM Musings (7) | Building an LLM from Scratch with PyTorch

LLMs are the core foundation of today's most popular AI chatbots, such as ChatGPT, Gemini, Meta AI and Mistral AI. At the heart of every LLM sits one core architecture: the Transformer. We will therefore first build the Transformer architecture, following the famous paper "Attention Is All You Need" (https://arxiv.org/abs/1706.03762).

First, we will build every component of the Transformer model block by block. Then we will assemble those blocks into our model. After that, we will train and validate the model on a dataset obtained from Hugging Face. Finally, we will test the model by translating new, unseen text.

Important note: I will walk through every component of the Transformer architecture step by step and explain the what, the why and the how of each concept. I will also comment the code line by line wherever I feel an explanation is needed.

Step 1: Load dataset

For the model to translate from English to Malay, we need a dataset that contains source (English) and target (Malay) language pairs. We will therefore use the Hugging Face dataset "Helsinki-NLP/opus-100", which provides 1 million English-Malay training pairs (enough to reach good accuracy) plus 2,000 examples each in the validation and test splits. It comes pre-split, so we do not have to split it ourselves.

# Import necessary libraries
# Install the datasets and tokenizers libraries if you've not done so yet (!pip install datasets tokenizers).
import os
import math
import torch
import torch.nn as nn
from torch.utils.data import Dataset, DataLoader
from pathlib import Path
from datasets import load_dataset
from tqdm import tqdm

# Assign device value as "cuda" to train on GPU if GPU is available. Otherwise it will fall back to default as "cpu".
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

# Loading train, validation, test dataset from huggingface path below.
raw_train_dataset = load_dataset("Helsinki-NLP/opus-100", "en-ms", split='train')
raw_validation_dataset = load_dataset("Helsinki-NLP/opus-100", "en-ms", split='validation')
raw_test_dataset = load_dataset("Helsinki-NLP/opus-100", "en-ms", split='test')

# Directories to store dataset files.
os.mkdir("./dataset-en")
os.mkdir("./dataset-my")

# Directory to save the model during training after each epoch (in step 10).
os.mkdir("./malaygpt")

# Directories to store source and target tokenizers.
os.mkdir("./tokenizer_en")
os.mkdir("./tokenizer_my")

dataset_en = []
dataset_my = []
file_count = 1

# In order to train the tokenizer (in step 2), we'll separate the training dataset into english and malay.
# Create multiple small files of 50k lines each and store them into the dataset-en and dataset-my directories.
for data in tqdm(raw_train_dataset["translation"]):
    dataset_en.append(data["en"].replace('\n', " "))
    dataset_my.append(data["ms"].replace('\n', " "))

    if len(dataset_en) == 50000:
        with open(f'./dataset-en/file{file_count}.txt', 'w', encoding='utf-8') as fp:
            fp.write('\n'.join(dataset_en))
            dataset_en = []

        with open(f'./dataset-my/file{file_count}.txt', 'w', encoding='utf-8') as fp:
            fp.write('\n'.join(dataset_my))
            dataset_my = []

        file_count += 1
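After running the block above, it is worth peeking at a single record to see its structure. This is just a quick sanity check; the placeholders in the comment stand in for whatever sentences your download actually contains:

# Quick sanity check: each record holds an English-Malay pair under the "translation" key.
print(raw_train_dataset[0]["translation"])
# -> {'en': '<an English sentence>', 'ms': '<its Malay translation>'}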

Step 2: Create Tokenizer

A Transformer model cannot work with raw text directly; it only processes numbers. So we have to convert the raw text into numbers. For this we will use a popular tokenization scheme called BPE (Byte-Pair Encoding), a subword tokenizer used in models such as GPT-3. We will first train the BPE tokenizer on our corpus data, which in this case is the training dataset we prepared in Step 1: the tokenizer learns a vocabulary from the corpus and then uses that vocabulary to turn raw text into token ids.

After training, the tokenizer has built vocabularies for English and Malay. A vocabulary is the set of unique tokens found in the corpus data. Since we are performing a translation task, we need a tokenizer for each language. The BPE tokenizer takes raw text, maps it against the vocabulary, and returns a token for each word in the input; a token can be a whole word or a subword. This is one of the advantages of subword tokenizers over other tokenizers, because it alleviates the OOV (out of vocabulary) problem. The tokenizer then returns each token's unique index (position id) in the vocabulary, and these ids are what we later use to create the embeddings.

# import tokenizer library classes and modules.
from tokenizers import Tokenizer
from tokenizers.models import BPE
from tokenizers.trainers import BpeTrainer
from tokenizers.pre_tokenizers import Whitespace

# path to the training dataset files which will be used to train the tokenizer.
path_en = [str(file) for file in Path('./dataset-en').glob("**/*.txt")]
path_my = [str(file) for file in Path('./dataset-my').glob("**/*.txt")]

# [ Creating Source Language Tokenizer - English ].
# Additional special tokens are created such as [UNK] - to represent Unknown words, [PAD] - Padding token to maintain same sequence length across the model.
# [CLS] - token to denote start of sentence, [SEP] - token to denote end of sentence.
tokenizer_en = Tokenizer(BPE(unk_token="[UNK]"))
trainer_en = BpeTrainer(min_frequency=2, special_tokens=["[PAD]", "[UNK]", "[CLS]", "[SEP]", "[MASK]"])

# splitting tokens based on whitespace.
tokenizer_en.pre_tokenizer = Whitespace()

# Tokenizer trains on the dataset files created in step 1.
tokenizer_en.train(files=path_en, trainer=trainer_en)

# Save tokenizer for future use.
tokenizer_en.save("./tokenizer_en/tokenizer_en.json")

# [ Creating Target Language Tokenizer - Malay ].
tokenizer_my = Tokenizer(BPE(unk_token="[UNK]"))
trainer_my = BpeTrainer(min_frequency=2, special_tokens=["[PAD]", "[UNK]", "[CLS]", "[SEP]", "[MASK]"])
tokenizer_my.pre_tokenizer = Whitespace()
tokenizer_my.train(files=path_my, trainer=trainer_my)
tokenizer_my.save("./tokenizer_my/tokenizer_my.json")

tokenizer_en = Tokenizer.from_file("./tokenizer_en/tokenizer_en.json")
tokenizer_my = Tokenizer.from_file("./tokenizer_my/tokenizer_my.json")

# Getting the size of both tokenizers.
source_vocab_size = tokenizer_en.get_vocab_size()
target_vocab_size = tokenizer_my.get_vocab_size()

# Define token-id variables, we need these for training the model.
CLS_ID = torch.tensor([tokenizer_my.token_to_id("[CLS]")], dtype=torch.int64).to(device)
SEP_ID = torch.tensor([tokenizer_my.token_to_id("[SEP]")], dtype=torch.int64).to(device)
PAD_ID = torch.tensor([tokenizer_my.token_to_id("[PAD]")], dtype=torch.int64).to(device)
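Once the tokenizers are trained and saved, a quick check like the one below confirms they behave as expected. This is only a sketch; the exact subwords and ids depend on the vocabulary your training run produced:

# Encode a sample sentence with the trained English tokenizer.
sample_encoding = tokenizer_en.encode("I love programming")
print(sample_encoding.tokens)                      # subword tokens produced by BPE
print(sample_encoding.ids)                         # their ids in the vocabulary
print(tokenizer_en.decode(sample_encoding.ids))    # maps the ids back to text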

Step 3: Prepare Dataset and DataLoader

In this step we prepare the dataset for both the source and the target language; it will later be used to train and validate our model. We create a class that takes the raw dataset and defines a function that encodes the source and target text with the source (tokenizer_en) and target (tokenizer_my) tokenizers respectively. Finally, we create DataLoaders for the training and validation datasets that iterate over them in batches (in our example the batch size is 10). The batch size can be changed according to the data size and the available compute.

# This class takes raw dataset and max_seq_len (maximum length of a sequence in the entire dataset).
class EncodeDataset(Dataset):
    def __init__(self, raw_dataset, max_seq_len):
        super().__init__()
        self.raw_dataset = raw_dataset
        self.max_seq_len = max_seq_len

    def __len__(self):
        return len(self.raw_dataset)

    def __getitem__(self, index):
        # Fetching raw text for the given index that consists of the source and target pair.
        raw_text = self.raw_dataset[index]

        # Separating the text into source and target text, which will be used later for encoding.
        source_text = raw_text["en"]
        target_text = raw_text["ms"]

        # Encoding source text with the source tokenizer (tokenizer_en) and target text with the target tokenizer (tokenizer_my).
        source_text_encoded = torch.tensor(tokenizer_en.encode(source_text).ids, dtype=torch.int64).to(device)
        target_text_encoded = torch.tensor(tokenizer_my.encode(target_text).ids, dtype=torch.int64).to(device)

        # To train the model, the sequence length of each input sequence should be equal to max_seq_len.
        # Hence additional padding will be added to the input sequence if its length is less than max_seq_len.
        num_source_padding = self.max_seq_len - len(source_text_encoded) - 2
        num_target_padding = self.max_seq_len - len(target_text_encoded) - 1

        encoder_padding = torch.tensor([PAD_ID] * num_source_padding, dtype=torch.int64).to(device)
        decoder_padding = torch.tensor([PAD_ID] * num_target_padding, dtype=torch.int64).to(device)

        # encoder_input has the first token as start of sentence - CLS_ID, followed by the source encoding, which is then followed by the end of sentence token - SEP.
        # To reach the required max_seq_len, additional PAD tokens are added at the end.
        encoder_input = torch.cat([CLS_ID, source_text_encoded, SEP_ID, encoder_padding]).to(device)

        # decoder_input has the first token as start of sentence - CLS_ID, followed by the target encoding.
        # To reach the required max_seq_len, additional PAD tokens are added at the end. There is no end of sentence token - SEP in decoder_input.
        decoder_input = torch.cat([CLS_ID, target_text_encoded, decoder_padding]).to(device)

        # target_label has the target encoding followed by the end of sentence token - SEP. There is no start of sentence token - CLS in the target label.
        # To reach the required max_seq_len, additional PAD tokens are added at the end.
        target_label = torch.cat([target_text_encoded, SEP_ID, decoder_padding]).to(device)

        # As we've added extra padding tokens to the input encoding, we don't want the model to attend to them during training, as there is nothing to learn from them.
        # So we use the encoder mask to nullify the padding token values prior to calculating the output of self attention in the encoder block.
        encoder_mask = (encoder_input != PAD_ID).unsqueeze(0).unsqueeze(0).int().to(device)

        # We also don't want any token to be influenced by future tokens during the decoding stage. Hence, a causal mask is applied during masked multi-head attention to handle this.
        decoder_mask = (decoder_input != PAD_ID).unsqueeze(0).unsqueeze(0).int() & causal_mask(decoder_input.size(0)).to(device)

        return {
            'encoder_input': encoder_input,
            'decoder_input': decoder_input,
            'target_label': target_label,
            'encoder_mask': encoder_mask,
            'decoder_mask': decoder_mask,
            'source_text': source_text,
            'target_text': target_text
        }


# The causal mask makes sure any token that comes after the current token is masked, meaning its value is replaced by negative infinity, which is converted to zero or close to zero by the softmax function.
# Hence the model simply ignores those values and cannot learn anything from them.
def causal_mask(size):
    # dimension of causal mask (batch_size, seq_len, seq_len)
    mask = torch.triu(torch.ones(1, size, size), diagonal=1).type(torch.int)
    return mask == 0


# Calculate the max sequence length in the entire training dataset for the source and target languages.
max_seq_len_source = 0
max_seq_len_target = 0

for data in raw_train_dataset["translation"]:
    enc_ids = tokenizer_en.encode(data["en"]).ids
    dec_ids = tokenizer_my.encode(data["ms"]).ids
    max_seq_len_source = max(max_seq_len_source, len(enc_ids))
    max_seq_len_target = max(max_seq_len_target, len(dec_ids))

print(f'max_seqlen_source: {max_seq_len_source}')   # 530
print(f'max_seqlen_target: {max_seq_len_target}')   # 526

# To simplify the training process, we'll just take a single max_seq_len and add 20 to cover the additional tokens such as PAD, CLS, SEP in the sequence.
max_seq_len = 550

# Instantiate the EncodeDataset class and create the encoded train and validation datasets.
train_dataset = EncodeDataset(raw_train_dataset["translation"], max_seq_len)
val_dataset = EncodeDataset(raw_validation_dataset["translation"], max_seq_len)

# Creating DataLoader wrappers for both the training and validation datasets. These dataloaders will be used later during training and validation of our LLM model.
train_dataloader = DataLoader(train_dataset, batch_size=10, shuffle=True, generator=torch.Generator(device='cuda'))
val_dataloader = DataLoader(val_dataset, batch_size=1, shuffle=True, generator=torch.Generator(device='cuda'))

Step 4: Input Embedding and Positional Encoding

Input Embedding: The sequence of token ids produced by the tokenizer in Step 2 is fed into the embedding layer. The embedding layer maps each token id to an embedding vector of dimension 512 (the value used in the Attention paper). The embedding vectors capture the semantic meaning of the tokens learned from the training data, and each dimension represents some feature associated with the token. For example, if the token is "dog", some dimensions might encode eyes, mouth, legs, height and so on. Plotted in an n-dimensional space, similar things such as dog and cat end up close to each other, while dissimilar ones such as school or home lie farther away.

Positional Encoding: One advantage of the Transformer architecture is that it processes all tokens of an input sequence in parallel, which cuts training time dramatically and also makes prediction faster. A drawback, however, is that with parallel processing the model has no notion of where each token sits in the sentence, and the position of a token can change the meaning or context of a sentence. To solve this, the paper introduces positional encoding: two mathematical functions (one sine, one cosine) applied across the 512 dimensions at each token position. The functions are given below.
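These are the positional encoding functions from the paper, where pos is the token position in the sequence and i indexes the embedding dimensions:

PE_{(pos,\,2i)} = \sin\left(\frac{pos}{10000^{2i/d_{model}}}\right)
PE_{(pos,\,2i+1)} = \cos\left(\frac{pos}{10000^{2i/d_{model}}}\right)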

The sine function is applied to the even dimensions and the cosine function to the odd dimensions of the embedding vector. Finally, the resulting positional encoding vector is added to the embedding vector. We now have an embedding that captures both the semantic meaning of the token and its position. Note that the positional encoding values are identical for every sequence.

# Input embedding and positional encoding
class EmbeddingLayer(nn.Module):
    def __init__(self, vocab_size: int, d_model: int):
        super().__init__()
        self.d_model = d_model

        # Using the pytorch embedding layer module to map a token id to the vocabulary and then convert it into an embedding vector.
        # The vocab_size is the vocabulary size of the training dataset created by the tokenizer in step 2.
        self.embedding = nn.Embedding(vocab_size, d_model)

    def forward(self, input):
        # Besides feeding the input sequence to the embedding layer, the extra multiplication by the square root of d_model is done to scale the embedding layer output.
        embedding_output = self.embedding(input) * math.sqrt(self.d_model)
        return embedding_output


class PositionalEncoding(nn.Module):
    def __init__(self, max_seq_len: int, d_model: int, dropout_rate: float):
        super().__init__()
        self.dropout = nn.Dropout(dropout_rate)

        # We're creating a matrix of the same shape as the embedding vector.
        pe = torch.zeros(max_seq_len, d_model)

        # Calculate the position part of the PE functions.
        pos = torch.arange(0, max_seq_len, dtype=torch.float).unsqueeze(1)

        # Calculate the division part of the PE functions. Note that the div term is written in exponential form, which is numerically equivalent to the paper's expression and more stable.
        div_term = torch.exp(torch.arange(0, d_model, 2).float() * (-math.log(10000.0) / d_model))

        # Fill the even and odd positions of the matrix with the sine and cosine function results.
        pe[:, 0::2] = torch.sin(pos * div_term)
        pe[:, 1::2] = torch.cos(pos * div_term)

        # Since we're expecting the input sequences in batches, an extra batch_size dimension is added at position 0.
        pe = pe.unsqueeze(0)

        # Register as a buffer so it is saved with the model but not treated as a learnable parameter.
        self.register_buffer('pe', pe)

    def forward(self, input_embedding):
        # Add positional encoding to the input embedding vector.
        input_embedding = input_embedding + (self.pe[:, :input_embedding.shape[1], :]).requires_grad_(False)

        # Perform dropout to prevent overfitting.
        return self.dropout(input_embedding)

Step 5: Multi-Head Attention Block

Just as the Transformer is the heart of an LLM, the self-attention mechanism is the heart of the Transformer architecture.

So why do we need self-attention at all? Let's answer that with a simple example.

Consider two sentences such as "I deposited cash at the bank" and "He sat on the bank of the river". The word "bank" clearly has two different meanings, yet its embedding value is identical in both sentences. That is not what we want; the embedding should change according to the context of the sentence. We therefore need a mechanism that can dynamically update the embedding values so that they carry a contextual meaning based on the overall sentence. The self-attention mechanism does exactly that.

If self-attention is already that good, why do we need multi-head self-attention? Let's look at another example to find out.

Take a sentence that describes what John did, when he did it and where. With plain (single-head) self-attention, the model might focus on only one aspect of the sentence, perhaps just the "what" aspect, capturing only "What did John do?", while other aspects such as "when" or "where" are just as important for the model to learn. We need a way for the self-attention mechanism to learn several relationships in the sentence at the same time, and this is where Multi-Head Self-Attention comes in (the terms multi-head self-attention and multi-head attention are used interchangeably). In multi-head attention the single-head embedding is split into multiple heads, so that each head looks at a different aspect of the sentence and learns accordingly, which is exactly what we want.

Now that we know why we need multi-head self-attention, let's see how it actually works.

If you are comfortable with matrix multiplication, the mechanism is easy to follow. Let's walk through the whole flow; the points below describe it step by step, from input to output.

1. First, we make 3 copies of the encoder input (the combination of input embedding and positional encoding that we built in Step 4) and name them Q, K and V. Each of them is simply a copy of the encoder input. The encoder input shape is (seq_len, d_model), where seq_len is the maximum sequence length and d_model is the embedding dimension, 512 in this case.

2. Next we matrix-multiply Q with the weight matrix W_q, K with W_k and V with W_v. Each weight matrix has shape (d_model, d_model), so the resulting query, key and value embeddings have shape (seq_len, d_model). The weights are randomly initialized by the model and updated once training starts. Why do we need these weight multiplications at all? Because they are the learnable parameters that allow the query, key and value embeddings to form better representations.

3. Following the attention paper, the number of heads is 8. Each of the query, key and value embeddings is split into 8 smaller units, each of shape (seq_len, d_model/num_heads), i.e. (seq_len, d_k), where d_k = d_model/num_heads (here 512/8 = 64).

4. Each query vector is dot-multiplied with the transposed key vectors of itself and of every other token in the sequence. The result is the attention score, which indicates how similar a given token is to every other token in the input sequence; the higher the score, the higher the similarity. (The full formula is shown right after this list.)

  • The attention scores are then divided by the square root of d_k to normalize the values across the matrix. Why divide by sqrt(d_k) rather than any other number? As the embedding dimension grows, the overall variance of the attention score matrix grows proportionally, and dividing by sqrt(d_k) balances out that increase. Without it, softmax would assign extremely high probabilities to large attention scores and extremely low probabilities to small ones; the model would then focus only on the features behind the high-probability scores and ignore the rest, which leads to vanishing gradients. So normalizing the attention score matrix is essential.
  • Before applying the softmax function, if the encoder mask is not None, the attention scores are combined with the mask. If the mask is a causal mask, the attention scores of tokens that come after the current token in the sequence are replaced with negative infinity. Softmax turns those negative-infinity values into values close to zero, so the model learns nothing from tokens that come after the current one. This is how we prevent future tokens from influencing what the model learns.

5. The softmax function is then applied to the attention score matrix and outputs a weight matrix of shape (seq_len, seq_len).

6. These weight matrices are then multiplied with the corresponding value embedding vectors, producing 8 attention heads of shape (seq_len, d_v), where d_v = d_model/num_heads.

7. Finally, all heads are concatenated back into a single head of shape (seq_len, d_model). This concatenated head is matrix-multiplied with the output weight matrix W_o of shape (d_model, d_model). The final multi-head attention output represents the contextual meaning of each word together with the ability to learn multiple aspects of the input sentence.
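Putting the list above into equations (written to match the implementation below, where the projected Q, K and V are split into 8 slices of dimension d_k):

\mathrm{head}_i = \mathrm{softmax}\left(\frac{Q_i K_i^{\top}}{\sqrt{d_k}}\right) V_i
\mathrm{MultiHead}(Q, K, V) = \mathrm{Concat}(\mathrm{head}_1, \ldots, \mathrm{head}_8)\, W_o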

Now, let's start coding the multi-head attention block.

class MultiHeadAttention(nn.Module):
    def __init__(self, d_model: int, num_heads: int, dropout_rate: float):
        super().__init__()

        # Define dropout to prevent overfitting.
        self.dropout = nn.Dropout(dropout_rate)

        # Weight matrices are introduced and are all learnable parameters.
        self.W_q = nn.Linear(d_model, d_model)
        self.W_k = nn.Linear(d_model, d_model)
        self.W_v = nn.Linear(d_model, d_model)
        self.W_o = nn.Linear(d_model, d_model)

        self.num_heads = num_heads
        assert d_model % num_heads == 0, "d_model must be divisible by number of heads"

        # d_k is the new dimension of each split self attention head.
        self.d_k = d_model // num_heads

    def forward(self, q, k, v, encoder_mask=None):
        # We'll be training our model with multiple batches of sequences at once in parallel, hence we need to include batch_size in the shape as well.
        # query, key and value are calculated by matrix multiplication of the corresponding weights with the input embeddings.
        # Change of shape: q(batch_size, seq_len, d_model) @ W_q(d_model, d_model) => query(batch_size, seq_len, d_model) [same goes for key and value].
        query = self.W_q(q)
        key = self.W_k(k)
        value = self.W_v(v)

        # Splitting query, key and value into the number of heads. d_model is split into d_k across 8 heads.
        # Change of shape: query(batch_size, seq_len, d_model) => query(batch_size, seq_len, num_heads, d_k) => query(batch_size, num_heads, seq_len, d_k) [same goes for key and value].
        query = query.view(query.shape[0], query.shape[1], self.num_heads, self.d_k).transpose(1, 2)
        key = key.view(key.shape[0], key.shape[1], self.num_heads, self.d_k).transpose(1, 2)
        value = value.view(value.shape[0], value.shape[1], self.num_heads, self.d_k).transpose(1, 2)

        # :: SELF ATTENTION BLOCK STARTS ::

        # Attention score is calculated to find the similarity or relation between the query and the keys of itself and all other embeddings in the sequence.
        # Change of shape: query(batch_size, num_heads, seq_len, d_k) @ key_transposed(batch_size, num_heads, d_k, seq_len) => attention_score(batch_size, num_heads, seq_len, seq_len).
        attention_score = (query @ key.transpose(-2, -1)) / math.sqrt(self.d_k)

        # If a mask is provided, the attention score needs to be modified as per the mask value. Refer to the details in point no 4.
        if encoder_mask is not None:
            attention_score = attention_score.masked_fill(encoder_mask == 0, -1e9)

        # Softmax calculates the probability distribution among all the attention scores. It assigns higher probability to higher attention scores, meaning more similar tokens get higher probability values.
        # Change of shape: same as attention_score.
        attention_weight = torch.softmax(attention_score, dim=-1)

        if self.dropout is not None:
            attention_weight = self.dropout(attention_weight)

        # The final step in the self attention block is matrix multiplication of attention_weight with the value embedding vector.
        # Change of shape: attention_weight(batch_size, num_heads, seq_len, seq_len) @ value(batch_size, num_heads, seq_len, d_k) => attention_output(batch_size, num_heads, seq_len, d_k)
        attention_output = attention_weight @ value

        # :: SELF ATTENTION BLOCK ENDS ::

        # Now, all the heads are combined back into a single head.
        # Change of shape: attention_output(batch_size, num_heads, seq_len, d_k) => attention_output(batch_size, seq_len, num_heads, d_k) => attention_output(batch_size, seq_len, d_model)
        attention_output = attention_output.transpose(1, 2).contiguous().view(attention_output.shape[0], -1, self.num_heads * self.d_k)

        # Finally attention_output is matrix-multiplied with the output weight matrix to give the final multi-head attention output.
        # The shape of multihead_output is the same as the embedding input.
        # Change of shape: attention_output(batch_size, seq_len, d_model) @ W_o(d_model, d_model) => multihead_output(batch_size, seq_len, d_model)
        multihead_output = self.W_o(attention_output)

        return multihead_output

Step 6: Feedforward Network, Layer Normalization and Add&Norm

Feedforward Network: The feedforward network is a small fully connected network with two linear layers: the first expands the embedding from d_model to d_ff nodes and the second projects it back from d_ff to d_model (the values are those assigned in the Attention paper). A ReLU activation is applied to the output of the first linear layer to give the embedding values non-linearity, and dropout is applied to further avoid overfitting.

LayerNorm: We apply layer normalization to the embedding values to keep the distribution of values across the embedding vectors in the network consistent, which keeps learning stable. Extra learnable parameters called gamma and beta are used to scale and shift the embedding values as the network requires.

Add&Norm: This combines a residual (skip) connection with layer normalization (explained above). During the forward pass, the residual connection makes sure that features from earlier layers are still remembered later on and contribute to the output where needed. Likewise, during backpropagation it helps prevent vanishing gradients by giving the gradient a shorter path at each stage. Add&Norm is used twice in the encoder block and three times in the decoder block: it takes the input, normalizes it, feeds it through the sub-layer (attention or feedforward), applies dropout and adds the result back to the original input.
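In equation form, the three blocks implemented below are as follows (mu and sigma are the mean and standard deviation over the embedding dimension, epsilon is a small constant; the LayerNorm expression is written exactly as the code computes it):

\mathrm{FFN}(x) = \mathrm{ReLU}(x W_1 + b_1)\, W_2 + b_2
\mathrm{LayerNorm}(x) = \gamma \cdot \frac{x - \mu}{\sigma + \epsilon} + \beta
\mathrm{AddAndNorm}(x) = x + \mathrm{Dropout}(\mathrm{SubLayer}(\mathrm{LayerNorm}(x)))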

# Feedforward Network, Layer Normalization and AddAndNorm Block
class FeedForward(nn.Module):
    def __init__(self, d_model: int, d_ff: int, dropout_rate: float):
        super().__init__()
        self.layer_1 = nn.Linear(d_model, d_ff)
        self.activation_1 = nn.ReLU()
        self.dropout = nn.Dropout(dropout_rate)
        self.layer_2 = nn.Linear(d_ff, d_model)

    def forward(self, input):
        return self.layer_2(self.dropout(self.activation_1(self.layer_1(input))))


class LayerNorm(nn.Module):
    def __init__(self, eps: float = 1e-5):
        super().__init__()
        # Epsilon is a very small value and it plays an important role in preventing a potential division-by-zero problem.
        self.eps = eps
        # Extra learning parameters gamma and beta are introduced to scale and shift the embedding value as the network needs.
        self.gamma = nn.Parameter(torch.ones(1))
        self.beta = nn.Parameter(torch.zeros(1))

    def forward(self, input):
        mean = input.mean(dim=-1, keepdim=True)
        std = input.std(dim=-1, keepdim=True)
        return self.gamma * ((input - mean) / (std + self.eps)) + self.beta


class AddAndNorm(nn.Module):
    def __init__(self, dropout_rate: float):
        super().__init__()
        self.dropout = nn.Dropout(dropout_rate)
        self.layer_norm = LayerNorm()

    def forward(self, input, sub_layer):
        return input + self.dropout(sub_layer(self.layer_norm(input)))

Step 7: Encoder Block and Encoder

Encoder Block: The encoder block contains two main components, multi-head attention and a feedforward network, plus two Add & Norm units. We first assemble all of these components into an EncoderBlock class, following the flow in the Attention paper. According to the paper, this encoder block is repeated 6 times.

Encoder: We then create an additional class called Encoder, which takes a list of EncoderBlocks, stacks them, and returns the final encoder output.

class EncoderBlock(nn.Module):
    def __init__(self, multihead_attention: MultiHeadAttention, feed_forward: FeedForward, dropout_rate: float):
        super().__init__()
        self.multihead_attention = multihead_attention
        self.feed_forward = feed_forward
        self.add_and_norm_list = nn.ModuleList([AddAndNorm(dropout_rate) for _ in range(2)])

    def forward(self, encoder_input, encoder_mask):
        # First AddAndNorm unit taking encoder input from the skip connection and adding it with the output of the MultiHead attention block.
        encoder_input = self.add_and_norm_list[0](encoder_input, lambda encoder_input: self.multihead_attention(encoder_input, encoder_input, encoder_input, encoder_mask))

        # Second AddAndNorm unit taking the output of the MultiHead attention block from the skip connection and adding it with the output of the Feedforward layer.
        encoder_input = self.add_and_norm_list[1](encoder_input, self.feed_forward)
        return encoder_input


class Encoder(nn.Module):
    def __init__(self, encoderblocklist: nn.ModuleList):
        super().__init__()
        # Encoder class is initialized by taking the encoder block list.
        self.encoderblocklist = encoderblocklist
        self.layer_norm = LayerNorm()

    def forward(self, encoder_input, encoder_mask):
        # Looping through all the encoder blocks - 6 times.
        for encoderblock in self.encoderblocklist:
            encoder_input = encoderblock(encoder_input, encoder_mask)

        # Normalize the final encoder block output and return. This encoder output will be used later on as key and value for the cross attention in the decoder block.
        encoder_output = self.layer_norm(encoder_input)
        return encoder_output

Step 8: Decoder Block, Decoder and Projection Layer

Decoder Block: The decoder block has three main components: masked multi-head attention, multi-head (cross) attention and a feedforward network. It also has three Add & Norm units. We assemble these components into a DecoderBlock class, following the flow in the Attention paper. According to the paper, the decoder block is repeated 6 times.

Decoder: We then create an additional class called Decoder, which takes a list of DecoderBlocks, stacks them, and returns the final decoder output.

There are two kinds of multi-head attention inside the decoder block. The first is masked multi-head attention, which takes the decoder input as query, key and value together with the decoder mask (also known as the causal mask). The causal mask prevents the model from looking at embeddings that come later in the sequence order; detailed explanations of how it works are given in Steps 3 and 5. The second is cross attention, which takes the decoder input as the query while the key and value come from the encoder output; it is computed in the same way as self-attention.

Projection Layer: The final decoder output is passed to the projection layer. There, the decoder output is first fed into a linear layer, where the shape of the embedding changes as shown in the code section below. A softmax function then turns the output into a probability distribution over the vocabulary, and the token with the highest probability is chosen as the predicted output.

class DecoderBlock(nn.Module):
    def __init__(self, masked_multihead_attention: MultiHeadAttention, multihead_attention: MultiHeadAttention, feed_forward: FeedForward, dropout_rate: float):
        super().__init__()
        self.masked_multihead_attention = masked_multihead_attention
        self.multihead_attention = multihead_attention
        self.feed_forward = feed_forward
        self.add_and_norm_list = nn.ModuleList([AddAndNorm(dropout_rate) for _ in range(3)])

    def forward(self, decoder_input, decoder_mask, encoder_output, encoder_mask):
        # First AddAndNorm unit taking decoder input from the skip connection and adding it with the output of the Masked Multi-Head attention block.
        decoder_input = self.add_and_norm_list[0](decoder_input, lambda decoder_input: self.masked_multihead_attention(decoder_input, decoder_input, decoder_input, decoder_mask))

        # Second AddAndNorm unit taking the output of the Masked Multi-Head attention block from the skip connection and adding it with the output of the MultiHead attention block (cross attention).
        decoder_input = self.add_and_norm_list[1](decoder_input, lambda decoder_input: self.multihead_attention(decoder_input, encoder_output, encoder_output, encoder_mask))

        # Third AddAndNorm unit taking the output of the MultiHead attention block from the skip connection and adding it with the output of the Feedforward layer.
        decoder_input = self.add_and_norm_list[2](decoder_input, self.feed_forward)
        return decoder_input


class Decoder(nn.Module):
    def __init__(self, decoderblocklist: nn.ModuleList):
        super().__init__()
        self.decoderblocklist = decoderblocklist
        self.layer_norm = LayerNorm()

    def forward(self, decoder_input, decoder_mask, encoder_output, encoder_mask):
        for decoderblock in self.decoderblocklist:
            decoder_input = decoderblock(decoder_input, decoder_mask, encoder_output, encoder_mask)

        decoder_output = self.layer_norm(decoder_input)
        return decoder_output


class ProjectionLayer(nn.Module):
    def __init__(self, vocab_size: int, d_model: int):
        super().__init__()
        self.projection_layer = nn.Linear(d_model, vocab_size)

    def forward(self, decoder_output):
        # Projection layer first takes in the decoder output and passes it into the linear layer of shape (d_model, vocab_size).
        # Change in shape: decoder_output(batch_size, seq_len, d_model) @ linear_layer(d_model, vocab_size) => output(batch_size, seq_len, vocab_size)
        output = self.projection_layer(decoder_output)

        # softmax function to output the (log) probability distribution over the vocabulary
        return torch.log_softmax(output, dim=-1)

Step 9: Create and build the Transformer

Finally, we have built all the component blocks of the Transformer architecture. The only outstanding task is to assemble them together.

First, we create a Transformer class that initializes instances of all the component classes. Inside it we define an encode function, which performs all the tasks of the encoder part of the Transformer and produces the encoder output.

Second, we define a decode function, which performs all the tasks of the decoder part of the Transformer and produces the decoder output.

Third, we define a project function, which takes the decoder output and maps it to the vocabulary for prediction.

Now the Transformer architecture is ready. We can build our translation LLM by defining a build_model function that takes all the necessary parameters, as given in the code below.

class Transformer(nn.Module):
    def __init__(self, source_embed: EmbeddingLayer, target_embed: EmbeddingLayer, positional_encoding: PositionalEncoding, multihead_attention: MultiHeadAttention, masked_multihead_attention: MultiHeadAttention, feed_forward: FeedForward, encoder: Encoder, decoder: Decoder, projection_layer: ProjectionLayer, dropout_rate: float):
        super().__init__()

        # Initialize instances of all the component classes of the transformer architecture.
        self.source_embed = source_embed
        self.target_embed = target_embed
        self.positional_encoding = positional_encoding
        self.multihead_attention = multihead_attention
        self.masked_multihead_attention = masked_multihead_attention
        self.feed_forward = feed_forward
        self.encoder = encoder
        self.decoder = decoder
        self.projection_layer = projection_layer
        self.dropout = nn.Dropout(dropout_rate)

    # Encode function takes in the encoder input, does the necessary processing inside all encoder blocks and gives the encoder output.
    def encode(self, encoder_input, encoder_mask):
        encoder_input = self.source_embed(encoder_input)
        encoder_input = self.positional_encoding(encoder_input)
        encoder_output = self.encoder(encoder_input, encoder_mask)
        return encoder_output

    # Decode function takes in the decoder input, does the necessary processing inside all decoder blocks and gives the decoder output.
    def decode(self, decoder_input, decoder_mask, encoder_output, encoder_mask):
        decoder_input = self.target_embed(decoder_input)
        decoder_input = self.positional_encoding(decoder_input)
        decoder_output = self.decoder(decoder_input, decoder_mask, encoder_output, encoder_mask)
        return decoder_output

    # Project function takes the decoder output into its projection layer and maps the output to the vocabulary for prediction.
    def project(self, decoder_output):
        return self.projection_layer(decoder_output)


def build_model(source_vocab_size, target_vocab_size, max_seq_len=1135, d_model=512, d_ff=2048, num_heads=8, num_blocks=6, dropout_rate=0.1):
    # Define and assign all the parameter values needed for the transformer architecture.
    source_embed = EmbeddingLayer(source_vocab_size, d_model)
    target_embed = EmbeddingLayer(target_vocab_size, d_model)
    positional_encoding = PositionalEncoding(max_seq_len, d_model, dropout_rate)
    multihead_attention = MultiHeadAttention(d_model, num_heads, dropout_rate)
    masked_multihead_attention = MultiHeadAttention(d_model, num_heads, dropout_rate)
    feed_forward = FeedForward(d_model, d_ff, dropout_rate)
    projection_layer = ProjectionLayer(target_vocab_size, d_model)

    encoder_block = EncoderBlock(multihead_attention, feed_forward, dropout_rate)
    decoder_block = DecoderBlock(masked_multihead_attention, multihead_attention, feed_forward, dropout_rate)

    # Note: the same block instance is appended repeatedly, so the 6 stacked blocks share their weights.
    encoderblocklist = []
    decoderblocklist = []

    for _ in range(num_blocks):
        encoderblocklist.append(encoder_block)

    for _ in range(num_blocks):
        decoderblocklist.append(decoder_block)

    encoderblocklist = nn.ModuleList(encoderblocklist)
    decoderblocklist = nn.ModuleList(decoderblocklist)

    encoder = Encoder(encoderblocklist)
    decoder = Decoder(decoderblocklist)

    # Instantiate the transformer class by providing all the parameter values.
    model = Transformer(source_embed, target_embed, positional_encoding, multihead_attention, masked_multihead_attention, feed_forward, encoder, decoder, projection_layer, dropout_rate)

    for param in model.parameters():
        if param.dim() > 1:
            nn.init.xavier_uniform_(param)

    return model


# Finally, call build_model and assign it to the model variable, moving it to the training device (GPU if available).
# This model is now fully ready to train and validate on our dataset.
# After training and validation, we can perform new translation tasks using this very model.
model = build_model(source_vocab_size, target_vocab_size).to(device)

Step 10: Training and validation of our LLM model

Now it is time to train our model. Training is straightforward: we use the training DataLoader created in Step 3. Since the training set has 1 million pairs, I strongly recommend training on a GPU; it took me about 5 hours to complete 20 epochs. After every epoch we save the model weights together with the optimizer state, so that training can resume from where it stopped instead of starting from scratch.

After every epoch we run validation with the validation DataLoader. The validation set has 2,000 examples, which is reasonable. During validation we only need to compute the encoder output once and keep reusing it until the decoder produces the end-of-sentence token [SEP]; recomputing the same encoder output at every decoding step would be pointless.

The decoder input starts with the start-of-sentence token [CLS]. After each prediction, the newly generated token is appended to the decoder input, and this continues until the end-of-sentence token [SEP] is produced. Finally, the projection layer maps the output back to its text representation.

def training_model(preload_epoch=None):
    # The entire training and validation cycle will run for 20 epochs.
    EPOCHS = 20
    initial_epoch = 0
    global_step = 0

    # Adam is one of the most commonly used optimization algorithms; it holds the current state and updates the parameters based on the computed gradients.
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

    # If preload_epoch is not None, training resumes with the model weights and optimizer state that were last saved. The new epoch number will be preload_epoch + 1.
    if preload_epoch is not None:
        model_filename = f"./malaygpt/model_{preload_epoch}.pt"
        state = torch.load(model_filename)
        model.load_state_dict(state['model_state_dict'])
        initial_epoch = state['epoch'] + 1
        optimizer.load_state_dict(state['optimizer_state_dict'])
        global_step = state['global_step']

    # The CrossEntropyLoss loss function computes the difference between the projection output and the target label.
    # The target labels come from the Malay tokenizer, so its [PAD] id is ignored.
    loss_fn = nn.CrossEntropyLoss(ignore_index=tokenizer_my.token_to_id("[PAD]"), label_smoothing=0.1).to(device)

    for epoch in range(initial_epoch, EPOCHS):

        # ::: Start of Training block :::
        model.train()

        # training with the training dataloader prepared in step 3.
        for batch in tqdm(train_dataloader):
            encoder_input = batch['encoder_input'].to(device)    # (batch_size, seq_len)
            decoder_input = batch['decoder_input'].to(device)    # (batch_size, seq_len)
            target_label = batch['target_label'].to(device)      # (batch_size, seq_len)
            encoder_mask = batch['encoder_mask'].to(device)
            decoder_mask = batch['decoder_mask'].to(device)

            encoder_output = model.encode(encoder_input, encoder_mask)
            decoder_output = model.decode(decoder_input, decoder_mask, encoder_output, encoder_mask)
            projection_output = model.project(decoder_output)    # (batch_size, seq_len, vocab_size)

            loss = loss_fn(projection_output.view(-1, projection_output.shape[-1]), target_label.view(-1))

            # backward pass
            optimizer.zero_grad()
            loss.backward()

            # update weights
            optimizer.step()

            global_step += 1

        print(f'Epoch [{epoch+1}/{EPOCHS}]: Train Loss: {loss.item():.2f}')

        # save the state of the model after every epoch
        model_filename = f"./malaygpt/model_{epoch}.pt"
        torch.save({
            'epoch': epoch,
            'model_state_dict': model.state_dict(),
            'optimizer_state_dict': optimizer.state_dict(),
            'global_step': global_step
        }, model_filename)
        # ::: End of Training block :::

        # ::: Start of Validation block :::
        model.eval()
        with torch.inference_mode():
            for batch in tqdm(val_dataloader):
                encoder_input = batch['encoder_input'].to(device)    # (batch_size, seq_len)
                encoder_mask = batch['encoder_mask'].to(device)
                source_text = batch['source_text']
                target_text = batch['target_text']

                # Computing the output of the encoder for the source sequence only once.
                encoder_output = model.encode(encoder_input, encoder_mask)

                # For the prediction task, the first token that goes into the decoder input is the [CLS] token.
                decoder_input = torch.empty(1, 1).fill_(tokenizer_my.token_to_id('[CLS]')).type_as(encoder_input).to(device)

                # Keep adding the output back to the input until the [SEP] end token is produced.
                while True:
                    # check if the max length is reached, if it is, then we stop.
                    if decoder_input.size(1) == max_seq_len:
                        break

                    # Recreate the mask each time a new token is added to the decoder input for next token prediction.
                    decoder_mask = causal_mask(decoder_input.size(1)).type_as(encoder_mask).to(device)
                    decoder_output = model.decode(decoder_input, decoder_mask, encoder_output, encoder_mask)

                    # Apply projection only to the last token.
                    projection = model.project(decoder_output[:, -1])

                    # Select the token with the highest probability, which is called greedy search.
                    _, new_token = torch.max(projection, dim=1)
                    new_token = torch.empty(1, 1).type_as(encoder_input).fill_(new_token.item()).to(device)

                    # Add the new token back to the decoder input.
                    decoder_input = torch.cat([decoder_input, new_token], dim=1)

                    # Stop if the new token is the end-of-sentence token [SEP].
                    if new_token == tokenizer_my.token_to_id('[SEP]'):
                        break

                # Assign the decoder output as the fully appended decoder input.
                decoder_output = decoder_input.squeeze(0)
                model_predicted_text = tokenizer_my.decode(decoder_output.detach().cpu().numpy())

                print(f'SOURCE TEXT: {source_text}')
                print(f'TARGET TEXT: {target_text}')
                print(f'PREDICTED TEXT: {model_predicted_text}')
        # ::: End of Validation block :::


# This function runs the training and validation for 20 epochs.
training_model(preload_epoch=None)

Step 11: Create a function to test the model on new translation tasks

We will give our translation function a general-purpose name: malaygpt. It takes raw English text entered by the user and returns the translated text in Malay. Let's run the function and give it a try.

def malaygpt(user_input_text):
    model.eval()
    with torch.inference_mode():
        user_input_text = user_input_text.strip()
        user_input_text_encoded = torch.tensor(tokenizer_en.encode(user_input_text).ids, dtype=torch.int64).to(device)

        num_source_padding = max_seq_len - len(user_input_text_encoded) - 2
        encoder_padding = torch.tensor([PAD_ID] * num_source_padding, dtype=torch.int64).to(device)

        # Build the encoder input and add a batch dimension so the shapes match what the model expects.
        encoder_input = torch.cat([CLS_ID, user_input_text_encoded, SEP_ID, encoder_padding]).unsqueeze(0).to(device)
        encoder_mask = (encoder_input != PAD_ID).unsqueeze(0).unsqueeze(0).int().to(device)

        # Computing the output of the encoder for the source sequence.
        encoder_output = model.encode(encoder_input, encoder_mask)

        # For the prediction task, the first token that goes into the decoder input is the [CLS] token.
        decoder_input = torch.empty(1, 1).fill_(tokenizer_my.token_to_id('[CLS]')).type_as(encoder_input).to(device)

        # Keep adding the output back to the input until the [SEP] end token is produced.
        while True:
            # check if the max length is reached
            if decoder_input.size(1) == max_seq_len:
                break

            # recreate the mask each time a new token is added to the decoder input for next token prediction
            decoder_mask = causal_mask(decoder_input.size(1)).type_as(encoder_mask).to(device)
            decoder_output = model.decode(decoder_input, decoder_mask, encoder_output, encoder_mask)

            # apply projection only to the last token
            projection = model.project(decoder_output[:, -1])

            # select the token with the highest probability, which is a greedy search implementation
            _, new_token = torch.max(projection, dim=1)
            new_token = torch.empty(1, 1).type_as(encoder_input).fill_(new_token.item()).to(device)

            # add the new token back to the decoder input
            decoder_input = torch.cat([decoder_input, new_token], dim=1)

            # check if the new token is the end token [SEP]
            if new_token == tokenizer_my.token_to_id('[SEP]'):
                break

        # the final decoder output is the concatenated decoder input up to the end token
        decoder_output = decoder_input.squeeze(0)
        model_predicted_text = tokenizer_my.decode(decoder_output.detach().cpu().numpy())
        return model_predicted_text

Time to test! Let's run a few translations.
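A call as simple as the one below is enough to try it out. The input sentence here is just an arbitrary example, and the quality of the output depends entirely on how well your model has trained, so no sample translation is shown:

# Translate a hypothetical English sentence into Malay with the trained model.
print(malaygpt("How are you today?"))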
