Parsing the model.safetensors.index.json File of the Gemma2 2B Model

When working with the Gemma2 2B model or other large pretrained models, the model.safetensors.index.json file acts as an index: it tells us how the model is structured, how its parameters are stored, and how to load the actual weights. This post takes a close look at the file's contents and purpose.
The files downloaded locally are shown below:

(Figure: directory listing of the downloaded Gemma2 2B model files)

1. Overview of the File Structure

The model.safetensors.index.json file has two key parts:

  1. metadata: holds the model's total size.
  2. weight_map: maps each model parameter to the file that actually stores it.

Example content:

{
  "metadata": {
    "total_size": 10457367552
  },
  "weight_map": {
    "model.embed_tokens.weight": "model-00001-of-00003.safetensors",
    "model.layers.0.input_layernorm.weight": "model-00001-of-00003.safetensors",
    "model.layers.10.mlp.down_proj.weight": "model-00002-of-00003.safetensors"
  }
}
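
Since the index is plain JSON, it is easy to inspect programmatically. The short sketch below (file paths are an assumption; adjust to wherever the model was downloaded) loads the index and reports the total size and the set of shard files:

import json

# Load the index file and summarize it. Assumes the file sits in the
# current working directory.
with open("model.safetensors.index.json", encoding="utf-8") as f:
    index = json.load(f)

print("total_size:", index["metadata"]["total_size"], "bytes")
print("shards:", sorted(set(index["weight_map"].values())))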

2. Parsing the Metadata

total_size

  • Purpose: the total size, in bytes, of all model parameter files combined.
  • Example: 10457367552 bytes, roughly 10.46 GB, which is consistent with about 2.6 billion parameters stored as 32-bit floats.
  • Significance (a verification sketch follows this list):
    1. Helps users estimate storage requirements.
    2. Lets you check that a download is complete by comparing against the expected size.
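
As a concrete use of total_size, here is a hedged sketch of an integrity check, assuming the index and all shards sit in the current directory. Note that safetensors files also carry a small JSON header, so the on-disk total can slightly exceed total_size; treat this as a sanity test rather than an exact match:

import json
import os

# Compare the metadata's total_size against the combined on-disk size of
# every shard referenced by the weight_map.
with open("model.safetensors.index.json") as f:
    index = json.load(f)

expected = index["metadata"]["total_size"]
shards = set(index["weight_map"].values())
on_disk = sum(os.path.getsize(s) for s in shards)

print(f"expected at least {expected} bytes, found {on_disk} bytes on disk")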

3. Parsing the Weight Map

weight_map

  • Purpose:
    Maps each layer's parameters to a specific .safetensors file.
  • Format:
    • Key: the parameter name, identifying where the weight sits in the model.
    • Value: the .safetensors file that stores that weight.
  • Example entries:
    • model.embed_tokens.weight: the embedding-layer weights, stored in model-00001-of-00003.safetensors.
    • model.layers.0.input_layernorm.weight: the input LayerNorm weights of layer 0, in model-00001-of-00003.safetensors.
    • model.layers.10.mlp.down_proj.weight: the down-projection matrix of layer 10's MLP, in model-00002-of-00003.safetensors.

Uses

  1. Distributed storage: a large model is split into several smaller files that are easier to manage and load.
  2. Incremental updates: individual parts can be updated without rewriting the whole model.
  3. On-demand loading: only the parts of the model you actually need are loaded (see the sketch after this list).
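
A minimal sketch of what on-demand loading looks like at the index level: given a handful of parameter names (chosen here purely for illustration), group them by shard so each file only needs to be opened once:

import json
from collections import defaultdict

with open("model.safetensors.index.json") as f:
    weight_map = json.load(f)["weight_map"]

# Hypothetical subset of tensors we want to load.
wanted = ["model.embed_tokens.weight", "model.norm.weight"]

by_shard = defaultdict(list)
for name in wanted:
    by_shard[weight_map[name]].append(name)

for shard, names in by_shard.items():
    print(shard, "->", names)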

4. The Model Sharding Mechanism

Why shard at all?

  1. Storage limits: a single oversized file can exceed file-system or hosting limits.
  2. Loading efficiency: shards can be loaded on demand, improving memory utilization.
  3. Distributed training: multiple GPUs or nodes can process different parameter shards in parallel.

How is a shard located?

  • File naming convention: model-<index>-of-<total>.safetensors
    • model-00001-of-00003.safetensors is the 1st of 3 shards.
  • The index file guarantees a one-to-one mapping from parameter name to file name, as the sketch below demonstrates.
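
Putting the two together, a single tensor can be located via the index and read without loading the rest of its shard. This sketch uses safe_open from the safetensors library; the parameter name comes from the appendix, and local files are assumed:

import json
from safetensors import safe_open

with open("model.safetensors.index.json") as f:
    weight_map = json.load(f)["weight_map"]

name = "model.layers.10.mlp.down_proj.weight"
shard = weight_map[name]  # -> "model-00002-of-00003.safetensors"

# Read just this one tensor from the shard.
with safe_open(shard, framework="pt", device="cpu") as f:
    tensor = f.get_tensor(name)

print(name, tuple(tensor.shape))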

5. A Brief Look at the Safetensors Format

Advantages

  1. Safety: unlike pickle-based formats, it cannot embed executable code, so weight files load safely.
  2. Efficiency: a binary format that supports fast reads and writes.
  3. Cross-platform compatibility: works in both CPU and GPU environments.

Loading example

from safetensors.torch import load_file

# Load one specific shard
weights = load_file("model-00001-of-00003.safetensors")
print(weights.keys())
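
load_file reads an entire shard at once; it also accepts a device argument, so a shard can be materialized directly on a GPU. A short, hedged sketch (it falls back to CPU when CUDA is unavailable):

import torch
from safetensors.torch import load_file

# Pick a target device at runtime.
device = "cuda:0" if torch.cuda.is_available() else "cpu"
weights = load_file("model-00001-of-00003.safetensors", device=device)
print(len(weights), "tensors loaded onto", device)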

6. Practical Scenarios

1. The model loading process

  1. Read the shard information from model.safetensors.index.json.
  2. Load the required shards onto the GPU as needed, reducing memory pressure.
  3. Merge the loaded parameters to reconstruct the full model (a sketch follows this list).
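
In day-to-day use, transformers' from_pretrained performs all three steps automatically. The hedged sketch below reproduces the merge step by hand, assuming all three shards are present locally; the resulting state_dict could then be fed to a matching model's load_state_dict:

import json
from safetensors.torch import load_file

with open("model.safetensors.index.json") as f:
    index = json.load(f)

# Merge every shard into a single state_dict, mirroring what
# from_pretrained does behind the scenes.
state_dict = {}
for shard in sorted(set(index["weight_map"].values())):
    state_dict.update(load_file(shard))

print("merged", len(state_dict), "tensors")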

2. File consistency checks

  • Use total_size to verify that the combined size of the downloaded files is as expected, ensuring data integrity (see the sketch in section 2).

3. Fine-tuning specific parameters

  • You can load only the weights of the layers you intend to fine-tune, skipping parameters you don't need, as sketched below.
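
A hedged sketch of such selective loading: pull in only the tensors belonging to one layer (layer 25 is a purely illustrative choice), grouped by shard so each file is opened once:

import json
from safetensors import safe_open

with open("model.safetensors.index.json") as f:
    weight_map = json.load(f)["weight_map"]

layer_prefix = "model.layers.25."  # hypothetical layer of interest
by_shard = {}
for name, shard in weight_map.items():
    if name.startswith(layer_prefix):
        by_shard.setdefault(shard, []).append(name)

layer_weights = {}
for shard, names in by_shard.items():
    with safe_open(shard, framework="pt", device="cpu") as f:
        for name in names:
            layer_weights[name] = f.get_tensor(name)

print(sorted(layer_weights))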

7. Summary

The model.safetensors.index.json file is a key tool for managing the weights of large models, and it suits a deep multi-layer network like Gemma2 2B particularly well. Parsing it reveals the model's storage layout, its parameter sharding strategy, and how to load and manage the weights efficiently.

Key takeaways

  1. The metadata section provides the total size, which helps with storage planning and integrity checks.
  2. The weight_map section records exactly which file stores each parameter, enabling flexible loading.
  3. The safetensors format improves loading speed and safety, making it well suited to distributed deployment of large models.

I hope this post helps you better understand the role and inner workings of the model.safetensors.index.json file and supports your model development and deployment work!

Postscript

Completed in Shanghai at 13:45 on December 30, 2024, with the assistance of the GPT-4o large model.

Appendix

Below is the complete model.safetensors.index.json file for the Gemma2 2B model:

{
  "metadata": {
    "total_size": 10457367552
  },
  "weight_map": {
    "model.embed_tokens.weight": "model-00001-of-00003.safetensors",
    "model.layers.0.input_layernorm.weight": "model-00001-of-00003.safetensors",
    "model.layers.0.mlp.down_proj.weight": "model-00001-of-00003.safetensors",
    "model.layers.0.mlp.gate_proj.weight": "model-00001-of-00003.safetensors",
    "model.layers.0.mlp.up_proj.weight": "model-00001-of-00003.safetensors",
    "model.layers.0.post_attention_layernorm.weight": "model-00001-of-00003.safetensors",
    "model.layers.0.post_feedforward_layernorm.weight": "model-00001-of-00003.safetensors",
    "model.layers.0.pre_feedforward_layernorm.weight": "model-00001-of-00003.safetensors",
    "model.layers.0.self_attn.k_proj.weight": "model-00001-of-00003.safetensors",
    "model.layers.0.self_attn.o_proj.weight": "model-00001-of-00003.safetensors",
    "model.layers.0.self_attn.q_proj.weight": "model-00001-of-00003.safetensors",
    "model.layers.0.self_attn.v_proj.weight": "model-00001-of-00003.safetensors",
    "model.layers.1.input_layernorm.weight": "model-00001-of-00003.safetensors",
    "model.layers.1.mlp.down_proj.weight": "model-00001-of-00003.safetensors",
    "model.layers.1.mlp.gate_proj.weight": "model-00001-of-00003.safetensors",
    "model.layers.1.mlp.up_proj.weight": "model-00001-of-00003.safetensors",
    "model.layers.1.post_attention_layernorm.weight": "model-00001-of-00003.safetensors",
    "model.layers.1.post_feedforward_layernorm.weight": "model-00001-of-00003.safetensors",
    "model.layers.1.pre_feedforward_layernorm.weight": "model-00001-of-00003.safetensors",
    "model.layers.1.self_attn.k_proj.weight": "model-00001-of-00003.safetensors",
    "model.layers.1.self_attn.o_proj.weight": "model-00001-of-00003.safetensors",
    "model.layers.1.self_attn.q_proj.weight": "model-00001-of-00003.safetensors",
    "model.layers.1.self_attn.v_proj.weight": "model-00001-of-00003.safetensors",
    "model.layers.10.input_layernorm.weight": "model-00002-of-00003.safetensors",
    "model.layers.10.mlp.down_proj.weight": "model-00002-of-00003.safetensors",
    "model.layers.10.mlp.gate_proj.weight": "model-00002-of-00003.safetensors",
    "model.layers.10.mlp.up_proj.weight": "model-00002-of-00003.safetensors",
    "model.layers.10.post_attention_layernorm.weight": "model-00002-of-00003.safetensors",
    "model.layers.10.post_feedforward_layernorm.weight": "model-00002-of-00003.safetensors",
    "model.layers.10.pre_feedforward_layernorm.weight": "model-00002-of-00003.safetensors",
    "model.layers.10.self_attn.k_proj.weight": "model-00002-of-00003.safetensors",
    "model.layers.10.self_attn.o_proj.weight": "model-00002-of-00003.safetensors",
    "model.layers.10.self_attn.q_proj.weight": "model-00002-of-00003.safetensors",
    "model.layers.10.self_attn.v_proj.weight": "model-00002-of-00003.safetensors",
    "model.layers.11.input_layernorm.weight": "model-00002-of-00003.safetensors",
    "model.layers.11.mlp.down_proj.weight": "model-00002-of-00003.safetensors",
    "model.layers.11.mlp.gate_proj.weight": "model-00002-of-00003.safetensors",
    "model.layers.11.mlp.up_proj.weight": "model-00002-of-00003.safetensors",
    "model.layers.11.post_attention_layernorm.weight": "model-00002-of-00003.safetensors",
    "model.layers.11.post_feedforward_layernorm.weight": "model-00002-of-00003.safetensors",
    "model.layers.11.pre_feedforward_layernorm.weight": "model-00002-of-00003.safetensors",
    "model.layers.11.self_attn.k_proj.weight": "model-00002-of-00003.safetensors",
    "model.layers.11.self_attn.o_proj.weight": "model-00002-of-00003.safetensors",
    "model.layers.11.self_attn.q_proj.weight": "model-00002-of-00003.safetensors",
    "model.layers.11.self_attn.v_proj.weight": "model-00002-of-00003.safetensors",
    "model.layers.12.input_layernorm.weight": "model-00002-of-00003.safetensors",
    "model.layers.12.mlp.down_proj.weight": "model-00002-of-00003.safetensors",
    "model.layers.12.mlp.gate_proj.weight": "model-00002-of-00003.safetensors",
    "model.layers.12.mlp.up_proj.weight": "model-00002-of-00003.safetensors",
    "model.layers.12.post_attention_layernorm.weight": "model-00002-of-00003.safetensors",
    "model.layers.12.post_feedforward_layernorm.weight": "model-00002-of-00003.safetensors",
    "model.layers.12.pre_feedforward_layernorm.weight": "model-00002-of-00003.safetensors",
    "model.layers.12.self_attn.k_proj.weight": "model-00002-of-00003.safetensors",
    "model.layers.12.self_attn.o_proj.weight": "model-00002-of-00003.safetensors",
    "model.layers.12.self_attn.q_proj.weight": "model-00002-of-00003.safetensors",
    "model.layers.12.self_attn.v_proj.weight": "model-00002-of-00003.safetensors",
    "model.layers.13.input_layernorm.weight": "model-00002-of-00003.safetensors",
    "model.layers.13.mlp.down_proj.weight": "model-00002-of-00003.safetensors",
    "model.layers.13.mlp.gate_proj.weight": "model-00002-of-00003.safetensors",
    "model.layers.13.mlp.up_proj.weight": "model-00002-of-00003.safetensors",
    "model.layers.13.post_attention_layernorm.weight": "model-00002-of-00003.safetensors",
    "model.layers.13.post_feedforward_layernorm.weight": "model-00002-of-00003.safetensors",
    "model.layers.13.pre_feedforward_layernorm.weight": "model-00002-of-00003.safetensors",
    "model.layers.13.self_attn.k_proj.weight": "model-00002-of-00003.safetensors",
    "model.layers.13.self_attn.o_proj.weight": "model-00002-of-00003.safetensors",
    "model.layers.13.self_attn.q_proj.weight": "model-00002-of-00003.safetensors",
    "model.layers.13.self_attn.v_proj.weight": "model-00002-of-00003.safetensors",
    "model.layers.14.input_layernorm.weight": "model-00002-of-00003.safetensors",
    "model.layers.14.mlp.down_proj.weight": "model-00002-of-00003.safetensors",
    "model.layers.14.mlp.gate_proj.weight": "model-00002-of-00003.safetensors",
    "model.layers.14.mlp.up_proj.weight": "model-00002-of-00003.safetensors",
    "model.layers.14.post_attention_layernorm.weight": "model-00002-of-00003.safetensors",
    "model.layers.14.post_feedforward_layernorm.weight": "model-00002-of-00003.safetensors",
    "model.layers.14.pre_feedforward_layernorm.weight": "model-00002-of-00003.safetensors",
    "model.layers.14.self_attn.k_proj.weight": "model-00002-of-00003.safetensors",
    "model.layers.14.self_attn.o_proj.weight": "model-00002-of-00003.safetensors",
    "model.layers.14.self_attn.q_proj.weight": "model-00002-of-00003.safetensors",
    "model.layers.14.self_attn.v_proj.weight": "model-00002-of-00003.safetensors",
    "model.layers.15.input_layernorm.weight": "model-00002-of-00003.safetensors",
    "model.layers.15.mlp.down_proj.weight": "model-00002-of-00003.safetensors",
    "model.layers.15.mlp.gate_proj.weight": "model-00002-of-00003.safetensors",
    "model.layers.15.mlp.up_proj.weight": "model-00002-of-00003.safetensors",
    "model.layers.15.post_attention_layernorm.weight": "model-00002-of-00003.safetensors",
    "model.layers.15.post_feedforward_layernorm.weight": "model-00002-of-00003.safetensors",
    "model.layers.15.pre_feedforward_layernorm.weight": "model-00002-of-00003.safetensors",
    "model.layers.15.self_attn.k_proj.weight": "model-00002-of-00003.safetensors",
    "model.layers.15.self_attn.o_proj.weight": "model-00002-of-00003.safetensors",
    "model.layers.15.self_attn.q_proj.weight": "model-00002-of-00003.safetensors",
    "model.layers.15.self_attn.v_proj.weight": "model-00002-of-00003.safetensors",
    "model.layers.16.input_layernorm.weight": "model-00002-of-00003.safetensors",
    "model.layers.16.mlp.down_proj.weight": "model-00002-of-00003.safetensors",
    "model.layers.16.mlp.gate_proj.weight": "model-00002-of-00003.safetensors",
    "model.layers.16.mlp.up_proj.weight": "model-00002-of-00003.safetensors",
    "model.layers.16.post_attention_layernorm.weight": "model-00002-of-00003.safetensors",
    "model.layers.16.post_feedforward_layernorm.weight": "model-00002-of-00003.safetensors",
    "model.layers.16.pre_feedforward_layernorm.weight": "model-00002-of-00003.safetensors",
    "model.layers.16.self_attn.k_proj.weight": "model-00002-of-00003.safetensors",
    "model.layers.16.self_attn.o_proj.weight": "model-00002-of-00003.safetensors",
    "model.layers.16.self_attn.q_proj.weight": "model-00002-of-00003.safetensors",
    "model.layers.16.self_attn.v_proj.weight": "model-00002-of-00003.safetensors",
    "model.layers.17.input_layernorm.weight": "model-00002-of-00003.safetensors",
    "model.layers.17.mlp.down_proj.weight": "model-00002-of-00003.safetensors",
    "model.layers.17.mlp.gate_proj.weight": "model-00002-of-00003.safetensors",
    "model.layers.17.mlp.up_proj.weight": "model-00002-of-00003.safetensors",
    "model.layers.17.post_attention_layernorm.weight": "model-00002-of-00003.safetensors",
    "model.layers.17.post_feedforward_layernorm.weight": "model-00002-of-00003.safetensors",
    "model.layers.17.pre_feedforward_layernorm.weight": "model-00002-of-00003.safetensors",
    "model.layers.17.self_attn.k_proj.weight": "model-00002-of-00003.safetensors",
    "model.layers.17.self_attn.o_proj.weight": "model-00002-of-00003.safetensors",
    "model.layers.17.self_attn.q_proj.weight": "model-00002-of-00003.safetensors",
    "model.layers.17.self_attn.v_proj.weight": "model-00002-of-00003.safetensors",
    "model.layers.18.input_layernorm.weight": "model-00002-of-00003.safetensors",
    "model.layers.18.mlp.down_proj.weight": "model-00002-of-00003.safetensors",
    "model.layers.18.mlp.gate_proj.weight": "model-00002-of-00003.safetensors",
    "model.layers.18.mlp.up_proj.weight": "model-00002-of-00003.safetensors",
    "model.layers.18.post_attention_layernorm.weight": "model-00002-of-00003.safetensors",
    "model.layers.18.post_feedforward_layernorm.weight": "model-00002-of-00003.safetensors",
    "model.layers.18.pre_feedforward_layernorm.weight": "model-00002-of-00003.safetensors",
    "model.layers.18.self_attn.k_proj.weight": "model-00002-of-00003.safetensors",
    "model.layers.18.self_attn.o_proj.weight": "model-00002-of-00003.safetensors",
    "model.layers.18.self_attn.q_proj.weight": "model-00002-of-00003.safetensors",
    "model.layers.18.self_attn.v_proj.weight": "model-00002-of-00003.safetensors",
    "model.layers.19.input_layernorm.weight": "model-00002-of-00003.safetensors",
    "model.layers.19.mlp.down_proj.weight": "model-00002-of-00003.safetensors",
    "model.layers.19.mlp.gate_proj.weight": "model-00002-of-00003.safetensors",
    "model.layers.19.mlp.up_proj.weight": "model-00002-of-00003.safetensors",
    "model.layers.19.post_attention_layernorm.weight": "model-00002-of-00003.safetensors",
    "model.layers.19.post_feedforward_layernorm.weight": "model-00002-of-00003.safetensors",
    "model.layers.19.pre_feedforward_layernorm.weight": "model-00002-of-00003.safetensors",
    "model.layers.19.self_attn.k_proj.weight": "model-00002-of-00003.safetensors",
    "model.layers.19.self_attn.o_proj.weight": "model-00002-of-00003.safetensors",
    "model.layers.19.self_attn.q_proj.weight": "model-00002-of-00003.safetensors",
    "model.layers.19.self_attn.v_proj.weight": "model-00002-of-00003.safetensors",
    "model.layers.2.input_layernorm.weight": "model-00001-of-00003.safetensors",
    "model.layers.2.mlp.down_proj.weight": "model-00001-of-00003.safetensors",
    "model.layers.2.mlp.gate_proj.weight": "model-00001-of-00003.safetensors",
    "model.layers.2.mlp.up_proj.weight": "model-00001-of-00003.safetensors",
    "model.layers.2.post_attention_layernorm.weight": "model-00001-of-00003.safetensors",
    "model.layers.2.post_feedforward_layernorm.weight": "model-00001-of-00003.safetensors",
    "model.layers.2.pre_feedforward_layernorm.weight": "model-00001-of-00003.safetensors",
    "model.layers.2.self_attn.k_proj.weight": "model-00001-of-00003.safetensors",
    "model.layers.2.self_attn.o_proj.weight": "model-00001-of-00003.safetensors",
    "model.layers.2.self_attn.q_proj.weight": "model-00001-of-00003.safetensors",
    "model.layers.2.self_attn.v_proj.weight": "model-00001-of-00003.safetensors",
    "model.layers.20.input_layernorm.weight": "model-00002-of-00003.safetensors",
    "model.layers.20.mlp.down_proj.weight": "model-00002-of-00003.safetensors",
    "model.layers.20.mlp.gate_proj.weight": "model-00002-of-00003.safetensors",
    "model.layers.20.mlp.up_proj.weight": "model-00002-of-00003.safetensors",
    "model.layers.20.post_attention_layernorm.weight": "model-00002-of-00003.safetensors",
    "model.layers.20.post_feedforward_layernorm.weight": "model-00002-of-00003.safetensors",
    "model.layers.20.pre_feedforward_layernorm.weight": "model-00002-of-00003.safetensors",
    "model.layers.20.self_attn.k_proj.weight": "model-00002-of-00003.safetensors",
    "model.layers.20.self_attn.o_proj.weight": "model-00002-of-00003.safetensors",
    "model.layers.20.self_attn.q_proj.weight": "model-00002-of-00003.safetensors",
    "model.layers.20.self_attn.v_proj.weight": "model-00002-of-00003.safetensors",
    "model.layers.21.input_layernorm.weight": "model-00002-of-00003.safetensors",
    "model.layers.21.mlp.down_proj.weight": "model-00002-of-00003.safetensors",
    "model.layers.21.mlp.gate_proj.weight": "model-00002-of-00003.safetensors",
    "model.layers.21.mlp.up_proj.weight": "model-00002-of-00003.safetensors",
    "model.layers.21.post_attention_layernorm.weight": "model-00002-of-00003.safetensors",
    "model.layers.21.post_feedforward_layernorm.weight": "model-00002-of-00003.safetensors",
    "model.layers.21.pre_feedforward_layernorm.weight": "model-00002-of-00003.safetensors",
    "model.layers.21.self_attn.k_proj.weight": "model-00002-of-00003.safetensors",
    "model.layers.21.self_attn.o_proj.weight": "model-00002-of-00003.safetensors",
    "model.layers.21.self_attn.q_proj.weight": "model-00002-of-00003.safetensors",
    "model.layers.21.self_attn.v_proj.weight": "model-00002-of-00003.safetensors",
    "model.layers.22.input_layernorm.weight": "model-00002-of-00003.safetensors",
    "model.layers.22.mlp.down_proj.weight": "model-00002-of-00003.safetensors",
    "model.layers.22.mlp.gate_proj.weight": "model-00002-of-00003.safetensors",
    "model.layers.22.mlp.up_proj.weight": "model-00002-of-00003.safetensors",
    "model.layers.22.post_attention_layernorm.weight": "model-00002-of-00003.safetensors",
    "model.layers.22.post_feedforward_layernorm.weight": "model-00002-of-00003.safetensors",
    "model.layers.22.pre_feedforward_layernorm.weight": "model-00002-of-00003.safetensors",
    "model.layers.22.self_attn.k_proj.weight": "model-00002-of-00003.safetensors",
    "model.layers.22.self_attn.o_proj.weight": "model-00002-of-00003.safetensors",
    "model.layers.22.self_attn.q_proj.weight": "model-00002-of-00003.safetensors",
    "model.layers.22.self_attn.v_proj.weight": "model-00002-of-00003.safetensors",
    "model.layers.23.input_layernorm.weight": "model-00002-of-00003.safetensors",
    "model.layers.23.mlp.down_proj.weight": "model-00002-of-00003.safetensors",
    "model.layers.23.mlp.gate_proj.weight": "model-00002-of-00003.safetensors",
    "model.layers.23.mlp.up_proj.weight": "model-00002-of-00003.safetensors",
    "model.layers.23.post_attention_layernorm.weight": "model-00002-of-00003.safetensors",
    "model.layers.23.post_feedforward_layernorm.weight": "model-00002-of-00003.safetensors",
    "model.layers.23.pre_feedforward_layernorm.weight": "model-00002-of-00003.safetensors",
    "model.layers.23.self_attn.k_proj.weight": "model-00002-of-00003.safetensors",
    "model.layers.23.self_attn.o_proj.weight": "model-00002-of-00003.safetensors",
    "model.layers.23.self_attn.q_proj.weight": "model-00002-of-00003.safetensors",
    "model.layers.23.self_attn.v_proj.weight": "model-00002-of-00003.safetensors",
    "model.layers.24.input_layernorm.weight": "model-00003-of-00003.safetensors",
    "model.layers.24.mlp.down_proj.weight": "model-00003-of-00003.safetensors",
    "model.layers.24.mlp.gate_proj.weight": "model-00002-of-00003.safetensors",
    "model.layers.24.mlp.up_proj.weight": "model-00003-of-00003.safetensors",
    "model.layers.24.post_attention_layernorm.weight": "model-00003-of-00003.safetensors",
    "model.layers.24.post_feedforward_layernorm.weight": "model-00003-of-00003.safetensors",
    "model.layers.24.pre_feedforward_layernorm.weight": "model-00003-of-00003.safetensors",
    "model.layers.24.self_attn.k_proj.weight": "model-00002-of-00003.safetensors",
    "model.layers.24.self_attn.o_proj.weight": "model-00002-of-00003.safetensors",
    "model.layers.24.self_attn.q_proj.weight": "model-00002-of-00003.safetensors",
    "model.layers.24.self_attn.v_proj.weight": "model-00002-of-00003.safetensors",
    "model.layers.25.input_layernorm.weight": "model-00003-of-00003.safetensors",
    "model.layers.25.mlp.down_proj.weight": "model-00003-of-00003.safetensors",
    "model.layers.25.mlp.gate_proj.weight": "model-00003-of-00003.safetensors",
    "model.layers.25.mlp.up_proj.weight": "model-00003-of-00003.safetensors",
    "model.layers.25.post_attention_layernorm.weight": "model-00003-of-00003.safetensors",
    "model.layers.25.post_feedforward_layernorm.weight": "model-00003-of-00003.safetensors",
    "model.layers.25.pre_feedforward_layernorm.weight": "model-00003-of-00003.safetensors",
    "model.layers.25.self_attn.k_proj.weight": "model-00003-of-00003.safetensors",
    "model.layers.25.self_attn.o_proj.weight": "model-00003-of-00003.safetensors",
    "model.layers.25.self_attn.q_proj.weight": "model-00003-of-00003.safetensors",
    "model.layers.25.self_attn.v_proj.weight": "model-00003-of-00003.safetensors",
    "model.layers.3.input_layernorm.weight": "model-00001-of-00003.safetensors",
    "model.layers.3.mlp.down_proj.weight": "model-00001-of-00003.safetensors",
    "model.layers.3.mlp.gate_proj.weight": "model-00001-of-00003.safetensors",
    "model.layers.3.mlp.up_proj.weight": "model-00001-of-00003.safetensors",
    "model.layers.3.post_attention_layernorm.weight": "model-00001-of-00003.safetensors",
    "model.layers.3.post_feedforward_layernorm.weight": "model-00001-of-00003.safetensors",
    "model.layers.3.pre_feedforward_layernorm.weight": "model-00001-of-00003.safetensors",
    "model.layers.3.self_attn.k_proj.weight": "model-00001-of-00003.safetensors",
    "model.layers.3.self_attn.o_proj.weight": "model-00001-of-00003.safetensors",
    "model.layers.3.self_attn.q_proj.weight": "model-00001-of-00003.safetensors",
    "model.layers.3.self_attn.v_proj.weight": "model-00001-of-00003.safetensors",
    "model.layers.4.input_layernorm.weight": "model-00001-of-00003.safetensors",
    "model.layers.4.mlp.down_proj.weight": "model-00001-of-00003.safetensors",
    "model.layers.4.mlp.gate_proj.weight": "model-00001-of-00003.safetensors",
    "model.layers.4.mlp.up_proj.weight": "model-00001-of-00003.safetensors",
    "model.layers.4.post_attention_layernorm.weight": "model-00001-of-00003.safetensors",
    "model.layers.4.post_feedforward_layernorm.weight": "model-00001-of-00003.safetensors",
    "model.layers.4.pre_feedforward_layernorm.weight": "model-00001-of-00003.safetensors",
    "model.layers.4.self_attn.k_proj.weight": "model-00001-of-00003.safetensors",
    "model.layers.4.self_attn.o_proj.weight": "model-00001-of-00003.safetensors",
    "model.layers.4.self_attn.q_proj.weight": "model-00001-of-00003.safetensors",
    "model.layers.4.self_attn.v_proj.weight": "model-00001-of-00003.safetensors",
    "model.layers.5.input_layernorm.weight": "model-00001-of-00003.safetensors",
    "model.layers.5.mlp.down_proj.weight": "model-00001-of-00003.safetensors",
    "model.layers.5.mlp.gate_proj.weight": "model-00001-of-00003.safetensors",
    "model.layers.5.mlp.up_proj.weight": "model-00001-of-00003.safetensors",
    "model.layers.5.post_attention_layernorm.weight": "model-00001-of-00003.safetensors",
    "model.layers.5.post_feedforward_layernorm.weight": "model-00001-of-00003.safetensors",
    "model.layers.5.pre_feedforward_layernorm.weight": "model-00001-of-00003.safetensors",
    "model.layers.5.self_attn.k_proj.weight": "model-00001-of-00003.safetensors",
    "model.layers.5.self_attn.o_proj.weight": "model-00001-of-00003.safetensors",
    "model.layers.5.self_attn.q_proj.weight": "model-00001-of-00003.safetensors",
    "model.layers.5.self_attn.v_proj.weight": "model-00001-of-00003.safetensors",
    "model.layers.6.input_layernorm.weight": "model-00001-of-00003.safetensors",
    "model.layers.6.mlp.down_proj.weight": "model-00001-of-00003.safetensors",
    "model.layers.6.mlp.gate_proj.weight": "model-00001-of-00003.safetensors",
    "model.layers.6.mlp.up_proj.weight": "model-00001-of-00003.safetensors",
    "model.layers.6.post_attention_layernorm.weight": "model-00001-of-00003.safetensors",
    "model.layers.6.post_feedforward_layernorm.weight": "model-00001-of-00003.safetensors",
    "model.layers.6.pre_feedforward_layernorm.weight": "model-00001-of-00003.safetensors",
    "model.layers.6.self_attn.k_proj.weight": "model-00001-of-00003.safetensors",
    "model.layers.6.self_attn.o_proj.weight": "model-00001-of-00003.safetensors",
    "model.layers.6.self_attn.q_proj.weight": "model-00001-of-00003.safetensors",
    "model.layers.6.self_attn.v_proj.weight": "model-00001-of-00003.safetensors",
    "model.layers.7.input_layernorm.weight": "model-00001-of-00003.safetensors",
    "model.layers.7.mlp.down_proj.weight": "model-00001-of-00003.safetensors",
    "model.layers.7.mlp.gate_proj.weight": "model-00001-of-00003.safetensors",
    "model.layers.7.mlp.up_proj.weight": "model-00001-of-00003.safetensors",
    "model.layers.7.post_attention_layernorm.weight": "model-00001-of-00003.safetensors",
    "model.layers.7.post_feedforward_layernorm.weight": "model-00001-of-00003.safetensors",
    "model.layers.7.pre_feedforward_layernorm.weight": "model-00001-of-00003.safetensors",
    "model.layers.7.self_attn.k_proj.weight": "model-00001-of-00003.safetensors",
    "model.layers.7.self_attn.o_proj.weight": "model-00001-of-00003.safetensors",
    "model.layers.7.self_attn.q_proj.weight": "model-00001-of-00003.safetensors",
    "model.layers.7.self_attn.v_proj.weight": "model-00001-of-00003.safetensors",
    "model.layers.8.input_layernorm.weight": "model-00002-of-00003.safetensors",
    "model.layers.8.mlp.down_proj.weight": "model-00002-of-00003.safetensors",
    "model.layers.8.mlp.gate_proj.weight": "model-00001-of-00003.safetensors",
    "model.layers.8.mlp.up_proj.weight": "model-00002-of-00003.safetensors",
    "model.layers.8.post_attention_layernorm.weight": "model-00002-of-00003.safetensors",
    "model.layers.8.post_feedforward_layernorm.weight": "model-00002-of-00003.safetensors",
    "model.layers.8.pre_feedforward_layernorm.weight": "model-00002-of-00003.safetensors",
    "model.layers.8.self_attn.k_proj.weight": "model-00001-of-00003.safetensors",
    "model.layers.8.self_attn.o_proj.weight": "model-00001-of-00003.safetensors",
    "model.layers.8.self_attn.q_proj.weight": "model-00001-of-00003.safetensors",
    "model.layers.8.self_attn.v_proj.weight": "model-00001-of-00003.safetensors",
    "model.layers.9.input_layernorm.weight": "model-00002-of-00003.safetensors",
    "model.layers.9.mlp.down_proj.weight": "model-00002-of-00003.safetensors",
    "model.layers.9.mlp.gate_proj.weight": "model-00002-of-00003.safetensors",
    "model.layers.9.mlp.up_proj.weight": "model-00002-of-00003.safetensors",
    "model.layers.9.post_attention_layernorm.weight": "model-00002-of-00003.safetensors",
    "model.layers.9.post_feedforward_layernorm.weight": "model-00002-of-00003.safetensors",
    "model.layers.9.pre_feedforward_layernorm.weight": "model-00002-of-00003.safetensors",
    "model.layers.9.self_attn.k_proj.weight": "model-00002-of-00003.safetensors",
    "model.layers.9.self_attn.o_proj.weight": "model-00002-of-00003.safetensors",
    "model.layers.9.self_attn.q_proj.weight": "model-00002-of-00003.safetensors",
    "model.layers.9.self_attn.v_proj.weight": "model-00002-of-00003.safetensors",
    "model.norm.weight": "model-00003-of-00003.safetensors"
  }
}

For reference only.
