Quantization basics for large models, taught by the Hugging Face team: Quantization Fundamentals with Hugging Face

Quantization Fundamentals with Hugging Face

These are my study notes for the short course at https://www.deeplearning.ai/short-courses/quantization-fundamentals-with-hugging-face/.

(image)

What you’ll learn in this course

Generative AI models, like large language models, often exceed the capabilities of consumer-grade hardware and are expensive to run. Compressing models through methods such as quantization makes them more efficient, faster, and accessible. This allows them to run on a wide variety of devices, including smartphones, personal computers, and edge devices, and minimizes performance degradation.

Join this course to:

  • Quantize any open source model with linear quantization using the Quanto library.
  • Get an overview of how linear quantization is implemented. This form of quantization can be applied to compress any model, including LLMs, vision models, etc.
  • Apply “downcasting,” another form of quantization, with the Transformers library, which enables you to load models in about half their normal size in the BFloat16 data type.

By the end of this course, you will have a foundation in quantization techniques and be able to apply them to compress and optimize your own generative AI models, making them more accessible and efficient.

Table of Contents

  • Quantization Fundamentals with Hugging Face
    • What you’ll learn in this course
  • Handling Big Models
  • Lesson 2: Data Types and Sizes
      • Integers
      • Floating Points
      • Downcasting
  • Lesson 3: Loading ML Models with Different Data Types
    • Model Casting: `float16`
    • Model Casting: `bfloat16`
        • Note about deepcopy
    • Using Popular Generative Models in Different Data Types
        • To get the sample code that Younes showed:
      • Model Performance: `float32` vs `bfloat16`
      • Default Data Type
      • Note
  • Lesson 4: Quantization Theory
        • Libraries to install
    • T5-FLAN
      • Without Quantization
    • Quantize the model (8-bit precision)
      • Freeze the model
      • Try running inference on the quantized model
        • Comparing "linear quantization" to "downcasting"
  • Quantization of LLMs
  • Afterword

Handling Big Models

(slide image)
Pruning
(slide image)

Knowledge Distillation

(slide image)

Quantization

(slide image)

Data Types

(slide image)

What this course covers

(slide image)

Lesson 2: Data Types and Sizes

In this lab, you will learn about the common data types used to store the parameters of machine learning models.

The libraries are already installed in the classroom. If you’re running this notebook on your own machine, you can install the following:

!pip install torch==2.1.1
import torch

Integers

(slide image)

Integers in PyTorch
(slide image)

# Information of `8-bit unsigned integer`
torch.iinfo(torch.uint8)

Output

iinfo(min=0, max=255, dtype=uint8)
# Information of `8-bit (signed) integer`
torch.iinfo(torch.int8)

Output

iinfo(min=-128, max=127, dtype=int8)

Floating Points

Floating point

(slide image)

FP32

(slide image)

FP16

(slide image)

Comparison

(slide image)

Floating point in PyTorch

(slide image)

# by default, python stores float data in fp64
value = 1/3
format(value, '.60f')

Output

'0.333333333333333314829616256247390992939472198486328125000000'
# 64-bit floating point
tensor_fp64 = torch.tensor(value, dtype = torch.float64)
print(f"fp64 tensor: {format(tensor_fp64.item(), '.60f')}")

Output

fp64 tensor: 0.333333333333333314829616256247390992939472198486328125000000
tensor_fp32 = torch.tensor(value, dtype = torch.float32)
tensor_fp16 = torch.tensor(value, dtype = torch.float16)
tensor_bf16 = torch.tensor(value, dtype = torch.bfloat16)

print(f"fp64 tensor: {format(tensor_fp64.item(), '.60f')}")
print(f"fp32 tensor: {format(tensor_fp32.item(), '.60f')}")
print(f"fp16 tensor: {format(tensor_fp16.item(), '.60f')}")
print(f"bf16 tensor: {format(tensor_bf16.item(), '.60f')}")

Output

fp64 tensor: 0.333333333333333314829616256247390992939472198486328125000000
fp32 tensor: 0.333333343267440795898437500000000000000000000000000000000000
fp16 tensor: 0.333251953125000000000000000000000000000000000000000000000000
bf16 tensor: 0.333984375000000000000000000000000000000000000000000000000000
# Information of `16-bit brain floating point`
torch.finfo(torch.bfloat16)

Output

finfo(resolution=0.01, min=-3.38953e+38, max=3.38953e+38, eps=0.0078125, smallest_normal=1.17549e-38, tiny=1.17549e-38, dtype=bfloat16)
# Information of `32-bit floating point`
torch.finfo(torch.float32)

Output

finfo(resolution=1e-06, min=-3.40282e+38, max=3.40282e+38, eps=1.19209e-07, smallest_normal=1.17549e-38, tiny=1.17549e-38, dtype=float32)
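
As a quick illustration of why the range difference between float16 and bfloat16 matters (this example is my own addition, not from the course):

import torch

# float16 has a maximum finite value of about 65504, so larger magnitudes overflow to inf
print(torch.tensor(70000.0, dtype=torch.float16))   # tensor(inf, dtype=torch.float16)

# bfloat16 keeps float32's 8 exponent bits, so the same value stays finite,
# at the cost of fewer mantissa bits (lower precision)
print(torch.tensor(70000.0, dtype=torch.bfloat16))  # a finite but coarse approximation of 70000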

Downcasting

# random pytorch tensor: float32, size=1000
tensor_fp32 = torch.rand(1000, dtype = torch.float32)
# first 5 elements of the random tensor
tensor_fp32[:5]

Output

tensor([0.4897, 0.0494, 0.8093, 0.6704, 0.0713])
# downcast the tensor to bfloat16 using the "to" method
tensor_fp32_to_bf16 = tensor_fp32.to(dtype = torch.bfloat16)
tensor_fp32_to_bf16[:5]

Output

tensor([0.4902, 0.0493, 0.8086, 0.6719, 0.0713], dtype=torch.bfloat16)
# tensor_fp32 x tensor_fp32
m_float32 = torch.dot(tensor_fp32, tensor_fp32)

Output

tensor(324.9693)
# tensor_fp32_to_bf16 x tensor_fp32_to_bf16
m_bfloat16 = torch.dot(tensor_fp32_to_bf16, tensor_fp32_to_bf16)

Output

tensor(326., dtype=torch.bfloat16)

(slide image)

Lesson 3: Loading ML Models with Different Data Types

(slide image)

In this lab, you will load ML models in different datatypes.

helper.py

import torch
import torch.nn as nn
import requests
from PIL import Image

import warnings
# Ignore specific UserWarnings related to max_length in transformers
warnings.filterwarnings("ignore", 
    message=".*Using the model-agnostic default `max_length`.*")

class DummyModel(nn.Module):
  """
  A dummy model that consists of an embedding layer
  with two blocks of a linear layer followed by a layer
  norm layer.
  """
  def __init__(self):
    super().__init__()

    torch.manual_seed(123)

    self.token_embedding = nn.Embedding(2, 2)

    # Block 1
    self.linear_1 = nn.Linear(2, 2)
    self.layernorm_1 = nn.LayerNorm(2)

    # Block 2
    self.linear_2 = nn.Linear(2, 2)
    self.layernorm_2 = nn.LayerNorm(2)

    self.head = nn.Linear(2, 2)

  def forward(self, x):
    hidden_states = self.token_embedding(x)

    # Block 1
    hidden_states = self.linear_1(hidden_states)
    hidden_states = self.layernorm_1(hidden_states)

    # Block 2
    hidden_states = self.linear_2(hidden_states)
    hidden_states = self.layernorm_2(hidden_states)

    logits = self.head(hidden_states)
    return logits


def get_generation(model, processor, image, dtype):
  inputs = processor(image, return_tensors="pt").to(dtype)
  out = model.generate(**inputs)
  return processor.decode(out[0], skip_special_tokens=True)


def load_image(img_url):
    image = Image.open(requests.get(
        img_url, stream=True).raw).convert('RGB')

    return image


from helper import DummyModel
model = DummyModel()
model

Output

DummyModel(
  (token_embedding): Embedding(2, 2)
  (linear_1): Linear(in_features=2, out_features=2, bias=True)
  (layernorm_1): LayerNorm((2,), eps=1e-05, elementwise_affine=True)
  (linear_2): Linear(in_features=2, out_features=2, bias=True)
  (layernorm_2): LayerNorm((2,), eps=1e-05, elementwise_affine=True)
  (head): Linear(in_features=2, out_features=2, bias=True)
)
  • Create a function to inspect the data types of the parameters in a model.
def print_param_dtype(model):
    for name, param in model.named_parameters():
        print(f"{name} is loaded in {param.dtype}")
        
print_param_dtype(model)

Output

token_embedding.weight is loaded in torch.float32
linear_1.weight is loaded in torch.float32
linear_1.bias is loaded in torch.float32
layernorm_1.weight is loaded in torch.float32
layernorm_1.bias is loaded in torch.float32
linear_2.weight is loaded in torch.float32
linear_2.bias is loaded in torch.float32
layernorm_2.weight is loaded in torch.float32
layernorm_2.bias is loaded in torch.float32
head.weight is loaded in torch.float32
head.bias is loaded in torch.float32

Model Casting: float16

  • Cast the model into a different precision.
# float 16
model_fp16 = DummyModel().half()
print_param_dtype(model_fp16)

Output

token_embedding.weight is loaded in torch.float16
linear_1.weight is loaded in torch.float16
linear_1.bias is loaded in torch.float16
layernorm_1.weight is loaded in torch.float16
layernorm_1.bias is loaded in torch.float16
linear_2.weight is loaded in torch.float16
linear_2.bias is loaded in torch.float16
layernorm_2.weight is loaded in torch.float16
layernorm_2.bias is loaded in torch.float16
head.weight is loaded in torch.float16
head.bias is loaded in torch.float16
  • Run simple inference using the model.
import torch
dummy_input = torch.LongTensor([[1, 0], [0, 1]])
# inference using float32 model
logits_fp32 = model(dummy_input)
logits_fp32

Output

tensor([[[-0.6872,  0.7132],
         [-0.6872,  0.7132]],

        [[-0.6872,  0.7132],
         [-0.6872,  0.7132]]], grad_fn=<ViewBackward0>)
# inference using float16 model
try:
    logits_fp16 = model_fp16(dummy_input)
except Exception as error:
    print("\033[91m", type(error).__name__, ": ", error, "\033[0m")

Model Casting: bfloat16

Note about deepcopy
  • copy.deepcopy makes a copy of the model that is independent of the original. Modifications you make to the copy will not affect the original, because you’re making a “deep copy”. For more details, see the Python docs for the copy library: https://docs.python.org/3/library/copy.html
from copy import deepcopy
model_bf16 = deepcopy(model)
model_bf16 = model_bf16.to(torch.bfloat16)
print_param_dtype(model_bf16)

Output

token_embedding.weight is loaded in torch.bfloat16
linear_1.weight is loaded in torch.bfloat16
linear_1.bias is loaded in torch.bfloat16
layernorm_1.weight is loaded in torch.bfloat16
layernorm_1.bias is loaded in torch.bfloat16
linear_2.weight is loaded in torch.bfloat16
linear_2.bias is loaded in torch.bfloat16
layernorm_2.weight is loaded in torch.bfloat16
layernorm_2.bias is loaded in torch.bfloat16
head.weight is loaded in torch.bfloat16
head.bias is loaded in torch.bfloat16
logits_bf16 = model_bf16(dummy_input)
  • Now, compare the difference between logits_fp32 and logits_bf16.
mean_diff = torch.abs(logits_bf16 - logits_fp32).mean().item()
max_diff = torch.abs(logits_bf16 - logits_fp32).max().item()

print(f"Mean diff: {mean_diff} | Max diff: {max_diff}")

Output

Mean diff: 0.0009978711605072021 | Max diff: 0.0016907453536987305

Using Popular Generative Models in Different Data Types

  • Load Salesforce/blip-image-captioning-base to perform image captioning.
To get the sample code that Younes showed:
  • Click on the “Model Card” tab.
  • On the right, click on the button “<> Use in Transformers”, you’ll see a popup with sample code for loading this model.
# Load model directly
from transformers import AutoProcessor, AutoModelForSeq2SeqLM

processor = AutoProcessor.from_pretrained("Salesforce/blip-image-captioning-base")
model = AutoModelForSeq2SeqLM.from_pretrained("Salesforce/blip-image-captioning-base")
  • To see the sample code with an example, click on “Read model documentation” at the bottom of the popup. It opens a new tab.
    https://huggingface.co/docs/transformers/main/en/model_doc/blip#transformers.BlipForConditionalGeneration
  • On this page, scroll down a bit, past the “Parameters” section, and you’ll see “Examples:”
from PIL import Image
import requests
from transformers import AutoProcessor, BlipForConditionalGeneration

processor = AutoProcessor.from_pretrained("Salesforce/blip-image-captioning-base")
model = BlipForConditionalGeneration.from_pretrained("Salesforce/blip-image-captioning-base")

url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)
text = "A picture of"

inputs = processor(images=image, text=text, return_tensors="pt")

outputs = model(**inputs)
from transformers import BlipForConditionalGeneration
model_name = "Salesforce/blip-image-captioning-base"
model = BlipForConditionalGeneration.from_pretrained(model_name)
  • Check the memory footprint of the model.
fp32_mem_footprint = model.get_memory_footprint()
print("Footprint of the fp32 model in bytes: ",
      fp32_mem_footprint)
print("Footprint of the fp32 model in MBs: ", 
      fp32_mem_footprint/1e+6)

Output

Footprint of the fp32 model in bytes:  989660400
Footprint of the fp32 model in MBs:  989.6604
  • Load the same model in bfloat16.
model_bf16 = BlipForConditionalGeneration.from_pretrained(
                                               model_name,
                               torch_dtype=torch.bfloat16
)

bf16_mem_footprint = model_bf16.get_memory_footprint()

# Get the relative difference
relative_diff = bf16_mem_footprint / fp32_mem_footprint

print("Footprint of the bf16 model in MBs: ", 
      bf16_mem_footprint/1e+6)
print(f"Relative diff: {relative_diff}")

Output

Footprint of the bf16 model in MBs:  494.832248
Relative diff: 0.5000020693967345

Model Performance: float32 vs bfloat16

  • Now, compare the generation results of the two models.
from transformers import BlipProcessor
processor = BlipProcessor.from_pretrained(model_name)
  • Load the image.
from helper import load_image, get_generation
from IPython.display import display

img_url = 'https://storage.googleapis.com/\
sfr-vision-language-research/BLIP/demo.jpg'

image = load_image(img_url)
display(image.resize((500, 350)))

Output

(displayed image)

results_fp32 = get_generation(model, 
                              processor, 
                              image, 
                              torch.float32)
                              
print("fp32 Model Results:\n", results_fp32)

Output

fp32 Model Results:
 a woman sitting on the beach with her dog
results_bf16 = get_generation(model_bf16, 
                              processor, 
                              image, 
                              torch.bfloat16)
print("bf16 Model Results:\n", results_bf16)                             

Output

bf16 Model Results:
 a woman sitting on the beach with her dog

Default Data Type

  • For the Hugging Face Transformers library, the default data type used to load models is float32.
  • You can set the default data type to whatever you want.
desired_dtype = torch.bfloat16
torch.set_default_dtype(desired_dtype)
dummy_model_bf16 = DummyModel()
print_param_dtype(dummy_model_bf16)

Output

token_embedding.weight is loaded in torch.bfloat16
linear_1.weight is loaded in torch.bfloat16
linear_1.bias is loaded in torch.bfloat16
layernorm_1.weight is loaded in torch.bfloat16
layernorm_1.bias is loaded in torch.bfloat16
linear_2.weight is loaded in torch.bfloat16
linear_2.bias is loaded in torch.bfloat16
layernorm_2.weight is loaded in torch.bfloat16
layernorm_2.bias is loaded in torch.bfloat16
head.weight is loaded in torch.bfloat16
head.bias is loaded in torch.bfloat16
  • Similarly, you can reset the default data type to float32.
torch.set_default_dtype(torch.float32)
print_param_dtype(dummy_model_bf16)

Output

token_embedding.weight is loaded in torch.bfloat16
linear_1.weight is loaded in torch.bfloat16
linear_1.bias is loaded in torch.bfloat16
layernorm_1.weight is loaded in torch.bfloat16
layernorm_1.bias is loaded in torch.bfloat16
linear_2.weight is loaded in torch.bfloat16
linear_2.bias is loaded in torch.bfloat16
layernorm_2.weight is loaded in torch.bfloat16
layernorm_2.bias is loaded in torch.bfloat16
head.weight is loaded in torch.bfloat16
head.bias is loaded in torch.bfloat16

Note

  • You just used a simple form of quantization, in which the model’s parameters are saved in a more compact data type (bfloat16). During inference, the model performs its calculations in this data type, and its activations are in this data type.
  • In the next lesson, you will use another quantization method, “linear quantization”, which enables the quantized model to maintain performance much closer to the original model by converting from the compressed data type back to the original FP32 data type during inference.

Lesson 4: Quantization Theory

Linear quantization is a quantization method that maps continuous real-valued data to discrete integer values. In linear quantization, the data range is divided uniformly into a number of quantization levels, each of which represents a fixed interval of real values. Linear quantization is widely used in signal processing, image processing, and in compressing and accelerating machine learning models.

The basic idea of linear quantization

The linear quantization process can be summarized in the following steps:

  1. Determine the data range
    Find the minimum and maximum of the data to be quantized, usually denoted $[x_{min}, x_{max}]$.

  2. Choose the number of quantization levels
    Choose the number of levels $N$, typically $N = 2^b$, where $b$ is the number of quantization bits. For example, for 8-bit quantization, $N = 256$.

  3. Compute the quantization step size
    The step size $\Delta$ is given by:

    $\Delta = \dfrac{x_{max} - x_{min}}{N - 1}$

  4. Quantize
    Map each continuous real value $x$ to a discrete quantization level $q$:

    $q = \text{round}\left(\dfrac{x - x_{min}}{\Delta}\right)$

    where $\text{round}$ denotes rounding to the nearest integer.

  5. Dequantize (reconstruct)
    Map the quantized integer $q$ back to an approximate real value $\hat{x}$:
    $\hat{x} = x_{min} + q \cdot \Delta$

An example of linear quantization

Suppose we have the data $[0.0, 1.0, 2.0, 3.0]$ and want to use 2-bit quantization (i.e. $N = 4$).

  1. Data range
    $x_{min} = 0.0$, $x_{max} = 3.0$

  2. Step size
    $\Delta = \dfrac{3.0 - 0.0}{4 - 1} = 1.0$

  3. Quantize
    Quantize each value:
    $q = \text{round}\left(\dfrac{x - 0.0}{1.0}\right) = \text{round}(x)$

    Therefore:
    $0.0 \rightarrow 0$, $1.0 \rightarrow 1$, $2.0 \rightarrow 2$, $3.0 \rightarrow 3$

  4. Dequantize
    Map the quantized values back to real numbers:
    $\hat{x} = 0.0 + q \cdot 1.0 = q$
    Therefore:
    $0 \rightarrow 0.0$, $1 \rightarrow 1.0$, $2 \rightarrow 2.0$, $3 \rightarrow 3.0$

Linear quantization in machine learning

With linear quantization, the weights and activations of a neural network can be quantized to low-bit integers (for example, 8-bit integers), reducing the model’s memory footprint and computational cost and thus speeding up inference. This is especially useful on resource-constrained devices such as mobile phones and embedded systems.

The main challenge in quantizing a neural network is minimizing the impact on model accuracy. Common approaches include Quantization-Aware Training and Post-Training Quantization.

In short, linear quantization is a simple yet effective compression technique with broad applications.
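
To make the formulas above concrete, here is a minimal sketch of min-max linear quantization and dequantization in PyTorch, following the step-size formulation described above (the helper names quantize_linear and dequantize_linear are my own, not from any library):

import torch

def quantize_linear(x, n_bits=8):
    """Quantize a float tensor to integer levels with min-max linear quantization."""
    n_levels = 2 ** n_bits                            # N = 2^b
    x_min, x_max = x.min().item(), x.max().item()     # data range [x_min, x_max] (assumes x_max > x_min)
    delta = (x_max - x_min) / (n_levels - 1)          # step size
    q = torch.round((x - x_min) / delta)              # q = round((x - x_min) / delta)
    q = q.clamp(0, n_levels - 1).to(torch.int64)
    return q, x_min, delta

def dequantize_linear(q, x_min, delta):
    """Reconstruct approximate real values: x_hat = x_min + q * delta."""
    return x_min + q.to(torch.float32) * delta

# the 2-bit example from the text
x = torch.tensor([0.0, 1.0, 2.0, 3.0])
q, x_min, delta = quantize_linear(x, n_bits=2)
print(q)                                    # tensor([0, 1, 2, 3])
print(dequantize_linear(q, x_min, delta))   # tensor([0., 1., 2., 3.])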

Linear quantization

(slide images)

Scale and zero point

(slide image)
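
The slide above introduces the scale and zero point used in asymmetric linear quantization to int8. As a rough sketch of the standard formulas (my own illustration, not code from the course or from the Quanto library):

import torch

def get_scale_and_zero_point(x, dtype=torch.int8):
    """Compute the scale s and zero point z for asymmetric linear quantization."""
    q_min, q_max = torch.iinfo(dtype).min, torch.iinfo(dtype).max   # -128, 127 for int8
    x_min, x_max = x.min().item(), x.max().item()
    scale = (x_max - x_min) / (q_max - q_min)        # maps the float range onto the int range
    zero_point = int(round(q_min - x_min / scale))   # integer that represents the real value 0.0
    return scale, zero_point

def quantize(x, scale, zero_point, dtype=torch.int8):
    q = torch.round(x / scale + zero_point)
    return q.clamp(torch.iinfo(dtype).min, torch.iinfo(dtype).max).to(dtype)

def dequantize(q, scale, zero_point):
    return scale * (q.to(torch.float32) - zero_point)

x = torch.tensor([-1.0, 0.0, 0.5, 2.0])
s, z = get_scale_and_zero_point(x)
q = quantize(x, s, z)
print(q)                     # int8 tensor, e.g. tensor([-128,  -43,    0,  127], dtype=torch.int8)
print(dequantize(q, s, z))   # values close to the original x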

Quantization Aware Training

(slide image)

In this lab, you will perform Linear Quantization.

Libraries to install
  • If you are running this notebook on your local machine, you can install the following:
!pip install transformers==4.35.0
!pip install quanto==0.0.11
!pip install torch==2.1.1

T5-FLAN

  • Please note that due to hardware memory constraints, and in order to offer this course for free to everyone, the code you’ll run here is for the T5-FLAN model instead of the EleutherAI Pythia model.
  • Thank you for your understanding! 🤗

For the T5-FLAN model, here is one more library to install if you are running locally:

!pip install sentencepiece==0.2.0

Without Quantization

model_name = "google/flan-t5-small"
import sentencepiece as spm
from transformers import T5Tokenizer, T5ForConditionalGeneration

tokenizer = T5Tokenizer.from_pretrained("google/flan-t5-small")
model = T5ForConditionalGeneration.from_pretrained("google/flan-t5-small")

input_text = "Hello, my name is "
input_ids = tokenizer(input_text, return_tensors="pt").input_ids

outputs = model.generate(input_ids)
print(tokenizer.decode(outputs[0]))

Output

<pad> annie scott</s>

helper.py

import torch

# ################ monkey patch for quanto
def named_module_tensors(module, recurse=False):
    for named_parameter in module.named_parameters(recurse=recurse):
      name, val = named_parameter
      flag = True
      if hasattr(val,"_data") or hasattr(val,"_scale"):
        if hasattr(val,"_data"):
          yield name + "._data", val._data
        if hasattr(val,"_scale"):
          yield name + "._scale", val._scale
      else:
        yield named_parameter

    for named_buffer in module.named_buffers(recurse=recurse):
      yield named_buffer

def dtype_byte_size(dtype):
    """
    Returns the size (in bytes) occupied by one parameter of type `dtype`.
    """
    import re
    if dtype == torch.bool:
        return 1 / 8
    bit_search = re.search(r"[^\d](\d+)$", str(dtype))
    if bit_search is None:
        raise ValueError(f"`dtype` is not a valid dtype: {dtype}.")
    bit_size = int(bit_search.groups()[0])
    return bit_size // 8

def compute_module_sizes(model):
    """
    Compute the size of each submodule of a given model.
    """
    from collections import defaultdict
    module_sizes = defaultdict(int)
    for name, tensor in named_module_tensors(model, recurse=True):
      size = tensor.numel() * dtype_byte_size(tensor.dtype)
      name_parts = name.split(".")
      for idx in range(len(name_parts) + 1):
        module_sizes[".".join(name_parts[:idx])] += size

    return module_sizes
from helper import compute_module_sizes
module_sizes = compute_module_sizes(model)
print(f"The model size is {module_sizes[''] * 1e-9} GB")

Output

The model size is 0.307844608 GB

Quantize the model (8-bit precision)

from quanto import quantize, freeze
import torch
quantize(model, weights=torch.int8, activations=None)

Freeze the model

  • This step takes a bit of memory, and so for the Pythia model that is shown in the lecture video, it will not run in the classroom.
  • This will work fine with the smaller T5-Flan model.
freeze(model)
module_sizes = compute_module_sizes(model)
print(f"The model size is {module_sizes[''] * 1e-9} GB")

Output

The model size is 0.12682868 GB

Try running inference on the quantized model

input_text = "Hello, my name is "
input_ids = tokenizer(input_text, return_tensors="pt").input_ids

outputs = model.generate(input_ids)
print(tokenizer.decode(outputs[0]))

Output

<pad> annie scott</s>
Comparing “linear quantization” to “downcasting”

To recap the difference between the “linear quantization” method in this lesson and the “downcasting” method in the previous lesson:

  • When downcasting a model, you convert the model’s parameters to a more compact data type (bfloat16). During inference, the model performs its calculations in this data type, and its activations are in this data type. Downcasting may work with the bfloat16 data type, but the model performance will likely degrade with any smaller data type, and won’t work if you convert to an integer data type (like the int8 in this lesson).

  • In this lesson, you used another quantization method, “linear quantization”, which enables the quantized model to maintain performance much closer to the original model by converting from the compressed data type back to the original FP32 data type during inference. So when the model makes a prediction, it is performing the matrix multiplications in FP32, and the activations are in FP32. This enables you to quantize the model in data types smaller than bfloat16, such as int8, in this example.
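
To see what this means in code, here is a rough, simplified sketch (my own illustration, not Quanto’s actual implementation) of what a linearly quantized layer conceptually does at inference time: the weights are stored as int8 together with their scale and zero point, but they are converted back to float32 before the matrix multiplication, so the computation itself runs in full precision.

import torch

# the int8 weights plus a scale and zero point are what gets stored (compact)
w_q = torch.randint(-128, 127, (4, 8), dtype=torch.int8)
scale, zero_point = 0.02, 3        # hypothetical per-tensor quantization parameters

x = torch.randn(1, 8)              # activations stay in float32

# at inference time, the weights are dequantized back to float32 ...
w_dequant = scale * (w_q.to(torch.float32) - zero_point)

# ... and the matrix multiplication runs in float32
y = x @ w_dequant.T
print(y.dtype)                     # torch.float32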

(slide image)

Quantization of LLMs

Recent SOTA quantization methods

(slide image)

For 2-bit quantization

(slide images)

Fine Tuning quantized models

(slide image)

Fine tune with QAT

(slide image)

Afterword

Finished this Hugging Face quantization fundamentals course at 14:58 on June 8, 2024.
