Installation
https://github.com/echonoshy/cgft-llm/blob/master/llama-factory/README.md
https://github.com/hiyouga/LLaMA-Factory/tree/main
[LLM fine-tuning] Fine-tuning a Chinese Llama 3 with LLaMA Factory (Bilibili video)
git clone https://github.com/hiyouga/LLaMA-Factory.git
cd LLaMA-Factory
pip install -e .
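The upstream README also suggests installing with the torch and metrics extras; afterwards, a quick sanity check that the package and CUDA are visible (a sketch, assuming a CUDA machine):
pip install -e ".[torch,metrics]"
llamafactory-cli version
python -c "import torch; print(torch.cuda.is_available())"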
Launch the web UI
cd LLaMA-Factory
llamafactory-cli webui
CUDA_VISIBLE_DEVICES=0 USE_MODELSCOPE_HUB=1 python src/webui.py
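CUDA_VISIBLE_DEVICES=0 pins the process to GPU 0, and USE_MODELSCOPE_HUB=1 makes LLaMA-Factory download models from ModelScope instead of Hugging Face (useful when Hugging Face is slow or unreachable). The same variables also work with the CLI entry point:
CUDA_VISIBLE_DEVICES=0 USE_MODELSCOPE_HUB=1 llamafactory-cli webui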
Check that the CLI installed correctly
llamafactory-cli -h
Monitor GPU usage
nvitop
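nvitop is a third-party monitor and needs a one-time install:
pip install nvitop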
Show the current path
pwd
Fine-tuning (put the config below in cust/train_llama3_lora_sft.yaml)
(run training from the command line: llamafactory-cli train cust/train_llama3_lora_sft.yaml)
(chat with the trained adapter: llamafactory-cli webchat — note that webchat expects an inference config with model, adapter, and template, not this training config; see the sketch after the config below)
### model
model_name_or_path: /root/autodl-tmp/models/Llama3-8B-Chinese-Chat

### method
stage: sft
do_train: true
finetuning_type: lora
lora_rank: 8
lora_alpha: 16
lora_dropout: 0
lora_target: q_proj,v_proj
use_unsloth: true

### dataset
dataset: fintech,identity
dataset_dir: data
template: llama3
cutoff_len: 1024
max_samples: 1000
packing: false
preprocessing_num_workers: 16

### output
output_dir: saves/LLaMA3-8B-Chinese-Chat/lora/train_2024-05-25-20-27-47
logging_steps: 5
save_steps: 100
plot_loss: true
report_to: none

### train
per_device_train_batch_size: 2
gradient_accumulation_steps: 8
learning_rate: 0.0002
num_train_epochs: 10.0
lr_scheduler_type: cosine
warmup_steps: 0
max_grad_norm: 1.0
optim: adamw_torch
fp16: true
flash_attn: auto
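The dataset line above (fintech,identity) refers to entries registered in data/dataset_info.json; identity ships with LLaMA-Factory, while a custom set like fintech has to be registered there. A minimal sketch, assuming an alpaca-style data/fintech.json (the file name and column mapping are illustrative):
"fintech": {
  "file_name": "fintech.json",
  "columns": {
    "prompt": "instruction",
    "query": "input",
    "response": "output",
    "history": "history"
  }
}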
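As noted above, llamafactory-cli webchat (and chat) take an inference config rather than the training one. A minimal sketch, assuming the adapter landed in the output_dir above (the file name cust/inference_llama3_lora_sft.yaml is illustrative):
model_name_or_path: /root/autodl-tmp/models/Llama3-8B-Chinese-Chat
adapter_name_or_path: saves/LLaMA3-8B-Chinese-Chat/lora/train_2024-05-25-20-27-47
template: llama3
finetuning_type: lora

llamafactory-cli webchat cust/inference_llama3_lora_sft.yaml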
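To merge the LoRA weights into a standalone checkpoint, llamafactory-cli export takes a similar config (a sketch; the export_dir value and the file name cust/merge_llama3_lora.yaml are illustrative):
model_name_or_path: /root/autodl-tmp/models/Llama3-8B-Chinese-Chat
adapter_name_or_path: saves/LLaMA3-8B-Chinese-Chat/lora/train_2024-05-25-20-27-47
template: llama3
finetuning_type: lora
export_dir: models/llama3_lora_merged

llamafactory-cli export cust/merge_llama3_lora.yaml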