Related links:
- Meta-Llama-3-8B model files
- LLaMA-Factory repository
- Download Ollama
Environment Setup
- OS: Ubuntu 22.04.5 LTS
- Anaconda3: Miniconda3-latest-Linux-x86_64
- GPU: NVIDIA GeForce RTX 4090, 24 GB
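Before installing anything, it is worth confirming that the driver actually sees the card (a minimal sketch; assumes the NVIDIA driver and `nvidia-smi` are already installed):

```shell
# Query the GPU name and total VRAM; falls back to a hint if the driver is missing.
nvidia-smi --query-gpu=name,memory.total --format=csv,noheader \
  || echo "nvidia-smi not found - install/check the NVIDIA driver first"
```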
1. Prepare the conda environment
Create and activate a new conda environment:
conda create -n llama8b python=3.10 -y
conda activate llama8b
2. Download the LLaMA-Factory project
Clone the LLaMA-Factory source:
git clone https://github.com/hiyouga/LLaMA-Factory.git
3. Upgrade pip
It is recommended to upgrade pip before installing the project dependencies:
python -m pip install --upgrade pip
4. Install the project dependencies with pip
The requirements.txt file shipped with LLaMA-Factory lists every Python package the project needs, with version constraints. Install them all in one go, here via the USTC PyPI mirror:
pip install -r requirements.txt -i https://pypi.mirrors.ustc.edu.cn/simple
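To avoid passing `-i` to every pip command, the mirror can be made the default index. A sketch, assuming pip's standard per-user config location on Linux (`~/.config/pip/pip.conf`):

```shell
# Make the USTC mirror the default index for this user.
mkdir -p "$HOME/.config/pip"
cat > "$HOME/.config/pip/pip.conf" <<'EOF'
[global]
index-url = https://pypi.mirrors.ustc.edu.cn/simple
EOF
# Show the resulting config:
cat "$HOME/.config/pip/pip.conf"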
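To avoid passing `-i` to every pip command, the mirror can be made the default index. A sketch, assuming pip's standard per-user config location on Linux (`~/.config/pip/pip.conf`):

```shell
# Make the USTC mirror the default index for this user.
mkdir -p "$HOME/.config/pip"
cat > "$HOME/.config/pip/pip.conf" <<'EOF'
[global]
index-url = https://pypi.mirrors.ustc.edu.cn/simple
EOF
# Show the resulting config:
cat "$HOME/.config/pip/pip.conf"
```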
5. Download the Llama 3 model
Download the model files from one of the pages below; here we use ModelScope:
- Hugging Face Llama 3 page: https://huggingface.co/meta-llama/
- GitHub page: GitHub - meta-llama/llama3: The official Meta Llama 3 GitHub site
- ModelScope Llama3-8B page: Meta-Llama-3-8B-Instruct
git clone https://www.modelscope.cn/LLM-Research/Meta-Llama-3-8B-Instruct.git
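ModelScope stores the large weight files with Git LFS. If the clone finishes suspiciously fast, the weight files may be tiny LFS pointer stubs rather than real weights; running `git lfs install && git lfs pull` inside the repo fetches the actual data. A pointer stub is easy to recognize — its first line names the LFS spec. The check below demonstrates this on a synthetic file (not a real weight file):

```shell
# Returns success if the file is a Git LFS pointer stub rather than real data.
is_lfs_pointer() { head -n 1 "$1" 2>/dev/null | grep -q '^version https://git-lfs'; }

# Synthetic demonstration of what a pointer stub looks like:
printf 'version https://git-lfs.github.com/spec/v1\noid sha256:abc\nsize 1\n' > /tmp/lfs_stub.txt
if is_lfs_pointer /tmp/lfs_stub.txt; then
  echo "pointer stub detected - run: git lfs install && git lfs pull"
fi
```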
6. Run the base model
Switch to the LLaMA-Factory directory:
cd ~/LLaMA-Factory
The launch command is the same in every case; only --model_name_or_path differs depending on where you cloned the model. Use the variant that matches your path.

Variant 1:
CUDA_VISIBLE_DEVICES=0 python src/web_demo.py \
--model_name_or_path /root/LLaMA-Factory-main/Meta-Llama-3-8B \
--template llama3 \
--infer_backend vllm \
--vllm_enforce_eager

Variant 2:
CUDA_VISIBLE_DEVICES=0 python src/web_demo.py \
--model_name_or_path /root/LLaMA-Factory/Meta-Llama-3-8B-Instruct \
--template llama3 \
--infer_backend vllm \
--vllm_enforce_eager

Variant 3:
CUDA_VISIBLE_DEVICES=0 python src/web_demo.py \
--model_name_or_path /home/oneview/ai-test/model/Meta-Llama-3-8B-Instruct \
--template llama3 \
--infer_backend vllm \
--vllm_enforce_eager
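On startup, the web demo prints the local URL it is serving on (port 7080 in the run shown below). A quick probe confirms it is actually listening before you open a browser — a sketch, assuming `curl` is available and the port matches your log:

```shell
# Probe the demo's port (7080 here; match whatever URL the console printed).
if curl -sf http://127.0.0.1:7080 >/dev/null 2>&1; then
  echo "web demo is up"
else
  echo "nothing listening on 7080 yet"
fi
```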
Error
INFO 06-16 09:19:47 llm_engine.py:87] Initializing an LLM engine with config: model='/root/LLaMA-Factory-main/Meta-Llama-3-8B', tokenizer='/root/LLaMA-Factory-
06/16/2024 09:19:53 - INFO - llmtuner.data.template - Add pad token: <|eot_id|>
Running on local URL: http://0.0.0.0:7080
Could not create share link. Please check your internet connection or our status page: https://status.gradio.app.
Fix:
(See the CSDN blog post on "Could not create share link. Please check your internet connection or our status page".) Install modelscope and pin vllm to 0.3.3:
pip install modelscope -i https://pypi.mirrors.ustc.edu.cn/simple
pip install vllm==0.3.3 -i https://pypi.mirrors.ustc.edu.cn/simple
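After installing, it is worth confirming which vllm build pip actually resolved (a quick sketch; expects 0.3.3 after the pin above):

```shell
# Print the installed vllm version, or a hint if it is absent.
pip show vllm 2>/dev/null | grep -i '^version' || echo "vllm is not installed"
```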
Then relaunch the web demo:
CUDA_VISIBLE_DEVICES=0 python src/web_demo.py \
--model_name_or_path /root/LLaMA-Factory-main/Meta-Llama-3-8B \
--template llama3 \
--infer_backend vllm \
--vllm_enforce_eager
Reference: "Llama3 local deployment and efficient fine-tuning primer" (CSDN blog)