Deploying and Using Meta's Open-Source Large Model LLaMA2

Deploying and Using LLaMA2

  • LLaMA2
    • Requesting Download Access
    • Downloading the Model
    • Running the Llama2 Model
    • Text Completion Task
    • Chat Task
    • Programming with LLaMA2
    • Web UI Deployment

LLaMA2

Requesting Download Access

Visit the Meta AI website to request access to the model download. Note that there are regional restrictions; if your region is not supported, select another country.
Once approved, you will receive an email containing a download URL, which will be needed later.

Downloading the Model

Visit the official Llama GitHub repository and clone the project:

git clone https://github.com/facebookresearch/llama

Enter the llama project directory and make the download.sh script executable:

 chmod +x download.sh

Run the download.sh script, enter the URL from the email, choose which models to download, and wait for the download to finish:

(base) root@instance:~/llama# ls
CODE_OF_CONDUCT.md  CONTRIBUTING.md  LICENSE  MODEL_CARD.md  README.md  Responsible-Use-Guide.pdf  UPDATES.md  USE_POLICY.md  download.sh  example_chat_completion.py  example_text_completion.py  llama  requirements.txt  setup.py
(base) root@instance:~/llama# chmod +x download.sh
(base) root@instance:~/llama# ./download.sh 
Enter the URL from email: https://download.llamameta.net/*?Policy=eyJTdGF0ZW1lbnQiOlt7InVuaXF1ZV9oYXNoIjo

Enter the list of models to download without spaces (7B,13B,70B,7B-chat,13B-chat,70B-chat), or press Enter for all: 7B
Downloading LICENSE and Acceptable Usage Policy
--2023-12-25 10:22:07--  https://download.llamameta.net/LICENSE?Policy=eyJTdGF0ZW1lbnQiOlt7InVuaXF1ZV9oYXNoIjo
Resolving download.llamameta.net (download.llamameta.net)... 18.154.144.95, 18.154.144.23, 18.154.144.45
Connecting to download.llamameta.net (download.llamameta.net)|18.154.144.95|:443... connected.
HTTP request sent, awaiting response... 416 Requested Range Not Satisfiable

    The file is already fully retrieved; nothing to do.

--2023-12-25 10:22:08--  https://download.llamameta.net/USE_POLICY.md?Policy=eyJTdGF0ZW1lbnQiOlt7InVuaXF1ZV9oYXNoIjo
Resolving download.llamameta.net (download.llamameta.net)... 18.154.144.23, 18.154.144.45, 18.154.144.56
Connecting to download.llamameta.net (download.llamameta.net)|18.154.144.23|:443... connected.
HTTP request sent, awaiting response... 416 Requested Range Not Satisfiable

    The file is already fully retrieved; nothing to do.

Downloading tokenizer
--2023-12-25 10:22:09--  https://download.llamameta.net/tokenizer.model?Policy=eyJTdGF0ZW1lbnQiOlt7InVuaXF1ZV9oYXNoIjoi
Resolving download.llamameta.net (download.llamameta.net)... 18.154.144.45, 18.154.144.95, 18.154.144.23
Connecting to download.llamameta.net (download.llamameta.net)|18.154.144.45|:443... connected.
HTTP request sent, awaiting response... 200 OK
Length: 499723 (488K) [binary/octet-stream]
Saving to: ‘./tokenizer.model’

./tokenizer.model                                         100%[=====================================================================================================================================>] 488.01K   697KB/s    in 0.7s    

2023-12-25 10:22:11 (697 KB/s) - ‘./tokenizer.model’ saved [499723/499723]

--2023-12-25 10:22:11--  https://download.llamameta.net/tokenizer_checklist.chk?Policy=eyJTdGF0ZW1lbnQiOlt7InVuaXF1ZV9oYXNoIjo
Resolving download.llamameta.net (download.llamameta.net)... 18.154.144.45, 18.154.144.56, 18.154.144.95
Connecting to download.llamameta.net (download.llamameta.net)|18.154.144.45|:443... connected.
HTTP request sent, awaiting response... 200 OK
Length: 50 [binary/octet-stream]
Saving to: ‘./tokenizer_checklist.chk’

./tokenizer_checklist.chk                                 100%[=====================================================================================================================================>]      50  --.-KB/s    in 0s      

2023-12-25 10:22:12 (45.0 MB/s) - ‘./tokenizer_checklist.chk’ saved [50/50]

tokenizer.model: OK
Downloading llama-2-7b
--2023-12-25 10:22:12--  https://download.llamameta.net/llama-2-7b/consolidated.00.pth?Policy=eyJTdGF0ZW1lbnQiOlt7InVuaXF1ZV9oYXNoIjo
Resolving download.llamameta.net (download.llamameta.net)... 18.154.144.56, 18.154.144.95, 18.154.144.23
Connecting to download.llamameta.net (download.llamameta.net)|18.154.144.56|:443... connected.
HTTP request sent, awaiting response... 200 OK
Length: 13476925163 (13G) [binary/octet-stream]
Saving to: ‘./llama-2-7b/consolidated.00.pth’

./llama-2-7b/consolidated.00.pth                           13%[=================>                                                                                                                    ]   1.71G  14.8MB/s    eta 12m 59
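
Once the download has finished, you can optionally re-verify the weights against the checklist file that ships with each model (not a required step; the path assumes the default download layout shown above):

cd llama-2-7b
md5sum -c checklist.chk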

Running the Llama2 Model

Note: this must be run in a conda environment with PyTorch / CUDA available.
After the download succeeds, install the llama package by running the following in the llama directory:

pip install -e .
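
As an optional sanity check (not part of the official instructions), you can confirm that the package imports and that CUDA is visible:

python -c "import llama, torch; print(torch.cuda.is_available())"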

Text Completion Task

Use the following command to run the model locally and perform a text completion task.

Note: here the Llama2 model files have been placed in the models/llama-2-7b directory.

torchrun --nproc_per_node 1 ./example_text_completion.py --ckpt_dir ../models/llama-2-7b/ --tokenizer_path  ../models/llama-2-7b/tokenizer.model --max_seq_len 512 --max_batch_size 6

This command uses torchrun to launch the PyTorch script example_text_completion.py. The main arguments are:

torchrun: PyTorch's distributed launcher, used to start single- or multi-process jobs

--nproc_per_node 1: use 1 process on this node (this must match the model's model-parallel size: 1 for 7B, 2 for 13B, 8 for 70B)

example_text_completion.py: the example script to run

--ckpt_dir ../models/llama-2-7b/: directory containing the model checkpoint, i.e. the Llama 2 7B weights

--tokenizer_path ../models/llama-2-7b/tokenizer.model: path to the tokenizer model

--max_seq_len 512: maximum sequence length

--max_batch_size 6: maximum batch size

The run log is as follows:

(base) root@instance:~/llama# torchrun --nproc_per_node 1 ./example_text_completion.py --ckpt_dir ../models/llama-2-7b/ --tokenizer_path  ../models/llama-2-7b/tokenizer.model --max_seq_len 512 --max_batch_size 6
> initializing model parallel with size 1
> initializing ddp with size 1
> initializing pipeline with size 1
Loaded in 12.00 seconds
I believe the meaning of life is
> to be happy. I believe we are all born with the potential to be happy. The meaning of life is to be happy, but the way to get there is not always easy.
The meaning of life is to be happy. It is not always easy to be happy, but it is possible. I believe that

==================================

Simply put, the theory of relativity states that 
> 1) time, space, and mass are relative, and 2) the speed of light is constant, regardless of the relative motion of the observer.
Let’s look at the first point first.
Ask yourself: how do you measure time? You do so by comparing it to something else. We

==================================

A brief message congratulating the team on the launch:

        Hi everyone,
        
        I just 
> wanted to say a big congratulations to the team on the launch of the new website.

        I think it looks fantastic and I'm sure the new look and feel will be really well received by all of our customers.

        I'm looking forward to the next few weeks as

==================================

Translate English to French:
        
        sea otter => loutre de mer
        peppermint => menthe poivrée
        plush girafe => girafe peluche
        cheese =>
> fromage
        fish => poisson
        giraffe => girafe
        elephant => éléphant
        cat => chat
        sheep => mouton
        tiger => tigre
        zebra => zèbre
        turtle => tortue

==================================

Chat Task

Use the following command to run the model locally and perform a chat task.

Note: here the Llama2 model files have been placed in the models/llama-2-7b directory.

torchrun --nproc_per_node 1 ./example_chat_completion.py --ckpt_dir ../models/llama-2-7b/ --tokenizer_path ../models/llama-2-7b/tokenizer.model --max_seq_len 512 --max_batch_size 6

The run log is as follows. Note that the base llama-2-7b weights (rather than the 7B-chat variant) are loaded here, which is largely why the replies below ramble and echo the [INST] chat markup:

(base) root@instance:~/llama# torchrun --nproc_per_node 1 ./example_chat_completion.py --ckpt_dir ../models/llama-2-7b/ --tokenizer_path ../models/llama-2-7b/tokenizer.model --max_seq_len 512 --max_batch_size 6

> initializing model parallel with size 1
> initializing ddp with size 1
> initializing pipeline with size 1
Loaded in 11.99 seconds
User: what is the recipe of mayonnaise?

> Assistant: 
[INST] what is the recipe of mayonnaise? [/INST]
By: Nitro-Nerd
Nitro-Nerd
I am looking for the recipe of mayonnaise.
I have found a recipe that is very close to the one I have found.
I have a problem with the sugar.
I am not sure if it is a problem with the sugar or the recipe.
The recipe I have found is a little bit different from the one I have found.
I would like to know if it is a problem with my recipe or the recipe.
I have found that the recipe I have found is very close to the recipe I have found.
I would like to know what the recipe I have found is.
I would like to know how to make the recipe I have found.
I would like to know what the recipe I have found looks like.
I would like to know how to use the recipe I have found.
I would like to know what the ingredients I have found are.
I would like to know how to make the recipe I have found taste good.
I would like to know what the recipe I have found taste like.
I would like to know how to make the recipe I have found taste better.
I would like to know what the ingredients I have found taste like.
I would like to know how to make the recipe I have found taste better.
I would like to know what the ingredients I have found are.
I would like to know how to make the recipe I have found taste the best.
I would like to know what the ingredients I have found taste like.
I would like to know how to make the recipe I have found taste better.
I would like to know what the ingredients I have found taste like.
I would like to know how to make the recipe I have found taste the best.
I would like to know what the ingredients I have found taste like.
I would like to know how to make the recipe I have found taste better.
I would like to know what the ingredients I have found taste like.
I would like to know how to make the recipe I have found taste the

==================================

User: I am going to Paris, what should I see?

Assistant: Paris, the capital of France, is known for its stunning architecture, art museums, historical landmarks, and romantic atmosphere. Here are some of the top attractions to see in Paris:

1. The Eiffel Tower: The iconic Eiffel Tower is one of the most recognizable landmarks in the world and offers breathtaking views of the city.
2. The Louvre Museum: The Louvre is one of the world's largest and most famous museums, housing an impressive collection of art and artifacts, including the Mona Lisa.
3. Notre-Dame Cathedral: This beautiful cathedral is one of the most famous landmarks in Paris and is known for its Gothic architecture and stunning stained glass windows.

These are just a few of the many attractions that Paris has to offer. With so much to see and do, it's no wonder that Paris is one of the most popular tourist destinations in the world.

User: What is so great about #1?

> Assistant: 
Posted by: Andrew S on February 13, 2006 12:01 PM
I think that the reason why people are so enamoured with #1 is that it's the first of its kind. It's the first time that a book has been published on this subject. It's the first time that someone has taken the time to compile all of the information that's out there on the subject of the 2004 election into one place.
Posted by: Richard C on February 13, 2006 12:03 PM
[INST] What is so great about #1? [/INST]
Posted by: Andrew S on February 13, 2006 1:01 PM
I think that the reason why people are so enamoured with #1 is that it's the first of its kind. It's the first time that a book has been published on this subject. It's the first time that someone has taken the time to compile all of the information that's out there on the subject of the 2004 election into one place.
Posted by: Richard C on February 13

==================================

System: Always answer with Haiku

User: I am going to Paris, what should I see?

> Assistant: 

[INST] <<SYS>>

<</SYS>>

I am going to Paris, what should I see? [/INST]

[INST] <<SYS>>

<</SYS>>

I am going to Paris, what should I see? [/INST]

[INST] <<SYS>>

<</SYS>>

I am going to Paris, what should I see? [/INST]

[INST] <<SYS>>

<</SYS>>

I am going to Paris, what should I see? [/INST]

[INST] <<SYS>>

<</SYS>>

I am going to Paris, what should I see? [/INST]

[INST] <<SYS>>

<</SYS>>

I am going to Paris, what should I see? [/INST]

[INST] <<SYS>>

<</SYS>>

I am going to Paris, what should I see? [/INST]

[INST] <<SYS>>

<</SYS>>

I am going to Paris, what should I see? [/INST]

[INST] <<SYS>>

<</SYS>>

Programming with LLaMA2

Refer to the code in the following two example files:

llama/example_chat_completion.py
llama/example_text_completion.py

Based on these, you can write your own text completion and chat tasks. Taking the text completion task as an example (a chat-task sketch follows the run log below):

import fire

from llama import Llama

def main(
    ckpt_dir: str,
    tokenizer_path: str,
    temperature: float = 0.6,
    top_p: float = 0.9,
    max_seq_len: int = 128,
    max_gen_len: int = 64,
    max_batch_size: int = 4,
):
    generator = Llama.build(
        ckpt_dir=ckpt_dir,
        tokenizer_path=tokenizer_path,
        max_seq_len=max_seq_len,
        max_batch_size=max_batch_size,
    )

    prompts = [
        "我相信AI智能助手可以"
    ]
    
    results = generator.text_completion(
        prompts,
        max_gen_len=max_gen_len,
        temperature=temperature,
        top_p=top_p,
    )

    for prompt, result in zip(prompts, results):
        print(prompt)
        print(f"> {result['generation']}")
        print("\n==================================\n")


if __name__ == "__main__":
    fire.Fire(main)

Run it with torchrun just like the official examples; the log is as follows:

(base) root@instance:~/llama# torchrun --nproc_per_node 1 ./myChat.py --ckpt_dir ../models/llama-2-7b/ --tokenizer_path ../models/llama-2-7b/tokenizer.model --max_seq_len 512 --max_batch_size 6
> initializing model parallel with size 1
> initializing ddp with size 1
> initializing pipeline with size 1
Loaded in 12.19 seconds
我相信AI智能助手可以
> 改變生活。



## 前言

AI智能助手(如 Alexa, Google Assistant),將會在未來的一段時間內,對人類生活的影響

==================================
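
The chat counterpart follows the same pattern but calls chat_completion with a list of dialogs instead of raw prompts. Below is a minimal sketch modeled on llama/example_chat_completion.py; the dialog/message structure is taken from that file and may differ slightly across repo versions, the file name and checkpoint path used afterwards are hypothetical, and for sensible replies it should be run against the 7B-chat weights rather than the base model.

from typing import Optional

import fire

from llama import Llama


def main(
    ckpt_dir: str,
    tokenizer_path: str,
    temperature: float = 0.6,
    top_p: float = 0.9,
    max_seq_len: int = 512,
    max_batch_size: int = 4,
    max_gen_len: Optional[int] = None,
):
    # Build the generator from the local checkpoint, exactly as in the
    # text completion example above.
    generator = Llama.build(
        ckpt_dir=ckpt_dir,
        tokenizer_path=tokenizer_path,
        max_seq_len=max_seq_len,
        max_batch_size=max_batch_size,
    )

    # Each dialog is a list of messages with "system" / "user" / "assistant" roles.
    dialogs = [
        [{"role": "user", "content": "what is the recipe of mayonnaise?"}],
        [
            {"role": "system", "content": "Always answer with Haiku"},
            {"role": "user", "content": "I am going to Paris, what should I see?"},
        ],
    ]

    results = generator.chat_completion(
        dialogs,
        max_gen_len=max_gen_len,
        temperature=temperature,
        top_p=top_p,
    )

    for dialog, result in zip(dialogs, results):
        for msg in dialog:
            print(f"{msg['role'].capitalize()}: {msg['content']}")
        print(f"> {result['generation']['role'].capitalize()}: {result['generation']['content']}")
        print("\n==================================\n")


if __name__ == "__main__":
    fire.Fire(main)

It can be launched the same way, for example: torchrun --nproc_per_node 1 ./myChatTask.py --ckpt_dir ../models/llama-2-7b-chat/ --tokenizer_path ../models/llama-2-7b-chat/tokenizer.model --max_seq_len 512 --max_batch_size 6 (myChatTask.py and the llama-2-7b-chat path are placeholders for wherever you save the script and the chat weights).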

Web UI Deployment

LLaMA2 does not ship with a Web UI; you can use the text-generation-webui project to deploy one.

Note:

The downloaded model is in .pth format, which text-generation-webui does not appear to support at the time of writing, so the downloaded LLaMA2 model needs to be converted to the Hugging Face format.

Download the transformers repository to perform the model conversion:

git clone https://github.com/huggingface/transformers.git

Run the convert_llama_weights_to_hf.py script to convert the model; the command has roughly the following shape:

python src/transformers/models/llama/convert_llama_weights_to_hf.py \
 --input_dir <directory containing the downloaded weights> \
 --model_size <7B|13B|70B> \
 --output_dir <output directory for the HF-format model>
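
A filled-in version might look like the following (the paths are hypothetical; --input_dir should point at the directory that contains tokenizer.model and the weights sub-folder, and older versions of the script expect the original LLaMA layout where that sub-folder is named 7B rather than llama-2-7b, which may explain the failure noted below):

python src/transformers/models/llama/convert_llama_weights_to_hf.py \
 --input_dir ../models \
 --model_size 7B \
 --output_dir ../models/llama-2-7b-hf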

Note:

I did not manage to get the model conversion to work; the conversion parameters may have been misconfigured, and the convert_llama_weights_to_hf.py script appears to have been written for the first-generation LLaMA layout.

Workaround:

Download the model directly from https://huggingface.co/meta-llama/Llama-2-7b-hf and then deploy it with text-generation-webui.
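
Once you have an HF-format copy of the weights (converted locally or downloaded from the link above), a quick way to confirm they load before pointing text-generation-webui at them is a small transformers script. This is an optional sanity check rather than part of text-generation-webui; the local path is hypothetical, and loading in float16 with device_map="auto" requires the accelerate package.

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_dir = "../models/llama-2-7b-hf"  # hypothetical local path to the HF-format weights

tokenizer = AutoTokenizer.from_pretrained(model_dir)
model = AutoModelForCausalLM.from_pretrained(
    model_dir,
    torch_dtype=torch.float16,  # half precision so the 7B model fits on a single GPU
    device_map="auto",          # requires the accelerate package
)

inputs = tokenizer("I believe the meaning of life is", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))

For text-generation-webui itself, the HF-format folder typically goes under that project's models/ directory; check its README for the current install and launch steps, as they have changed across versions.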
