Note:
Prompt engineering is a discipline that blends art and science — it calls for an understanding of the technology as well as creativity and strategic thinking. This article is my write-up after studying a shared talk on LLM strategies. It sets aside the conventional prompt engineering techniques already widely discussed and documented online, and instead presents new insights learned through experimentation, along with some different perspectives on understanding and approaching certain techniques.
Table of Contents
This article covers the following, where 🔵 marks prompting techniques suitable for beginners and 🔴 marks advanced strategies:
1. [🔵] Structuring Prompts using the CO-STAR framework
(C) Context: Provide background information on the task
(O) Objective: Define what the task is that you want the LLM to perform
(S) Style: Specify the writing style you want the LLM to use
(T) Tone: Set the attitude of the response
(A) Audience: Identify who the response is intended for
(R) Response: Provide the response format
A practical application of CO-STAR
2. [🔵] Sectioning Prompts Using Delimiters
Delimiters as special characters
Delimiters as XML tags
3. [🔴] Creating System Prompts With LLM Guardrails
Terminology surrounding System Prompts
What are System Prompts?
When should System Prompts be used?
What should System Prompts include?
But then what goes into the “normal” prompts to the chat?
Extra: Making LLM guardrails dynamic
4. [🔴] Analyzing datasets using only LLMs, without plugins or code
Types of dataset analysis that LLMs are *not* great at
Types of dataset analysis that LLMs are great at
Analyzing a Kaggle dataset using only LLMs
Validating the LLM’s analysis
What if we use ChatGPT’s Advanced Data Analysis plugin?
So… when should we use LLMs to analyze datasets?
Now, back to prompt engineering!
1. [🔵] Structuring Prompts using the CO-STAR framework
Effective prompt structuring is crucial for eliciting optimal responses from an LLM. The CO-STAR framework, a brainchild of GovTech Singapore’s Data Science & AI team, is a handy template for structuring prompts. It considers all the key aspects that influence the effectiveness and relevance of an LLM’s response, leading to more optimal responses.
CO-STAR framework — Image by author
Here’s how it works:
(C) Context: Provide background information on the task
This helps the LLM understand the specific scenario being discussed, ensuring its response is relevant.
(O) Objective: Define what the task is that you want the LLM to perform
Being clear about your objective helps the LLM to focus its response on meeting that specific goal.
(S) Style: Specify the writing style you want the LLM to use
This could be a particular famous person’s style of writing, or a particular expert in a profession, like a business analyst expert or CEO. This guides the LLM to respond with the manner and choice of words aligned with your needs.
(T) Tone: Set the attitude of the response
This ensures the LLM’s response resonates with the intended sentiment or emotional context required. Examples are formal, humorous, empathetic, among others.
(A) Audience: Identify who the response is intended for
Tailoring the LLM’s response to an audience, such as experts in a field, beginners, children, and so on, ensures that it is appropriate and understandable in your required context.
(R) Response: Provide the response format
This ensures that the LLM outputs in the exact format that you require for downstream tasks. Examples include a list, a JSON, a professional report, and so on. For most LLM applications which work on the LLM responses programmatically for downstream manipulations, a JSON output format would be ideal.
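The six components above map naturally onto a small template helper, so prompts stay consistently structured across an application. A minimal sketch — the function name and `# SECTION #` markers are illustrative choices, not part of the framework itself:

```python
def co_star_prompt(context, objective, style, tone, audience, response):
    """Assemble a CO-STAR prompt with one labelled section per component."""
    sections = [
        ("CONTEXT", context),
        ("OBJECTIVE", objective),
        ("STYLE", style),
        ("TONE", tone),
        ("AUDIENCE", audience),
        ("RESPONSE", response),
    ]
    return "\n\n".join(f"# {name} #\n{content}" for name, content in sections)

prompt = co_star_prompt(
    context="I sell handmade candles online.",
    objective="Write a short product description for a lavender candle.",
    style="Warm and conversational, like a small-business newsletter.",
    tone="Friendly",
    audience="Busy professionals looking for ways to relax.",
    response="A single paragraph of at most 50 words.",
)
```

Keeping the sections in a fixed order also makes it easy to spot which components (often Style, Tone, or Audience) you have left empty.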
A practical application of CO-STAR
CO-STAR 的实际应用
Here’s a real-world example of why the CO-STAR framework is useful. Let’s say you work as a social media manager, and you need help drafting a Facebook post to advertise your company’s new product.
下面是一个真实示例,说明了 CO-STAR 框架为何有用。假设您是一名社交媒体经理,您需要帮助起草 Facebook 帖子来宣传您公司的新产品。
A quick prompt without CO-STAR might look something like this:
没有 CO-STAR 的快速提示可能如下所示:
Write a facebook post to advertise my company’s new product. My company’s name is Alpha and the product is called Beta, a new ultra-fast hairdryer.
写一篇 Facebook 帖子来宣传我公司的新产品。我公司的名字叫Alpha,产品叫Beta,是一种新型超快速吹风机。
Here’s the output from GPT-4:
这是 GPT-4 的输出:
Introducing the Revolutionary Beta Hairdryer by Alpha!
- Say goodbye to long, tedious hair drying sessions! Alpha is proud to unveil the Beta, our latest innovation in haircare technology.
- Ultra-Fast Drying: The Beta is designed to cut your drying time in half, thanks to its advanced, powerful motor. Get ready faster and enjoy more time doing what you love!
Shop Now: [Link to product page]
While adequate, this output is painfully generic, lacking the specific nuances and targeted appeal necessary to resonate with your company’s audience.
Here’s an example of using the CO-STAR template instead, which reminds you to incorporate other aspects of your task into your prompt, specifically the STYLE, TONE, and AUDIENCE aspects that are missing from the previous quick prompt:

# CONTEXT #
I want to advertise my company’s new product. My company’s name is Alpha and the product is called Beta, which is a new ultra-fast hairdryer.

# OBJECTIVE #
Create a Facebook post for me, which aims to get people to click on the product link to purchase it.

# STYLE #
Follow the writing style of successful companies that advertise similar products, such as Dyson.

# TONE #
Persuasive

# AUDIENCE #
My company’s audience profile on Facebook is typically the older generation. Tailor your post to target what this audience typically looks out for in hair products.

# RESPONSE #
The Facebook post, kept concise yet impactful.
By applying CO-STAR, GPT-4’s response becomes significantly more targeted and effective:

Embrace Effortless Elegance with Alpha Beta — The Ultra-Fast Hairdryer for the Wise Generation.

Rediscover the joy of simple, effective hair care with Alpha Beta. Our latest innovation is more than just a hairdryer; it’s a promise of swift, gentle, and reliable hair styling for those who appreciate the finer things in life.

- Easy and Intuitive Use: Say goodbye to complicated gadgets. Alpha Beta is crafted for comfort and simplicity, perfect for those who value straightforward, hassle-free technology.

- Time-Saving Technology: We understand your time is precious. That’s why Alpha Beta cuts down drying time significantly, giving you more moments to enjoy life’s pleasures.

Make every day a good hair day with Alpha Beta. Experience the blend of sophistication and simplicity today.

[Your Product Link Here]

The CO-STAR framework guides you to provide all of the crucial pieces of information about your task to the LLM in a structured manner, ensuring a tailored and optimized response to exactly what you need.
2. [🔵] Sectioning Prompts Using Delimiters
Image generated by DALL·E 3
Delimiters are special tokens that help the LLM distinguish which parts of your prompt it should consider as a single unit of meaning. This is important because your entire prompt arrives at the LLM as a single long sequence of tokens. Delimiters provide structure to this sequence of tokens by fencing off specific parts of your prompt to be treated differently.
It is noteworthy that delimiters may not make a difference to the quality of an LLM’s response for straightforward tasks. However, the more complex the task, the more impact the usage of delimiters for sectioning has on the LLM’s response.
Delimiters as Special Characters
A delimiter could be any sequence of special characters that usually wouldn’t appear together, for example:
- ###
- ===
- >>>
The number and type of special characters chosen is inconsequential, as long as they are unique enough for the LLM to understand them as content separators instead of normal punctuation.
Here’s an example of how you might use such delimiters in a prompt:
Classify the sentiment of each conversation in <<<CONVERSATIONS>>> as
‘Positive’ or ‘Negative’. Give the sentiment classifications without any other preamble text.

###

EXAMPLE CONVERSATIONS

[Agent]: Good morning, how can I assist you today?
[Customer]: This product is terrible, nothing like what was advertised!
[Customer]: I’m extremely disappointed and expect a full refund.

[Agent]: Good morning, how can I help you today?
[Customer]: Hi, I just wanted to say that I’m really impressed with your product. It exceeded my expectations!

###

EXAMPLE OUTPUTS

Negative
Positive

###

<<<
[Agent]: Hello! Welcome to our support. How can I help you today?
[Customer]: Hi there! I just wanted to let you know I received my order, and it’s fantastic!
[Agent]: That’s great to hear! We’re thrilled you’re happy with your purchase. Is there anything else I can assist you with?
[Customer]: No, that’s it. Just wanted to give some positive feedback. Thanks for your excellent service!

[Agent]: Hello, thank you for reaching out. How can I assist you today?
[Customer]: I’m very disappointed with my recent purchase. It’s not what I expected at all.
[Agent]: I’m sorry to hear that. Could you please provide more details so I can help?
[Customer]: The product is of poor quality and it arrived late. I’m really unhappy with this experience.
>>>

Above, the examples are sectioned using the delimiter ###, with the section headings EXAMPLE CONVERSATIONS and EXAMPLE OUTPUTS in capital letters to differentiate them. The preamble states that the conversations to be classified are sectioned inside <<<CONVERSATIONS>>>, and these conversations are subsequently given to the LLM at the bottom of the prompt without any explanatory text. But the LLM understands that these are the conversations it should classify, due to the presence of the delimiters <<< and >>>.

Here is the output from GPT-4, with the sentiment classifications given without any other preamble text, just as we asked for:

Positive
Negative
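In practice, the conversations you want classified usually come from data rather than being typed by hand, so the same sectioning is worth assembling programmatically. A sketch of this idea — the function and variable names are illustrative:

```python
def build_sentiment_prompt(examples, conversations):
    """Section few-shot examples with ### and fence the target
    conversations inside <<< >>> delimiters."""
    example_convs = "\n\n".join(conv for conv, _ in examples)
    example_outputs = "\n".join(label for _, label in examples)
    target = "\n\n".join(conversations)
    return (
        "Classify the sentiment of each conversation in <<<CONVERSATIONS>>> as\n"
        "'Positive' or 'Negative'. Give the sentiment classifications "
        "without any other preamble text.\n\n"
        "###\nEXAMPLE CONVERSATIONS\n\n" + example_convs + "\n\n"
        "###\nEXAMPLE OUTPUTS\n\n" + example_outputs + "\n\n"
        "###\n<<<\n" + target + "\n>>>"
    )

prompt = build_sentiment_prompt(
    examples=[
        ("[Customer]: This product is terrible!", "Negative"),
        ("[Customer]: It exceeded my expectations!", "Positive"),
    ],
    conversations=["[Customer]: I love it, thanks!"],
)
```

Because the delimiters are concatenated in one place, you can swap ### for any other unique character sequence without touching the data.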
Delimiters as XML Tags

Another approach to using delimiters is having them as XML tags. XML tags are tags enclosed in angle brackets, with opening and closing tags, for example <tag> and </tag>. This is effective because LLMs have been trained on a lot of web content in XML and have learned to understand its formatting.

Here’s the same prompt as above, but structured using XML tags as delimiters instead:
Classify the sentiment of the following conversations into one of two classes, using the examples given. Give the sentiment classifications without any other preamble text.

<classes>
Positive
Negative
</classes>

<example-conversations>
[Agent]: Good morning, how can I assist you today?
[Customer]: This product is terrible, nothing like what was advertised!
[Customer]: I’m extremely disappointed and expect a full refund.

[Agent]: Good morning, how can I help you today?
[Customer]: Hi, I just wanted to say that I’m really impressed with your product. It exceeded my expectations!
</example-conversations>

<example-classes>
Negative
Positive
</example-classes>

<conversations>
[Agent]: Hello! Welcome to our support. How can I help you today?
[Customer]: Hi there! I just wanted to let you know I received my order, and it’s fantastic!
[Agent]: That’s great to hear! We’re thrilled you’re happy with your purchase. Is there anything else I can assist you with?
[Customer]: No, that’s it. Just wanted to give some positive feedback. Thanks for your excellent service!

[Agent]: Hello, thank you for reaching out. How can I assist you today?
[Customer]: I’m very disappointed with my recent purchase. It’s not what I expected at all.
[Agent]: I’m sorry to hear that. Could you please provide more details so I can help?
[Customer]: The product is of poor quality and it arrived late. I’m really unhappy with this experience.
</conversations>
It is beneficial to use the same nouns for the XML tags as the words you have used to describe them in the instructions. The instructions we gave in the prompt above were:

Classify the sentiment of the following conversations into one of two classes, using the examples given. Give the sentiment classifications without any other preamble text.

Here we used the nouns conversations, classes, and examples. As such, the XML tags we use as delimiters are <conversations>, <classes>, <example-conversations>, and <example-classes>. This ensures that the LLM understands how your instructions relate to the XML tags used as delimiters.

Again, the sectioning of your instructions in a clear and structured manner through the use of delimiters ensures that GPT-4 responds exactly how you want it to:

Positive
Negative
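A small helper keeps the tag nouns consistent between your instructions and the fenced content. A sketch, with illustrative names:

```python
def tag(name, content):
    """Wrap content in opening/closing XML-style delimiter tags."""
    return f"<{name}>\n{content}\n</{name}>"

instructions = (
    "Classify the sentiment of the following conversations into one of two "
    "classes, using the examples given. Give the sentiment classifications "
    "without any other preamble text."
)

# Each section name matches a noun used in the instructions above.
prompt = "\n\n".join([
    instructions,
    tag("classes", "Positive\nNegative"),
    tag("example-conversations", "[Customer]: This product is terrible!"),
    tag("example-classes", "Negative"),
    tag("conversations", "[Customer]: I love it, thanks!"),
])
```

Since the tags are generated from one function, a rename (say, conversations to dialogues) propagates to both the opening and closing tag and cannot fall out of sync.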
3. [🔴] Creating System Prompts With LLM Guardrails
Before diving in, it is important to note that this section is relevant only to LLMs that possess a System Prompt feature, unlike the other sections in this article which are relevant for any LLM. The most notable LLM with this feature is, of course, ChatGPT, and therefore we will use ChatGPT as the illustrating example for this section.
Terminology surrounding System Prompts
First, let’s iron out terminology: With regards to ChatGPT, there exists a plethora of resources using these 3 terms almost interchangeably: “System Prompts”, “System Messages”, and “Custom Instructions”. This has proved confusing to many (including me!), so much so that OpenAI released an article explaining these terminologies. Here’s a quick summary of it:
Chat Completions API system message vs Custom Instructions in UI | OpenAI Help Center: https://help.openai.com/en/articles/8234522-chat-completions-api-system-message-vs-custom-instructions-in-ui
- “System Prompts” and “System Messages” are terms used when interacting with ChatGPT programmatically over its Chat Completions API.
- On the other hand, “Custom Instructions” is the term used when interacting with ChatGPT over its user interface at https://chat.openai.com/.
Image from Enterprise DNA Blog
Overall, though, the 3 terms refer to the same thing, so don’t let the terminology confuse you! Moving forward, this section will use the term “System Prompts”. Now let’s dive in!
What are System Prompts?
System Prompts are an additional prompt where you provide instructions on how the LLM should behave. It is considered additional as it is outside of your “normal” prompts (better known as User Prompts) to the LLM.
Within a chat, every time you provide a new prompt, System Prompts act like a filter that the LLM automatically applies before giving its response to your new prompt. This means that the System Prompts are taken into account every time the LLM responds within the chat.
When should System Prompts be used?
The first question on your mind might be: Why should I provide instructions inside the System Prompt when I can also provide them in my first prompt to a new chat, before further conversations with the LLM?
The answer is that LLMs have a limit to their conversational memory. In the latter case, as the conversation carries on, the LLM is likely to “forget” this first prompt you provided to the chat, making these instructions obsolete.
On the other hand, when instructions are provided in the System Prompt, these System Prompt instructions are automatically taken into account together with each new prompt provided to the chat. This ensures that the LLM continues to receive these instructions even as the conversation carries on, no matter how long the chat becomes.
In conclusion:
Use System Prompts to provide instructions that you want the LLM to remember when responding throughout the entire chat.
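Programmatically, this "always taken into account" behaviour comes from the fact that the system message is resent at the top of the messages list on every call to the Chat Completions API. A sketch of how the message list might be maintained (the function name and the wording of the system prompt are my own):

```python
SYSTEM_PROMPT = (
    "You will answer questions using the provided text. "
    'You will respond with a JSON object in this format: {"Question": "Answer"}.'
)

def build_messages(history, new_user_prompt):
    """Place the system message first on every call, so each completion
    sees the instructions no matter how long the chat history grows."""
    return (
        [{"role": "system", "content": SYSTEM_PROMPT}]
        + history
        + [{"role": "user", "content": new_user_prompt}]
    )

# On each turn, the result would be passed to the Chat Completions API, e.g.:
# client.chat.completions.create(model="gpt-4", messages=build_messages(history, q))
history = [
    {"role": "user", "content": "What is the text about?"},
    {"role": "assistant", "content": '{"What is the text about?": "..."}'},
]
messages = build_messages(history, "Who is the author?")
```

If the history grows past the model's context window, it is the older user/assistant turns you would trim, never the system message.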
What should System Prompts include?
Instructions in the System Prompt typically include the following categories:
- Task definition, so the LLM will always remember what it has to do throughout the chat.
- Output format, so the LLM will always remember how it should respond.
- Guardrails, so the LLM will always remember how it should *not* respond. Guardrails are an emerging field in LLM governance, referring to configured boundaries that an LLM is allowed to operate in.
For example, a System Prompt might look like this:
You will answer questions using this text: [insert text].
You will respond with a JSON object in this format: {“Question”: “Answer”}.
If the text does not contain sufficient information to answer the question, do not make up information and give the answer as “NA”.
You are only allowed to answer questions related to [insert scope]. Never answer any questions related to demographic information such as age, gender, and religion.

Where each portion relates to the categories as follows:

Breaking down a System Prompt — Image by author
But then what goes into the “normal” prompts to the chat?
Now you might be thinking: That sounds like a lot of information already being given in the System Prompt. What do I put in my “normal” prompts (better known as User Prompts) to the chat then?
The System Prompt outlines the general task at hand. In the above System Prompt example, the task has been defined to only use a specific piece of text for question-answering, and the LLM is instructed to respond in the format {"Question": "Answer"}.

You will answer questions using this text: [insert text].
You will respond with a JSON object in this format: {“Question”: “Answer”}.

In this case, each User Prompt to the chat would simply be the question that you want answered using the text. For example, a User Prompt might be "What is the text about?", and the LLM would respond with {"What is the text about?": "The text is about..."}.
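Pinning the output to a {"Question": "Answer"} JSON object pays off downstream: the reply can be parsed rather than scraped from free text. A minimal sketch — the fallback behaviour on malformed replies is my own choice, echoing the "NA" instruction in the System Prompt:

```python
import json

def extract_answer(llm_reply, question):
    """Parse the {"Question": "Answer"} JSON the System Prompt asks for,
    falling back to "NA" if the reply is malformed or the question is missing."""
    try:
        parsed = json.loads(llm_reply)
    except json.JSONDecodeError:
        return "NA"
    return parsed.get(question, "NA")

answer = extract_answer(
    '{"What is the text about?": "The text is about prompt engineering."}',
    "What is the text about?",
)
```

Guarding the parse like this matters because even with a format instruction in the System Prompt, an LLM can occasionally emit invalid JSON.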
But let’s generalize this task example further. In practice, it is more likely that you have multiple pieces of text that you want to ask questions on, rather than just one. In this case, we could edit the first line of the above System Prompt from

You will answer questions using this text: [insert text].

to

You will answer questions using the provided text.

Now, each User Prompt to the chat would include both the text to conduct question-answering over and the question to be answered, such as:
<text>
[insert text]
</text>

<question>
[insert question]
</question>

Here, we also use XML tags as delimiters in order to provide the 2 required pieces of information to the LLM in a structured manner. The nouns used in the XML tags, text and question, correspond to the nouns used in the System Prompt, so that the LLM understands how the tags relate to the System Prompt instructions.
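Each User Prompt can then be assembled from the two required pieces of information, reusing the same tag nouns as the System Prompt. A sketch with illustrative names:

```python
def build_user_prompt(text, question):
    """Wrap the passage and the question in the XML tags
    named in the System Prompt (text and question)."""
    return (
        f"<text>\n{text}\n</text>\n\n"
        f"<question>\n{question}\n</question>"
    )

user_prompt = build_user_prompt(
    text="Alpha's Beta is an ultra-fast hairdryer released this year.",
    question="What product does Alpha sell?",
)
```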
In conclusion, the System Prompt should give the overall task instructions, and each User Prompt should provide the exact specifics that you want the task to be executed using. In this case, for example, these exact specifics are the text and the question.
Extra: Making LLM guardrails dynamic
Above, guardrails are added through a few sentences in the System Prompt. These guardrails are then set in stone and do not change for the entire chat. What if you wish to have different guardrails in place at different points of the conversation?
Unfortunately for users of the ChatGPT user interface, there is no straightforward way to do this right now. However, if you’re interacting with ChatGPT programmatically, you’re in luck! The increasing focus on building effective LLM guardrails has seen the development of open-source packages that allow you to set up far more detailed and dynamic guardrails programmatically.
A noteworthy one is NeMo Guardrails, developed by the NVIDIA team, which allows you to configure the expected conversation flow between users and the LLM, and thus set up different guardrails at different points of the chat, allowing for dynamic guardrails that evolve as the chat progresses. I definitely recommend checking it out!
4. [🔴] Analyzing datasets using only LLMs, without plugins or code
You might have heard of OpenAI’s Advanced Data Analysis plugin within ChatGPT’s GPT-4 that is available to premium (paid) accounts. It allows users to upload datasets to ChatGPT and run code directly on the dataset, allowing for accurate data analysis.
But did you know that you don’t always need such plugins to analyze datasets well with LLMs? Let’s first understand the strengths and limitations of purely using LLMs to analyze datasets.
Types of dataset analysis that LLMs are *not* great at
As you probably already know, LLMs are limited in their ability to perform accurate mathematical calculations, making them unsuitable for tasks requiring precise quantitative analysis on datasets, such as:
- Descriptive Statistics: Summarizing numerical columns quantitatively, through measures like the mean or variance.
- Correlation Analysis: Obtaining the precise correlation coefficient between columns.
- Statistical Analysis: Such as hypothesis testing to determine if there are statistically significant differences between groups of data points.
- Machine Learning: Performing predictive modelling on a dataset, such as using linear regressions, gradient boosted trees, or neural networks.
Performing such quantitative tasks on datasets is why OpenAI’s Advanced Data Analysis plugin exists, so that programming languages step in to run code for such tasks on a dataset.
So, why would anyone want to analyze datasets using only LLMs and without such plugins?
Types of dataset analysis that LLMs are great at
LLMs are excellent at identifying patterns and trends. This capability stems from their extensive training on diverse and voluminous data, enabling them to discern intricate patterns that may not be immediately apparent.
This makes them well-suited for tasks based on pattern-finding within datasets, such as:
- Anomaly Detection: Identifying unusual data points that deviate from the norm, based on one or more column values.
- Clustering: Grouping data points with similar characteristics across columns.
- Cross-Column Relationships: Identifying combined trends across columns.
- Textual Analysis (for text-based columns): Categorization based on topic or sentiment.
- Trend Analysis (for datasets with time aspects): Identifying patterns, seasonal variations, or trends within columns across time.
For such pattern-based tasks, using LLMs alone may in fact produce better results within a shorter timeframe than using code! Let’s illustrate this fully with an example.
Analyzing a Kaggle dataset using only LLMs
We’ll use a popular real-world Kaggle dataset curated for Customer Personality Analysis, wherein a company seeks to segment its customer base in order to understand its customers better.
For easier validation of the LLM’s analysis later, we’ll subset this dataset to 50 rows and retain only the most relevant columns. After this, the dataset for analysis looks like the following, where each row represents a customer and the columns depict customer information:
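The subsetting step itself takes only a few lines of standard-library Python before pasting the data into the chat. A sketch — the column names and toy data below are illustrative stand-ins, not the Kaggle file's actual schema:

```python
import csv
import io

def subset_csv(csv_text, columns, n_rows):
    """Keep only the given columns and the first n_rows rows of a CSV string."""
    reader = csv.DictReader(io.StringIO(csv_text))
    out = io.StringIO()
    writer = csv.DictWriter(out, fieldnames=columns)
    writer.writeheader()
    for i, row in enumerate(reader):
        if i >= n_rows:
            break
        writer.writerow({col: row[col] for col in columns})
    return out.getvalue()

# Toy data standing in for the downloaded Kaggle file.
raw = "Year_Birth,Income,Recency,ID\n1985,52000,10,1\n1972,71000,3,2\n1990,30000,25,3\n"
subset = subset_csv(raw, ["Year_Birth", "Income", "Recency"], n_rows=2)
```

Keeping the subset small is what makes the later row-by-row validation of the LLM's clusters practical.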
First 3 rows of the dataset — Image by author
Say you work on the company’s marketing team. You are tasked to utilize this dataset of customer information to guide marketing efforts. This is a 2-step task: First, use the dataset to generate meaningful customer segments. Next, generate ideas on how to best market towards each segment. This is a practical business problem where the pattern-finding capability of LLMs (for step 1) can truly excel.
Let’s craft a prompt for this task using 4 prompt engineering techniques:
1. Breaking down a complex task into simple steps
2. Referencing intermediate outputs from each step
3. Formatting the LLM’s response
4. Separating the instructions from the dataset
System Prompt:
I want you to act as a data scientist to analyze datasets. Do not make up information that is not in the dataset. For each analysis I ask for, provide me with the exact and definitive answer and do not provide me with code or instructions to do the analysis on other platforms.

Prompt:
# CONTEXT #
I sell wine. I have a dataset of information on my customers: [year of birth, marital status, income, number of children, days since last purchase, amount spent].

#############

# OBJECTIVE #
I want you to use the dataset to cluster my customers into groups and then give me ideas on how to target my marketing efforts towards each group. Use this step-by-step process and do not use code:

1. CLUSTERS: Use the columns of the dataset to cluster the rows of the dataset, such that customers within the same cluster have similar column values while customers in different clusters have distinctly different column values. Ensure that each row only belongs to 1 cluster.

For each cluster found,
2. CLUSTER_INFORMATION: Describe the cluster in terms of the dataset columns.
3. CLUSTER_NAME: Interpret [CLUSTER_INFORMATION] to obtain a short name for the customer group in this cluster.
4. MARKETING_IDEAS: Generate ideas to market my product to this customer group.
5. RATIONALE: Explain why [MARKETING_IDEAS] is relevant and effective for this customer group.

#############

# STYLE #
Business analytics report

#############

# TONE #
Professional, technical

#############

# AUDIENCE #
My business partners. Convince them that your marketing strategy is well thought-out and fully backed by data.

#############

# RESPONSE: MARKDOWN REPORT #
<For each cluster in [CLUSTERS]>
— Customer Group: [CLUSTER_NAME]
— Profile: [CLUSTER_INFORMATION]
— Marketing Ideas: [MARKETING_IDEAS]
— Rationale: [RATIONALE]

<Annex>
Give a table of the list of row numbers belonging to each cluster, in order to back up your analysis. Use these table headers: [[CLUSTER_NAME], List of Rows].

#############

# START ANALYSIS #
If you understand, ask me for my dataset.
Below is GPT-4’s reply, and we proceed to pass the dataset to it in a CSV string.
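If you are driving this exchange from code rather than the ChatGPT UI, the dataset can be serialized to a CSV string with the standard library. A minimal sketch, with invented rows standing in for the article's actual 50-row dataset:

```python
import csv
import io

# Hypothetical sample rows matching the dataset's stated columns;
# the real values live in the article's Kaggle dataset.
columns = ["year_of_birth", "marital_status", "income", "n_children",
           "days_since_last_purchase", "amount_spent"]
rows = [
    [1985, "Married", 42000, 2, 12, 35],
    [1962, "Single", 98000, 0, 45, 310],
]

# Write header + rows into an in-memory buffer, then read it back
# as the single CSV string to paste into the next chat turn.
buffer = io.StringIO()
writer = csv.writer(buffer)
writer.writerow(columns)
writer.writerows(rows)
csv_string = buffer.getvalue()

print(csv_string)
```

The resulting `csv_string` is what gets pasted (or sent) as the follow-up message.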
GPT-4's response — Image by author
Following which, GPT-4 replies with its analysis in the markdown report format we asked for:
GPT-4's response — Image by author
GPT-4's response — Image by author
GPT-4's response — Image by author
Validating the LLM’s analysis
For the sake of brevity, we’ll pick 2 customer groups generated by the LLM for validation — say, Young Families and Discerning Enthusiasts.
Young Families
- Profile synthesized by LLM: Born after 1980, Married or Together, Moderate to low income, Have children, Frequent small purchases.
- Rows clustered into this group by LLM: 3, 4, 7, 10, 16, 20
- Digging into the dataset, the full data for these rows are:
Full data for Young Families — Image by author
These correspond exactly to the profile identified by the LLM. It was even able to cluster the row with a null value without us preprocessing it beforehand!
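Such spot-checks can also be scripted. A minimal sketch, with hypothetical values standing in for the actual rows GPT-4 assigned to this cluster:

```python
# Hypothetical stand-ins for the rows GPT-4 placed in the
# "Young Families" cluster; the real values are in the article's dataset.
young_families = [
    {"year_of_birth": 1986, "marital_status": "Married", "n_children": 1},
    {"year_of_birth": 1991, "marital_status": "Together", "n_children": 2},
]

def matches_profile(row):
    """Check a row against the LLM-synthesized profile:
    born after 1980, married or together, has children."""
    return (row["year_of_birth"] > 1980
            and row["marital_status"] in ("Married", "Together")
            and row["n_children"] > 0)

all_match = all(matches_profile(r) for r in young_families)
print(all_match)
```

Encoding the synthesized profile as a predicate makes the validation repeatable across clusters rather than a one-off eyeball check.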
Discerning Enthusiasts
- Profile synthesized by LLM: Wide age range, Any marital status, High income, Varied children status, High spend on purchases.
- Rows clustered into this group by LLM: 2, 5, 18, 29, 34, 36
- Digging into the dataset, the full data for these rows are:
Full data for Discerning Enthusiasts — Image by author
These again align very well with the profile identified by the LLM!
This example showcases LLMs’ abilities in pattern-finding, interpreting and distilling multi-dimensional datasets into meaningful insights, while ensuring that the analysis remains deeply rooted in the factual truth of the dataset.
What if we used ChatGPT’s Advanced Data Analysis plugin?
For completeness, I attempted this same task with the same prompt, but asked ChatGPT to execute the analysis using code instead, which activated its Advanced Data Analysis plugin. The idea was for the plugin to run code using a clustering algorithm like K-Means directly on the dataset to obtain each customer group, before synthesizing the profile of each cluster to provide marketing strategies.
However, multiple attempts resulted in the following error messages with no outputs, despite the dataset being only 50 rows:
Error and no output from Attempt 1 — Image by author
Error and no output from Attempt 2 — Image by author
With the Advanced Data Analysis plugin as it stands, simpler tasks on datasets, such as calculating descriptive statistics or creating graphs, can be achieved easily, but more advanced tasks that require running algorithms may sometimes result in errors and no outputs, due to computational limits or otherwise.
So…When to analyze datasets using LLMs?
The answer is it depends on the type of analysis.
For tasks requiring precise mathematical calculations or complex, rule-based processing, conventional programming methods remain superior.
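For instance, exact descriptive statistics are deterministic one-liners in code, which is precisely why conventional programming wins for this kind of task. A quick standard-library sketch, with invented "amount spent" values for illustration:

```python
import statistics

# Hypothetical "amount_spent" column values; the point is that these
# figures are exact and reproducible, unlike an LLM's estimates.
amount_spent = [35, 310, 48, 120, 95, 310, 15]

mean_spent = statistics.mean(amount_spent)      # arithmetic mean
median_spent = statistics.median(amount_spent)  # middle value when sorted
stdev_spent = statistics.stdev(amount_spent)    # sample standard deviation

print(mean_spent, median_spent, stdev_spent)
```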
For tasks based on pattern-recognition, it can be challenging or more time-consuming to execute using conventional programming and algorithmic approaches. LLMs, however, excel at such tasks, and can even provide additional outputs such as annexes to back up its analysis, and full analysis reports in markdown formatting.
Ultimately, the decision to utilize LLMs hinges on the nature of the task at hand, balancing the strengths of LLMs in pattern-recognition against the precision and specificity offered by traditional programming techniques.
Now back to the prompt engineering!
Before this section ends, let’s go back to the prompt used to generate this dataset analysis and break down the key prompt engineering techniques used:
Prompt:
# CONTEXT #
I sell wine. I have a dataset of information on my customers: [year of birth, marital status, income, number of children, days since last purchase, amount spent].
#############
# OBJECTIVE #
I want you to use the dataset to cluster my customers into groups and then give me ideas on how to target my marketing efforts towards each group. Use this step-by-step process and do not use code:
1. CLUSTERS: Use the columns of the dataset to cluster the rows of the dataset, such that customers within the same cluster have similar column values while customers in different clusters have distinctly different column values. Ensure that each row only belongs to 1 cluster.
For each cluster found,
2. CLUSTER_INFORMATION: Describe the cluster in terms of the dataset columns.
3. CLUSTER_NAME: Interpret [CLUSTER_INFORMATION] to obtain a short name for the customer group in this cluster.
4. MARKETING_IDEAS: Generate ideas to market my product to this customer group.
5. RATIONALE: Explain why [MARKETING_IDEAS] is relevant and effective for this customer group.
#############
# STYLE #
Business analytics report
#############
# TONE #
Professional, technical
#############
# AUDIENCE #
My business partners. Convince them that your marketing strategy is well thought-out and fully backed by data.
#############
# RESPONSE: MARKDOWN REPORT #
<For each cluster in [CLUSTERS]>
— Customer Group: [CLUSTER_NAME]
— Profile: [CLUSTER_INFORMATION]
— Marketing Ideas: [MARKETING_IDEAS]
— Rationale: [RATIONALE]
<Annex>
Give a table of the list of row numbers belonging to each cluster, in order to back up your analysis. Use these table headers: [[CLUSTER_NAME], List of Rows].
#############
# START ANALYSIS #
If you understand, ask me for my dataset.
Technique 1: Breaking down a complex task into simple steps
LLMs are great at performing simple tasks, but not so great at complex ones. As such, with complex tasks like this one, it is important to break down the task into simple step-by-step instructions for the LLM to follow. The idea is to give the LLM the steps that you yourself would take to execute the task.
In this example, the steps are given as:
Use this step-by-step process and do not use code:
1. CLUSTERS: Use the columns of the dataset to cluster the rows of the dataset, such that customers within the same cluster have similar column values while customers in different clusters have distinctly different column values. Ensure that each row only belongs to 1 cluster.
For each cluster found,
2. CLUSTER_INFORMATION: Describe the cluster in terms of the dataset columns.
3. CLUSTER_NAME: Interpret [CLUSTER_INFORMATION] to obtain a short name for the customer group in this cluster.
4. MARKETING_IDEAS: Generate ideas to market my product to this customer group.
5. RATIONALE: Explain why [MARKETING_IDEAS] is relevant and effective for this customer group.
As opposed to simply giving the overall task to the LLM as “Cluster the customers into groups and then give ideas on how to market to each group”.
With step-by-step instructions, LLMs are significantly more likely to deliver the correct results.
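If you build such prompts programmatically, the decomposition pattern can be reused across tasks. A sketch assembling the numbered steps from a list (step texts abridged from the article's prompt):

```python
# Each step is a (VARIABLE_NAME, instruction) pair; numbering is derived
# automatically so steps can be added or reordered without renumbering.
steps = [
    ("CLUSTERS", "Use the columns of the dataset to cluster the rows."),
    ("CLUSTER_INFORMATION", "Describe the cluster in terms of the dataset columns."),
    ("CLUSTER_NAME", "Interpret [CLUSTER_INFORMATION] to obtain a short name."),
    ("MARKETING_IDEAS", "Generate ideas to market my product to this customer group."),
    ("RATIONALE", "Explain why [MARKETING_IDEAS] is relevant and effective."),
]

lines = ["Use this step-by-step process and do not use code:"]
for i, (name, instruction) in enumerate(steps, start=1):
    lines.append(f"{i}. {name}: {instruction}")
objective = "\n".join(lines)

print(objective)
```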
Technique 2: Referencing intermediate outputs from each step
When providing the step-by-step process to the LLM, we give the intermediate output from each step a capitalized VARIABLE_NAME, namely CLUSTERS, CLUSTER_INFORMATION, CLUSTER_NAME, MARKETING_IDEAS and RATIONALE.
Capitalization is used to differentiate these variable names from the body of instructions given. These intermediate outputs can later be referenced using square brackets as [VARIABLE_NAME].
Technique 3: Formatting the LLM’s response
Here, we ask for a markdown report format, which beautifies the LLM’s response. The variable names from the intermediate outputs again come in handy here, dictating the structure of the report.
# RESPONSE: MARKDOWN REPORT #
<For each cluster in [CLUSTERS]>
— Customer Group: [CLUSTER_NAME]
— Profile: [CLUSTER_INFORMATION]
— Marketing Ideas: [MARKETING_IDEAS]
— Rationale: [RATIONALE]
<Annex>
Give a table of the list of row numbers belonging to each cluster, in order to back up your analysis. Use these table headers: [[CLUSTER_NAME], List of Rows].
In fact, you could even subsequently ask ChatGPT to provide the report as a downloadable file, allowing you to work off of its response in writing your final report.
Saving GPT-4's response as a file — Image by author
Technique 4: Separating the task instructions from the dataset
You’ll notice that we never gave the dataset to the LLM in our first prompt. Instead, the prompt gives only the task instructions for the dataset analysis, with this added to the bottom:
# START ANALYSIS #
If you understand, ask me for my dataset.
ChatGPT then responded that it understands, and we passed the dataset to it as a CSV string in our next prompt:
GPT-4's response — Image by author
But why separate the instructions from the dataset?
Doing so helps the LLM maintain clarity in understanding each, with a lower likelihood of missing out on information, especially in more complex tasks such as this one with longer instructions. You might have experienced scenarios where the LLM “accidentally forgets” a certain instruction you gave as part of a longer prompt: for example, you ask for a 100-word response and the LLM gives you a longer paragraph back. By receiving the instructions first, before the dataset that the instructions are for, the LLM can digest what it should do before executing it on the dataset provided next.
Note, however, that this separation of instructions and dataset can only be achieved with chat LLMs, as they maintain a conversational memory, unlike completion LLMs, which do not.
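In chat-API terms, the separation is simply two user turns with the model's acknowledgement in between. A minimal sketch of the message structure a chat-completion API expects (contents abridged and invented; the real prompt and dataset are shown above):

```python
# "task_instructions" stands in for the full CO-STAR prompt; the dataset
# string here is a tiny invented placeholder for the real CSV.
task_instructions = (
    "... the full CO-STAR prompt ...\n"
    "# START ANALYSIS #\n"
    "If you understand, ask me for my dataset."
)
assistant_ack = "Understood. Please provide your dataset."
dataset_csv = "year_of_birth,marital_status,income\n1985,Married,42000"

# Instructions arrive in turn 1; the data only arrives in turn 3,
# after the model has acknowledged what it is supposed to do.
messages = [
    {"role": "user", "content": task_instructions},
    {"role": "assistant", "content": assistant_ack},
    {"role": "user", "content": dataset_csv},
]

print(len(messages))
```

A completion LLM would have to receive everything in one prompt; the chat format is what makes this instruction-first, data-second staging possible.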