#LLM Basics | Agent | LangChain | RAG # 3.7 Agents: Using LangChain's Built-in Agents to Complete Tasks

Large language models (LLMs) are powerful, but they are weaker than ordinary computer programs at logical reasoning, calculation, and retrieving external information. For example, an LLM may get a simple calculation or a question about recent events wrong, because it relies only on its pretraining data. The LangChain framework addresses this with the concept of an agent.
An agent acts as an external module for the language model, supplying calculation, logic, retrieval, and other capabilities, which gives the model dramatically stronger reasoning and information-gathering powers.
This chapter takes a close look at the agent mechanism, the available agent types, and how agents combine with language models in LangChain to build more complete, intelligent applications. Agents significantly extend what a language model can do and are a key way to make applications smarter. We will learn how to unleash the model's full potential through agents.

1. Using the LangChain built-in tools llm-math and wikipedia

Using agents requires three ingredients: a base large language model (LLM), the tools it can interact with (Tools), and the agent (Agent) that orchestrates those interactions.

```python
from langchain.agents import load_tools, initialize_agent
from langchain.agents import AgentType
from langchain.chat_models import ChatOpenAI
```

First, create a basic LLM:

```python
# Set temperature to 0.0 to reduce randomness in the generated answers.
llm = ChatOpenAI(temperature=0)
```

When initializing tools, you can either create custom tools or load prebuilt ones. A tool is a utility chain with a name and a description.

  • The llm-math tool combines the language model with a calculator to perform mathematical operations.
  • The wikipedia tool connects to Wikipedia through its API to run search queries.

```python
tools = load_tools(
    ["llm-math", "wikipedia"],
    llm=llm  # the model initialized in step one
)
```
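Each loaded tool carries the name and description the agent reads when deciding which tool to call. A minimal sketch for inspecting them (using the `tools` list created above):

```python
# Print the name and description of each loaded tool;
# the agent relies on these descriptions for tool selection.
for t in tools:
    print(t.name)
    print(t.description)
    print("-" * 40)
```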

Now that we have an LLM and tools, let's initialize a simple agent:

```python
# Initialize the agent
agent = initialize_agent(
    tools,  # the tools loaded in step two
    llm,    # the model initialized in step one
    agent=AgentType.CHAT_ZERO_SHOT_REACT_DESCRIPTION,  # agent type
    handle_parsing_errors=True,  # handle parsing errors
    verbose=True  # print intermediate steps
)
```
  • agent: the agent type. Here we use AgentType.CHAT_ZERO_SHOT_REACT_DESCRIPTION, which breaks down as:
    • CHAT: an agent optimized for chat models.
    • ZERO_SHOT: the agent only acts on the current request and has no memory.
    • REACT: a prompt template designed for the ReAct pattern. We will not cover the ReAct framework in detail in this chapter, but you can think of it as the LLM looping through Reasoning and Action steps until it identifies the answer.
    • DESCRIPTION: tools are chosen based on their descriptions.
  • handle_parsing_errors: whether to handle parsing errors by feeding the error message back to the model so it can correct itself.
  • verbose: whether to print the intermediate steps.

Use the agent to answer a math question

```python
agent("计算300的25%")
```

> Entering new AgentExecutor chain...
Question: 计算300的25%
Thought: I can use the calculator tool to calculate 25% of 300.
Action:
```json
{
  "action": "Calculator",
  "action_input": "300 * 0.25"
}
```

Observation: Answer: 75.0
Thought:The calculator tool returned the answer 75.0, which is 25% of 300.
Final Answer: 25% of 300 is 75.0.

> Finished chain.

{'input': '计算300的25%', 'output': '25% of 300 is 75.0.'}

**The process above can be summarized as follows**

1. **Thought**: the model reasons about what to do next: I can use the calculator tool to compute 25% of 300.
2. **Action**: based on that thought, the model acts: it calls the calculator with action_input 300 * 0.25.
3. **Observation**: the model receives the observation: Answer: 75.0.
4. **Thought**: based on the observation, the model reasons about what comes next: the calculator tool returned 25% of 300, which is 75.
5. **Final Answer**: the model gives the final answer: 25% of 300 is 75.0.
6. The final answer is returned as a dictionary.
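If you want these Thought/Action/Observation steps as data rather than only in the verbose printout, the executor can also return them. A minimal sketch, assuming your LangChain version forwards `return_intermediate_steps` to the underlying AgentExecutor:

```python
# Rebuild the agent so that it also returns the intermediate steps.
agent_with_steps = initialize_agent(
    tools,
    llm,
    agent=AgentType.CHAT_ZERO_SHOT_REACT_DESCRIPTION,
    handle_parsing_errors=True,
    return_intermediate_steps=True,  # assumption: forwarded to the AgentExecutor
)

result = agent_with_steps({"input": "计算300的25%"})
for action, observation in result["intermediate_steps"]:
    print(action.tool, action.tool_input, observation)
print(result["output"])
```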

Tom M. Mitchell's book

```python
question = "Tom M. Mitchell是一位美国计算机科学家,\
也是卡内基梅隆大学(CMU)的创始人大学教授。\
他写了哪本书呢?"

agent(question)
```

> Entering new AgentExecutor chain...
Thought: I can use Wikipedia to find information about Tom M. Mitchell and his books.
Action:

```json
{
  "action": "Wikipedia",
  "action_input": "Tom M. Mitchell"
}
```

Observation: Page: Tom M. Mitchell
Summary: Tom Michael Mitchell (born August 9, 1951) is an American computer scientist and the Founders University Professor at Carnegie Mellon University (CMU). He is a founder and former Chair of the Machine Learning Department at CMU. Mitchell is known for his contributions to the advancement of machine learning, artificial intelligence, and cognitive neuroscience and is the author of the textbook Machine Learning. He is a member of the United States National Academy of Engineering since 2010. He is also a Fellow of the American Academy of Arts and Sciences, the American Association for the Advancement of Science and a Fellow and past President of the Association for the Advancement of Artificial Intelligence. In October 2018, Mitchell was appointed as the Interim Dean of the School of Computer Science at Carnegie Mellon.

Page: Tom Mitchell (Australian footballer)
Summary: Thomas Mitchell (born 31 May 1993) is a professional Australian rules footballer playing for the Collingwood Football Club in the Australian Football League (AFL). He previously played for the Adelaide Crows, Sydney Swans from 2012 to 2016, and the Hawthorn Football Club between 2017 and 2022. Mitchell won the Brownlow Medal as the league’s best and fairest player in 2018 and set the record for the most disposals in a VFL/AFL match, accruing 54 in a game against Collingwood during that season.
Thought:The book written by Tom M. Mitchell is "Machine Learning".
Thought: I have found the answer.
Final Answer: The book written by Tom M. Mitchell is "Machine Learning".

> Finished chain.

{'input': 'Tom M. Mitchell是一位美国计算机科学家,也是卡内基梅隆大学(CMU)的创始人大学教授。他写了哪本书呢?',
'output': 'The book written by Tom M. Mitchell is "Machine Learning".'}

✅ **Summary**

1. **Thought**: the model reasons about what to do next: I should search Wikipedia.
2. **Action**: based on that thought, the model acts: it queries Wikipedia with the input Tom M. Mitchell.
3. **Observation**: the model receives the observation: Page: Tom M. Mitchell; Page: Tom Mitchell (Australian footballer).
4. **Thought**: based on the observation, the model reasons about what comes next: the book written by Tom M. Mitchell is Machine Learning.
5. **Final Answer**: the model gives the final answer: Machine Learning.
6. The final answer is returned as a dictionary.

Note that the intermediate reasoning may differ from run to run, but the final result is consistent.

Example code:

```python
from langchain.agents import initialize_agent, AgentType, load_tools
from langchain.chat_models import ChatOpenAI

from app.pool.component_factory import LLMModelFactory
azure_config = {
    "azure_endpoint": "···",
    "openai_api_version": "···",
    'api_key': "···",
    "model": "gpt-35-turbo"
}
llm = LLMModelFactory().create('AzureChatOpenAI', **azure_config)
# Load the built-in tool
tools = load_tools(["llm-math"], llm=llm)
# Create the agent: pass in the tools, the model, and the agent type, and enable verbose output
agent = initialize_agent(tools, llm, agent=AgentType.ZERO_SHOT_REACT_DESCRIPTION, verbose=True)
# Ask a question
rs = agent.run("Assume you are 12 years old now, what's your age raised to the 0.43 power?")
print(rs)
```


My output: (screenshot of the agent trace omitted)
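If you don't have the author's `LLMModelFactory` helper, the factory presumably wraps LangChain's `AzureChatOpenAI` chat model. A rough sketch with the same placeholder settings (keyword names vary across LangChain versions, so treat this as an assumption rather than a drop-in replacement):

```python
from langchain.chat_models import AzureChatOpenAI

# Assumed equivalent of LLMModelFactory().create('AzureChatOpenAI', **azure_config);
# fill in your own endpoint, API version, key, and deployment/model name.
llm = AzureChatOpenAI(
    azure_endpoint="···",
    openai_api_version="···",
    api_key="···",
    model="gpt-35-turbo",
    temperature=0,
)
```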

2. Using the LangChain built-in tool PythonREPLTool

Let's create a Python agent that converts customer names to pinyin; the steps are the same as in the previous section:

```python
from langchain.agents.agent_toolkits import create_python_agent
from langchain.tools.python.tool import PythonREPLTool

agent = create_python_agent(
    llm,  # the LLM loaded in the previous section
    tool=PythonREPLTool(),  # the interactive Python REPL tool
    verbose=True  # print intermediate steps
)
customer_list = ["小明","小黄","小红","小蓝","小橘","小绿",]

agent.run(f"使用pinyin拼音库将这些客户名字转换为拼音,并打印输出列表: {customer_list}。")
```
> Entering new AgentExecutor chain...


Python REPL can execute arbitrary code. Use with caution.


I need to use the pinyin library to convert the names to pinyin. I can then print out the list of converted names.
Action: Python_REPL
Action Input: import pinyin
Observation: 
Thought:I have imported the pinyin library. Now I can use it to convert the names to pinyin.
Action: Python_REPL
Action Input: names = ['小明', '小黄', '小红', '小蓝', '小橘', '小绿']
pinyin_names = [pinyin.get(i, format='strip') for i in names]
print(pinyin_names)
Observation: ['xiaoming', 'xiaohuang', 'xiaohong', 'xiaolan', 'xiaoju', 'xiaolv']

Thought:I have successfully converted the names to pinyin and printed out the list of converted names.
Final Answer: ['xiaoming', 'xiaohuang', 'xiaohong', 'xiaolan', 'xiaoju', 'xiaolv']

> Finished chain.

"['xiaoming', 'xiaohuang', 'xiaohong', 'xiaolan', 'xiaoju', 'xiaolv']"

Running it again in debug mode (code shown after this list), we can map the six steps above onto the concrete trace that follows:

  1. The model gives its thought (Thought) about what to do next
    • [chain/start] [1:chain:AgentExecutor] Entering Chain run with input
    • [chain/start] [1:chain:AgentExecutor > 2:chain:LLMChain] Entering Chain run with input
    • [llm/start] [1:chain:AgentExecutor > 2:chain:LLMChain > 3:llm:ChatOpenAI] Entering LLM run with input
    • [llm/end] [1:chain:AgentExecutor > 2:chain:LLMChain > 3:llm:ChatOpenAI] [1.91s] Exiting LLM run with output
    • [chain/end] [1:chain:AgentExecutor > 2:chain:LLMChain] [1.91s] Exiting Chain run with output
  2. The model takes an action (Action) based on that thought. Because the tool is different, the Action output also differs from before: here it is the Python code import pinyin
    • [tool/start] [1:chain:AgentExecutor > 4:tool:Python_REPL] Entering Tool run with input
    • [tool/end] [1:chain:AgentExecutor > 4:tool:Python_REPL] [1.28ms] Exiting Tool run with output
  3. The model receives the observation (Observation)
    • [chain/start] [1:chain:AgentExecutor > 5:chain:LLMChain] Entering Chain run with input
  4. Based on the observation, the model gives its next thought (Thought)
    • [llm/start] [1:chain:AgentExecutor > 5:chain:LLMChain > 6:llm:ChatOpenAI] Entering LLM run with input
    • [llm/end] [1:chain:AgentExecutor > 5:chain:LLMChain > 6:llm:ChatOpenAI] [3.48s] Exiting LLM run with output
  5. The model gives the final answer (Final Answer)
    • [chain/end] [1:chain:AgentExecutor > 5:chain:LLMChain] [3.48s] Exiting Chain run with output
  6. The final answer is returned.
    • [chain/end] [1:chain:AgentExecutor] [19.20s] Exiting Chain run with output
```python
import langchain
langchain.debug = True
agent.run(f"使用pinyin拼音库将这些客户名字转换为拼音,并打印输出列表: {customer_list}")
langchain.debug = False
```
[chain/start] [1:chain:AgentExecutor] Entering Chain run with input:
{
  "input": "使用pinyin拼音库将这些客户名字转换为拼音,并打印输出列表: ['小明', '小黄', '小红', '小蓝', '小橘', '小绿']"
}
[chain/start] [1:chain:AgentExecutor > 2:chain:LLMChain] Entering Chain run with input:
{
  "input": "使用pinyin拼音库将这些客户名字转换为拼音,并打印输出列表: ['小明', '小黄', '小红', '小蓝', '小橘', '小绿']",
  "agent_scratchpad": "",
  "stop": [
    "\nObservation:",
    "\n\tObservation:"
  ]
}
[llm/start] [1:chain:AgentExecutor > 2:chain:LLMChain > 3:llm:ChatOpenAI] Entering LLM run with input:
{
  "prompts": [
    "Human: You are an agent designed to write and execute python code to answer questions.\nYou have access to a python REPL, which you can use to execute python code.\nIf you get an error, debug your code and try again.\nOnly use the output of your code to answer the question. \nYou might know the answer without running any code, but you should still run the code to get the answer.\nIf it does not seem like you can write code to answer the question, just return \"I don't know\" as the answer.\n\n\nPython_REPL: A Python shell. Use this to execute python commands. Input should be a valid python command. If you want to see the output of a value, you should print it out with `print(...)`.\n\nUse the following format:\n\nQuestion: the input question you must answer\nThought: you should always think about what to do\nAction: the action to take, should be one of [Python_REPL]\nAction Input: the input to the action\nObservation: the result of the action\n... (this Thought/Action/Action Input/Observation can repeat N times)\nThought: I now know the final answer\nFinal Answer: the final answer to the original input question\n\nBegin!\n\nQuestion: 使用pinyin拼音库将这些客户名字转换为拼音,并打印输出列表: ['小明', '小黄', '小红', '小蓝', '小橘', '小绿']\nThought:"
  ]
}
[llm/end] [1:chain:AgentExecutor > 2:chain:LLMChain > 3:llm:ChatOpenAI] [2.32s] Exiting LLM run with output:
{
  "generations": [
    [
      {
        "text": "I need to use the pinyin library to convert the names to pinyin. I can then print out the list of converted names.\nAction: Python_REPL\nAction Input: import pinyin",
        "generation_info": {
          "finish_reason": "stop"
        },
        "message": {
          "lc": 1,
          "type": "constructor",
          "id": [
            "langchain",
            "schema",
            "messages",
            "AIMessage"
          ],
          "kwargs": {
            "content": "I need to use the pinyin library to convert the names to pinyin. I can then print out the list of converted names.\nAction: Python_REPL\nAction Input: import pinyin",
            "additional_kwargs": {}
          }
        }
      }
    ]
  ],
  "llm_output": {
    "token_usage": {
      "prompt_tokens": 320,
      "completion_tokens": 39,
      "total_tokens": 359
    },
    "model_name": "gpt-3.5-turbo"
  },
  "run": null
}
[chain/end] [1:chain:AgentExecutor > 2:chain:LLMChain] [2.33s] Exiting Chain run with output:
{
  "text": "I need to use the pinyin library to convert the names to pinyin. I can then print out the list of converted names.\nAction: Python_REPL\nAction Input: import pinyin"
}
[tool/start] [1:chain:AgentExecutor > 4:tool:Python_REPL] Entering Tool run with input:
"import pinyin"
[tool/end] [1:chain:AgentExecutor > 4:tool:Python_REPL] [1.5659999999999998ms] Exiting Tool run with output:
""
[chain/start] [1:chain:AgentExecutor > 5:chain:LLMChain] Entering Chain run with input:
{
  "input": "使用pinyin拼音库将这些客户名字转换为拼音,并打印输出列表: ['小明', '小黄', '小红', '小蓝', '小橘', '小绿']",
  "agent_scratchpad": "I need to use the pinyin library to convert the names to pinyin. I can then print out the list of converted names.\nAction: Python_REPL\nAction Input: import pinyin\nObservation: \nThought:",
  "stop": [
    "\nObservation:",
    "\n\tObservation:"
  ]
}
[llm/start] [1:chain:AgentExecutor > 5:chain:LLMChain > 6:llm:ChatOpenAI] Entering LLM run with input:
{
  "prompts": [
    "Human: You are an agent designed to write and execute python code to answer questions.\nYou have access to a python REPL, which you can use to execute python code.\nIf you get an error, debug your code and try again.\nOnly use the output of your code to answer the question. \nYou might know the answer without running any code, but you should still run the code to get the answer.\nIf it does not seem like you can write code to answer the question, just return \"I don't know\" as the answer.\n\n\nPython_REPL: A Python shell. Use this to execute python commands. Input should be a valid python command. If you want to see the output of a value, you should print it out with `print(...)`.\n\nUse the following format:\n\nQuestion: the input question you must answer\nThought: you should always think about what to do\nAction: the action to take, should be one of [Python_REPL]\nAction Input: the input to the action\nObservation: the result of the action\n... (this Thought/Action/Action Input/Observation can repeat N times)\nThought: I now know the final answer\nFinal Answer: the final answer to the original input question\n\nBegin!\n\nQuestion: 使用pinyin拼音库将这些客户名字转换为拼音,并打印输出列表: ['小明', '小黄', '小红', '小蓝', '小橘', '小绿']\nThought:I need to use the pinyin library to convert the names to pinyin. I can then print out the list of converted names.\nAction: Python_REPL\nAction Input: import pinyin\nObservation: \nThought:"
  ]
}
[llm/end] [1:chain:AgentExecutor > 5:chain:LLMChain > 6:llm:ChatOpenAI] [4.09s] Exiting LLM run with output:
{
  "generations": [
    [
      {
        "text": "I have imported the pinyin library. Now I can use it to convert the names to pinyin.\nAction: Python_REPL\nAction Input: names = ['小明', '小黄', '小红', '小蓝', '小橘', '小绿']\npinyin_names = [pinyin.get(i, format='strip') for i in names]\nprint(pinyin_names)",
        "generation_info": {
          "finish_reason": "stop"
        },
        "message": {
          "lc": 1,
          "type": "constructor",
          "id": [
            "langchain",
            "schema",
            "messages",
            "AIMessage"
          ],
          "kwargs": {
            "content": "I have imported the pinyin library. Now I can use it to convert the names to pinyin.\nAction: Python_REPL\nAction Input: names = ['小明', '小黄', '小红', '小蓝', '小橘', '小绿']\npinyin_names = [pinyin.get(i, format='strip') for i in names]\nprint(pinyin_names)",
            "additional_kwargs": {}
          }
        }
      }
    ]
  ],
  "llm_output": {
    "token_usage": {
      "prompt_tokens": 365,
      "completion_tokens": 87,
      "total_tokens": 452
    },
    "model_name": "gpt-3.5-turbo"
  },
  "run": null
}
[chain/end] [1:chain:AgentExecutor > 5:chain:LLMChain] [4.09s] Exiting Chain run with output:
{
  "text": "I have imported the pinyin library. Now I can use it to convert the names to pinyin.\nAction: Python_REPL\nAction Input: names = ['小明', '小黄', '小红', '小蓝', '小橘', '小绿']\npinyin_names = [pinyin.get(i, format='strip') for i in names]\nprint(pinyin_names)"
}
[tool/start] [1:chain:AgentExecutor > 7:tool:Python_REPL] Entering Tool run with input:
"names = ['小明', '小黄', '小红', '小蓝', '小橘', '小绿']
pinyin_names = [pinyin.get(i, format='strip') for i in names]
print(pinyin_names)"
[tool/end] [1:chain:AgentExecutor > 7:tool:Python_REPL] [0.8809999999999999ms] Exiting Tool run with output:
"['xiaoming', 'xiaohuang', 'xiaohong', 'xiaolan', 'xiaoju', 'xiaolv']"
[chain/start] [1:chain:AgentExecutor > 8:chain:LLMChain] Entering Chain run with input:
{
  "input": "使用pinyin拼音库将这些客户名字转换为拼音,并打印输出列表: ['小明', '小黄', '小红', '小蓝', '小橘', '小绿']",
  "agent_scratchpad": "I need to use the pinyin library to convert the names to pinyin. I can then print out the list of converted names.\nAction: Python_REPL\nAction Input: import pinyin\nObservation: \nThought:I have imported the pinyin library. Now I can use it to convert the names to pinyin.\nAction: Python_REPL\nAction Input: names = ['小明', '小黄', '小红', '小蓝', '小橘', '小绿']\npinyin_names = [pinyin.get(i, format='strip') for i in names]\nprint(pinyin_names)\nObservation: ['xiaoming', 'xiaohuang', 'xiaohong', 'xiaolan', 'xiaoju', 'xiaolv']\n\nThought:",
  "stop": [
    "\nObservation:",
    "\n\tObservation:"
  ]
}
[llm/start] [1:chain:AgentExecutor > 8:chain:LLMChain > 9:llm:ChatOpenAI] Entering LLM run with input:
{
  "prompts": [
    "Human: You are an agent designed to write and execute python code to answer questions.\nYou have access to a python REPL, which you can use to execute python code.\nIf you get an error, debug your code and try again.\nOnly use the output of your code to answer the question. \nYou might know the answer without running any code, but you should still run the code to get the answer.\nIf it does not seem like you can write code to answer the question, just return \"I don't know\" as the answer.\n\n\nPython_REPL: A Python shell. Use this to execute python commands. Input should be a valid python command. If you want to see the output of a value, you should print it out with `print(...)`.\n\nUse the following format:\n\nQuestion: the input question you must answer\nThought: you should always think about what to do\nAction: the action to take, should be one of [Python_REPL]\nAction Input: the input to the action\nObservation: the result of the action\n... (this Thought/Action/Action Input/Observation can repeat N times)\nThought: I now know the final answer\nFinal Answer: the final answer to the original input question\n\nBegin!\n\nQuestion: 使用pinyin拼音库将这些客户名字转换为拼音,并打印输出列表: ['小明', '小黄', '小红', '小蓝', '小橘', '小绿']\nThought:I need to use the pinyin library to convert the names to pinyin. I can then print out the list of converted names.\nAction: Python_REPL\nAction Input: import pinyin\nObservation: \nThought:I have imported the pinyin library. Now I can use it to convert the names to pinyin.\nAction: Python_REPL\nAction Input: names = ['小明', '小黄', '小红', '小蓝', '小橘', '小绿']\npinyin_names = [pinyin.get(i, format='strip') for i in names]\nprint(pinyin_names)\nObservation: ['xiaoming', 'xiaohuang', 'xiaohong', 'xiaolan', 'xiaoju', 'xiaolv']\n\nThought:"
  ]
}
[llm/end] [1:chain:AgentExecutor > 8:chain:LLMChain > 9:llm:ChatOpenAI] [2.05s] Exiting LLM run with output:
{
  "generations": [
    [
      {
        "text": "I have successfully converted the names to pinyin and printed out the list of converted names.\nFinal Answer: ['xiaoming', 'xiaohuang', 'xiaohong', 'xiaolan', 'xiaoju', 'xiaolv']",
        "generation_info": {
          "finish_reason": "stop"
        },
        "message": {
          "lc": 1,
          "type": "constructor",
          "id": [
            "langchain",
            "schema",
            "messages",
            "AIMessage"
          ],
          "kwargs": {
            "content": "I have successfully converted the names to pinyin and printed out the list of converted names.\nFinal Answer: ['xiaoming', 'xiaohuang', 'xiaohong', 'xiaolan', 'xiaoju', 'xiaolv']",
            "additional_kwargs": {}
          }
        }
      }
    ]
  ],
  "llm_output": {
    "token_usage": {
      "prompt_tokens": 483,
      "completion_tokens": 48,
      "total_tokens": 531
    },
    "model_name": "gpt-3.5-turbo"
  },
  "run": null
}
[chain/end] [1:chain:AgentExecutor > 8:chain:LLMChain] [2.05s] Exiting Chain run with output:
{
  "text": "I have successfully converted the names to pinyin and printed out the list of converted names.\nFinal Answer: ['xiaoming', 'xiaohuang', 'xiaohong', 'xiaolan', 'xiaoju', 'xiaolv']"
}
[chain/end] [1:chain:AgentExecutor] [8.47s] Exiting Chain run with output:
{
  "output": "['xiaoming', 'xiaohuang', 'xiaohong', 'xiaolan', 'xiaoju', 'xiaolv']"
} 

3. Defining your own tool and using it in an agent

In this section we will create and use a custom time tool. LangChain's tool function decorator can be applied to any function, turning it into a LangChain tool that an agent can call. We need to give the function a very detailed docstring so that the agent knows when and how to use the function/tool. For example, in the time function below we add a detailed docstring.

```python
# Import the tool function decorator
from langchain.agents import tool
from datetime import date

@tool
def time(text: str) -> str:
    """
    Returns today's date; use this for any question that needs to know today's date.
    The input should always be an empty string, and this function will always return
    today's date. Any date arithmetic should be done outside this function.
    """
    return str(date.today())

# Initialize the agent
agent = initialize_agent(
    tools=[time],  # add the time tool we just created to the agent
    llm=llm,  # the model initialized earlier
    agent=AgentType.CHAT_ZERO_SHOT_REACT_DESCRIPTION,  # agent type
    handle_parsing_errors=True,  # handle parsing errors
    verbose=True  # print intermediate steps
)
```
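The decorator turns `time` into a Tool object whose description is derived from the docstring, and that description is what the agent reads when choosing a tool. A quick sanity check (a sketch using the decorated `time` defined above):

```python
# The tool name defaults to the function name; the description comes from the docstring.
print(time.name)
print(time.description)
```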

```python
# Use the agent to ask for today's date.
# Note: the agent may occasionally get this wrong (the feature is still being improved).
# If an error occurs, try running it again.
agent("今天的日期是?")
```
> Entering new AgentExecutor chain...
根据提供的工具,我们可以使用`time`函数来获取今天的日期。

Thought: 使用`time`函数来获取今天的日期。

Action:

```json
{
  "action": "time",
  "action_input": ""
}
```

Observation: 2023-08-09
Thought:我现在知道了最终答案。
Final Answer: 今天的日期是2023-08-09。

> Finished chain.

{'input': '今天的日期是?', 'output': '今天的日期是2023-08-09。'}

The process above can be summarized as follows

  1. **Thought**: the model reasons about what to do next: I need to use the time tool to get today's date.
  2. **Action**: based on that thought, the model acts: it calls the time tool with an empty string as input.
  3. **Observation**: the model receives the observation: 2023-08-09.
  4. **Thought**: based on the observation, the model reasons about what comes next: I have successfully retrieved today's date with the time tool.
  5. **Final Answer**: the model gives the final answer: today's date is 2023-08-09.
  6. The final answer is returned.
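Following the same pattern, here is a hypothetical second custom tool whose input actually matters, in contrast to `time` (which ignores its input). The tool name, behavior, and question are illustrative assumptions, reusing the `tool` decorator, `llm`, and agent setup from above:

```python
@tool
def word_count(text: str) -> str:
    """
    Counts the number of whitespace-separated words in the input text.
    Use this for any question that asks how many words a piece of text contains.
    The input should be the raw text whose words you want counted.
    """
    return str(len(text.split()))

# Give the agent both custom tools; it picks one based on their docstring-derived descriptions.
agent = initialize_agent(
    tools=[time, word_count],
    llm=llm,
    agent=AgentType.CHAT_ZERO_SHOT_REACT_DESCRIPTION,
    handle_parsing_errors=True,
    verbose=True,
)
agent("How many words are in the sentence 'LangChain agents are fun to build'?")
```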
