LangGraph
LangGraph is a framework built on top of LangChain that makes it much simpler to create and manage agents and their runtimes.
In LangChain's architecture, an agent is a system controlled by a language model that decides on its own which action to take next. The agent runtime is what keeps that system running: it repeatedly makes decisions, records the observations produced along the way, and continues this loop until the agent completes its task.
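This decide-act-observe loop is the core of any agent runtime. A minimal, framework-free sketch of the idea (decide and execute below are hypothetical stand-ins for the model and a tool, not LangChain APIs):

from dataclasses import dataclass

@dataclass
class Action:
    is_final: bool
    payload: str  # the tool input, or the final answer

def decide(task, observations):
    # Stand-in for the language model's decision step.
    if observations:
        return Action(is_final=True, payload=f"answer to {task!r}: {observations[-1]}")
    return Action(is_final=False, payload=task)

def execute(action):
    # Stand-in for a tool call, e.g. a web search.
    return f"search results for {action.payload!r}"

def run_agent(task):
    """The loop an agent runtime maintains: decide, act, observe, repeat."""
    observations = []
    while True:
        action = decide(task, observations)
        if action.is_final:
            return action.payload
        observations.append(execute(action))

print(run_agent("what is the weather in sf"))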
The LangChain Expression Language makes it easy to customize the agent itself; LangGraph builds on that by making the agent runtime far more flexible and dynamic. In LangChain, the traditional agent runtime is the AgentExecutor class, and LangGraph brings much greater variety and adaptability to how that runtime is built.
A defining feature of LangGraph is that it introduces cycles into the agent runtime, which are essential to how agents operate.
LangGraph provides two main agent runtimes:
- Agent Executor: similar to LangChain's, but rebuilt on top of LangGraph.
- Chat Agent Executor: represents the agent state as a list of messages, which makes it a natural fit for chat-based models that issue function calls and respond through messages (see the state sketch below).
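That message-list state is the same pattern the full example below uses: a TypedDict whose messages field is annotated with operator.add, so updates returned by nodes are appended rather than overwriting the state. A minimal sketch of that reducer behavior:

import operator
from typing import Annotated, Sequence, TypedDict
from langchain_core.messages import AIMessage, BaseMessage, HumanMessage

class AgentState(TypedDict):
    # operator.add tells LangGraph to merge updates by concatenation,
    # so each node appends to the conversation instead of replacing it.
    messages: Annotated[Sequence[BaseMessage], operator.add]

# What the runtime effectively does when a node returns {"messages": [...]}:
state = {"messages": [HumanMessage(content="hi")]}
update = {"messages": [AIMessage(content="hello!")]}
state = {"messages": operator.add(state["messages"], update["messages"])}
print([m.content for m in state["messages"]])  # ['hi', 'hello!']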
With LangGraph, developers can build and customize agents, and manage their runtimes, more efficiently, enabling more powerful and flexible agent applications.
LangSmith
LangSmith is a platform designed for building production-grade language model applications. It lets you debug, test, evaluate, and monitor applications built on any LLM framework, and it integrates seamlessly with LangChain. LangChain, Inc., the company behind the LangChain framework, is also the developer of LangSmith.
Although LangSmith is still at an early stage, it already shows great potential. Its most notable strength today is efficient debugging and tracing of LangChain applications, which significantly flattens LangChain's learning curve and speeds up development.
Prototyping a language model application or agent with LangChain is easy; turning that prototype into a usable product is hard. It typically requires extensive customization of the model and continual tuning of prompts, chains, and other components. LangSmith exists to address exactly these problems: it helps developers quickly debug and refine chains, agents, and toolsets, visualize how components interact, and evaluate different prompts.
Sign up at https://smith.langchain.com/.
LANGCHAIN_TRACING_V2 controls whether LangChain's trace logging is enabled.
LANGCHAIN_PROJECT is the name of the project to trace into. If the project does not yet exist on LangSmith, it is created automatically; if the variable is unset, traces go to the default project. The project does not have to correspond to a real codebase, think of it as a category or tag.
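Enabling tracing is just a matter of exporting these variables before any LangChain code runs (placeholder values below; the full program in the next section sets the same ones):

import os

os.environ["LANGCHAIN_TRACING_V2"] = "true"         # turn tracing on
os.environ["LANGCHAIN_API_KEY"] = "ls__xxxxxx"      # your LangSmith API key
os.environ["LANGCHAIN_PROJECT"] = "my-experiments"  # auto-created if missing
# Every LangChain/LangGraph call from here on is traced to that project.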
Code Implementation
import json
import operator
import os
from typing import Annotated, Sequence, TypedDict

from langchain.chat_models import AzureChatOpenAI
from langchain.tools.render import format_tool_to_openai_function
from langchain_community.tools.tavily_search import TavilySearchResults
from langchain_core.messages import (
    AIMessage,
    BaseMessage,
    FunctionMessage,
    HumanMessage,
)
from langgraph.graph import END, StateGraph
from langgraph.prebuilt import ToolExecutor, ToolInvocation
# Replace these with your own keys and endpoints
os.environ["OPENAI_API_TYPE"] = "azure"
os.environ["OPENAI_API_VERSION"] = "2023-03-15-preview"
os.environ["OPENAI_API_BASE"] = "https://xxxxxx.openai.azure.com/"
os.environ['OPENAI_API_KEY'] = ""
os.environ['TAVILY_API_KEY'] = "tvly-xxxxxx"
os.environ["LANGCHAIN_TRACING_V2"] = "true"
os.environ["LANGCHAIN_PROJECT"] = f"langchain_to_graph"
os.environ["LANGCHAIN_ENDPOINT"] = "https://api.smith.langchain.com"
os.environ["LANGCHAIN_API_KEY"] = "ls__xxxxxx"
model = AzureChatOpenAI(deployment_name="dev", temperature=0)
tools = [TavilySearchResults(max_results=1)]
tool_executor = ToolExecutor(tools)
# Convert the tools into OpenAI function schemas and bind them to the model
functions = [format_tool_to_openai_function(t) for t in tools]
model = model.bind_functions(functions)
class AgentState(TypedDict):
    """
    The agent state: a single `messages` key whose list entries are
    merged by concatenation (operator.add) rather than overwritten.
    """
    messages: Annotated[Sequence[BaseMessage], operator.add]
def should_continue(state):
    """
    Decide whether the agent should keep going or stop.
    """
    messages = state['messages']
    last_message = messages[-1]
    # If the model did not request a function call, we are done
    if "function_call" not in last_message.additional_kwargs:
        return "end"
    else:
        return "continue"
def call_model(state):
    """
    Call the model. Only the last five messages are passed in,
    which controls how the agent interacts with its message history.
    """
    messages = state['messages'][-5:]
    response = model.invoke(messages)
    # Return a list, because this will get added to the existing list
    return {"messages": [response]}
def first_model(state):
    """
    For the first model call we explicitly hard-code the action:
    send the user's input straight to the search tool.
    """
    human_input = state['messages'][-1].content
    return {
        "messages": [
            AIMessage(
                content="",
                additional_kwargs={
                    "function_call": {
                        "name": "tavily_search_results_json",
                        "arguments": json.dumps({"query": human_input}),
                    }
                },
            )
        ]
    }
def call_tool(state):
    """
    Execute the tool requested by the model's function call.
    """
    messages = state['messages']
    # Based on the 'continue' condition, the last message involves a function call
    last_message = messages[-1]
    action = ToolInvocation(
        tool=last_message.additional_kwargs["function_call"]["name"],
        tool_input=json.loads(last_message.additional_kwargs["function_call"]["arguments"]),
    )
    # Call the tool executor and get back a response
    response = tool_executor.invoke(action)
    function_message = FunctionMessage(content=str(response), name=action.tool)
    return {"messages": [function_message]}
# Define a new graph
workflow = StateGraph(AgentState)
# Define the new entrypoint
workflow.add_node("first_agent", first_model)
# Define the two nodes we will cycle between
workflow.add_node("agent", call_model)
workflow.add_node("action", call_tool)
# Set the entry point to `first_agent`; this node is the first one called
workflow.set_entry_point("first_agent")
# We now add a conditional edge
workflow.add_conditional_edges(
    "agent",
    should_continue,
    {
        "continue": "action",
        "end": END
    }
)
# We now add a normal edge from `action` to `agent`: after the tool runs, the `agent` node is called next
workflow.add_edge('action', 'agent')
# After we call the first agent, we know we want to go to action
workflow.add_edge('first_agent', 'action')
# This compiles it into a LangChain Runnable, meaning you can use it as you would any other runnable
app = workflow.compile()
inputs = {"messages": [HumanMessage(content="what is the weather in sf")]}
print(app.invoke(inputs))
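Since the compiled graph is a Runnable, you can also stream intermediate results with stream() instead of waiting for the final state. In the langgraph versions this example targets, each yielded item maps the node that just ran to the state update it produced (a sketch; the exact output shape can vary by version):

for output in app.stream(inputs):
    # Each item maps the node that just ran to its state update
    for node_name, state_update in output.items():
        print(f"--- {node_name} ---")
        print(state_update)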