Demo: Building an Agent with LangGraph
Simple Demo
Note that we pass in model here, not model_with_tools. This is because create_react_agent calls .bind_tools for us under the hood.
MemorySaver: an in-memory workflow state manager that persists the complete workflow state, for example:
{
  "ts": "2023-01-01T12:00:00Z",
  "step": 5,               // current execution step
  "data": {
    "messages": [...],     // conversation messages
    "tool_results": {...}, // tool execution results
    "decision_path": [...] // branch decision path
  }
}
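A checkpointer like the one above can be mimicked with a minimal plain-Python sketch (illustrative only; SimpleMemorySaver is a hypothetical class, not LangGraph's actual implementation):

```python
from copy import deepcopy

class SimpleMemorySaver:
    """Toy in-memory checkpointer: one full state snapshot per thread_id."""
    def __init__(self):
        self._store = {}

    def save(self, thread_id, state):
        # deepcopy so later mutations of the live state don't corrupt the checkpoint
        self._store[thread_id] = deepcopy(state)

    def load(self, thread_id):
        # Returns None for unknown threads, like a fresh conversation
        return self._store.get(thread_id)

saver = SimpleMemorySaver()
saver.save("abc123", {"step": 5, "data": {"messages": ["hi im bob!"]}})
state = saver.load("abc123")
```

The key design point is that the snapshot is keyed by thread_id, which is why the real agent below is invoked with config = {"configurable": {"thread_id": "abc123"}}: the same id restores the same conversation state.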
ChatMessageHistory (LangChain): by contrast, just a simple list of stored messages:
[
    HumanMessage(content="Hello"),
    AIMessage(content="Hi there!"),
    HumanMessage(content="How's weather?")
]
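The difference from full workflow state can be shown with a plain-Python sketch (the dataclasses below are simplified stand-ins; real LangChain message classes carry more fields):

```python
from dataclasses import dataclass

@dataclass
class HumanMessage:
    content: str

@dataclass
class AIMessage:
    content: str

# ChatMessageHistory-style storage: an append-only message list,
# with no step counter, tool results, or decision path
history = []
history.append(HumanMessage(content="Hello"))
history.append(AIMessage(content="Hi there!"))
history.append(HumanMessage(content="How's weather?"))
```

This is enough to replay a conversation, but not enough to resume an interrupted workflow mid-step, which is what MemorySaver's richer snapshot enables.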
# Import relevant functionality
from langchain_anthropic import ChatAnthropic
from langchain_community.tools.tavily_search import TavilySearchResults
from langchain_core.messages import HumanMessage
from langgraph.checkpoint.memory import MemorySaver
from langgraph.prebuilt import create_react_agent

# Create the agent
memory = MemorySaver()
model = ChatAnthropic(model_name="claude-3-sonnet-20240229")
search = TavilySearchResults(max_results=2)
tools = [search]
agent_executor = create_react_agent(model, tools, checkpointer=memory)

# Use the agent
config = {"configurable": {"thread_id": "abc123"}}
for chunk in agent_executor.stream(
    {"messages": [HumanMessage(content="hi im bob! and i live in sf")]}, config
):
    print(chunk)
    print("----")

for chunk in agent_executor.stream(
    {"messages": [HumanMessage(content="whats the weather where I live?")]}, config
):
    print(chunk)
    print("----")
Streaming Tokens
The value of event["event"] is determined and filled in automatically by the framework.

# Event types that may be captured (common subset)
"on_llm_start"          # LLM call starts
"on_llm_end"            # LLM call ends
"on_chain_start"        # chain execution starts
"on_chain_end"          # chain execution ends
"on_tool_start"         # tool call starts
"on_tool_end"           # tool call ends
"on_chat_model_stream"  # chat model streaming output
"on_retriever_start"    # retriever starts
"on_retriever_end"      # retriever finishes

Automatic detection:
- When the agent starts executing a chain, an on_chain_start event is emitted
- When the model starts generating a response, on_chat_model_stream events are emitted
- When a tool starts executing, an on_tool_start event is emitted
- When the corresponding operation completes, the matching end event is emitted (e.g. on_chain_end, on_tool_end)
- These event-type strings are defined by LangChain's internal mechanisms
async for event in agent_executor.astream_events(
    {"messages": [HumanMessage(content="whats the weather in sf?")]}, version="v1"
):
    kind = event["event"]
    if kind == "on_chain_start":
        if event["name"] == "Agent":  # Was assigned when creating the agent with `.with_config({"run_name": "Agent"})`
            print(f"Starting agent: {event['name']} with input: {event['data'].get('input')}")
    elif kind == "on_chain_end":
        if event["name"] == "Agent":  # Was assigned when creating the agent with `.with_config({"run_name": "Agent"})`
            print()
            print("--")
            print(f"Done agent: {event['name']} with output: {event['data'].get('output')['output']}")
    if kind == "on_chat_model_stream":
        content = event["data"]["chunk"].content
        if content:
            # Empty content in the context of OpenAI means
            # that the model is asking for a tool to be invoked.
            # So we only print non-empty content
            print(content, end="|")
    elif kind == "on_tool_start":
        print("--")
        print(f"Starting tool: {event['name']} with inputs: {event['data'].get('input')}")
    elif kind == "on_tool_end":
        print(f"Done tool: {event['name']}")
        print(f"Tool output was: {event['data'].get('output')}")
        print("--")
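How a framework can fill in event types automatically around an operation can be sketched with a toy event emitter (illustrative only; run_tool_with_events is a hypothetical helper, not LangChain's internals):

```python
def run_tool_with_events(name, fn, tool_input):
    """Yield start/end events around a tool call, mirroring on_tool_start / on_tool_end."""
    yield {"event": "on_tool_start", "name": name, "data": {"input": tool_input}}
    output = fn(tool_input)
    yield {"event": "on_tool_end", "name": name, "data": {"output": output}}

# Consume the stream the same way the astream_events loop above does
events = list(run_tool_with_events("search", lambda q: f"results for {q}", "sf weather"))
for event in events:
    kind = event["event"]  # filled in by the wrapper, not by the caller
```

The point is that the caller never sets event["event"] itself; the wrapper that drives each operation knows which phase it is in and stamps the event type accordingly.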