Executive Summary
LangGraph 1.0 (released on October 18, 2025) marks a major shift in agent frameworks, from ad-hoc assembly toward systematic engineering. The core improvements rest on three pillars:
1. Middleware: hook functions extend the workflow in a controlled way, finally taming context-engineering sprawl
2. Graph design (StateGraph): a directed-graph, state-machine paradigm replaces the linear agent loop and supports complex workflow orchestration
3. Type safety and runtime control: v0.6's "configuration hell" gives way to a clean Context API plus the reducer mechanism
This guide focuses on the concrete implementation, best practices, and production-grade patterns for these three core innovations.
Part 1: The Three Core Improvements in LangGraph 1.0
1.1 From v0.6 to v1.0: Diagnosing the Problems
Core pain points in v0.6:
# ❌ v0.6 "configuration hell"
def node(state: State, config: RunnableConfig):
    # Deeply nested lookups just to reach the data; easy to get wrong
    user_id = config.get("configurable", {}).get("user_id")
    db_conn = config.get("configurable", {}).get("db_connection")
    # Hard to read, expensive to maintain

Root causes:
• No systematic support for context management; everything is hand-written configuration
• Tool-call permissions are hard to control precisely
• Message history in long conversations balloons easily
• No unified extension mechanism (budget control, auditing, security filtering, and so on)
1.2 The Three-Pillar Solution in v1.0
Pillar 1: Middleware
Core idea: in the spirit of FastAPI middleware, insert hook functions at the key points of the agent's execution flow.
Execution lifecycle:
User Input
↓
[before_model] middleware: preprocess the input
↓
[wrap_model_call] middleware: wrap the model call, adjust parameters
↓
LLM Model Call
↓
[wrap_tool_call] middleware: intercept tool calls, enforce permissions
↓
Tool Execution
↓
[after_model] middleware: validate output, run safety checks
↓
Response

Built-in middleware example:
from langchain.agents import create_agent
from langchain.agents.middleware import (
    PIIMiddleware,
    SummarizationMiddleware,
    HumanInTheLoopMiddleware
)

agent = create_agent(
    model="claude-sonnet-4-5-20250929",
    tools=[read_email, send_email],
    middleware=[
        # Privacy: redact email addresses
        PIIMiddleware("email", strategy="redact"),
        # Privacy: block phone numbers
        PIIMiddleware("phone_number", strategy="block"),
        # Auto-summarize once the conversation exceeds 500 tokens
        SummarizationMiddleware(
            model="claude-sonnet-4-5-20250929",
            max_tokens_before_summary=500
        ),
        # Human in the loop: require approval before sending email
        HumanInTheLoopMiddleware(
            interrupt_on={
                "send_email": {
                    "allowed_decisions": ["approve", "edit", "reject"]
                }
            }
        )
    ]
)

Custom middleware development pattern:
from dataclasses import dataclass
from typing import Callable
from langchain.agents.middleware import AgentMiddleware, ModelRequest
from langchain.agents.middleware.types import ModelResponse
from langchain_openai import ChatOpenAI

@dataclass
class Context:
    user_expertise: str = "beginner"  # "beginner" or "expert"

class ExpertiseBasedToolMiddleware(AgentMiddleware):
    """Middleware that adapts model and tools to the user's expertise level."""

    def wrap_model_call(
        self,
        request: ModelRequest,
        handler: Callable[[ModelRequest], ModelResponse]
    ) -> ModelResponse:
        # Read the runtime context
        user_level = request.runtime.context.user_expertise
        if user_level == "expert":
            # Expert user: strong model + advanced tools
            request.model = ChatOpenAI(model="gpt-5")
            request.tools = [advanced_search, data_analysis, ml_toolkit]
        else:
            # Beginner: lightweight model + basic tools
            request.model = ChatOpenAI(model="gpt-5-nano")
            request.tools = [simple_search, basic_calculator]
        # Continue the chain
        return handler(request)

# Using the custom middleware
agent = create_agent(
    model="claude-sonnet-4-5-20250929",
    tools=[simple_search, advanced_search, basic_calculator, data_analysis],
    middleware=[ExpertiseBasedToolMiddleware()],
    context_schema=Context
)

Design advantages of middleware:
• ✅ Modular code with clear functional boundaries
• ✅ Highly reusable; middleware can be shared across projects
• ✅ Flexible composition: complex behavior is assembled like building blocks
• ✅ Test-friendly: each middleware can be tested in isolation
• ✅ Production-friendly: the common requirements all have ready-made patterns
Pillar 2: Graph Design (StateGraph + the State-Machine Paradigm)
Core idea: model the agent workflow as nodes and edges of a directed graph, with state flowing between nodes.
The evolution from ReAct agent to StateGraph:
# ❌ The ReAct agent way: an opaque loop
from langchain.agents import create_openai_tools_agent, AgentExecutor

agent = create_openai_tools_agent(llm, tools, prompt)
executor = AgentExecutor(agent=agent, tools=tools, max_iterations=10)
result = executor.invoke({"input": user_query})
# Problems: uncontrollable flow, hard to debug, limited extensibility

# ✅ The StateGraph way: a controllable state machine
from typing import Annotated, TypedDict
from langchain_core.messages import HumanMessage, ToolMessage
from langgraph.graph import StateGraph, START, END
from langgraph.graph.message import add_messages

# Define the state (data shared by all nodes)
class State(TypedDict):
    messages: Annotated[list, add_messages]  # message list, auto-appended
    user_id: str
    context: dict

# Define the nodes (units of work)
def assistant_node(state: State) -> dict:
    """LLM reasoning node"""
    response = llm.invoke(state["messages"])
    return {"messages": [response]}

def tool_node(state: State) -> dict:
    """Tool execution node"""
    last_msg = state["messages"][-1]
    tool_results = execute_tools(last_msg.tool_calls)
    return {"messages": [ToolMessage(content=tool_results)]}

# Routing function: pick the next step based on the state
def route_after_llm(state: State) -> str:
    last_msg = state["messages"][-1]
    if last_msg.tool_calls:
        return "tool"
    return END

# Build the graph
builder = StateGraph(State)
builder.add_node("assistant", assistant_node)
builder.add_node("tool", tool_node)

# Add the edges
builder.add_edge(START, "assistant")
builder.add_conditional_edges("assistant", route_after_llm)
builder.add_edge("tool", "assistant")

# Compile into an executable graph
graph = builder.compile()

# Invoke
result = graph.invoke({"messages": [HumanMessage("...")]})

Advantages at a glance: the ReAct loop hides its control flow inside the executor (uncontrollable, hard to debug, hard to extend), while StateGraph makes every transition explicit, every step inspectable, and nodes and edges independently extensible.
Pillar 3: Type Safety and Context Control (Context API)
v0.6's pain → v1.0's elegance:

# ❌ v0.6: configuration nested layer upon layer
def node(state: State, config: RunnableConfig):
    user_id = config.get("configurable", {}).get("user_id")
    db_conn = config.get("configurable", {}).get("db_connection")
    # No type hints, no IDE completion, easy to break at runtime

# ✅ v1.0: Context API, fully typed
@dataclass
class Context:
    user_id: str
    db_connection: Connection
    cache_client: Redis

def node(state: State, runtime: Runtime[Context]):
    # Direct access, IDE completion, type checking
    user_id = runtime.context.user_id
    db_conn = runtime.context.db_connection

Part 2: State Management in Depth
2.1 Three Ways to Define State
Option 1: a plain dict (most flexible, most error-prone)

from langgraph.graph import StateGraph, START, END

def add_node(state):
    # state is a plain dict; the IDE can offer no hints
    return {"result": state["x"] + 1}

builder = StateGraph(dict)  # use dict as the state schema
builder.add_node("process", add_node)

Option 2: TypedDict (recommended for small and mid-size projects)
from typing import TypedDict, Annotated, List
from langgraph.graph import StateGraph

class State(TypedDict):
    x: int               # plain value, overwritten by default
    messages: List[str]  # list field
    metadata: dict       # nested object

def process(state: State) -> dict:
    # IDE completion and type checking work
    x = state["x"]    # ✅ the IDE knows this key exists
    # y = state["y"]  # ❌ the IDE flags a nonexistent key
    return {"x": x + 1}

builder = StateGraph(State)

Option 3: Pydantic BaseModel (strictest; for large projects)
from typing import List
from pydantic import BaseModel, Field

class State(BaseModel):
    x: int = Field(description="value to compute on")
    messages: List[str] = Field(default_factory=list)

    class Config:
        # allow extra fields
        extra = "allow"

# used exactly like the TypedDict variant
builder = StateGraph(State)

2.2 Reducer Functions: The Core Mechanism for Merging State
Problem scenario: several nodes update the same field in the same step; how are their writes merged?
Common reducer functions in detail:
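The original code for this section was lost to a formatting error; as a stand-in, here is a pure-Python sketch of the semantics (not LangGraph's actual implementation). A reducer attached via `Annotated[type, fn]` is called as `fn(current, update)` to merge each node's write; fields without a reducer are simply overwritten:

```python
import operator
from typing import Annotated, TypedDict, get_type_hints

# A LangGraph-style state schema: Annotated[type, reducer]
class State(TypedDict):
    counter: Annotated[int, operator.add]   # writes are summed
    results: Annotated[list, operator.add]  # writes are concatenated
    status: str                             # no reducer: last write wins

def apply_update(state: dict, update: dict) -> dict:
    """Sketch of how a graph merges one node's partial update into state."""
    hints = get_type_hints(State, include_extras=True)
    merged = dict(state)
    for key, value in update.items():
        reducers = getattr(hints[key], "__metadata__", ())
        merged[key] = reducers[0](state[key], value) if reducers else value
    return merged

state = {"counter": 1, "results": ["a"], "status": "running"}
state = apply_update(state, {"counter": 2, "results": ["b"], "status": "done"})
# state == {"counter": 3, "results": ["a", "b"], "status": "done"}
```

Any two-argument callable works as a reducer, so domain-specific merges (dict unions, deduplicated sets, max-of) are one lambda away.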
Smart behaviors of add_messages:
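`add_messages` is smarter than plain list concatenation: new messages are appended, but a message whose `id` matches an existing one replaces it in place (which is also how `RemoveMessage` deletions get resolved). The real implementation lives in `langgraph.graph.message`; the toy sketch below, with dicts standing in for message objects, illustrates only the merge-by-id behavior:

```python
def add_messages_sketch(left: list, right: list) -> list:
    """Illustrative merge-by-id: append new messages, replace matching ids."""
    merged = list(left)
    position = {m["id"]: i for i, m in enumerate(merged)}
    for message in right:
        if message["id"] in position:
            merged[position[message["id"]]] = message  # same id: replace in place
        else:
            merged.append(message)                     # new id: append
    return merged

history = [{"id": "1", "content": "hello"}]
update = [
    {"id": "1", "content": "hello (edited)"},  # replaces message 1
    {"id": "2", "content": "world"},           # appended
]
merged = add_messages_sketch(history, update)
```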
2.3 Best Practices for a Complete State Definition
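The original example here did not survive; a hypothetical reconstruction of a well-factored schema follows (`operator.add` stands in for LangGraph's `add_messages` so the snippet runs on its own):

```python
import operator
from typing import Annotated, TypedDict

class AgentState(TypedDict):
    # Aggregated fields: guarded by reducers so parallel writes merge safely
    messages: Annotated[list, operator.add]      # use add_messages in real code
    tool_results: Annotated[list, operator.add]
    # Scalar fields: last write wins; give each one a single owner node
    user_id: str
    step_count: int
    error: str

initial: AgentState = {
    "messages": [],
    "tool_results": [],
    "user_id": "u-123",
    "step_count": 0,
    "error": "",
}
```

Rule of thumb: every aggregated field gets a reducer and a comment; every scalar field is written by exactly one node.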
Part 3: A Complete StateGraph API Reference
3.1 Core Methods in Detail
3.1.1 add_node: adding processing nodes
3.1.2 add_edge: adding unconditional edges (direct node connections)
3.1.3 add_conditional_edges: conditional edges (dynamic routing)
3.1.4 add_sequence: a shortcut for linear pipelines
3.1.5 compile: compiling into an executable graph
3.2 The Special Nodes START and END
Part 4: Parallel Execution and SuperSteps
4.1 The SuperStep Concept
In LangGraph, a SuperStep is the basic unit of parallel execution: all nodes scheduled in the same superstep run concurrently against the same snapshot of the state, and their writes are merged (via reducers) only between supersteps.
4.2 Three Patterns for Parallel Execution
Pattern 1: multiple outgoing edges (fan-out)
Pattern 2: dynamic dispatch with Send
Pattern 3: controlling execution order with the defer parameter
Part 5: Streaming Execution and Stream Modes
5.1 The Four Stream Modes Compared
5.2 A Guide to Choosing a Stream Mode
5.3 In Practice: Building a Real-Time Chat App
Part 6: Design Patterns and Best Practices
6.1 A Workflow Pattern Library
Pattern 1: Check-Process-Verify
Pattern 2: a retry mechanism
Pattern 3: multi-agent collaboration (Supervisor Pattern)
6.2 State Design Best Practices
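A sketch of the core guideline: store only inputs and durable results, and derive everything else on demand. The `BloatedState` fields shown are the anti-pattern, not a recommendation:

```python
import operator
from typing import Annotated, TypedDict

# ✅ Lean state: inputs and durable results only
class LeanState(TypedDict):
    messages: Annotated[list, operator.add]
    final_answer: str

# ❌ Bloated state: caching derivable values invites staleness bugs
class BloatedState(TypedDict):
    messages: Annotated[list, operator.add]
    message_count: int  # derivable from len(messages)
    last_message: str   # derivable from messages[-1]
    final_answer: str

lean: LeanState = {"messages": [], "final_answer": ""}
```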
6.3 Node Function Best Practices
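A sketch of the two central rules: nodes read state and return only the fields they changed, and they never mutate state in place:

```python
from typing import TypedDict

class State(TypedDict):
    query: str
    normalized: str

# ✅ Good node: reads state, returns only the delta it produced
def normalize(state: State) -> dict:
    return {"normalized": state["query"].strip().lower()}

# ❌ Anti-pattern: mutating state in place and returning all of it
def normalize_badly(state: State) -> State:
    state["normalized"] = state["query"].strip().lower()  # hidden side effect
    return state
```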
6.4 Error Handling and Recovery Best Practices
Part 7: Production-Grade Application Architecture
7.1 Observability and Monitoring
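LangSmith (or any tracing backend) is the production answer; the pure-Python sketch below only shows the wrapping idea, recording per-node call counts and latency before the node is registered on the graph:

```python
import time
from typing import Callable

def with_metrics(name: str, node: Callable, metrics: dict) -> Callable:
    """Wrap a node so every execution records latency and a call count."""
    def wrapped(state):
        start = time.perf_counter()
        try:
            return node(state)
        finally:
            record = metrics.setdefault(name, {"calls": 0, "total_s": 0.0})
            record["calls"] += 1
            record["total_s"] += time.perf_counter() - start
    return wrapped

metrics: dict = {}
node = with_metrics("assistant", lambda s: {"x": s["x"] + 1}, metrics)
node({"x": 1})
node({"x": 2})
```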
7.2 Persistence and Checkpoints
7.3 Context Isolation and Multi-Tenancy
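A sketch of per-tenant isolation with an immutable context object (the registry and connection strings are hypothetical): each invocation receives its own context, so tenant data never leaks through shared state:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class TenantContext:
    tenant_id: str
    db_dsn: str     # per-tenant database, never shared across tenants
    api_quota: int

def make_context(tenant_id: str) -> TenantContext:
    # hypothetical registry lookup; real code would query a config service
    registry = {
        "acme": TenantContext("acme", "postgres://db-acme/main", 1000),
        "globex": TenantContext("globex", "postgres://db-globex/main", 50),
    }
    return registry[tenant_id]

ctx = make_context("acme")
# each run then gets its own context via the Context API, e.g.
# graph.invoke(inputs, context=ctx)
```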
Part 8: Common Pitfalls and Solutions
8.1 State Explosion
Problem: the conversation history keeps growing, and token consumption grows with it.
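Two standard fixes are the SummarizationMiddleware shown earlier and plain trimming. A minimal trimming helper as a sketch (real code would use LangChain's `trim_messages` or emit `RemoveMessage` updates):

```python
def trim_history(messages: list, keep_last: int = 20) -> list:
    """Keep the first (system) message plus the most recent keep_last turns."""
    if len(messages) <= keep_last + 1:
        return messages
    return [messages[0]] + messages[-keep_last:]

history = ["system"] + [f"turn {i}" for i in range(100)]
trimmed = trim_history(history, keep_last=20)
# 21 messages survive: the system prompt and the last 20 turns
```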
8.2 Infinite Loops in Conditional Edges
Problem: the routing function keeps returning the same node, producing an infinite loop.
8.3 Reducer Merge Conflicts
Problem: multiple nodes update the same field in the same step, and the merge order is not under your control.
Summary and Key Takeaways
Summary of the Core Innovations
A Practical Best-Practice Checklist
• ✅ Always define State with TypedDict
• ✅ One job per node (single responsibility)
• ✅ Nodes return partial updates, never the whole State
• ✅ Manage aggregated fields (messages, results, and so on) with reducers
• ✅ Route key decisions through conditional edges, not ad-hoc if/else
• ✅ Set recursion_limit in the run config to guard against infinite loops
• ✅ Prefer stream over invoke for real-time feedback
• ✅ Use middleware for cross-cutting concerns (auditing, security, monitoring)
• ✅ Always configure a checkpointer in production for fault-tolerant recovery
• ✅ Trim or summarize message history regularly to prevent token blow-up
When to Use LangGraph 1.0
A good fit:
• Complex multi-step workflows (three or more steps)
• Flows that need user interaction or human approval
• Multi-agent collaboration scenarios
• Long conversations that need memory management
• Production environments that need observability
Workable, but overkill:
• A simple "input → LLM → output" call
• Single-turn, stateless interactions
• Quick prototype validation
Appendix: A Complete Worked Example
Example: an enterprise document-analysis agent
This guide has covered LangGraph 1.0's core concepts, APIs, design patterns, and production-grade practices; you are now ready to start building complex agent systems.