🔌 Model Context Protocol (MCP)
An open protocol that standardizes how tools and context are provided to LLMs
Model Context Protocol (MCP) is an open protocol that standardizes how applications supply tools and context to LLMs. LangChain agents can use tools defined on MCP servers via the langchain-mcp-adapters library.
Quickstart
Install the langchain-mcp-adapters library:
pip install langchain-mcp-adapters
uv add langchain-mcp-adapters
langchain-mcp-adapters lets agents use tools defined across one or more MCP servers.
MultiServerMCPClient is stateless by default: each tool call creates a fresh MCP ClientSession, executes the tool, and then cleans up. See the stateful sessions section for details.
Accessing multiple MCP servers
from langchain_mcp_adapters.client import MultiServerMCPClient # [!code highlight]
from langchain.agents import create_agent

client = MultiServerMCPClient( # [!code highlight]
    {
        "math": {
            "transport": "stdio",  # Local subprocess communication
            "command": "python",
            # Absolute path to your math_server.py file
            "args": ["/path/to/math_server.py"],
        },
        "weather": {
            "transport": "http",  # HTTP-based remote server
            # Ensure you start your weather server on port 8000
            "url": "http://localhost:8000/mcp",
        }
    }
)

tools = await client.get_tools() # [!code highlight]
agent = create_agent(
    "claude-sonnet-4-5-20250929",
    tools # [!code highlight]
)
math_response = await agent.ainvoke(
    {"messages": [{"role": "user", "content": "what's (3 + 5) x 12?"}]}
)
weather_response = await agent.ainvoke(
    {"messages": [{"role": "user", "content": "what is the weather in nyc?"}]}
)
Custom servers
To create a custom MCP server, use the FastMCP library:
pip install fastmcp
uv add fastmcp
To test your agent against MCP tool servers, use the following examples:
from fastmcp import FastMCP

mcp = FastMCP("Math")

@mcp.tool()
def add(a: int, b: int) -> int:
    """Add two numbers"""
    return a + b

@mcp.tool()
def multiply(a: int, b: int) -> int:
    """Multiply two numbers"""
    return a * b

if __name__ == "__main__":
    mcp.run(transport="stdio")
from fastmcp import FastMCP

mcp = FastMCP("Weather")

@mcp.tool()
async def get_weather(location: str) -> str:
    """Get weather for location."""
    return "It's always sunny in New York"

if __name__ == "__main__":
    mcp.run(transport="streamable-http")
Transports
MCP supports several transport mechanisms for client-server communication.
HTTP
The http transport (also called streamable-http) uses HTTP requests for client-server communication. For details, see the MCP HTTP transport specification.
client = MultiServerMCPClient(
    {
        "weather": {
            "transport": "http",
            "url": "http://localhost:8000/mcp",
        }
    }
)
Passing headers
When connecting to an MCP server over HTTP, you can include custom headers (e.g. for authentication or tracing) via the headers field of the connection configuration. This is supported for the sse (deprecated in the MCP spec) and streamable_http transports.
from langchain_mcp_adapters.client import MultiServerMCPClient
from langchain.agents import create_agent

client = MultiServerMCPClient(
    {
        "weather": {
            "transport": "http",
            "url": "http://localhost:8000/mcp",
            "headers": { # [!code highlight]
                "Authorization": "Bearer YOUR_TOKEN", # [!code highlight]
                "X-Custom-Header": "custom-value" # [!code highlight]
            }, # [!code highlight]
        }
    }
)
tools = await client.get_tools()
agent = create_agent("openai:gpt-4.1", tools)
response = await agent.ainvoke({"messages": "what is the weather in nyc?"})
Authentication
Under the hood, langchain-mcp-adapters uses the official MCP SDK, which lets you supply a custom authentication mechanism by implementing the httpx.Auth interface.
from langchain_mcp_adapters.client import MultiServerMCPClient

client = MultiServerMCPClient(
    {
        "weather": {
            "transport": "http",
            "url": "http://localhost:8000/mcp",
            "auth": auth,  # any httpx.Auth implementation # [!code highlight]
        }
    }
)
stdio
The client launches the server as a subprocess and communicates over standard input/output. Best suited for local tools and simple setups.
Unlike the HTTP transports, stdio connections are inherently stateful: the subprocess lives for the duration of the client connection. However, when MultiServerMCPClient is used without explicit session management, each tool call still creates a new session. See stateful sessions for managing persistent connections.
client = MultiServerMCPClient(
    {
        "math": {
            "transport": "stdio",
            "command": "python",
            "args": ["/path/to/math_server.py"],
        }
    }
)
Stateful sessions
By default, MultiServerMCPClient is stateless: each tool call creates a new MCP session, executes the tool, and cleans up.
If you need control over the MCP session lifecycle (for example, with a stateful server that maintains context between tool calls), you can create a persistent ClientSession with client.session().
from langchain_mcp_adapters.client import MultiServerMCPClient
from langchain_mcp_adapters.tools import load_mcp_tools
from langchain.agents import create_agent

client = MultiServerMCPClient({...})

# Create a session explicitly
async with client.session("server_name") as session: # [!code highlight]
    # Pass the session to load tools, resources, or prompts
    tools = await load_mcp_tools(session) # [!code highlight]
    agent = create_agent(
        "anthropic:claude-3-7-sonnet-latest",
        tools
    )
Core features
Tools
Tools let an MCP server expose executable functions that LLMs can invoke to perform actions, such as querying a database, calling an API, or interacting with external systems. LangChain converts MCP tools into LangChain tools, so they can be used directly in any LangChain agent or workflow.
Loading tools
Use client.get_tools() to retrieve tools from MCP servers and pass them to your agent:
from langchain_mcp_adapters.client import MultiServerMCPClient
from langchain.agents import create_agent

client = MultiServerMCPClient({...})
tools = await client.get_tools() # [!code highlight]
agent = create_agent("claude-sonnet-4-5-20250929", tools)
Structured content
MCP tools can return structured content alongside the human-readable text response. This is useful when a tool needs to return machine-parseable data (such as JSON) in addition to the text shown to the model.
When an MCP tool returns structuredContent, the adapter wraps it in an MCPToolArtifact and returns it as the tool's artifact. You can access it via the artifact field on the ToolMessage.
You can also use an interceptor to process or transform structured content automatically.
Extracting structured content from the artifact
After invoking the agent, you can read structured content from the tool messages in the response:
from langchain_mcp_adapters.client import MultiServerMCPClient
from langchain.agents import create_agent
from langchain.messages import ToolMessage

client = MultiServerMCPClient({...})
tools = await client.get_tools()
agent = create_agent("claude-sonnet-4-5-20250929", tools)

result = await agent.ainvoke(
    {"messages": [{"role": "user", "content": "Get data from the server"}]}
)

# Extract structured content from tool messages
for message in result["messages"]:
    if isinstance(message, ToolMessage) and message.artifact:
        structured_content = message.artifact["structured_content"]
Attaching structured content via an interceptor
If you want structured content to appear in the conversation history (visible to the model), you can use an interceptor to append it to the tool result automatically:
import json

from langchain_mcp_adapters.client import MultiServerMCPClient
from langchain_mcp_adapters.interceptors import MCPToolCallRequest
from mcp.types import TextContent

async def append_structured_content(request: MCPToolCallRequest, handler):
    """Append structured content from artifact to tool message."""
    result = await handler(request)
    if result.structuredContent:
        result.content += [
            TextContent(type="text", text=json.dumps(result.structuredContent)),
        ]
    return result

client = MultiServerMCPClient({...}, tool_interceptors=[append_structured_content])
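The append step itself can be checked in isolation: it serializes the structured payload to JSON and appends it as an extra text part. This sketch uses plain dicts as stand-ins for the MCP content objects, and `append_structured` is a hypothetical helper, not an adapters API:

```python
import json

def append_structured(content: list, structured) -> list:
    """Append a JSON text part when structured content is present (plain-dict sketch)."""
    if structured:
        content = content + [{"type": "text", "text": json.dumps(structured)}]
    return content

parts = append_structured([{"type": "text", "text": "3 rows"}], {"rows": 3})
```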
Multimodal tool content
MCP tools can return multimodal content (images, text, and so on) in their responses. When an MCP server returns content with multiple parts (e.g. text plus an image), the adapter converts them into LangChain's standard content blocks.
You can access the standardized representation via the content_blocks property on the ToolMessage:
from langchain_mcp_adapters.client import MultiServerMCPClient
from langchain.agents import create_agent

client = MultiServerMCPClient({...})
tools = await client.get_tools()
agent = create_agent("claude-sonnet-4-5-20250929", tools)

result = await agent.ainvoke(
    {"messages": [{"role": "user", "content": "Take a screenshot of the current page"}]}
)

# Access multimodal content from tool messages
for message in result["messages"]:
    if message.type == "tool":
        # Raw content in provider-native format
        print(f"Raw content: {message.content}")
        # Standardized content blocks # [!code highlight]
        for block in message.content_blocks: # [!code highlight]
            if block["type"] == "text": # [!code highlight]
                print(f"Text: {block['text']}") # [!code highlight]
            elif block["type"] == "image": # [!code highlight]
                print(f"Image URL: {block.get('url')}") # [!code highlight]
                print(f"Image base64: {block.get('base64', '')[:50]}...") # [!code highlight]
This lets you handle multimodal tool responses in a provider-agnostic way, regardless of how the underlying MCP server formats its content.
Resources
Resources let an MCP server expose data, such as files, database records, or API responses, that clients can read. LangChain converts MCP resources into Blob objects, providing a unified interface for both text and binary content.
Loading resources
Use client.get_resources() to load resources from an MCP server:
from langchain_mcp_adapters.client import MultiServerMCPClient

client = MultiServerMCPClient({...})

# Load all resources from a server
blobs = await client.get_resources("server_name") # [!code highlight]

# Or load specific resources by URI
blobs = await client.get_resources("server_name", uris=["file:///path/to/file.txt"]) # [!code highlight]

for blob in blobs:
    print(f"URI: {blob.metadata['uri']}, MIME type: {blob.mimetype}")
    print(blob.as_string())  # For text content
For finer control, you can also use load_mcp_resources directly with a session:
from langchain_mcp_adapters.client import MultiServerMCPClient
from langchain_mcp_adapters.resources import load_mcp_resources

client = MultiServerMCPClient({...})

async with client.session("server_name") as session:
    # Load all resources
    blobs = await load_mcp_resources(session)
    # Or load specific resources by URI
    blobs = await load_mcp_resources(session, uris=["file:///path/to/file.txt"])
Prompts
Prompts let an MCP server expose reusable prompt templates that clients can retrieve and use. LangChain converts MCP prompts into messages, making them easy to integrate into chat-based workflows.
Loading prompts
Use client.get_prompt() to load a prompt from an MCP server:
from langchain_mcp_adapters.client import MultiServerMCPClient

client = MultiServerMCPClient({...})

# Load a prompt by name
messages = await client.get_prompt("server_name", "summarize") # [!code highlight]

# Load a prompt with arguments
messages = await client.get_prompt( # [!code highlight]
    "server_name", # [!code highlight]
    "code_review", # [!code highlight]
    arguments={"language": "python", "focus": "security"} # [!code highlight]
) # [!code highlight]

# Use the messages in your workflow
for message in messages:
    print(f"{message.type}: {message.content}")
For finer control, you can also use load_mcp_prompt directly with a session:
from langchain_mcp_adapters.client import MultiServerMCPClient
from langchain_mcp_adapters.prompts import load_mcp_prompt

client = MultiServerMCPClient({...})

async with client.session("server_name") as session:
    # Load a prompt by name
    messages = await load_mcp_prompt(session, "summarize")
    # Load a prompt with arguments
    messages = await load_mcp_prompt(
        session,
        "code_review",
        arguments={"language": "python", "focus": "security"}
    )
Advanced features
Tool interceptors
MCP servers run as separate processes, so they have no access to LangGraph runtime information such as the store, context, or agent state. Interceptors bridge this gap, giving you access to that runtime context during MCP tool execution.
Interceptors also provide middleware-like control over tool calls: you can modify requests, implement retries, add headers dynamically, or short-circuit execution entirely.

| Section | Description |
|---|---|
| Accessing runtime context | Read user IDs, API keys, stored data, and agent state |
| State updates and commands | Update agent state or control graph flow with Command |
| Custom interceptors | Patterns for modifying requests, composing interceptors, and error handling |
Accessing runtime context
When MCP tools are used in a LangChain agent (via create_agent), interceptors receive access to the ToolRuntime context. This provides access to the tool call ID, state, config, and store, enabling powerful patterns for reading user data, persisting information, and controlling agent behavior.
Access user-specific configuration passed at invocation time, such as user IDs, API keys, or permissions:
from dataclasses import dataclass

from langchain_mcp_adapters.client import MultiServerMCPClient
from langchain_mcp_adapters.interceptors import MCPToolCallRequest
from langchain.agents import create_agent

@dataclass
class Context:
    user_id: str
    api_key: str

async def inject_user_context(
    request: MCPToolCallRequest,
    handler,
):
    """Inject user credentials into MCP tool calls."""
    runtime = request.runtime
    user_id = runtime.context.user_id # [!code highlight]
    api_key = runtime.context.api_key # [!code highlight]

    # Add user context to tool arguments
    modified_request = request.override(
        args={**request.args, "user_id": user_id}
    )
    return await handler(modified_request)

client = MultiServerMCPClient(
    {...},
    tool_interceptors=[inject_user_context],
)
tools = await client.get_tools()
agent = create_agent("gpt-4o", tools, context_schema=Context)

# Invoke with user context
result = await agent.ainvoke(
    {"messages": [{"role": "user", "content": "Search my orders"}]},
    context={"user_id": "user_123", "api_key": "sk-..."}
)
Access long-term memory to retrieve user preferences or persist data across conversations:
from dataclasses import dataclass

from langchain_mcp_adapters.client import MultiServerMCPClient
from langchain_mcp_adapters.interceptors import MCPToolCallRequest
from langchain.agents import create_agent
from langgraph.store.memory import InMemoryStore

@dataclass
class Context:
    user_id: str

async def personalize_search(
    request: MCPToolCallRequest,
    handler,
):
    """Personalize MCP tool calls using stored preferences."""
    runtime = request.runtime
    user_id = runtime.context.user_id
    store = runtime.store # [!code highlight]

    # Read user preferences from store
    prefs = store.get(("preferences",), user_id) # [!code highlight]
    if prefs and request.name == "search":
        # Apply user's preferred language and result limit
        modified_args = {
            **request.args,
            "language": prefs.value.get("language", "en"),
            "limit": prefs.value.get("result_limit", 10),
        }
        request = request.override(args=modified_args)

    return await handler(request)

client = MultiServerMCPClient(
    {...},
    tool_interceptors=[personalize_search],
)
tools = await client.get_tools()
agent = create_agent(
    "gpt-4o",
    tools,
    context_schema=Context,
    store=InMemoryStore()
)
Access conversation state to make decisions based on the current session:
from langchain_mcp_adapters.client import MultiServerMCPClient
from langchain_mcp_adapters.interceptors import MCPToolCallRequest
from langchain.messages import ToolMessage

async def require_authentication(
    request: MCPToolCallRequest,
    handler,
):
    """Block sensitive MCP tools if user is not authenticated."""
    runtime = request.runtime
    state = runtime.state # [!code highlight]
    is_authenticated = state.get("authenticated", False) # [!code highlight]

    sensitive_tools = ["delete_file", "update_settings", "export_data"]
    if request.name in sensitive_tools and not is_authenticated:
        # Return error instead of calling tool
        return ToolMessage(
            content="Authentication required. Please log in first.",
            tool_call_id=runtime.tool_call_id,
        )

    return await handler(request)

client = MultiServerMCPClient(
    {...},
    tool_interceptors=[require_authentication],
)
Access the tool call ID to return correctly formatted responses or to trace tool execution:
from langchain_mcp_adapters.client import MultiServerMCPClient
from langchain_mcp_adapters.interceptors import MCPToolCallRequest
from langchain.messages import ToolMessage

async def rate_limit_interceptor(
    request: MCPToolCallRequest,
    handler,
):
    """Rate limit expensive MCP tool calls."""
    runtime = request.runtime
    tool_call_id = runtime.tool_call_id # [!code highlight]

    # Check rate limit (simplified example)
    if is_rate_limited(request.name):
        return ToolMessage(
            content="Rate limit exceeded. Please try again later.",
            tool_call_id=tool_call_id, # [!code highlight]
        )

    result = await handler(request)

    # Log successful tool call
    log_tool_execution(tool_call_id, request.name, success=True)
    return result

client = MultiServerMCPClient(
    {...},
    tool_interceptors=[rate_limit_interceptor],
)
State updates and commands
Interceptors can return Command objects to update agent state or control graph execution flow. This is useful for tracking task progress, handing off between agents, or ending execution early.
from langchain.agents import AgentState, create_agent
from langchain_mcp_adapters.interceptors import MCPToolCallRequest
from langchain.messages import ToolMessage
from langgraph.types import Command

async def handle_task_completion(
    request: MCPToolCallRequest,
    handler,
):
    """Mark task complete and hand off to summary agent."""
    result = await handler(request)

    if request.name == "submit_order":
        return Command(
            update={
                "messages": [result] if isinstance(result, ToolMessage) else [],
                "task_status": "completed", # [!code highlight]
            },
            goto="summary_agent", # [!code highlight]
        )

    return result
Use Command with goto="__end__" to end execution early:
async def end_on_success(
    request: MCPToolCallRequest,
    handler,
):
    """End agent run when task is marked complete."""
    result = await handler(request)

    if request.name == "mark_complete":
        return Command(
            update={"messages": [result], "status": "done"},
            goto="__end__", # [!code highlight]
        )

    return result
Custom interceptors
Interceptors are async functions that wrap tool execution, enabling request/response modification, retry logic, and other cross-cutting concerns. They follow an "onion" pattern in which the first interceptor in the list is the outermost.
Basic pattern
An interceptor is an async function that receives the request and a handler. You can modify the request before calling the handler, modify the response afterward, or skip the handler entirely.
from langchain_mcp_adapters.client import MultiServerMCPClient
from langchain_mcp_adapters.interceptors import MCPToolCallRequest

async def logging_interceptor(
    request: MCPToolCallRequest,
    handler,
):
    """Log tool calls before and after execution."""
    print(f"Calling tool: {request.name} with args: {request.args}")
    result = await handler(request)
    print(f"Tool {request.name} returned: {result}")
    return result

client = MultiServerMCPClient(
    {"math": {"transport": "stdio", "command": "python", "args": ["/path/to/server.py"]}},
    tool_interceptors=[logging_interceptor], # [!code highlight]
)
Modifying requests
Use request.override() to create a modified request. This follows an immutable pattern, leaving the original request unchanged.
async def double_args_interceptor(
    request: MCPToolCallRequest,
    handler,
):
    """Double all numeric arguments before execution."""
    modified_args = {k: v * 2 for k, v in request.args.items()}
    modified_request = request.override(args=modified_args) # [!code highlight]
    return await handler(modified_request)

# Original call: add(a=2, b=3) becomes add(a=4, b=6)
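The immutable-override pattern is easy to see in isolation. Here is a hypothetical stand-in for the request class (not the real MCPToolCallRequest) built on a frozen dataclass, showing that override returns a new object and leaves the original untouched:

```python
from dataclasses import dataclass, field, replace

@dataclass(frozen=True)
class FakeRequest:
    """Hypothetical stand-in for MCPToolCallRequest (illustration only)."""
    name: str
    args: dict = field(default_factory=dict)

    def override(self, **changes) -> "FakeRequest":
        # Return a copy with the given fields changed; self is unmodified
        return replace(self, **changes)

original = FakeRequest(name="add", args={"a": 2, "b": 3})
modified = original.override(args={k: v * 2 for k, v in original.args.items()})
```

Because the original request is never mutated, interceptors further out in the chain can safely inspect it even after an inner interceptor has overridden it.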
Modifying headers at runtime
Interceptors can modify HTTP headers dynamically based on the request context:
async def auth_header_interceptor(
    request: MCPToolCallRequest,
    handler,
):
    """Add authentication headers based on the tool being called."""
    token = get_token_for_tool(request.name)
    modified_request = request.override(
        headers={"Authorization": f"Bearer {token}"} # [!code highlight]
    )
    return await handler(modified_request)
Composing interceptors
Multiple interceptors compose in "onion" order: the first interceptor in the list is the outermost:
async def outer_interceptor(request, handler):
    print("outer: before")
    result = await handler(request)
    print("outer: after")
    return result

async def inner_interceptor(request, handler):
    print("inner: before")
    result = await handler(request)
    print("inner: after")
    return result

client = MultiServerMCPClient(
    {...},
    tool_interceptors=[outer_interceptor, inner_interceptor], # [!code highlight]
)

# Execution order:
# outer: before -> inner: before -> tool execution -> inner: after -> outer: after
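The onion ordering can be reproduced with plain async functions, independent of MCP. This sketch (the `compose` helper is an illustration, not an adapters API) folds the interceptor list from the inside out so the first interceptor ends up outermost:

```python
import asyncio
from functools import reduce

calls = []

async def outer(request, handler):
    calls.append("outer: before")
    result = await handler(request)
    calls.append("outer: after")
    return result

async def inner(request, handler):
    calls.append("inner: before")
    result = await handler(request)
    calls.append("inner: after")
    return result

async def execute_tool(request):
    calls.append("tool execution")
    return "ok"

def compose(interceptors, base):
    """Wrap from the last interceptor outward so the first is outermost."""
    def wrap(nxt, interceptor):
        async def wrapped(request):
            return await interceptor(request, nxt)
        return wrapped
    return reduce(wrap, reversed(interceptors), base)

handler = compose([outer, inner], execute_tool)
result = asyncio.run(handler({"tool": "demo"}))
# calls is now: outer: before -> inner: before -> tool execution -> inner: after -> outer: after
```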
Error handling
Use interceptors to catch tool execution errors and implement retry logic:
import asyncio

async def retry_interceptor(
    request: MCPToolCallRequest,
    handler,
    max_retries: int = 3,
    delay: float = 1.0,
):
    """Retry failed tool calls with exponential backoff."""
    last_error = None
    for attempt in range(max_retries):
        try:
            return await handler(request)
        except Exception as e:
            last_error = e
            if attempt < max_retries - 1:
                wait_time = delay * (2 ** attempt)  # Exponential backoff
                print(f"Tool {request.name} failed (attempt {attempt + 1}), retrying in {wait_time}s...")
                await asyncio.sleep(wait_time)
    raise last_error

client = MultiServerMCPClient(
    {...},
    tool_interceptors=[retry_interceptor], # [!code highlight]
)
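The backoff schedule above grows geometrically: the wait before retry attempt `n` is `delay * 2**n`. Isolated as a small helper (hypothetical, for illustration), the waits between attempts with `delay=1.0` and `max_retries=4` are 1, 2, and 4 seconds:

```python
def backoff_schedule(max_retries: int, delay: float) -> list[float]:
    """Wait time before each retry: delay * 2**attempt, no wait after the last attempt."""
    return [delay * (2 ** attempt) for attempt in range(max_retries - 1)]

schedule = backoff_schedule(max_retries=4, delay=1.0)
# schedule == [1.0, 2.0, 4.0]
```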
You can also catch specific error types and return fallback values:
async def fallback_interceptor(
    request: MCPToolCallRequest,
    handler,
):
    """Return a fallback value if tool execution fails."""
    try:
        return await handler(request)
    except TimeoutError:
        return f"Tool {request.name} timed out. Please try again later."
    except ConnectionError:
        return f"Could not connect to {request.name} service. Using cached data."
Progress notifications
Subscribe to progress updates from long-running tool executions:
from langchain_mcp_adapters.client import MultiServerMCPClient
from langchain_mcp_adapters.callbacks import Callbacks, CallbackContext

async def on_progress(
    progress: float,
    total: float | None,
    message: str | None,
    context: CallbackContext,
):
    """Handle progress updates from MCP servers."""
    percent = (progress / total * 100) if total else progress
    tool_info = f" ({context.tool_name})" if context.tool_name else ""
    print(f"[{context.server_name}{tool_info}] Progress: {percent:.1f}% - {message}")

client = MultiServerMCPClient(
    {...},
    callbacks=Callbacks(on_progress=on_progress), # [!code highlight]
)
CallbackContext provides:
- server_name: the name of the MCP server
- tool_name: the name of the tool being executed (available during tool calls)
Logging
The MCP protocol supports logging notifications from servers. Subscribe to these events with the Callbacks class.
from langchain_mcp_adapters.client import MultiServerMCPClient
from langchain_mcp_adapters.callbacks import Callbacks, CallbackContext
from mcp.types import LoggingMessageNotificationParams

async def on_logging_message(
    params: LoggingMessageNotificationParams,
    context: CallbackContext,
):
    """Handle log messages from MCP servers."""
    print(f"[{context.server_name}] {params.level}: {params.data}")

client = MultiServerMCPClient(
    {...},
    callbacks=Callbacks(on_logging_message=on_logging_message), # [!code highlight]
)
Elicitation
Elicitation lets an MCP server request additional input from the user during tool execution. Instead of requiring all inputs up front, the server can ask for information interactively as needed.
Server setup
Define a tool that requests user input with ctx.elicit():
from pydantic import BaseModel
from mcp.server.fastmcp import Context, FastMCP

server = FastMCP("Profile")

class UserDetails(BaseModel):
    email: str
    age: int

@server.tool()
async def create_profile(name: str, ctx: Context) -> str:
    """Create a user profile, requesting details via elicitation."""
    result = await ctx.elicit( # [!code highlight]
        message=f"Please provide details for {name}'s profile:", # [!code highlight]
        schema=UserDetails, # [!code highlight]
    ) # [!code highlight]
    if result.action == "accept" and result.data:
        return f"Created profile for {name}: email={result.data.email}, age={result.data.age}"
    if result.action == "decline":
        return f"User declined. Created minimal profile for {name}."
    return "Profile creation cancelled."

if __name__ == "__main__":
    server.run(transport="http")
Client setup
Handle elicitation requests by providing a callback to MultiServerMCPClient:
from langchain_mcp_adapters.client import MultiServerMCPClient
from langchain_mcp_adapters.callbacks import Callbacks, CallbackContext
from mcp.shared.context import RequestContext
from mcp.types import ElicitRequestParams, ElicitResult

async def on_elicitation(
    mcp_context: RequestContext,
    params: ElicitRequestParams,
    context: CallbackContext,
) -> ElicitResult:
    """Handle elicitation requests from MCP servers."""
    # In a real application, you would prompt the user for input
    # based on params.message and params.requestedSchema
    return ElicitResult( # [!code highlight]
        action="accept", # [!code highlight]
        content={"email": "[email protected]", "age": 25}, # [!code highlight]
    ) # [!code highlight]

client = MultiServerMCPClient(
    {
        "profile": {
            "url": "http://localhost:8000/mcp",
            "transport": "http",
        }
    },
    callbacks=Callbacks(on_elicitation=on_elicitation), # [!code highlight]
)
Response actions
An elicitation callback can return one of three actions:

| Action | Description |
|---|---|
| accept | The user provided valid input. Include the data in the content field. |
| decline | The user chose not to provide the requested information. |
| cancel | The user cancelled the operation entirely. |
# Accept with data
ElicitResult(action="accept", content={"email": "[email protected]", "age": 25})
# Decline (user doesn't want to provide info)
ElicitResult(action="decline")
# Cancel (abort the operation)
ElicitResult(action="cancel")