
Getting Started with MCP Development in Python

The open-source FastMCP library makes it easy to build MCP servers and clients in Python.

Official site:

GitHub:

Installing Dependencies

pip install fastmcp
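
To verify the installation, you can print the installed version. A minimal check; the fastmcp CLI ships with recent FastMCP releases, so if the command is unavailable, pip show fastmcp works as a fallback:

fastmcp version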

MCP Server Development

FastMCP provides three types of MCP functions:

  • Tool
  • Resource
  • Prompt

Tool

Tools in FastMCP turn regular Python functions into capabilities that an LLM can invoke during a conversation. This lets the LLM perform tasks such as querying a database, calling an API, doing calculations, or accessing files, extending what it can do beyond its training data.

To register a tool, simply decorate the function with @mcp.tool(). Adding Annotated and Field markers is recommended: callers of the MCP Server functions then see a description for each parameter.

from typing import Annotated

from fastmcp import FastMCP
from pydantic import Field

mcp = FastMCP(
    name="HelpfulAssistant",
    instructions="This server provides data analysis tools. Call get_average() to analyze numerical data."
)

@mcp.tool()
def greet(name: Annotated[str, Field(description="Greet Name")]) -> str:
    return f"Hello, {name}!"

Resource

A Resource represents data or a file that an MCP client can read. Resource templates extend this idea by letting clients request dynamically generated resources based on parameters passed in the URI. This gives the LLM access to files, database contents, configuration, or dynamically generated information relevant to the conversation.

To register a resource, simply decorate the function with @mcp.resource(). As with tools, Annotated and Field markers are recommended so that callers see a description for each parameter.

A Resource Template extracts the values the caller supplies in the URI, fills in the template, and returns the result.

from fastmcp import FastMCP

mcp = FastMCP(
    name="HelpfulAssistant",
    instructions="This server provides data analysis tools. Call get_average() to analyze numerical data."
)

@mcp.resource("data://config")
def get_config() -> dict:
    """Provides the application configuration."""
    return {"theme": "dark", "version": "1.0"}

@mcp.resource("users://{user_id}/profile")
def get_user_profile(user_id: int) -> dict:
    """Retrieves a user's profile by ID."""
    # The {user_id} in the URI is extracted and passed to this function
    return {"id": user_id, "name": f"User {user_id}", "status": "active"}

Prompt

A Prompt is a reusable message template that helps the LLM produce structured, targeted responses. It lets you define consistent, reusable templates that the LLM can use across different clients and environments.

FastMCP defines these templates with the @mcp.prompt decorator. As before, Annotated and Field markers are recommended so that callers see a description for each parameter.

When the function returns a string, it is automatically converted to a UserMessage.

from fastmcp import FastMCP
from fastmcp.prompts import UserMessage

mcp = FastMCP(
    name="HelpfulAssistant",
    instructions="This server provides data analysis tools. Call get_average() to analyze numerical data."
)

# Basic prompt returning a string (converted to UserMessage)
@mcp.prompt()
def ask_about_topic(topic: str) -> str:
    """Generates a user message asking for an explanation of a topic."""
    return f"Can you please explain the concept of '{topic}'?"

# Prompt returning a specific message type
@mcp.prompt()
def generate_code_request(language: str, task_description: str) -> UserMessage:
    """Generates a user message requesting code generation."""
    content = f"Write a {language} function that performs the following task: {task_description}"
    return UserMessage(content=content)

Context

When defining a FastMCP tool, resource, resource template, or prompt, the function may need to interact with the underlying MCP session or access server capabilities. FastMCP provides the Context object for this. It supports:

  • Logging: send debug, info, warning, and error messages back to the client
  • Progress reporting: update the client on the progress of long-running operations
  • Resource access: read data from resources registered with the server
  • LLM sampling: ask the client's LLM to generate text from supplied messages
  • Request information: access metadata about the current request
  • Server access: reach the underlying FastMCP server instance when needed

Below is a server tool function that writes debug logs via ctx.debug and reports execution progress via ctx.report_progress.

from fastmcp import FastMCP, Context

mcp = FastMCP(name="ContextDemo")

@mcp.tool()
async def process_file(file_uri: str, ctx: Context) -> str:
    """Processes a file, using context for logging and resource access."""
    request_id = ctx.request_id
    await ctx.info(f"[{request_id}] Starting processing for {file_uri}")

    try:
        # Use context to read a resource
        contents_list = await ctx.read_resource(file_uri)
        if not contents_list:
            await ctx.warning(f"Resource {file_uri} is empty.")
            return "Resource empty"

        data = contents_list[0].content # Assuming TextResourceContents
        await ctx.debug(f"Read {len(data)} bytes from {file_uri}")

        # Report progress
        await ctx.report_progress(progress=50, total=100)
        
        # Simulate work
        processed_data = data.upper() # Example processing

        await ctx.report_progress(progress=100, total=100)
        await ctx.info(f"Processing complete for {file_uri}")

        return f"Processed data length: {len(processed_data)}"

    except Exception as e:
        # Use context to log errors
        await ctx.error(f"Error processing {file_uri}: {str(e)}")
        raise # Re-raise to send error back to client

For more implementations, see:

Running the Server

The MCP server can run over either the stdio or the SSE transport.

if __name__ == "__main__":
    # Basic run with default settings (stdio transport)
    mcp.run()
    
    # Or with specific transport and parameters
    # mcp.run(transport="sse", host="127.0.0.1", port=9000)

A complete MCP Server reference implementation follows:

from typing import Annotated

from fastmcp import FastMCP, Context
from fastmcp.prompts import UserMessage
from pydantic import Field

mcp = FastMCP(
    name="HelpfulAssistant",
    instructions="This server provides data analysis tools. Call get_average() to analyze numerical data."
)

@mcp.tool()
def greet(name: Annotated[str, Field(description="Greet Name")]) -> str:
    return f"Hello, {name}!"

@mcp.tool()
def multiply(a: float, b: float) -> float:
    """Multiplies two numbers together."""
    return a * b

@mcp.tool()
async def read_file(path: str, ctx: Context) -> str:
    with open(path, 'r') as f:
        content = f.read().strip()
    # ctx.info is a coroutine, so the tool is declared async and the call awaited
    await ctx.info('Read file successfully')
    return content


@mcp.resource("data://config")
def get_config() -> dict:
    """Provides the application configuration."""
    return {"theme": "dark", "version": "1.0"}

@mcp.resource("users://{user_id}/profile")
def get_user_profile(user_id: int) -> dict:
    """Retrieves a user's profile by ID."""
    # The {user_id} in the URI is extracted and passed to this function
    return {"id": user_id, "name": f"User {user_id}", "status": "active"}

# Basic prompt returning a string (converted to UserMessage)
@mcp.prompt()
def ask_about_topic(topic: str) -> str:
    """Generates a user message asking for an explanation of a topic."""
    return f"Can you please explain the concept of '{topic}'?"

# Prompt returning a specific message type
@mcp.prompt()
def generate_code_request(language: str, task_description: str) -> UserMessage:
    """Generates a user message requesting code generation."""
    content = f"Write a {language} function that performs the following task: {task_description}"
    return UserMessage(content=content)


@mcp.tool()
async def analyze_data(data: list[float], ctx: Context) -> dict:
    """Analyze numerical data with logging."""
    await ctx.debug("Starting analysis of numerical data")
    await ctx.info(f"Analyzing {len(data)} data points")

    try:
        result = sum(data) / len(data)
        await ctx.info(f"Analysis complete, average: {result}")
        return {"average": result, "count": len(data)}
    except ZeroDivisionError:
        await ctx.warning("Empty data list provided")
        return {"error": "Empty data list"}
    except Exception as e:
        await ctx.error(f"Analysis failed: {str(e)}")
        raise

@mcp.tool()
async def analyze_sentiment(text: str, ctx: Context) -> dict:
    """Analyze the sentiment of a text using the client's LLM."""
    # Create a sampling prompt asking for sentiment analysis
    prompt = f"Analyze the sentiment of the following text as positive, negative, or neutral. Just output a single word - 'positive', 'negative', or 'neutral'. Text to analyze: {text}"

    # Send the sampling request to the client's LLM
    response = await ctx.sample(prompt)

    # Process the LLM's response
    sentiment = response.text.strip().lower()

    # Map to standard sentiment values
    if "positive" in sentiment:
        sentiment = "positive"
    elif "negative" in sentiment:
        sentiment = "negative"
    else:
        sentiment = "neutral"

    return {"text": text, "sentiment": sentiment}

if __name__ == "__main__":
    # This code only runs when the file is executed directly

    # Basic run with default settings (stdio transport)
    mcp.run()

    # Or with specific transport and parameters
    # mcp.run(transport="sse", host="127.0.0.1", port=9000)
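
Before wiring this server into an IDE or a separate client process, it can be useful to smoke-test it in-process. A minimal sketch, assuming the server code above is saved as mcp_server.py; it uses the in-memory transport (Client accepting a FastMCP instance) described in the Client section below:

import asyncio

from fastmcp import Client
from mcp_server import mcp  # assumes the server above is saved as mcp_server.py

async def smoke_test():
    # Client(mcp) uses FastMCP's in-memory transport: no subprocess or network needed
    async with Client(mcp) as client:
        result = await client.call_tool("multiply", {"a": 6.0, "b": 7.0})
        print(f"multiply result: {result}")

if __name__ == "__main__":
    asyncio.run(smoke_test())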

MCP Client Development

A Client can communicate with a Server in four ways:

  • In-memory
  • SSE
  • WebSocket
  • Python file (stdio)

An MCP server can run in any of these four modes; the Client connects to the corresponding Server and can then call the functions it implements.

from fastmcp import Client, FastMCP

# Example transports (more details in Transports page)
server_instance = FastMCP(name="TestServer") # In-memory server
sse_url = "http://localhost:8000/sse"       # SSE server URL
ws_url = "ws://localhost:9000"             # WebSocket server URL
server_script = "my_mcp_server.py"         # Path to a Python server file

# Client automatically infers the transport type
client_in_memory = Client(server_instance)
client_sse = Client(sse_url)
client_ws = Client(ws_url)
client_stdio = Client(server_script)

print(client_in_memory.transport)
print(client_sse.transport)
print(client_ws.transport)
print(client_stdio.transport)

# Expected Output (types may vary slightly based on environment):
# <FastMCP(server='TestServer')>
# <SSE(url='http://localhost:8000/sse')>
# <WebSocket(url='ws://localhost:9000')>
# <PythonStdioTransport(command='python', args=['/path/to/your/my_mcp_server.py'])>

Listing the Server's Tools, Resources, and Prompts from the Client

import asyncio

from fastmcp import Client

client = Client(
    "mcp_server.py",
)

async def main():
    # Connection is established here
    async with client:
        print(f"Client connected: {client.is_connected()}")

        # Make MCP calls within the context
        tools = await client.list_tools()
        print(f"Available tools: {tools}")

        resources = await client.list_resources()
        print(f"Available resources: {resources}")

        templates = await client.list_resource_templates()
        print(f"Available resource templates: {templates}")

    # Connection is closed automatically here
    print(f"Client connected: {client.is_connected()}")

if __name__ == "__main__":
    asyncio.run(main())

The Client can also call the Server's Tool, Resource, and Prompt functions.

import asyncio
from pathlib import Path

from fastmcp import Client

client = Client(
    "mcp_server.py",
)

async def main():
    # Connection is established here
    async with client:
        print(f"Client connected: {client.is_connected()}")

        # Make MCP calls within the context
        tools = await client.list_tools()
        print(f"Available tools: {tools}")

        if any(tool.name == "greet" for tool in tools):
            result = await client.call_tool("greet", {"name": "World"})
            print(f"Tool result: {result}")

        result = await client.call_tool("read_file", {"path": str(Path.home() / "Code")+"/MCP/mcp_client.py"})
        print(f"Tool Root handle result: {result}")
        
        result = await client.read_resource("data://config")
        print(f"Resource result: {result}")

        result = await client.read_resource("users://1/profile")
        print(f"Template resource result: {result}")

        result = await client.get_prompt("ask_about_topic", {"topic": "Python"})
        print(f"Prompt result: {result}")

        result = await client.get_prompt("generate_code_request", {"language": "Python", "task_description": "Generate a function to print 'Hello World'."})
        print(f"Prompt code result: {result}")

    # Connection is closed automatically here
    print(f"Client connected: {client.is_connected()}")

if __name__ == "__main__":
    asyncio.run(main())

The Client can register a callback function for the Server.

Suppose a Server function needs to call an LLM API as part of its logic, but you don't want to hard-code the LLM endpoint in the Server, or you don't want to keep the API key in the Server's configuration and would rather configure different models flexibly on the Client side. In that case the Server can call ctx.sample, which invokes the callback registered by the Client.

The relevant part of the Server looks like this:

@mcp.tool()
async def analyze_sentiment(text: str, ctx: Context) -> dict:
    """Analyze the sentiment of a text using the client's LLM."""
    # Create a sampling prompt asking for sentiment analysis
    prompt = f"Analyze the sentiment of the following text as positive, negative, or neutral. Just output a single word - 'positive', 'negative', or 'neutral'. Text to analyze: {text}"

    # Send the sampling request to the client's LLM
    response = await ctx.sample(prompt)

    # Process the LLM's response
    sentiment = response.text.strip().lower()

    # Map to standard sentiment values
    if "positive" in sentiment:
        sentiment = "positive"
    elif "negative" in sentiment:
        sentiment = "negative"
    else:
        sentiment = "neutral"

    return {"text": text, "sentiment": sentiment}

The Client must implement a handler function. In my_llm_handler below, response_text should hold the value returned by your LLM API; this example does not actually call an LLM, so implement that part to suit your needs.

Passing sampling_handler when constructing the Client makes the Server's ctx.sample call trigger this callback and receive its return value.

Note that in this example the Client passes a Context() object as an argument to the Server function when calling it.

import asyncio

from fastmcp import Client, Context
from fastmcp.client.sampling import MessageResult
from mcp import SamplingMessage
from mcp.shared.context import RequestContext
from mcp.types import TextContent, CreateMessageRequestParams

async def my_llm_handler(
    messages: list[SamplingMessage],
    params: CreateMessageRequestParams,
    context: RequestContext
) -> str | MessageResult:
    print(f"Server requested sampling (Request ID: {context.request_id})")
    # In a real scenario, call your LLM API here
    print(f"Messages: {messages}")
    last_user_message = next((m for m in reversed(messages) if m.role == 'user'), None)
    prompt = last_user_message.content.text if last_user_message and isinstance(last_user_message.content, TextContent) else "Default prompt"
    print(f"Prompt: {prompt}")
    
    # Simulate LLM response
    response_text = "positive"
    # Return simple string (becomes TextContent) or a MessageResult object
    return response_text

client = Client(
    "mcp_server.py",
    sampling_handler=my_llm_handler,
)

async def main():
    # Connection is established here
    async with client:
        print(f"Client connected: {client.is_connected()}")

        result = await client.call_tool("analyze_sentiment", {"text": "You are the best!", "ctx": Context()})
        print(f"Tool LLM handle result: {result}")

    # Connection is closed automatically here
    print(f"Client connected: {client.is_connected()}")

if __name__ == "__main__":
    asyncio.run(main())

A complete MCP Client reference implementation follows:

import asyncio
from pathlib import Path

from fastmcp import Client, Context
from fastmcp.client.sampling import MessageResult
from mcp import SamplingMessage
from mcp.shared.context import RequestContext
from mcp.types import TextContent, CreateMessageRequestParams

async def my_llm_handler(
    messages: list[SamplingMessage],
    params: CreateMessageRequestParams,
    context: RequestContext
) -> str | MessageResult:
    print(f"Server requested sampling (Request ID: {context.request_id})")
    # In a real scenario, call your LLM API here
    print(f"Messages: {messages}")
    last_user_message = next((m for m in reversed(messages) if m.role == 'user'), None)
    prompt = last_user_message.content.text if last_user_message and isinstance(last_user_message.content, TextContent) else "Default prompt"
    print(f"Prompt: {prompt}")
    # Simulate LLM response
    response_text = "positive"
    # Return simple string (becomes TextContent) or a MessageResult object
    return response_text

client = Client(
    "mcp_server.py",
    sampling_handler=my_llm_handler,
)

async def main():
    # Connection is established here
    async with client:
        print(f"Client connected: {client.is_connected()}")

        # Make MCP calls within the context
        tools = await client.list_tools()
        print(f"Available tools: {tools}")

        resources = await client.list_resources()
        print(f"Available resources: {resources}")

        templates = await client.list_resource_templates()
        print(f"Available resource templates: {templates}")

        if any(tool.name == "greet" for tool in tools):
            result = await client.call_tool("greet", {"name": "World"})
            print(f"Tool result: {result}")

        result = await client.read_resource("data://config")
        print(f"Resource result: {result}")

        result = await client.read_resource("users://1/profile")
        print(f"Template resource result: {result}")

        result = await client.get_prompt("ask_about_topic", {"topic": "Python"})
        print(f"Prompt result: {result}")

        result = await client.get_prompt("generate_code_request", {"language": "Python", "task_description": "Generate a function to print 'Hello World'."})
        print(f"Prompt code result: {result}")

        result = await client.call_tool("read_file", {"path": str(Path.home() / "Code")+"/MCP/mcp_client.py"})
        print(f"Tool Root handle result: {result}")

        result = await client.call_tool("analyze_sentiment", {"text": "You are the best!", "ctx": Context()})
        print(f"Tool LLM handle result: {result}")

    # Connection is closed automatically here
    print(f"Client connected: {client.is_connected()}")

if __name__ == "__main__":
    asyncio.run(main())

Configuring an MCP Server in Cursor

  1. Open the Cursor settings: File -> Preferences -> Cursor Settings -> MCP
  2. Click Add new global MCP server and edit mcp.json
{
  "mcpServers": {
    "test_service": {
      "command": "/Users/ryonluo/Software/miniforge3/envs/MCP/bin/python",
      "args": ["/Users/ryonluo/Code/MCP/mcp_server.py"]
    }
  }
}
  • test_service is the server name; choose any name you like.
  • Replace command and args with the actual paths to your Python interpreter and server file.
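
If the server is started with the SSE transport instead (mcp.run(transport="sse", host="127.0.0.1", port=9000) from the Running the Server section), Cursor can also connect by URL rather than launching a process. A hedged sketch, assuming your Cursor version supports URL-based MCP entries and the server exposes the default /sse endpoint:

{
  "mcpServers": {
    "test_service_sse": {
      "url": "http://127.0.0.1:9000/sse"
    }
  }
}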

Once configured, the server appears under MCP Servers with a green status indicator.

You can then chat with the agent in Cursor. For example, type greet mike: Cursor recognizes the greet tool from the MCP server and asks whether to run it; click the button to confirm.

After the tool runs, Cursor displays its result.
