MCP Integration Guide

Enable AI agents (Claude, ChatGPT, custom LLMs) to discover and execute PyExecutor workflows via MCP.

🤖 What is MCP Integration?

The Model Context Protocol (MCP) is a standardized way for AI agents to discover and call tools/functions. PyExecutor exposes your workflows as MCP tools, allowing any compatible AI model to:

  • Discover your workflows as callable tools
  • Execute them with context variables
  • Receive structured results it can reason over

Architecture Flow:

Claude / ChatGPT / Custom LLM
        ↓
MCP Protocol (WebSocket/Stdio)
        ↓
PyExecutor MCP Server
        ↓
REST API → Workflows → Execution Engine
        ↓
Database, APIs, Scripts, Notifications

⚙️ Setup PyExecutor MCP Server

1. Install MCP Server Dependencies

# In your PyExecutor backend directory
pip install fastmcp pydantic

# Or add to requirements.txt
fastmcp>=0.1.0
pydantic>=2.0.0

2. Create MCP Server Module

# backend/mcp_server.py
import asyncio
import os
import time

import httpx
from fastmcp import FastMCP

app = FastMCP("pyexec")

# PyExecutor API configuration
API_URL = os.getenv("PYEXEC_API_URL", "http://localhost:8000")
API_KEY = os.getenv("PYEXEC_API_KEY", "")
POLL_INTERVAL = 2   # seconds between job status checks
JOB_TIMEOUT = 300   # give up waiting after this many seconds

@app.list_tools()
async def list_tools():
    """Discover available workflows as MCP tools"""
    async with httpx.AsyncClient() as client:
        headers = {"X-API-Key": API_KEY}
        response = await client.get(
            f"{API_URL}/api/workflows/",
            headers=headers
        )
        workflows = response.json()

        tools = []
        for workflow in workflows:
            tools.append({
                "name": f"execute_{workflow['name'].replace('-', '_')}",
                "description": workflow.get('description', f"Execute {workflow['name']} workflow"),
                "inputSchema": {
                    "type": "object",
                    "properties": {
                        "context": {
                            "type": "object",
                            "description": "Context variables for workflow execution",
                            "additionalProperties": {"type": "string"}
                        }
                    },
                    "required": []
                }
            })

        return tools

@app.call_tool()
async def call_tool(name: str, arguments: dict):
    """Execute a PyExecutor workflow via MCP"""
    # Map the tool name back to the workflow name (assumes workflow
    # names use hyphens -- the reverse of the mapping in list_tools)
    workflow_name = name.replace("execute_", "", 1).replace("_", "-")
    context = arguments.get("context", {})

    async with httpx.AsyncClient() as client:
        headers = {"X-API-Key": API_KEY, "Content-Type": "application/json"}

        # Execute workflow
        response = await client.post(
            f"{API_URL}/api/workflows/{workflow_name}/execute/",
            json={"context": context},
            headers=headers
        )

        if response.status_code != 200:
            return f"Error executing workflow: {response.text}"

        job_data = response.json()
        job_id = job_data['id']

        # Poll for completion, bailing out after JOB_TIMEOUT seconds
        deadline = time.monotonic() + JOB_TIMEOUT
        while time.monotonic() < deadline:
            status_response = await client.get(
                f"{API_URL}/api/jobs/{job_id}/",
                headers=headers
            )
            job = status_response.json()

            if job['status'] in ['success', 'failed', 'cancelled']:
                return {
                    "status": job['status'],
                    "output": job.get('output', {}),
                    "error": job.get('error')
                }

            await asyncio.sleep(POLL_INTERVAL)

        return {
            "status": "timeout",
            "error": f"Workflow did not complete within {JOB_TIMEOUT} seconds",
            "job_id": job_id
        }

if __name__ == "__main__":
    app.run()

3. Configuration & Environment

# .env or docker-compose.yml
PYEXEC_API_URL=http://localhost:8000
PYEXEC_API_KEY=sk_your_api_key_here
MCP_SERVER_HOST=0.0.0.0
MCP_SERVER_PORT=3000

🧠 Integrate with Claude

Option 1: Via Claude Desktop App

# macOS: ~/Library/Application Support/Claude/claude_desktop_config.json
# Linux: ~/.config/Claude/claude_desktop_config.json
# Windows: %APPDATA%\Claude\claude_desktop_config.json

{
  "mcpServers": {
    "pyexec": {
      "command": "python",
      "args": [
        "/path/to/backend/mcp_server.py"
      ],
      "env": {
        "PYEXEC_API_URL": "http://localhost:8000",
        "PYEXEC_API_KEY": "sk_your_key_here"
      }
    }
  }
}

Option 2: Via Python SDK

from anthropic import Anthropic
import json

client = Anthropic()

tools = [
    {
        "name": "execute_order_processing",
        "description": "Process a customer order",
        "input_schema": {
            "type": "object",
            "properties": {
                "context": {
                    "type": "object",
                    "properties": {
                        "customer_id": {"type": "string"},
                        "order_total": {"type": "string"},
                        "items": {"type": "array"}
                    }
                }
            }
        }
    }
]

# Start conversation; keep the full message history so tool results
# can be appended as the loop progresses
messages = [
    {
        "role": "user",
        "content": "Process order #12345 for $500 with 3 items"
    }
]

response = client.messages.create(
    model="claude-3-5-sonnet-20241022",
    max_tokens=4096,
    tools=tools,
    messages=messages
)

# Handle tool calls until Claude stops requesting them
while response.stop_reason == "tool_use":
    tool_use = next(
        block for block in response.content
        if block.type == "tool_use"
    )

    # Execute the tool -- execute_workflow() is your own wrapper
    # around the PyExecutor MCP endpoint
    result = execute_workflow(tool_use.name, tool_use.input)

    # Continue the conversation with the tool result
    messages.append({"role": "assistant", "content": response.content})
    messages.append({
        "role": "user",
        "content": [
            {
                "type": "tool_result",
                "tool_use_id": tool_use.id,
                "content": json.dumps(result)
            }
        ]
    })

    response = client.messages.create(
        model="claude-3-5-sonnet-20241022",
        max_tokens=4096,
        tools=tools,
        messages=messages
    )

print(response.content[0].text)

Claude Usage Examples

Example 1: "Process this customer order and send them an email confirmation"

Example 2: "Analyze these system logs for errors and create an incident ticket"

Example 3: "Fetch the latest sales data, generate a report, and email it to the team"

Example 4: "Check inventory levels and trigger reorder workflows for items below threshold"

🔧 Defining MCP Tools

Automatic Tool Discovery

The PyExecutor MCP server automatically discovers all workflows and exposes them as tools. The server introspects each workflow's configuration to create tool schemas:

# Tool name: execute_order_processing
# Derived from workflow name: order-processing

Tool Schema:
{
  "name": "execute_order_processing",
  "description": "Automated workflow for processing orders...",
  "inputSchema": {
    "type": "object",
    "properties": {
      "context": {
        "type": "object",
        "description": "Context variables",
        "properties": {
          "customer_id": {"type": "string"},
          "order_total": {"type": "string"},
          "items": {"type": "array"}
        }
      }
    }
  }
}

Custom Tool Annotations

Add descriptions and required parameters to workflows:

# In workflow config or via API
{
  "name": "order-processing",
  "description": "Process customer orders with payment validation, inventory check, and email confirmation",
  "mcp_config": {
    "enabled": true,
    "required_context_fields": [
      "customer_id",
      "order_total"
    ],
    "optional_context_fields": [
      "shipping_address",
      "gift_message"
    ],
    "example_input": {
      "customer_id": "CUST-12345",
      "order_total": "99.99",
      "items": [...]
    }
  }
}
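Under annotations like the above, the server can fold `required_context_fields` and `optional_context_fields` into the generated tool schema. A minimal sketch (the `build_input_schema` helper is hypothetical, not part of PyExecutor; fields are typed as strings for simplicity):

```python
def build_input_schema(workflow: dict) -> dict:
    """Build an MCP inputSchema from a workflow's mcp_config annotations."""
    mcp_cfg = workflow.get("mcp_config", {})
    required = mcp_cfg.get("required_context_fields", [])
    optional = mcp_cfg.get("optional_context_fields", [])

    # Every annotated field becomes a string property on the context object
    properties = {field: {"type": "string"} for field in required + optional}

    return {
        "type": "object",
        "properties": {
            "context": {
                "type": "object",
                "description": "Context variables for workflow execution",
                "properties": properties,
                "required": required,
            }
        },
        "required": ["context"] if required else [],
    }

workflow = {
    "name": "order-processing",
    "mcp_config": {
        "required_context_fields": ["customer_id", "order_total"],
        "optional_context_fields": ["shipping_address"],
    },
}
schema = build_input_schema(workflow)
```

Publishing the required fields in the schema lets the AI agent ask the user for missing values before calling the tool, instead of failing at execution time.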

Approval Management via MCP

Three built-in MCP tools let AI agents manage workflow approval gates conversationally. Designated approver verification and platform admin bypass are enforced automatically.

// Tool: list_pending_approvals
// Lists all pending approval requests for the connected org.
// Returns request IDs, workflow names, approver lists, and tokens.
User: "Show me pending approvals"
Agent → list_pending_approvals()
→ Found 2 pending approval request(s):
    Request ID: 30
      Workflow: Deploy Pipeline
      Approver(s): ops@company.com, lead@company.com

// Tool: approve_workflow
// Approves a request by ID. Validates caller is a designated approver.
User: "Approve request 30"
Agent → approve_workflow(request_id=30, comment="Ship it")
→ ✓ Approval request 30 has been APPROVED.

// Tool: reject_workflow
// Rejects a request by ID with an optional reason.
User: "Reject request 31, not ready"
Agent → reject_workflow(request_id=31, reason="Not ready for production")
→ ✗ Approval request 31 has been REJECTED.

When PYEXEC_MCP_CLIENT_SECRET is configured, approval actions require a valid HMAC signature. Platform super admins bypass designated-approver restrictions.
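The exact signing scheme is deployment-specific; purely as an illustration, an HMAC-SHA256 signature over a canonicalized JSON body might be computed like this (the canonicalization and header name are assumptions, not PyExecutor's documented contract):

```python
import hashlib
import hmac
import json

def sign_request(secret: str, payload: dict) -> str:
    """Compute a hex HMAC-SHA256 signature over the JSON-encoded payload.

    Sorted keys + compact separators make the encoding deterministic;
    match whatever canonicalization your deployment actually expects.
    """
    body = json.dumps(payload, sort_keys=True, separators=(",", ":"))
    return hmac.new(secret.encode(), body.encode(), hashlib.sha256).hexdigest()

signature = sign_request("test-secret", {"request_id": 30, "action": "approve"})
# Send alongside the approval call, e.g. in an X-Signature header (name assumed)
```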

📝 Context & Data Flow

How Data Flows Through MCP

# Step 1: AI Agent detects tool use (Claude conversation)
Agent: "Process this order"
↓

# Step 2: MCP transmits tool call to server
{
  "jsonrpc": "2.0",
  "method": "tools/call",
  "params": {
    "name": "execute_order_processing",
    "arguments": {
      "context": {
        "customer_id": "CUST-12345",
        "order_total": "99.99"
      }
    }
  }
}
↓

# Step 3: PyExecutor MCP Server receives and validates
- Validates input schema
- Checks API key/permissions
- Calls PyExecutor REST API
↓

# Step 4: Workflow executes with context variables
Script:
  order = {{context_customer_id}}  # CUST-12345
  amount = {{context_order_total}}  # 99.99
↓

# Step 5: Result returned to AI Agent
{
  "status": "success",
  "output": {
    "order_id": "ORD-98765",
    "confirmation_email_sent": true,
    "delivery_date": "2024-01-15"
  }
}
↓

# Step 6: AI Agent continues reasoning with result
Agent: "Great! The order was processed..."
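Step 4's placeholder substitution can be sketched as a simple regex renderer (illustrative only, not PyExecutor's actual templating engine):

```python
import re

def render_template(script: str, context: dict) -> str:
    """Replace {{context_<name>}} placeholders with values from context.

    Unknown placeholders are left untouched so they are easy to spot.
    """
    def substitute(match: re.Match) -> str:
        return str(context.get(match.group(1), match.group(0)))

    return re.sub(r"\{\{context_(\w+)\}\}", substitute, script)

script = "order = {{context_customer_id}}\namount = {{context_order_total}}"
rendered = render_template(script, {"customer_id": "CUST-12345", "order_total": "99.99"})
```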

Handling Complex Outputs

Workflow outputs can be complex and are automatically serialized for MCP:

# Workflow output (from last step)
{
  "order_id": "ORD-98765",
  "total_processed": 99.99,
  "items": [
    {"sku": "ITEM-123", "qty": 2, "price": 49.99},
    {"sku": "ITEM-456", "qty": 1, "price": 0}
  ],
  "status": "confirmed",
  "confirmation_sent_to": "user@example.com",
  "delivery_estimate": "2024-01-15"
}

# Available to AI agent for further processing
Agent reasoning: "The order was processed successfully.
Order ID: {output.order_id}
Items: {output.items}
..." 
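When a step's output contains values the `json` module can't encode natively (Decimals, dates, datetimes), a common fallback is to stringify them before returning the result over MCP. A sketch of that approach, not PyExecutor's actual serializer:

```python
import json
from datetime import date
from decimal import Decimal

def to_mcp_result(output: object) -> str:
    """Serialize workflow output to a JSON string, stringifying anything
    json can't encode natively via the default=str fallback."""
    return json.dumps(output, default=str)

result = to_mcp_result({
    "order_id": "ORD-98765",
    "total_processed": Decimal("99.99"),
    "delivery_estimate": date(2024, 1, 15),
})
```

The trade-off is that type information is lost on the wire; if the agent needs to do arithmetic on a value, document its format in the tool description.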

⚠️ Error Handling

Handling Workflow Failures

# Error responses are automatically formatted for MCP

Success Case:
{
  "status": "success",
  "output": {...}
}

Failure Case (workflow failed):
{
  "status": "failed",
  "error": "Payment processing failed",
  "error_code": "PAYMENT_DECLINED",
  "details": {
    "failed_step": "Process Payment",
    "reason": "Insufficient funds"
  }
}

Timeout Case:
{
  "status": "timeout",
  "error": "Workflow did not complete within 300 seconds",
  "job_id": "job_12345",
  "message": "Check job status at: api/jobs/job_12345"
}

Permission Error:
{
  "status": "error",
  "error": "Unauthorized",
  "message": "API key does not have permission to execute this workflow"
}
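Client code calling the MCP server can dispatch on the `status` field of the payloads above. A minimal sketch (the helper name is illustrative):

```python
def summarize_result(result: dict) -> str:
    """Turn a workflow result payload into a one-line summary for the agent."""
    status = result.get("status")
    if status == "success":
        return f"Workflow succeeded: {result.get('output', {})}"
    if status == "failed":
        return (f"Workflow failed ({result.get('error_code', 'UNKNOWN')}): "
                f"{result.get('error')}")
    if status == "timeout":
        return f"Workflow timed out; check job {result.get('job_id')} later"
    # Permission or other errors
    return f"Error: {result.get('message', result.get('error', 'unknown error'))}"

summary = summarize_result({
    "status": "failed",
    "error": "Payment processing failed",
    "error_code": "PAYMENT_DECLINED",
})
```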

AI Agent Error Handling Strategy

# Example: Claude responding to workflow errors

SYSTEM_PROMPT = """
You are an AI agent with access to PyExecutor workflows.
When executing workflows:
1. If status == 'success': Report the output to the user
2. If status == 'failed': Explain the error and suggest retry with different params
3. If status == 'timeout': Offer to check job status later
4. If status == 'error': Report permission/auth issues and suggest contacting admin

Always maintain conversation context about what workflows were attempted."""

# In the Claude SDK:
response = client.messages.create(
    model="claude-3-5-sonnet-20241022",
    system=SYSTEM_PROMPT,
    tools=workflow_tools,  # list of workflow tool definitions
    messages=[...]
)

🔒 Security Considerations

Authentication & Authorization

  • API Key Required: All MCP → PyExecutor calls require valid API key
  • Workflow Permissions: MCP server respects workflow access control
  • Rate Limiting: MCP server enforces rate limits per API key
  • Audit Logging: All MCP-triggered executions are logged with agent/source
  • Input Validation: Context variables validated before passing to workflows

Input Sanitization

# MCP Server input validation example

def validate_context(context: dict) -> dict:
    """Sanitize workflow context"""
    sanitized = {}
    
    for key, value in context.items():
        # Only allow fields the workflow declares; the allow-list
        # lookup here is a placeholder for your own implementation
        if key not in get_workflow_expected_fields():
            raise ValueError(f"Unexpected context field: {key}")
        
        # Check value types
        if not isinstance(value, (str, int, float, bool, list, dict)):
            raise ValueError(f"Invalid type for {key}")
        
        # Limit string lengths
        if isinstance(value, str) and len(value) > 10000:
            raise ValueError(f"Value too long for {key}")
        
        sanitized[key] = value
    
    return sanitized

Best Practices

  • ✓ Use separate API keys for different MCP integrations
  • ✓ Rotate API keys regularly
  • ✓ Limit workflows that are exposed via MCP (don't expose dangerous ones)
  • ✓ Monitor MCP server logs for suspicious activity
  • ✓ Use HTTPS/WSS in production (not HTTP/WS)
  • ✓ Set reasonable timeout values to prevent resource exhaustion
  • ✓ Implement rate limiting per API key or per workflow
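One simple way to implement the per-key rate limiting above is a token bucket. A sketch, with the bucket size and window chosen arbitrarily (`sk_demo` is a made-up key):

```python
import time

class TokenBucket:
    """Per-key token bucket: allow up to `rate` calls per `per` seconds."""

    def __init__(self, rate: int, per: float):
        self.rate, self.per = rate, per
        self.tokens: dict[str, float] = {}
        self.last: dict[str, float] = {}

    def allow(self, api_key: str) -> bool:
        now = time.monotonic()
        elapsed = now - self.last.get(api_key, now)
        self.last[api_key] = now
        # Refill proportionally to elapsed time, capped at the bucket size
        tokens = min(self.rate,
                     self.tokens.get(api_key, self.rate) + elapsed * self.rate / self.per)
        if tokens < 1:
            self.tokens[api_key] = tokens
            return False
        self.tokens[api_key] = tokens - 1
        return True

bucket = TokenBucket(rate=5, per=60.0)
results = [bucket.allow("sk_demo") for _ in range(6)]
```

In production you would back this with Redis or similar so limits hold across server instances.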

🌐 Multi-Language & Admin via MCP

Multi-Language Awareness

When an AI agent executes a workflow via MCP, script steps can run in any of the 5 supported languages (Python, JavaScript, PowerShell, Bash, Go). The agent doesn't need to manage this — the engine selects the correct runtime based on the script's language setting.

Admin Operations

Additional platform endpoints available via the REST API (callable by agents with appropriate API keys):

  • POST /api/mcp/reload/ — Reload MCP tool discovery from workflow definitions
  • GET /api/analytics/stats/ — Fetch dashboard analytics for reporting
  • GET /api/templates/featured/ — Browse template gallery for quick workflow setup
  • POST /api/templates/{slug}/clone-and-configure/ — Clone and configure a template

🚀 Deployment

Docker Deployment

# Dockerfile for MCP server
FROM python:3.12-slim

WORKDIR /app

COPY requirements.txt .
RUN pip install -r requirements.txt

COPY mcp_server.py .

ENV PYEXEC_API_URL=http://backend:8000
# Pass PYEXEC_API_KEY at runtime (docker run -e / compose environment);
# don't bake secrets into the image with ENV

EXPOSE 3000

CMD ["python", "mcp_server.py"]

Docker Compose

# docker-compose.yml
services:
  backend:
    image: pyexec-backend
    ports:
      - "8000:8000"
    environment:
      DATABASE_URL: postgresql://...
  
  mcp-server:
    image: pyexec-mcp
    ports:
      - "3000:3000"
    environment:
      PYEXEC_API_URL: http://backend:8000
      PYEXEC_API_KEY: ${PYEXEC_API_KEY}
    depends_on:
      - backend

Production Checklist

  • ☐ Configure HTTPS/TLS for MCP server
  • ☐ Set up monitoring and alerting
  • ☐ Configure rate limiting and quotas
  • ☐ Enable audit logging
  • ☐ Set reasonable timeout values
  • ☐ Create backup API keys
  • ☐ Test failover scenarios
  • ☐ Document integration for ops team

💡 Real-World Examples

Example 1: AI-Powered Customer Support

# Claude helps a customer run a bulk data import

User: "I need to process a bulk import of 1000 customer records"

Claude Response & Actions:
1. "I can help you with that bulk import. Let me prepare the import workflow."
2. [CALLS: execute_bulk_import with 1000 records]
3. "Import started! Processing your 1000 records... This typically takes 5-10 minutes."
4. [POLLS job status]
5. "✅ Import complete! Successfully processed 995 records. 
   5 failed due to invalid email formats. I can help fix those."

Example 2: Automated DevOps Task

# Claude manages infrastructure workflow

User: "Deploy the latest version and run health checks"

Claude Response & Actions:
1. "I'll deploy the application and verify system health."
2. [CALLS: execute_deployment with version=latest]
3. [CALLS: execute_health_check after deployment completes]
4. "✅ Deployment successful! All health checks passed.
   - CPU: 45%
   - Memory: 62%
   - API latency: 125ms
   - Database connections: 24/50"

Example 3: Data Analysis & Reporting

# Claude generates insightful analysis

User: "What were our sales trends last quarter?"

Claude Response & Actions:
1. "Let me fetch the quarterly sales data and analyze it."
2. [CALLS: execute_quarterly_report with quarter=Q4]
3. "Based on the data analysis:
   📈 Revenue grew 23% YoY
   📊 Top products: Product A (+45%), Product B (+32%)
   ⚠️  Product C declined (-15%) - recommend review
   Would you like me to create an action plan?"