MCP Code Execution

Autonomous MCP Tool Execution with 98% Token Savings

Transform any MCP server into executable, type-safe TypeScript tools using a progressive loading pattern. Load only what you need, when you need it.



Why MCP Code Execution?

The Problem: Traditional MCP integration loads ALL tools from a server (~30,000 tokens), even when you only need one or two. This wastes context window space and slows down AI agents.

The Solution: Progressive loading generates one TypeScript file per tool (~500-1,500 tokens each). AI agents discover and load only what they need via simple ls and cat commands.

The Result: 98% token savings + autonomous execution + type safety

Inspired by Anthropic's engineering blog post on Code Execution with MCP.


Key Features

🚀 Autonomous Tool Execution

Generated TypeScript files are directly executable via the Node.js CLI. No middleware, no proxies; just run the tool:

# Generate GitHub tools
mcp-execution-cli generate --from-config github

# Execute directly from command line
node ~/.claude/servers/github/createIssue.ts \
  --repo="owner/repo" \
  --title="Bug report" \
  --body="Description"

AI agents can now execute MCP tools autonomously by generating and running shell commands.
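
For example, an agent-side harness could do this discovery and execution with Node's built-in modules. The sketch below is illustrative only; the harness itself is not part of this project, and the server directory and tool names follow the GitHub example above:

// agent-harness.ts — illustrative sketch, not part of the generated output
import { execFile } from 'node:child_process';
import { readdir } from 'node:fs/promises';
import { homedir } from 'node:os';
import { join } from 'node:path';
import { promisify } from 'node:util';

const execFileAsync = promisify(execFile);
const serverDir = join(homedir(), '.claude', 'servers', 'github');

async function main(): Promise<void> {
  // Discovery: equivalent to `ls ~/.claude/servers/github/`
  const tools = (await readdir(serverDir)).filter((f) => f.endsWith('.ts'));
  console.log('Available tools:', tools);

  // Execution: run one generated tool with --key=value arguments
  const { stdout } = await execFileAsync('node', [
    join(serverDir, 'createIssue.ts'),
    '--repo=owner/repo',
    '--title=Bug report',
    '--body=Description',
  ]);
  console.log(stdout);
}

main().catch(console.error);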

📊 98% Token Savings

Progressive loading pattern dramatically reduces context usage:

# Traditional: Load everything (30,000 tokens)
cat ~/.claude/servers/github/index.ts

# Progressive: Load only what you need (500-1,500 tokens)
cat ~/.claude/servers/github/createIssue.ts
# Savings: 98%! 🎉

Load 1 tool, 5 tools, or all tools; you choose the tradeoff between context usage and capabilities.

⚡ Lightning Fast

Code generation is measured in milliseconds:

| Operation  | Target | Achieved | Speedup      |
|------------|--------|----------|--------------|
| 10 tools   | <100ms | 0.19ms   | 526x faster  |
| 50 tools   | <20ms  | 0.97ms   | 20.6x faster |
| VFS export | <10ms  | 1.2ms    | 8.3x faster  |

🔒 Type-Safe by Design

Every tool gets full TypeScript interfaces generated from MCP JSON schemas:

export async function createIssue(params: CreateIssueParams): Promise<CreateIssueResult>;

export interface CreateIssueParams {
  repo: string;           // Required (no ?)
  title: string;          // Required (no ?)
  body?: string;          // Optional (has ?)
  labels?: string[];      // Optional (has ?)
  assignees?: string[];   // Optional (has ?)
}

export interface CreateIssueResult {
  number: number;
  url: string;
  state: "open" | "closed";
}

AI agents get IntelliSense-quality parameter documentation in the generated files.
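
Because the generated file also exports the typed function, the same tool can be called programmatically instead of via the CLI. A minimal sketch, assuming the generated module layout shown in this README:

// call-create-issue.ts — sketch; the import path assumes createIssue.ts sits alongside this file
import { createIssue, type CreateIssueParams } from './createIssue';

const params: CreateIssueParams = {
  repo: 'owner/repo',
  title: 'Bug report',
  // body, labels, and assignees are optional and can be omitted
};

const result = await createIssue(params);
console.log(`Created issue #${result.number} (${result.state}): ${result.url}`);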

🔌 100% MCP Compatible

Works with all existing MCP servers via the official rmcp SDK. No custom protocols, no vendor lock-in.

Supported transports:

  • stdio (most common): npx server-name
  • HTTP: REST API endpoints
  • SSE: Server-Sent Events streaming
  • Docker: Containerized servers

🏗️ Production Ready

  • 550 tests passing (100% pass rate)
  • Microsoft Rust Guidelines compliant
  • 100% documentation coverage
  • Multi-platform releases: Linux, macOS, Windows (x86_64 + ARM64)
  • CI/CD with code coverage via GitHub Actions + Codecov

Quick Start

Installation

Pre-built binaries (recommended):

# Download latest release for your platform
# Linux (x86_64)
curl -L https://github.com/bug-ops/mcp-execution/releases/latest/download/mcp-execution-cli-linux-amd64.tar.gz | tar xz

# macOS (ARM64)
curl -L https://github.com/bug-ops/mcp-execution/releases/latest/download/mcp-execution-cli-macos-arm64.tar.gz | tar xz

# Windows (x86_64)
# Download from: https://github.com/bug-ops/mcp-execution/releases/latest

From source:

git clone https://github.com/bug-ops/mcp-execution
cd mcp-execution
cargo install --path crates/mcp-cli

Basic Usage

1. Configure MCP Server (mcp.json)

Create ~/.config/claude/mcp.json with your MCP servers:

{
  "mcpServers": {
    "github": {
      "command": "npx",
      "args": ["-y", "@modelcontextprotocol/server-github"],
      "env": {
        "GITHUB_TOKEN": "ghp_your_token_here"
      }
    },
    "postgres": {
      "command": "npx",
      "args": ["-y", "@modelcontextprotocol/server-postgres"],
      "env": {
        "DATABASE_URL": "postgresql://user:pass@localhost/db"
      }
    }
  }
}
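
The shape of this file can be summarized with a small TypeScript type (a descriptive sketch of the stdio entries shown above; HTTP and SSE transports configured via CLI flags are covered later in the CLI reference):

// mcp-config-shape.ts — descriptive sketch of the mcp.json structure above
interface McpServerEntry {
  command: string;               // e.g. "npx"
  args?: string[];               // e.g. ["-y", "@modelcontextprotocol/server-github"]
  env?: Record<string, string>;  // e.g. { GITHUB_TOKEN: "ghp_..." }
}

interface McpConfig {
  mcpServers: Record<string, McpServerEntry>;  // keyed by server name: "github", "postgres", ...
}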

2. Generate TypeScript Tools

# Generate from config (simplest)
mcp-execution-cli generate --from-config github

# Or specify server manually
mcp-execution-cli generate npx -y @modelcontextprotocol/server-github \
  --env GITHUB_TOKEN=ghp_xxx

Output (written to ~/.claude/servers/github/):

github/
├── createIssue.ts          # One tool file
├── updateIssue.ts          # Another tool file
├── getIssue.ts             # Yet another...
├── ... (42 more tools)     # All tools as separate files
├── index.ts                # Re-exports all (for bulk loading)
└── _runtime/
    └── mcp-bridge.ts       # Runtime connection manager

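The _runtime/mcp-bridge.ts module is what every generated tool file imports to reach the live MCP server. Its exact contents are generated for you, but its call surface, as used by the tool files shown later in this README, is roughly the following (an assumed signature, not the literal generated source):

// Assumed call surface of _runtime/mcp-bridge.ts — connects to the configured
// MCP server and forwards a single tool call, returning the parsed result.
export declare function callMCPTool<TResult = any>(
  serverId: string,  // server name from mcp.json, e.g. "github"
  toolName: string,  // MCP tool name, e.g. "createIssue"
  params: unknown,   // tool parameters (already typed in the calling tool file)
): Promise<TResult>;
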
3. Discover Available Tools (Progressive Loading)

# List all available tools
ls ~/.claude/servers/github/
# createIssue.ts  updateIssue.ts  getIssue.ts  listIssues.ts  ...

# Load only the tool you need (98% token savings!)
cat ~/.claude/servers/github/createIssue.ts

4. Execute Tools Autonomously

From command line:

# Execute tool directly
node ~/.claude/servers/github/createIssue.ts \
  --repo="bug-ops/mcp-execution" \
  --title="Add ARM64 support" \
  --body="We should support ARM64 architecture"

From AI agent (e.g., Claude Code):

// Agent discovers available tools
const tools = await exec('ls ~/.claude/servers/github/');

// Agent loads specific tool definition
const toolCode = await readFile('~/.claude/servers/github/createIssue.ts');

// Agent executes tool autonomously
await exec(`node ~/.claude/servers/github/createIssue.ts --repo="..." --title="..."`);

See examples/progressive-loading-usage.md for the complete tutorial.


How It Works

Architecture: 6 Workspace Crates

mcp-execution/
├── mcp-core/              # Foundation: types, traits, errors
│   ├── ServerId           # Strong-typed server identifier
│   ├── ToolName           # Strong-typed tool name
│   └── SessionId          # Strong-typed session ID
│
├── mcp-introspector/      # MCP server analysis
│   ├── Connect via rmcp   # Official Rust MCP SDK
│   ├── Extract schemas    # Tool definitions + parameters
│   └── Cache results      # LRU cache for performance
│
├── mcp-codegen/           # TypeScript code generation
│   ├── Handlebars         # Template engine
│   ├── JSON Schema        # Convert to TypeScript types
│   └── Progressive        # One file per tool pattern
│
├── mcp-files/             # Virtual filesystem
│   ├── In-memory VFS      # Fast code organization
│   ├── Export to disk     # Write to ~/.claude/servers/
│   └── Directory mgmt     # Handle nested structures
│
├── mcp-server/            # MCP server for generation
│   ├── introspect_server  # Discover tools from MCP server
│   ├── save_categorized   # Generate with categorization
│   ├── list_generated     # List generated servers
│   ├── generate_skill     # Generate skill from tool files
│   └── save_skill         # Save generated skill to file
│
└── mcp-cli/               # Command-line interface
    ├── generate           # Main command
    ├── setup              # Initialize configuration
    ├── introspect         # Debug server info
    └── stats              # View cache statistics

Dependency Graph (no circular dependencies):

mcp-cli → {mcp-codegen, mcp-introspector, mcp-files, mcp-core}
mcp-server → {mcp-codegen, mcp-introspector, mcp-files, mcp-core}
mcp-codegen → {mcp-files, mcp-core}
mcp-introspector → {rmcp, mcp-core}
mcp-files → mcp-core

Progressive Loading Pattern

Traditional MCP integration:

AI Agent → Load ALL tools → 30,000 tokens → Limited context for actual work

Progressive loading:

AI Agent → ls (discover) → cat specific tool → 500-1,500 tokens → 98% context saved!

Example workflow:

  1. Discovery: ls ~/.claude/servers/github/ shows all available tools
  2. Selection: Agent chooses createIssue.ts based on task
  3. Loading: cat ~/.claude/servers/github/createIssue.ts loads only that tool
  4. Execution: Agent runs node createIssue.ts --repo=... --title=...

Token comparison (a quick calculation is sketched after this list):

  • Load 1 tool: ~500-1,500 tokens (vs ~30,000) = 98% savings
  • Load 5 tools: ~2,500-7,500 tokens (vs ~30,000) = 95% savings
  • Load all tools: ~30,000 tokens (via index.ts if needed)

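The ~98% headline follows from simple arithmetic over these approximate counts (a rough illustration, using the lower end of the per-tool range):

// token-savings.ts — rough illustration with the approximate figures quoted above
const ALL_TOOLS_TOKENS = 30_000;  // traditional: every tool loaded via index.ts
const ONE_TOOL_TOKENS = 500;      // lower end of the ~500-1,500 range per tool file

const saved = (ALL_TOOLS_TOKENS - ONE_TOOL_TOKENS) / ALL_TOOLS_TOKENS;
console.log(`Loading one tool uses ~${ONE_TOOL_TOKENS} tokens, saving ~${(saved * 100).toFixed(0)}%`);
// => Loading one tool uses ~500 tokens, saving ~98%
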
Generated TypeScript Structure

Each tool file includes:

#!/usr/bin/env node
/**
 * MCP Tool: createIssue
 *
 * Creates a new issue in a GitHub repository.
 *
 * @example
 * ```bash
 * node createIssue.ts --repo="owner/repo" --title="Bug report"
 * ```
 */

import { callMCPTool } from './_runtime/mcp-bridge';

/**
 * Parameters for createIssue tool
 */
export interface CreateIssueParams {
  /** Repository in format "owner/repo" */
  repo: string;

  /** Issue title */
  title: string;

  /** Issue description (optional) */
  body?: string;

  /** Labels to apply (optional) */
  labels?: string[];
}

/**
 * Result from createIssue tool
 */
export interface CreateIssueResult {
  number: number;
  url: string;
  state: "open" | "closed";
}

/**
 * Creates a new issue in a GitHub repository
 */
export async function createIssue(params: CreateIssueParams): Promise<CreateIssueResult> {
  return callMCPTool('github', 'createIssue', params);
}

// CLI execution support
if (import.meta.url === `file://${process.argv[1]}`) {
  const args = process.argv.slice(2);
  // Parse --key=value arguments...
  createIssue(params).then(console.log).catch(console.error);
}

Key benefits:

  • Executable: Shebang + CLI parsing = direct execution (a sketch of that parsing follows below)
  • Type-safe: Full TypeScript interfaces from JSON schemas
  • Documented: JSDoc comments with examples
  • Small: ~50-150 lines per tool (vs ~30,000 tokens when loading all tools)

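The "// Parse --key=value arguments..." comment in the file above stands in for the generated CLI parsing. A minimal sketch of that style of parsing (illustrative only, not the exact generated code):

// Illustrative --key=value parsing; the generated files may differ in detail.
function parseArgs(argv: string[]): Record<string, string> {
  const parsed: Record<string, string> = {};
  for (const arg of argv) {
    const match = arg.match(/^--([^=]+)=(.*)$/);  // accepts --key=value
    if (match) {
      parsed[match[1]] = match[2];
    }
  }
  return parsed;
}

// node createIssue.ts --repo="owner/repo" --title="Bug report"
const params = parseArgs(process.argv.slice(2));
// params => { repo: 'owner/repo', title: 'Bug report' }
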
Integration with Claude Code

mcp-execution generates TypeScript files with a progressive loading pattern that Claude Code (or any AI agent) can discover and use autonomously.

How Claude Code Uses It

  1. Setup (one-time):

    # Generate tools for your MCP servers
    mcp-execution-cli generate --from-config github
    mcp-execution-cli generate --from-config postgres
  2. Discovery (runtime):

    # Claude Code lists available servers
    ls ~/.claude/servers/
    # github/  postgres/  slack/
    
    # Claude Code lists tools in server
    ls ~/.claude/servers/github/
    # createIssue.ts  updateIssue.ts  getIssue.ts  ...
  3. Loading (as needed):

    # Claude Code loads only the tool it needs
    cat ~/.claude/servers/github/createIssue.ts
    # (500-1,500 tokens instead of 30,000)
  4. Execution (autonomous):

    # Claude Code executes tool directly
    node ~/.claude/servers/github/createIssue.ts \
      --repo="bug-ops/mcp-execution" \
      --title="Add feature X"

Claude Code Skill Integration

mcp-execution can generate instruction skills that teach Claude Code how to discover and use progressive loading tools autonomously.

What is a Skill?

Skills are instruction files (SKILL.md) that guide Claude Code on using specific tools or patterns. Skills are generated dynamically based on your actual MCP server tools.

Generating Skills

Use the mcp-server MCP tools to generate skills:

# 1. Start the MCP server
mcp-execution

# 2. Claude uses generate_skill tool to scan TypeScript files
# 3. Claude uses save_skill tool to write SKILL.md

Or generate TypeScript files first, then create skills:

# Generate TypeScript tool files
mcp-execution-cli generate --from-config github

# Skills are created in ~/.claude/skills/{server_id}/SKILL.md

What Generated Skills Include

Each skill provides guidance on:

✅ Discovery Pattern: How to list available MCP servers and tools via ls
✅ Progressive Loading: When to load individual tools vs bulk loading
✅ Token Optimization: Choosing the right loading strategy for context efficiency
✅ Autonomous Execution: How to execute tools directly via Node.js CLI
✅ Category Organization: Tools grouped by function (e.g., issues, repos)
✅ Keyword Search: Find tools by keyword using grep

Example: Claude Code in Action

With skills installed, Claude Code automatically:

  1. Discovers tools: ls ~/.claude/servers/github/ to see available tools
  2. Loads efficiently: Reads only createIssue.ts (500 tokens) instead of all tools (30,000 tokens)
  3. Executes autonomously: Runs node createIssue.ts --repo=... --title=...
  4. Optimizes context: Saves 98% of token budget for actual work

Note: Skills are a Claude Code feature. Other AI agents can follow similar patterns using the progressive loading documentation.


CLI Reference

generate - Generate TypeScript Tools

Generate type-safe TypeScript files from MCP servers using the progressive loading pattern.

# From mcp.json config (recommended)
mcp-execution-cli generate --from-config github

# Stdio transport (npx)
mcp-execution-cli generate npx -y @modelcontextprotocol/server-github \
  --env GITHUB_TOKEN=ghp_xxx

# HTTP transport
mcp-execution-cli generate --http https://api.example.com/mcp \
  --header "Authorization=Bearer token"

# SSE transport
mcp-execution-cli generate --sse https://api.example.com/mcp/events

# Docker transport
mcp-execution-cli generate docker \
  --arg=run --arg=-i --arg=--rm \
  --arg=ghcr.io/org/mcp-server \
  --env=API_KEY=xxx

# Custom output directory
mcp-execution-cli generate github --progressive-output /custom/path

# Custom server name (affects output directory)
mcp-execution-cli generate npx -y @server/package --name custom-name
# Output: ~/.claude/servers/custom-name/ (not ~/.claude/servers/npx/)

Options:

  • --from-config <name>: Load configuration from mcp.json
  • --http <url>: Use HTTP transport
  • --sse <url>: Use SSE transport
  • --header <key=value>: Add HTTP headers (repeatable)
  • --env <key=value>: Set environment variables (repeatable)
  • --arg <value>: Add command arguments (repeatable)
  • --progressive-output <dir>: Custom output directory
  • --name <name>: Custom server name (affects directory)

setup - Initialize Configuration

Create default mcp.json configuration file.

# Interactive setup
mcp-execution-cli setup

# Output: ~/.config/claude/mcp.json created

introspect - Debug Server Info

Inspect MCP server capabilities and available tools.

# From config
mcp-execution-cli introspect --from-config github

# Manual
mcp-execution-cli introspect npx -y @modelcontextprotocol/server-github

stats - View Cache Statistics

Show cache performance metrics.

mcp-execution-cli stats

# Output:
# Cache Statistics:
#   Entries: 42
#   Hits: 156
#   Misses: 12
#   Hit Rate: 92.9%

completions - Shell Completions

Generate shell completion scripts.

# Bash
mcp-execution-cli completions bash > /etc/bash_completion.d/mcp-execution-cli

# Zsh
mcp-execution-cli completions zsh > /usr/local/share/zsh/site-functions/_mcp-execution-cli

# Fish
mcp-execution-cli completions fish > ~/.config/fish/completions/mcp-execution-cli.fish

# PowerShell
mcp-execution-cli completions powershell > mcp-execution-cli.ps1

Performance Benchmarks

All benchmarks run on Apple M1 Pro (2021).

Code Generation Speed

| Operation           | Target | Achieved | Speedup      |
|---------------------|--------|----------|--------------|
| 10 tools            | <100ms | 0.19ms   | 526x faster  |
| 50 tools            | <20ms  | 0.97ms   | 20.6x faster |
| VFS export          | <10ms  | 1.2ms    | 8.3x faster  |
| Memory (1000 tools) | <256MB | ~2MB     | 128x better  |

Token Savings

| Scenario       | Traditional    | Progressive          | Savings |
|----------------|----------------|----------------------|---------|
| Load 1 tool    | ~30,000 tokens | ~500-1,500 tokens    | 98%     |
| Load 5 tools   | ~30,000 tokens | ~2,500-7,500 tokens  | 95%     |
| Load 10 tools  | ~30,000 tokens | ~5,000-15,000 tokens | 90%     |
| Load all tools | ~30,000 tokens | ~30,000 tokens       | 0% (but still type-safe) |

Takeaway: Progressive loading shines when you need only a few tools. For bulk operations, index.ts is available.

Run Benchmarks Locally

# Code generation benchmarks
cargo bench --package mcp-codegen

# VFS benchmarks
cargo bench --package mcp-files

# Full workspace benchmarks
cargo bench --workspace

Development

Prerequisites

  • Rust 1.89+ (Edition 2024, MSRV 1.89)
  • Tokio 1.48 async runtime
  • Optional: cargo-nextest for faster tests (cargo install cargo-nextest)

Building from Source

# Clone repository
git clone https://github.com/bug-ops/mcp-execution
cd mcp-execution

# Check workspace
cargo check --workspace

# Run tests (with nextest, faster)
cargo nextest run --workspace

# Run tests (without nextest)
cargo test --workspace

# Run benchmarks
cargo bench --workspace

# Build release binary
cargo build --release -p mcp-execution-cli

# Binary location: target/release/mcp-execution-cli

Project Guidelines

All development follows Microsoft Rust Guidelines:

✅ Strong types over primitives: ServerId, ToolName, SessionId instead of String
✅ Error handling: thiserror for libraries, anyhow for CLI only
✅ Thread safety: All public types are Send + Sync
✅ Documentation: 100% coverage with examples and error cases
✅ No unsafe: Zero unsafe blocks in the codebase
✅ Testing: 550 tests with 100% pass rate

Running Tests

# All tests (with nextest)
cargo nextest run --workspace

# All tests (without nextest)
cargo test --workspace

# Specific crate
cargo nextest run -p mcp-core

# Doc tests
cargo test --doc --workspace

# With coverage (requires tarpaulin)
cargo tarpaulin --workspace --out Html

Code Quality Checks

# Linting
cargo clippy --workspace -- -D warnings

# Formatting (requires nightly)
cargo +nightly fmt --workspace --check

# Documentation
cargo doc --workspace --no-deps --open

# Security audit
cargo audit

Security

Code Generation Safety

  • No code execution during generation: Generated TypeScript is static, no eval/exec
  • Input validation: All MCP server parameters validated before use
  • Path safety: Output paths validated to prevent directory traversal
  • Template security: Handlebars templates escape all user input

Best Practices

  1. Review generated code: Always inspect TypeScript files before execution
  2. Keep updated: Update mcp-execution-cli regularly for security patches
  3. Use environment variables: Never hardcode tokens in mcp.json (use env vars)
  4. Validate MCP servers: Only generate from trusted MCP server sources
  5. Principle of least privilege: Grant MCP servers only necessary permissions

Contributing

Contributions welcome! Please read CONTRIBUTING.md first.

Areas We Need Help

  • Documentation: Examples, tutorials, use cases
  • Testing: More integration tests, edge cases
  • Platform support: Testing on various OS/architectures
  • MCP servers: Report compatibility issues
  • Performance: Optimization ideas and benchmarks

Development Process

  1. Fork the repository
  2. Create a feature branch (git checkout -b feature/amazing)
  3. Follow Microsoft Rust Guidelines
  4. Write tests for new functionality
  5. Run cargo test --workspace and cargo clippy --workspace
  6. Submit a pull request

License

Licensed under either of:

at your option.


Resources

Official Documentation

External Resources

Community
