Convert any MCP (Model Context Protocol) server into TypeScript APIs that LLMs can write code against.
Based on Cloudflare's Code Mode: LLMs write better code than they write tool-calling JSON.
- Node.js 20, 22, or 23
- Not compatible with Node.js 24 or later
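If you want to fail fast on an unsupported runtime, a small guard like this works. It is a hypothetical helper, not something mcp-codemode ships:

```javascript
// Hypothetical version guard (not part of mcp-codemode).
// Parses the major version out of a Node.js version string and checks
// it against the supported range (20 through 23).
function isSupportedNode(version) {
  const major = Number(version.split('.')[0]);
  return major >= 20 && major <= 23;
}

if (!isSupportedNode(process.versions.node)) {
  console.warn(`Node.js ${process.versions.node} is outside the supported range`);
}
```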
```bash
npm install mcp-codemode
```

```javascript
const MCPCodeMode = require('mcp-codemode');

// 1. Connect to any MCP server
const mcp = new MCPCodeMode({
  command: 'npx',
  args: ['@modelcontextprotocol/server-filesystem', '/path']
});
await mcp.connect();

// 2. Get TypeScript context for your LLM
const context = mcp.getContext();

// 3. Your LLM writes code (you handle this part)
const code = await yourLLM(context + "List all files");

// 4. Execute the code securely
const result = await mcp.execute(code);
```

The library has just 4 main methods:
- `connect()` - Connect to an MCP server
- `getContext()` - Get TypeScript definitions for your LLM
- `execute(code)` - Run code in a secure sandbox
- `MCPCodeMode.extractCode(response)` - Extract code from LLM responses
This library doesn't force any LLM choice. Use whatever you want:
**OpenAI:**

```javascript
const response = await openai.chat.completions.create({
  model: 'gpt-4o',
  messages: [{ role: 'user', content: context + prompt }]
});
const code = MCPCodeMode.extractCode(response.choices[0].message.content);
```

**Anthropic Claude:**

```javascript
const response = await anthropic.messages.create({
  model: 'claude-3-5-sonnet-20241022',
  max_tokens: 1024,
  messages: [{ role: 'user', content: context + prompt }]
});
const code = MCPCodeMode.extractCode(response.content[0].text);
```

**Ollama (local):**

```javascript
const response = await ollama.generate({
  model: 'codellama',
  prompt: context + prompt
});
const code = MCPCodeMode.extractCode(response.response);
```

- Connects to any MCP server and discovers available tools
- Generates TypeScript type definitions from tool schemas
- Your LLM writes code using these types
- Code executes in a secure isolated-vm sandbox
- Returns results or mapped errors
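To make the flow concrete, here is a sketch of the kind of definitions the context step might produce for a filesystem server. The tool names and argument shapes here are illustrative only; mcp-codemode's actual output is derived from the connected server's tool schemas at runtime:

```typescript
// Illustrative sketch: generated declarations for a filesystem MCP server.
// Real names and shapes depend on the server's tool schemas.
interface ReadFileArgs { path: string; }
interface ListDirectoryArgs { path: string; }
interface DirEntry { name: string; type: 'file' | 'directory'; }

declare function readFile(args: ReadFileArgs): Promise<string>;
declare function listDirectory(args: ListDirectoryArgs): Promise<DirEntry[]>;
```

The LLM then writes ordinary async code against these functions, rather than emitting one tool-call JSON object per step.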
See the /examples folder for complete examples with:
- OpenAI (`examples/openai.js`)
- Anthropic Claude (`examples/anthropic.js`)
- Custom/Local LLMs (`examples/custom.js`)
```javascript
new MCPCodeMode(serverConfig, {
  validateTypes: true, // Validate TypeScript before execution
  memoryLimit: 128,    // Sandbox memory limit (MB)
  timeout: 5000        // Execution timeout (ms)
});
```

LLMs have seen millions of code examples but relatively few tool-calling examples. When you let them write code:
- Higher-quality output
- Complex logic flows naturally
- Fewer errors
- Multiple tools compose easily
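The composition point is easiest to see with an example. In a tool-calling loop, each search or read is a separate model round trip; in code, it is one loop with an early exit. The wrappers here (`searchIssues`, `getComments`) are hypothetical stand-ins for whatever functions the connected server's tools generate:

```javascript
// Sketch of the kind of code an LLM can write when tools are functions.
// searchIssues/getComments are illustrative stand-ins, not real generated APIs.
async function firstUnansweredBug(searchIssues, getComments) {
  const issues = await searchIssues({ label: 'bug' });
  for (const issue of issues) {
    const comments = await getComments({ id: issue.id });
    if (comments.length === 0) {
      return issue; // stop early: no extra tool calls, no extra LLM turns
    }
  }
  return null;
}
```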
MIT