Volcano Unleashed: Kong's TypeScript SDK for MCP-Native Multi-LLM Agents
What Volcano is and why it matters
Volcano is an open-source TypeScript SDK from Kong for building production-ready AI agents that orchestrate multi-step workflows across multiple LLM providers. It treats the Model Context Protocol (MCP) as a first-class interface, automating tool discovery, invocation, and context passing so developers write far less glue code while retaining enterprise controls such as OAuth 2.1 and observability.
A compact, chainable API
Volcano exposes a concise, chainable API (.then(…).run()) that passes intermediate context between steps and allows switching LLMs per step — for example, planning with one model and executing with another. The SDK is built for real-world use and includes production features such as retries, per-step timeouts, connection pooling for MCP servers, OAuth 2.1 support, and OpenTelemetry traces/metrics for distributed observability.
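As a rough sketch of how resilience options might attach to a step: the option names retries and timeoutMs below are assumptions for illustration and may differ from Volcano's actual API; only agent, llmOpenAI, mcp, .then, and .run come from Kong's published example.

import { agent, llmOpenAI, mcp } from "volcano-ai";

const llm = llmOpenAI({ model: "gpt-5-mini", apiKey: process.env.OPENAI_API_KEY! });
const database = mcp("https://api.company.com/database/mcp");

await agent({ llm })
  .then({
    prompt: "Analyze last week's sales data",
    mcps: [database],
    retries: 3,        // assumed option: retry a failed step up to three times
    timeoutMs: 30_000, // assumed option: abort the step after 30 seconds
  })
  .run();

Keeping resilience settings per step rather than global fits the chainable design: each step can retry or time out independently without affecting the rest of the workflow.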
Example: a nine-line workflow
The SDK emphasizes brevity: the same workflow that would usually require 100+ lines of integration code can be expressed in a handful of lines. The original example from Kong shows how to wire two LLMs and two MCP servers into a single agent flow:
import { agent, llmOpenAI, llmAnthropic, mcp } from "volcano-ai";
// Setup: two LLMs, two MCP servers
const planner = llmOpenAI({ model: "gpt-5-mini", apiKey: process.env.OPENAI_API_KEY! });
const executor = llmAnthropic({ model: "claude-4.5-sonnet", apiKey: process.env.ANTHROPIC_API_KEY! });
const database = mcp("https://api.company.com/database/mcp");
const slack = mcp("https://api.company.com/slack/mcp");
// One workflow
await agent({ llm: planner })
  .then({
    prompt: "Analyze last week's sales data",
    mcps: [database] // Auto-discovers and calls the right tools
  })
  .then({
    llm: executor, // Switch to Claude
    prompt: "Write an executive summary"
  })
  .then({
    prompt: "Post the summary to #executives",
    mcps: [slack]
  })
  .run();
Core features
- Chainable API to compose multi-step workflows with context flowing between steps
- MCP-native tool use: pass MCP servers and let the SDK auto-discover and invoke tools
- Multi-provider LLM support: mix models inside one workflow (planning, execution, etc.)
- Streaming of intermediate and final results for responsive agents
- Configurable retries and per-step timeouts for resilience
- Hooks before/after steps for customization and instrumentation (see the sketch after this list)
- Typed error handling to surface actionable failures
- Parallel execution, branching, and loops for complex control flow
- Observability through OpenTelemetry for end-to-end tracing and metrics
- OAuth 2.1 support and connection pooling for secure, efficient MCP access
- Apache-2.0 license for open-source adoption
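Combining a few of these features in one flow might look like the following sketch. The before/after hook options and the error shape are assumptions for illustration, not Volcano's documented API:

import { agent, llmOpenAI, mcp } from "volcano-ai";

const llm = llmOpenAI({ model: "gpt-5-mini", apiKey: process.env.OPENAI_API_KEY! });
const database = mcp("https://api.company.com/database/mcp");

try {
  await agent({ llm })
    .then({
      prompt: "Summarize open incidents from the database",
      mcps: [database],
      // Assumed hook names; Volcano's actual hook API may differ.
      before: () => console.log("step starting"),
      after: () => console.log("step finished"),
    })
    .run();
} catch (err) {
  // With typed errors, a catch block can surface which step failed and why.
  console.error("workflow failed:", err);
}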
How Volcano fits into Kong’s MCP architecture
Volcano acts as the developer-facing SDK inside an MCP-governed control plane. Kong’s Konnect and AI Gateway add governance and operational layers on top of Volcano by providing MCP server autogeneration, centralized OAuth 2.1 for MCP servers, and observability across tools, workflows, and prompts. The Konnect Developer Portal can itself act as an MCP server, allowing tools and agents to discover APIs, request access, and consume endpoints programmatically. Kong is also previewing MCP Composer and MCP Runner to design, generate, and operate MCP servers and integrations.
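Because the portal exposes a standard MCP endpoint, a Volcano agent can treat it like any other MCP server. The URL below is an illustrative placeholder, not a real Konnect address:

import { agent, llmOpenAI, mcp } from "volcano-ai";

// Illustrative Developer Portal MCP endpoint (placeholder URL).
const portal = mcp("https://portal.example.com/mcp");
const llm = llmOpenAI({ model: "gpt-5-mini", apiKey: process.env.OPENAI_API_KEY! });

await agent({ llm })
  .then({
    prompt: "List the APIs available to my team and how to request access",
    mcps: [portal], // the SDK discovers and calls the portal's MCP tools
  })
  .run();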
Real-world implications for teams
For engineering teams building internal agents, Volcano reduces the amount of custom integration code and centralizes important operational concerns like authentication, retries, and tracing. By relying on MCP as the canonical tool interface, the SDK minimizes protocol drift and simplifies auditing and governance as agent usage scales across an organization.