# Zero to agent in 3 lines

`agent('You are helpful.')` → `.run(prompt)` → `result.text`. Smart defaults for LLM, session, tools, and guardrails. Override anything, keep everything else.

The only TypeScript AI agent framework with smart defaults AND full control. 100+ tools, multi-agent orchestration, circuit breakers, budget caps, HITL, OTLP — from prototype to enterprise in one package.

Confused-AI ships with 100+ built-in tools, multi-agent orchestration, RAG, guardrails, session stores, circuit breakers, budget enforcement, OTLP tracing, HITL, MCP, voice, and deployment templates. Everything you need — none of the glue code.

Clean APIs: three lines to a working agent, full TypeScript inference end-to-end, mock providers for fast unit tests, and progressive escape hatches — start simple, go enterprise without ever rewriting. Every pattern you'll ever need, from a one-liner to a hardened enterprise agent.

## Quick start
```typescript
import { agent } from 'confused-ai';

// That's it. Model, session & guardrails wired automatically.
const ai = agent('You are a helpful assistant.');

const { text } = await ai.run(
  'Summarize the Rust ownership model in 3 bullets.',
);
console.log(text);
```

Zero-to-agent in 3 lines. A path to enterprise that never forces a rewrite. Every abstraction earns its place.
Install, set a key, and run:

```shell
npm install confused-ai
```

```shell
OPENAI_API_KEY=sk-...
```

```typescript
import { agent } from 'confused-ai';

const { text } = await agent('Be helpful.').run('Hello!');
```

Start with the `agent()` one-liner, then add tools, sessions, guardrails, and budgets one by one as you need them — never rewrite. `MockLLMProvider` + `MockToolRegistry` let you write fast, deterministic unit tests without real API calls. Combine `createAgent()`, `compose()`, and `createSupervisor()` freely: the abstractions are composable, not hierarchical.

Every tool is Zod-validated, tree-shakeable, and ready to drop into any agent. Import only what you need — each subpath is independently bundleable. No patchwork of add-ons. Every production concern is built-in and composable.
Swap providers with a single import. Same agent code runs on all of them.
One package. From prototype to enterprise without changing frameworks.
### Installation

```shell
npm install confused-ai
# or
bun add confused-ai
# or
pnpm add confused-ai
```

Set at least one provider key in your environment:

```shell
OPENAI_API_KEY=sk-...        # OpenAI / Azure OpenAI
ANTHROPIC_API_KEY=sk-ant-... # Anthropic Claude
GOOGLE_API_KEY=...           # Google Gemini
OPENROUTER_API_KEY=sk-or-... # OpenRouter (100+ models)
```

### Your first agent

```typescript
import { agent } from 'confused-ai';

const ai = agent('You are a helpful assistant.');
const { text } = await ai.run('What is 2 + 2?');
console.log(text); // "4"
```

### Add a tool

```typescript
import { agent, defineTool } from 'confused-ai';
import { z } from 'zod';

const getWeather = defineTool()
  .name('getWeather')
  .description('Get current weather for a city')
  .parameters(z.object({ city: z.string() }))
  .execute(async ({ city }) => ({ city, temp: 22, condition: 'sunny' }))
  .build();

const ai = agent({ instructions: 'Help with weather.', tools: [getWeather] });
const { text } = await ai.run('What is the weather in Paris?');
```

### Multi-agent pipeline

```typescript
import { agent, compose } from 'confused-ai';

const researcher = agent('Research topics thoroughly and return key findings.');
const writer = agent('Turn research notes into polished reports.');

const pipeline = compose(researcher, writer);
const { text } = await pipeline.run('Report on TypeScript 5.5 features');
```

### Cost-optimized routing

```typescript
import { createCostOptimizedRouter } from 'confused-ai';

// Automatically sends coding tasks to GPT-4o, simple Q&A to GPT-4o-mini.
// gpt4oMini / gpt4o are model handles created elsewhere in your app.
const router = createCostOptimizedRouter({
  providers: { fast: gpt4oMini, smart: gpt4o },
});
const { text } = await router.run('What is 2+2?'); // → routed to fast model
```

### Production hardening

```typescript
import { createAgent } from 'confused-ai';
import { openai } from 'confused-ai/model';
import { createSqliteSessionStore } from 'confused-ai/session';
import { withResilience } from 'confused-ai/guard';

const agent = createAgent({
  name: 'SupportBot',
  instructions: 'You are a helpful support agent.',
  model: openai('gpt-4o-mini'),
  sessionStore: createSqliteSessionStore('./sessions.db'),
  budget: { maxUsdPerRun: 0.05, maxUsdPerUser: 5.0 },
  guardrails: true,
});

const resilient = withResilience(agent, {
  circuitBreaker: { threshold: 5, timeout: 30_000 },
  rateLimit: { maxRequests: 100, windowMs: 60_000 },
  retry: { maxAttempts: 3, backoff: 'exponential' },
});

export default resilient;
```

### Full enterprise setup

```typescript
import { createAgent, defineTool } from 'confused-ai';
import { openai } from 'confused-ai/model';
import { createSqliteSessionStore } from 'confused-ai/session';
import { KnowledgeEngine, TextLoader, InMemoryVectorStore } from 'confused-ai/knowledge';
import { OpenAIEmbeddingProvider } from 'confused-ai/memory';
import { GuardrailValidator, createSensitiveDataRule } from 'confused-ai/guardrails';
import { createHttpService, listenService } from 'confused-ai/serve';
import { z } from 'zod';

// `db` and `analytics` below are your own application modules.

// 1. Knowledge base from your docs
const knowledge = new KnowledgeEngine({
  embeddingProvider: new OpenAIEmbeddingProvider({ apiKey: process.env.OPENAI_API_KEY! }),
  vectorStore: new InMemoryVectorStore(),
});
await knowledge.ingest(await new TextLoader('./docs/policy.md').load());

// 2. Sessions, guardrails, and a custom tool
const sessions = createSqliteSessionStore('./data/sessions.db');
const guardrails = new GuardrailValidator({ rules: [createSensitiveDataRule()] });

const lookupOrder = defineTool()
  .name('lookupOrder')
  .description('Look up an order by ID')
  .parameters(z.object({ orderId: z.string() }))
  .execute(async ({ orderId }) => db.orders.findById(orderId))
  .build();

// 3. One agent — everything wired up
const support = createAgent({
  name: 'SupportBot',
  instructions: 'You are a helpful support agent. Use the knowledge base for policies.',
  model: openai('gpt-4o-mini'),
  tools: [lookupOrder],
  sessionStore: sessions,
  knowledgebase: knowledge,
  guardrails,
  budget: { maxUsdPerRun: 0.10 },
  hooks: {
    afterRun: async (result) => {
      await analytics.track('support_run', { steps: result.steps });
      return result;
    },
  },
});

// 4. Serve over HTTP with OpenAPI, SSE streaming, and HITL approval endpoints
const service = createHttpService({
  agents: { support },
  openApi: { title: 'SupportBot API', version: '1.0.0' },
});
listenService(service, { port: 3000 });
// → GET  /v1/openapi.json
// → POST /v1/agents/support/run
// → GET  /v1/approvals (pending HITL requests)
// → POST /v1/approvals/:id (approve/reject)
```

### How it compares

| Enterprise Capability | Confused-AI | LangChain.js | Vercel AI SDK | Mastra |
|---|---|---|---|---|
| Zero-Config Progressive DX | ✅ | ⚠️ | ✅ | ⚠️ |
| First-Class TypeScript | ✅ | ⚠️ | ✅ | ✅ |
| 100+ Built-In Tools | ✅ | ✅ | ❌ | ⚠️ |
| Multi-Agent Orchestration | ✅ | ✅ | ❌ | ✅ |
| Durable DAG Graph Engine | ✅ | ⚠️ (LangGraph) | ❌ | ❌ |
| Native MCP Support | ✅ | ⚠️ | ❌ | ✅ |
| OTLP Distributed Tracing | ✅ | ⚠️ (LangSmith) | ⚠️ | ⚠️ |
| Circuit Breakers & Retries | ✅ | ❌ | ❌ | ❌ |
| USD Budget Enforcement | ✅ | ❌ | ❌ | ❌ |
| Multi-Tenancy Context | ✅ | ❌ | ❌ | ❌ |
| SOC2/HIPAA Audit Logging | ✅ | ❌ | ❌ | ❌ |
| Idempotency Keys | ✅ | ❌ | ❌ | ❌ |
| Human-in-the-Loop (HITL) | ✅ | ⚠️ | ❌ | ⚠️ |
| Intelligent LLM Router | ✅ | ❌ | ❌ | ❌ |
| Automatic REST API | ✅ | ❌ | ❌ | ⚠️ |
| Background Job Queues | ✅ | ❌ | ❌ | ❌ |
| Voice (TTS/STT) & Video | ✅ | ⚠️ | ❌ | ❌ |
| MIT license | ✅ | ✅ | ✅ | ✅ |
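The circuit-breaker row refers to the standard resilience pattern: stop calling a failing dependency after a run of consecutive failures, then allow a single probe after a cooldown. A framework-agnostic sketch of that state machine — illustrative only, not confused-ai's internals; the `threshold`/`timeoutMs` names simply mirror the `withResilience` options shown earlier:

```typescript
// Minimal circuit breaker: closed → open after `threshold` consecutive
// failures; half-open (one probe allowed) once `timeoutMs` has elapsed.
type BreakerState = 'closed' | 'open' | 'half-open';

class CircuitBreaker {
  private failures = 0;
  private openedAt = 0;
  state: BreakerState = 'closed';

  constructor(
    private threshold: number,
    private timeoutMs: number,
    private now: () => number = Date.now, // injectable clock for testing
  ) {}

  async exec<T>(fn: () => Promise<T>): Promise<T> {
    if (this.state === 'open') {
      if (this.now() - this.openedAt < this.timeoutMs) {
        throw new Error('circuit open'); // fail fast, skip the dependency
      }
      this.state = 'half-open'; // cooldown elapsed: allow one probe
    }
    try {
      const result = await fn();
      this.failures = 0; // any success fully closes the circuit
      this.state = 'closed';
      return result;
    } catch (err) {
      this.failures += 1;
      if (this.state === 'half-open' || this.failures >= this.threshold) {
        this.state = 'open';
        this.openedAt = this.now();
      }
      throw err;
    }
  }
}
```

The injectable clock keeps the cooldown logic unit-testable without real waits, which is the same property that makes mock-provider testing practical.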
Everything you need to go from prototype to production without switching frameworks:

- Deployment templates in /templates
- MockLLMProvider, MockToolRegistry, fixture helpers, and Vitest-compatible test utilities in confused-ai/testing

Import only what you need — every module is independently tree-shakeable:
| Import path | Contents |
|---|---|
| confused-ai | Main barrel: createAgent, agent, tools, session, LLM, orchestration |
| confused-ai/create-agent | Lean createAgent + env helpers |
| confused-ai/llm | Providers, model resolution, embeddings |
| confused-ai/tools | BaseTool, registries, 100+ built-in tools |
| confused-ai/orchestration | Pipelines, supervisor, swarm, team, router |
| confused-ai/knowledge | RAG engine, loaders, vector store |
| confused-ai/session | Session stores (in-memory, SQL, SQLite) |
| confused-ai/memory | Memory stores + vector-backed memory |
| confused-ai/guardrails | Validators, rules, content safety |
| confused-ai/production | Circuit breaker, rate limiter, resilience wrappers |
| confused-ai/observability | OTLP tracer, logger, metrics, eval store |
| confused-ai/runtime | HTTP service, OpenAPI, WebSocket, HITL endpoints |
| confused-ai/adapters | 20-category adapter system (SQL, Redis, S3, …) |
| confused-ai/plugins | Plugin registry + built-in logging/rate-limit/telemetry |
| confused-ai/testing | Mock LLM, mock tools, test fixtures |
| confused-ai/contracts | Shared interfaces (no runtime code) |
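Budget enforcement (the `budget: { maxUsdPerRun }` option shown in the examples) reduces to token-cost accounting: multiply token usage by per-token prices and refuse to continue once the cap is crossed. A framework-agnostic sketch of the idea — the prices below are placeholders, not real rates, and `estimateUsd`/`RunBudget` are illustrative names, not confused-ai's API:

```typescript
// Illustrative per-1K-token prices (placeholders, NOT real rates).
const PRICES: Record<string, { inPer1k: number; outPer1k: number }> = {
  'gpt-4o-mini': { inPer1k: 0.00015, outPer1k: 0.0006 },
};

// USD cost of one model call from its token counts.
function estimateUsd(model: string, inputTokens: number, outputTokens: number): number {
  const p = PRICES[model];
  if (!p) throw new Error(`unknown model: ${model}`);
  return (inputTokens / 1000) * p.inPer1k + (outputTokens / 1000) * p.outPer1k;
}

// Accumulates spend across the steps of one run and enforces the cap.
class RunBudget {
  private spentUsd = 0;
  constructor(private maxUsdPerRun: number) {}

  record(model: string, inputTokens: number, outputTokens: number): void {
    this.spentUsd += estimateUsd(model, inputTokens, outputTokens);
    if (this.spentUsd > this.maxUsdPerRun) {
      throw new Error(
        `budget exceeded: $${this.spentUsd.toFixed(4)} > $${this.maxUsdPerRun}`,
      );
    }
  }
}
```

Checking the running total after every step (rather than only at the end) is what lets a multi-step agent loop abort mid-run instead of overshooting the cap.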
| Provider | Import / env var |
|---|---|
| OpenAI (GPT-4o, GPT-4o-mini, o1, …) | OPENAI_API_KEY |
| Anthropic Claude (3.5 Sonnet, Haiku, Opus) | ANTHROPIC_API_KEY |
| Google Gemini (1.5 Pro, Flash) | GOOGLE_API_KEY or GEMINI_API_KEY |
| OpenRouter (100+ models) | OPENROUTER_API_KEY |
| Azure OpenAI | OPENAI_API_KEY + OPENAI_BASE_URL |
| AWS Bedrock | @aws-sdk/client-bedrock-runtime peer dep |
| Any OpenAI-compatible API | apiKey + baseURL in createAgent options |
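The zero-config default behind the table above can be pictured as a first-match scan over those env vars. This is an illustrative sketch of the idea, not confused-ai's actual resolution code, and the priority order shown is an assumption:

```typescript
// Scan env vars in priority order and return the first configured provider.
// The order below is an assumed illustration, not a documented guarantee.
type Provider = 'openai' | 'anthropic' | 'google' | 'openrouter';

const PROVIDER_ENV: Array<[Provider, string[]]> = [
  ['openai', ['OPENAI_API_KEY']],
  ['anthropic', ['ANTHROPIC_API_KEY']],
  ['google', ['GOOGLE_API_KEY', 'GEMINI_API_KEY']], // either var works
  ['openrouter', ['OPENROUTER_API_KEY']],
];

function resolveProvider(env: Record<string, string | undefined>): Provider {
  for (const [provider, keys] of PROVIDER_ENV) {
    if (keys.some((k) => env[k])) return provider;
  }
  throw new Error(
    'No provider key found: set OPENAI_API_KEY, ANTHROPIC_API_KEY, GOOGLE_API_KEY, or OPENROUTER_API_KEY',
  );
}
```

Taking the env map as a parameter (rather than reading `process.env` directly) keeps the resolution logic deterministic and testable.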
One-command deploys with the included templates:
Docker:

```shell
# Build and run
docker build -t myagent .
docker run -e OPENAI_API_KEY=$OPENAI_API_KEY -p 3000:3000 myagent
```

Fly.io:

```shell
fly launch --copy-config --name myagent
fly secrets set OPENAI_API_KEY=sk-...
fly deploy
```

Render:

```shell
# render.yaml included — just connect your GitHub repo in the Render dashboard
```

Kubernetes:

```shell
kubectl apply -f templates/k8s.yaml
kubectl set env deployment/agent OPENAI_API_KEY=sk-...
```

Templates are in /templates — includes Dockerfile, docker-compose.yml, fly.toml, render.yaml, and k8s.yaml with resource limits, health checks, and rolling update config.
### Lifecycle hooks

```typescript
import { agent } from 'confused-ai';

const ai = agent({
  instructions: 'You are a helpful assistant.',
  hooks: {
    beforeRun: async (prompt) => `Today is ${new Date().toDateString()}\n\n${prompt}`,
    beforeToolCall: async (name, args, step) => { console.log(`→ ${name}`, args); return args; },
    onError: async (error, step) => console.error(`Step ${step} failed:`, error.message),
  },
});
```

### Sessions, RAG, and guardrails with agent()

```typescript
import { agent, defineTool } from 'confused-ai';
import { createSqliteSessionStore } from 'confused-ai/session';
import { KnowledgeEngine, TextLoader, InMemoryVectorStore } from 'confused-ai/knowledge';
import { OpenAIEmbeddingProvider } from 'confused-ai/memory';
import { GuardrailValidator, createSensitiveDataRule } from 'confused-ai/guardrails';
import { z } from 'zod';

// `db` and `analytics` below are your own application modules.

// 1. Knowledge base from your docs
const knowledge = new KnowledgeEngine({
  embeddingProvider: new OpenAIEmbeddingProvider({ apiKey: process.env.OPENAI_API_KEY! }),
  vectorStore: new InMemoryVectorStore(),
});
await knowledge.ingest(await new TextLoader('./docs/policy.md').load());

// 2. Sessions, guardrails, and a custom tool
const sessions = createSqliteSessionStore('./data/sessions.db');
const guardrails = new GuardrailValidator({ rules: [createSensitiveDataRule()] });

const lookupOrder = defineTool()
  .name('lookupOrder')
  .description('Look up an order by ID')
  .parameters(z.object({ orderId: z.string() }))
  .execute(async ({ orderId }) => db.orders.findById(orderId))
  .build();

// 3. One agent — everything wired up
const support = agent({
  name: 'SupportBot',
  instructions: 'You are a helpful support agent. Use the knowledge base for policies.',
  model: 'gpt-4o-mini',
  tools: [lookupOrder],
  sessionStore: sessions,
  knowledgebase: knowledge,
  guardrails,
  hooks: {
    afterRun: async (result) => { await analytics.track('support_run', { steps: result.steps }); return result; },
  },
});

// 4. Run with session continuity
const sessionId = await support.createSession('user-42');
const { text } = await support.run('What is your return policy?', { sessionId });
```
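Under the hood, session continuity is just persisted message history replayed into the next model call. A minimal in-memory sketch of the idea — the shapes here are illustrative, not confused-ai's SessionStore interface:

```typescript
// A session is an append-only message log keyed by session id.
interface Message {
  role: 'user' | 'assistant' | 'system';
  content: string;
}

class InMemorySessionStore {
  private sessions = new Map<string, Message[]>();

  // Create a fresh session for a user and return its id.
  create(userId: string): string {
    const id = `${userId}-${this.sessions.size + 1}`;
    this.sessions.set(id, []);
    return id;
  }

  // Append one turn to an existing session.
  append(sessionId: string, message: Message): void {
    const history = this.sessions.get(sessionId);
    if (!history) throw new Error(`unknown session: ${sessionId}`);
    history.push(message);
  }

  // The history that gets prepended to the model call on the next run.
  history(sessionId: string): Message[] {
    return this.sessions.get(sessionId) ?? [];
  }
}
```

Swapping this Map for a SQLite or SQL table is what turns the same idea into the durable stores the session module provides.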