
Confused-AI — Production-Ready AI Agents, Shipped in TypeScript

The only TypeScript AI agent framework with smart defaults AND full control. 100+ tools, multi-agent orchestration, circuit breakers, budget caps, HITL, OTLP — from prototype to enterprise in one package.

100+ Built-in Tools · 18 Example Apps · 6 LLM Providers · MIT Open Source

From idea to production, in minutes

Every pattern you'll ever need — from a one-liner to a hardened enterprise agent.

hello.ts
import { agent } from 'confused-ai';

// That's it. Model, session & guardrails wired automatically.
const ai = agent('You are a helpful assistant.');

const { text } = await ai.run(
  'Summarize the Rust ownership model in 3 bullets.',
);

console.log(text);

Built for developer happiness at every scale

Zero-to-agent in 3 lines. A path to enterprise that never forces a rewrite. Every abstraction earns its place.

01
Install one package
npm install confused-ai
No 12-step setup. No mandatory config files.
02
Set an API key
OPENAI_API_KEY=sk-...
Any supported provider. Swap at any time.
03
Run your first agent
import { agent } from 'confused-ai';
const { text } = await agent('Be helpful.').run('Hello!');
Smart defaults chosen for you. Override anything.
🧩
Progressive Escape Hatches
Start with agent() one-liner. Add tools, sessions, guardrails, budgets one-by-one as you need them. Never rewrite.
📐
Full TypeScript Inference
Every parameter, every hook, every tool result is typed end-to-end. Autocomplete works everywhere — no any.
🧪
Test Without an LLM
MockLLMProvider + MockToolRegistry let you write fast, deterministic unit tests without real API calls.
Smart Defaults, Not Magic
Defaults are explicit and documented. No hidden global state. No invisible retry loops. Every behaviour is opt-in.
🔀
Mix and Match
Combine createAgent(), compose(), createSupervisor() freely. The abstractions are composable, not hierarchical.
📦
Monorepo Friendly
Independent subpath imports mean each service only bundles what it needs. Works perfectly with Turborepo, Nx, and Bun workspaces.

100+ tools, zero assembly required

Every tool is Zod-validated, tree-shakeable, and ready to drop into any agent. Import only what you need — each subpath is independently bundleable.

🌐 HTTP & Web
fetch · scrape · browser · crawl · sitemap
💬 Communication
email · Slack · Discord · Twilio · Telegram
🗄️ Databases
PostgreSQL · MySQL · SQLite · Redis · MongoDB
☁️ Cloud Storage
S3 · GCS · Azure Blob · Dropbox · Drive
🔧 Dev Tools
GitHub · GitLab · Jira · Linear · Notion
🔍 Search & Data
DuckDuckGo · Wikipedia · CSV · JSON · Excel
💳 Payments
Stripe · PayPal · invoices · subscriptions · refunds
📁 File System
read · write · copy · move · zip/unzip
🧮 Compute & Math
calculator · code runner · unit convert · date/time · cron
Tree-shakeable subpath imports — import only what you need:
confused-ai — Main barrel export
confused-ai/tools — 100+ built-in tools
confused-ai/orchestration — Multi-agent patterns
confused-ai/knowledge — RAG + vector store
confused-ai/session — Session stores
confused-ai/guardrails — Safety & validation
confused-ai/production — Circuit breakers, rate limits
confused-ai/observability — OTLP, logs, evals
confused-ai/runtime — HTTP + WebSocket server
confused-ai/adapters — 20-category adapters
confused-ai/testing — Mocks & fixtures
confused-ai/contracts — Shared types only

Enterprise-ready from day one

No patchwork of add-ons. Every production concern is built-in and composable.

Capabilities built into Confused-AI that LangChain.js, the Vercel AI SDK, and Mastra cover only partially or not at all:

  • Zero-Config Progressive DX
  • First-Class TypeScript
  • 100+ Built-In Tools
  • Multi-Agent Orchestration
  • Durable DAG Graph Engine
  • Native MCP Support
  • OTLP Distributed Tracing
  • Circuit Breakers & Retries
  • USD Budget Enforcement
  • Multi-Tenancy Context
  • SOC2/HIPAA Audit Logging
  • Human-in-the-Loop (HITL)
  • Intelligent LLM Router
  • Automatic REST API
  • Voice (TTS/STT) & Video

Everything you need to go to production

No switching frameworks at scale. Every capability ships with confused-ai.

🔒Security
  • Guardrails engine with sensitive-data rules
  • JWT RBAC on HTTP routes
  • Secret-manager adapters (AWS, Azure KV, HashiCorp, GCP)
  • Content safety hooks
Reliability
  • Circuit breakers with configurable thresholds
  • Exponential-backoff retry with jitter
  • Redis distributed rate limiting
  • Graceful shutdown + checkpoint/resume
📋Compliance
  • Persistent audit log (SQLite / pluggable)
  • X-Idempotency-Key deduplication
  • Per-user and per-tenant cost caps
  • W3C trace-context propagation
🔭Observability
  • OTLP tracing (Jaeger, Datadog, Honeycomb)
  • Structured logging with context
  • Eval store + LLM-as-judge scoring
  • Health endpoints + Grafana dashboard
🚀Deployment
  • Docker + docker-compose templates
  • Kubernetes with rolling updates & probes
  • Fly.io and Render one-click config
  • OpenAPI + WebSocket + SSE server built-in
🧪Testing
  • MockLLMProvider for deterministic tests
  • MockToolRegistry + fixture helpers
  • Vitest-compatible test utilities
  • 99 passing tests out of the box
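As a concrete sketch of the mock-provider idea — the interface and class names below are illustrative assumptions, not the exact confused-ai API — a scripted provider makes agent runs deterministic without any network call:

```typescript
// Illustrative mock provider: replays scripted responses so tests never
// hit a real LLM. The LLMProvider shape is an assumption for this sketch.
type LLMProvider = { complete(prompt: string): Promise<string> };

class ScriptedProvider implements LLMProvider {
  private i = 0;
  constructor(private responses: string[]) {}
  async complete(_prompt: string): Promise<string> {
    // Replay responses in order; repeat the last one once exhausted.
    const r = this.responses[Math.min(this.i, this.responses.length - 1)];
    this.i += 1;
    return r;
  }
}

async function runAgent(provider: LLMProvider, prompt: string): Promise<string> {
  return provider.complete(prompt);
}

const mock = new ScriptedProvider(['4']);
console.log(await runAgent(mock, 'What is 2 + 2?')); // prints "4", no API key needed
```

Because the provider is just an interface, the same agent code runs against a real model in production and a scripted one in unit tests.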

One framework, every model

Swap providers with a single import. Same agent code runs on all of them.

OpenAI (GPT-4o) — OPENAI_API_KEY
Anthropic (Claude 3.5) — ANTHROPIC_API_KEY
Google (Gemini 1.5) — GOOGLE_API_KEY
OpenRouter (100+ models) — OPENROUTER_API_KEY
Azure OpenAI (enterprise) — OPENAI_BASE_URL
AWS Bedrock (peer dependency) — @aws-sdk/...
Any OpenAI-compatible API — apiKey + baseURL
Ollama (local) — no key needed
READY TO SHIP?

Start building in 30 seconds

One package. From prototype to enterprise without changing frameworks.

npm install confused-ai
MIT License · No telemetry by default · TypeScript-first · Zero lock-in

Install

bash
npm install confused-ai
# or
bun add confused-ai
# or
pnpm add confused-ai

Set at least one provider key in your environment:

bash
OPENAI_API_KEY=sk-...          # OpenAI / Azure OpenAI
ANTHROPIC_API_KEY=sk-ant-...   # Anthropic Claude
GOOGLE_API_KEY=...             # Google Gemini
OPENROUTER_API_KEY=sk-or-...   # OpenRouter (100+ models)

From zero to production

ts
import { agent } from 'confused-ai';

const ai = agent('You are a helpful assistant.');
const { text } = await ai.run('What is 2 + 2?');
console.log(text); // "4"
ts
import { agent, defineTool } from 'confused-ai';
import { z } from 'zod';

const getWeather = defineTool()
  .name('getWeather')
  .description('Get current weather for a city')
  .parameters(z.object({ city: z.string() }))
  .execute(async ({ city }) => ({ city, temp: 22, condition: 'sunny' }))
  .build();

const ai = agent({ instructions: 'Help with weather.', tools: [getWeather] });
const { text } = await ai.run('What is the weather in Paris?');
ts
import { agent, compose } from 'confused-ai';

const researcher = agent('Research topics thoroughly and return key findings.');
const writer     = agent('Turn research notes into polished reports.');

const pipeline = compose(researcher, writer);
const { text } = await pipeline.run('Report on TypeScript 5.5 features');
ts
import { createCostOptimizedRouter } from 'confused-ai';

// Automatically sends coding tasks to GPT-4o, simple Q&A to GPT-4o-mini
const router = createCostOptimizedRouter({
  // gpt4oMini / gpt4o are model handles created elsewhere in your app
  providers: { fast: gpt4oMini, smart: gpt4o },
});
const { text } = await router.run('What is 2+2?'); // → routed to fast model
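Under the hood, a cost router only needs a classification heuristic. A hypothetical sketch of that idea — this is not the framework's actual routing logic, just an illustration of tier selection:

```typescript
// Hypothetical routing heuristic: send code-heavy or long prompts to the
// "smart" tier, everything else to the cheap "fast" tier.
type Tier = 'fast' | 'smart';

function pickTier(prompt: string): Tier {
  const codeLike = /```|\bfunction\b|\bclass\b|\bSELECT\b|\bimport\b/.test(prompt);
  const long = prompt.length > 400;
  return codeLike || long ? 'smart' : 'fast';
}

console.log(pickTier('What is 2+2?'));                                // "fast"
console.log(pickTier('Refactor this function to be tail-recursive')); // "smart"
```

A real router would also weigh token budgets and per-model pricing, but the shape is the same: classify, then dispatch.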
ts
import { createAgent } from 'confused-ai';
import { openai } from 'confused-ai/llm';
import { createSqliteSessionStore } from 'confused-ai/session';
import { withResilience } from 'confused-ai/production';

const agent = createAgent({
  name:         'SupportBot',
  instructions: 'You are a helpful support agent.',
  model: openai('gpt-4o-mini'),
  sessionStore: createSqliteSessionStore('./sessions.db'),
  budget:       { maxUsdPerRun: 0.05, maxUsdPerUser: 5.0 },
  guardrails:   true,
});

const resilient = withResilience(agent, {
  circuitBreaker: { threshold: 5, timeout: 30_000 },
  rateLimit:      { maxRequests: 100, windowMs: 60_000 },
  retry:          { maxAttempts: 3, backoff: 'exponential' },
});

export default resilient;
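The circuitBreaker settings above map onto a simple state machine. A minimal sketch under the same knobs (threshold, timeout) — illustrative only, not the library's implementation:

```typescript
// Minimal circuit breaker: after `threshold` consecutive failures the
// circuit opens and calls fail fast until `timeoutMs` has elapsed.
class CircuitBreaker {
  private failures = 0;
  private openedAt = 0;

  constructor(private threshold: number, private timeoutMs: number) {}

  get isOpen(): boolean {
    return (
      this.failures >= this.threshold &&
      Date.now() - this.openedAt < this.timeoutMs
    );
  }

  async call<T>(fn: () => Promise<T>): Promise<T> {
    if (this.isOpen) throw new Error('circuit open: failing fast');
    try {
      const result = await fn();
      this.failures = 0; // any success closes the circuit
      return result;
    } catch (err) {
      this.failures += 1;
      if (this.failures >= this.threshold) this.openedAt = Date.now();
      throw err;
    }
  }
}
```

Once `timeoutMs` passes, `isOpen` goes false and the next call is allowed through as a trial: a success closes the circuit, another failure reopens it.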

What production looks like

ts
import { createAgent, defineTool } from 'confused-ai';
import { openai } from 'confused-ai/llm';
import { createSqliteSessionStore } from 'confused-ai/session';
import { KnowledgeEngine, TextLoader, InMemoryVectorStore } from 'confused-ai/knowledge';
import { OpenAIEmbeddingProvider } from 'confused-ai/memory';
import { GuardrailValidator, createSensitiveDataRule } from 'confused-ai/guardrails';
import { createHttpService, listenService } from 'confused-ai/runtime';
import { z } from 'zod';

// 1. Knowledge base from your docs
const knowledge = new KnowledgeEngine({
  embeddingProvider: new OpenAIEmbeddingProvider({ apiKey: process.env.OPENAI_API_KEY! }),
  vectorStore: new InMemoryVectorStore(),
});
await knowledge.ingest(await new TextLoader('./docs/policy.md').load());

// 2. Sessions, guardrails, and a custom tool
const sessions  = createSqliteSessionStore('./data/sessions.db');
const guardrails = new GuardrailValidator({ rules: [createSensitiveDataRule()] });

const lookupOrder = defineTool()
  .name('lookupOrder')
  .description('Look up an order by ID')
  .parameters(z.object({ orderId: z.string() }))
  .execute(async ({ orderId }) => db.orders.findById(orderId)) // `db`: your app's data layer
  .build();

// 3. One agent — everything wired up
const support = createAgent({
  name:         'SupportBot',
  instructions: 'You are a helpful support agent. Use the knowledge base for policies.',
  model: openai('gpt-4o-mini'),
  tools:        [lookupOrder],
  sessionStore: sessions,
  knowledgebase: knowledge,
  guardrails,
  budget:       { maxUsdPerRun: 0.10 },
  hooks: {
    afterRun: async (result) => {
      await analytics.track('support_run', { steps: result.steps }); // `analytics`: your own client
      return result;
    },
  },
});

// 4. Serve over HTTP with OpenAPI, SSE streaming, and HITL approval endpoints
const service = createHttpService({
  agents: { support },
  openApi: { title: 'SupportBot API', version: '1.0.0' },
});

listenService(service, { port: 3000 });
// → GET  /v1/openapi.json
// → POST /v1/agents/support/run
// → GET  /v1/approvals        (pending HITL requests)
// → POST /v1/approvals/:id    (approve/reject)
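The sensitive-data rule wired in above can be pictured as a redaction pass over agent output. A hypothetical sketch — the real createSensitiveDataRule covers far more patterns than this single regex:

```typescript
// Hypothetical sensitive-data redaction: mask credit-card-like digit runs
// (13-16 digits, optionally separated by spaces or hyphens) before text
// leaves the agent. Real guardrail rules cover many more patterns.
function redactSensitive(text: string): string {
  return text.replace(/\b(?:\d[ -]?){13,16}\b/g, '[REDACTED]');
}

console.log(redactSensitive('Card 4111 1111 1111 1111 on file.'));
// "Card [REDACTED] on file."
```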

Why Confused-AI?

Enterprise capabilities built into Confused-AI. LangChain.js, the Vercel AI SDK, and Mastra cover these only partially (⚠️) or not at all:

  • Zero-Config Progressive DX
  • First-Class TypeScript
  • 100+ Built-In Tools
  • Multi-Agent Orchestration
  • Durable DAG Graph Engine (⚠️ partial in LangChain.js via LangGraph)
  • Native MCP Support
  • OTLP Distributed Tracing (⚠️ partial in LangChain.js via LangSmith)
  • Circuit Breakers & Retries
  • USD Budget Enforcement
  • Multi-Tenancy Context
  • SOC2/HIPAA Audit Logging
  • Idempotency Keys
  • Human-in-the-Loop (HITL)
  • Intelligent LLM Router
  • Automatic REST API
  • Background Job Queues
  • Voice (TTS/STT) & Video
  • MIT license

Enterprise checklist

Everything you need to go from prototype to production without switching frameworks:

  • Security — Guardrails engine with sensitive-data rules, JWT RBAC on HTTP routes, secret-manager adapters (AWS Secrets Manager, Azure Key Vault, HashiCorp Vault, GCP), content safety hooks
  • Reliability — Circuit breakers, exponential-backoff retry, Redis distributed rate limiting, graceful shutdown, checkpoint/resume for long-running agents
  • Compliance — Persistent audit log (SQLite / pluggable), X-Idempotency-Key deduplication, per-user and per-tenant cost caps, W3C trace-context propagation
  • Observability — OTLP tracing (Jaeger, Datadog, Honeycomb), structured logging, eval store, health endpoints, Grafana dashboard template
  • Deployment — Docker, docker-compose, Kubernetes, Fly.io, and Render templates in /templates
  • Testing — MockLLMProvider, MockToolRegistry, fixture helpers, Vitest-compatible test utilities in confused-ai/testing
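The retry policy in the Reliability bullet (exponential backoff with jitter) can be sketched in a few lines. The base delay and cap below are illustrative defaults, not the library's:

```typescript
// Exponential backoff with "full jitter": delay is uniform in
// [0, min(cap, base * 2^attempt)).
function backoffDelay(attempt: number, baseMs = 100, capMs = 5_000): number {
  const ceiling = Math.min(capMs, baseMs * 2 ** attempt);
  return Math.random() * ceiling;
}

async function withRetry<T>(fn: () => Promise<T>, maxAttempts = 3): Promise<T> {
  let lastError: unknown;
  for (let attempt = 0; attempt < maxAttempts; attempt++) {
    try {
      return await fn();
    } catch (err) {
      lastError = err;
      if (attempt < maxAttempts - 1) {
        // Jittered sleep before the next attempt spreads out retry storms.
        await new Promise((resolve) => setTimeout(resolve, backoffDelay(attempt)));
      }
    }
  }
  throw lastError;
}
```

Jitter matters because synchronized retries from many clients hammer a recovering service at the same instant; randomizing the delay spreads the load.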

Subpath packages

Import only what you need — every module is independently tree-shakeable:

confused-ai — Main barrel: createAgent, agent, tools, session, LLM, orchestration
confused-ai/create-agent — Lean createAgent + env helpers
confused-ai/llm — Providers, model resolution, embeddings
confused-ai/tools — BaseTool, registries, 100+ built-in tools
confused-ai/orchestration — Pipelines, supervisor, swarm, team, router
confused-ai/knowledge — RAG engine, loaders, vector store
confused-ai/session — Session stores (in-memory, SQL, SQLite)
confused-ai/memory — Memory stores + vector-backed memory
confused-ai/guardrails — Validators, rules, content safety
confused-ai/production — Circuit breaker, rate limiter, resilience wrappers
confused-ai/observability — OTLP tracer, logger, metrics, eval store
confused-ai/runtime — HTTP service, OpenAPI, WebSocket, HITL endpoints
confused-ai/adapters — 20-category adapter system (SQL, Redis, S3, …)
confused-ai/plugins — Plugin registry + built-in logging/rate-limit/telemetry
confused-ai/testing — Mock LLM, mock tools, test fixtures
confused-ai/contracts — Shared interfaces (no runtime code)

Supported LLM providers

OpenAI (GPT-4o, GPT-4o-mini, o1, …) — OPENAI_API_KEY
Anthropic Claude (3.5 Sonnet, Haiku, Opus) — ANTHROPIC_API_KEY
Google Gemini (1.5 Pro, Flash) — GOOGLE_API_KEY or GEMINI_API_KEY
OpenRouter (100+ models) — OPENROUTER_API_KEY
Azure OpenAI — OPENAI_API_KEY + OPENAI_BASE_URL
AWS Bedrock — @aws-sdk/client-bedrock-runtime peer dependency
Any OpenAI-compatible API — apiKey + baseURL in createAgent options

Deployment

One-command deploys with the included templates:

bash
# Build and run
docker build -t myagent .
docker run -e OPENAI_API_KEY=$OPENAI_API_KEY -p 3000:3000 myagent
bash
fly launch --copy-config --name myagent
fly secrets set OPENAI_API_KEY=sk-...
fly deploy
bash
# render.yaml included — just connect your GitHub repo in the Render dashboard
bash
kubectl apply -f templates/k8s.yaml
kubectl set env deployment/agent OPENAI_API_KEY=sk-...

Templates are in /templates — includes Dockerfile, docker-compose.yml, fly.toml, render.yaml, and k8s.yaml with resource limits, health checks, and rolling update config.

From zero to production

ts
import { agent } from 'confused-ai';

const ai = agent('You are a helpful assistant.');
const { text } = await ai.run('What is 2 + 2?');
console.log(text); // "4"
ts
import { agent, defineTool } from 'confused-ai';
import { z } from 'zod';

const getWeather = defineTool()
  .name('getWeather')
  .description('Get current weather for a city')
  .parameters(z.object({ city: z.string() }))
  .execute(async ({ city }) => ({ city, temp: 22, condition: 'sunny' }))
  .build();

const ai = agent({ instructions: 'Help with weather.', tools: [getWeather] });
const { text } = await ai.run('What is the weather in Paris?');
ts
import { agent, compose } from 'confused-ai';

const researcher = agent('Research topics thoroughly and return key findings.');
const writer     = agent('Turn research notes into polished, engaging reports.');

const pipeline = compose(researcher, writer);
const { text } = await pipeline.run('Report on TypeScript 5.5 features');
ts
import { agent } from 'confused-ai';

const ai = agent({
  instructions: 'You are a helpful assistant.',
  hooks: {
    beforeRun:      async (prompt) => `Today is ${new Date().toDateString()}\n\n${prompt}`,
    beforeToolCall: async (name, args, step) => { console.log(`→ ${name}`, args); return args; },
    onError:        async (error, step) => console.error(`Step ${step} failed:`, error.message),
  },
});
ts
import { createCostOptimizedRouter } from 'confused-ai';

// Automatically sends coding tasks to GPT-4o, simple Q&A to GPT-4o-mini
const router = createCostOptimizedRouter({
  // gpt4oMini / gpt4o are model handles created elsewhere in your app
  providers: { fast: gpt4oMini, smart: gpt4o },
});
const { text } = await router.run('What is 2+2?'); // → routed to fast

What it looks like in production

ts
import { agent, defineTool } from 'confused-ai';
import { createSqliteSessionStore } from 'confused-ai/session';
import { KnowledgeEngine, TextLoader, InMemoryVectorStore } from 'confused-ai/knowledge';
import { OpenAIEmbeddingProvider } from 'confused-ai/memory';
import { GuardrailValidator, createSensitiveDataRule } from 'confused-ai/guardrails';
import { z } from 'zod';

// 1. Knowledge base from your docs
const knowledge = new KnowledgeEngine({
  embeddingProvider: new OpenAIEmbeddingProvider({ apiKey: process.env.OPENAI_API_KEY! }),
  vectorStore: new InMemoryVectorStore(),
});
await knowledge.ingest(await new TextLoader('./docs/policy.md').load());

// 2. Sessions, guardrails, and a custom tool
const sessions = createSqliteSessionStore('./data/sessions.db');

const guardrails = new GuardrailValidator({ rules: [createSensitiveDataRule()] });

const lookupOrder = defineTool()
  .name('lookupOrder')
  .description('Look up an order by ID')
  .parameters(z.object({ orderId: z.string() }))
  .execute(async ({ orderId }) => db.orders.findById(orderId)) // `db`: your app's data layer
  .build();

// 3. One agent — everything wired up
const support = agent({
  name:         'SupportBot',
  instructions: 'You are a helpful support agent. Use the knowledge base for policies.',
  model:        'gpt-4o-mini',
  tools:        [lookupOrder],
  sessionStore: sessions,
  knowledgebase: knowledge,
  guardrails,
  hooks: {
    afterRun: async (result) => { await analytics.track('support_run', { steps: result.steps }); return result; }, // `analytics`: your own client
  },
});

// 4. Run with session continuity
const sessionId = await support.createSession('user-42');
const { text } = await support.run('What is your return policy?', { sessionId });
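Session continuity is conceptually just persisted message history keyed by session ID. An in-memory sketch of the assumed store shape — the real createSqliteSessionStore persists to disk and exposes a richer API:

```typescript
// Illustrative in-memory session store: each session is an append-only
// message log that gets replayed into the model context on every run.
type Message = { role: 'user' | 'assistant'; content: string };

class InMemorySessionStore {
  private sessions = new Map<string, Message[]>();
  private counter = 0;

  create(userId: string): string {
    const id = `${userId}:${this.counter++}`;
    this.sessions.set(id, []);
    return id;
  }

  append(sessionId: string, message: Message): void {
    const log = this.sessions.get(sessionId);
    if (!log) throw new Error(`unknown session: ${sessionId}`);
    log.push(message);
  }

  history(sessionId: string): Message[] {
    return this.sessions.get(sessionId) ?? [];
  }
}

const store = new InMemorySessionStore();
const id = store.create('user-42');
store.append(id, { role: 'user', content: 'What is your return policy?' });
store.append(id, { role: 'assistant', content: '30 days, no questions asked.' });
console.log(store.history(id).length); // 2
```

Swapping this for a SQLite- or Redis-backed store changes durability, not the agent code: the agent only ever sees `history(sessionId)`.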

Released under the MIT License.