# LLM Provider Setup

Configure OpenClaw with Anthropic Claude, OpenAI GPT, Ollama local models, or other LLM providers.
OpenClaw is model-agnostic -- you can plug in any LLM and switch providers without rewriting your agent. This tutorial covers the most popular options.
## Supported Providers
OpenClaw ships with built-in support for 12+ providers:
| Provider | Type | Best For | Cost |
|----------|------|----------|------|
| Anthropic (Claude) | Cloud | Best reasoning, complex tasks | $3-75/MTok |
| OpenAI (GPT) | Cloud | Speed, wide compatibility | $2-60/MTok |
| Google (Gemini) | Cloud | Large context windows | $1-20/MTok |
| OpenRouter | Cloud | Access to many models via one API | Varies |
| Ollama | Local | Privacy, no API costs | Free (hardware costs) |
| vLLM | Local | High-throughput inference | Free (hardware costs) |
| Amazon Bedrock | Cloud | AWS integration | Varies |
## Quick Setup
The fastest way to start is with environment variables:
```shell
# For Anthropic (recommended)
export ANTHROPIC_API_KEY=sk-ant-...

# For OpenAI
export OPENAI_API_KEY=sk-...

# For Ollama (local)
export MODEL_BACKEND_URL=http://localhost:11434
```
Then run `openclaw onboard` -- the wizard detects your keys and configures the provider automatically.
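The detection step can be pictured as a small shell helper. This is an illustrative sketch only, not OpenClaw's actual logic; in particular, the precedence order (Anthropic, then OpenAI, then Ollama) is an assumption:

```shell
#!/bin/sh
# Hypothetical sketch of credential detection: pick a provider based on
# whichever variable from the Quick Setup step is set.
# The precedence order below is an assumption, not documented behavior.
detect_provider() {
  if [ -n "${ANTHROPIC_API_KEY:-}" ]; then
    echo anthropic
  elif [ -n "${OPENAI_API_KEY:-}" ]; then
    echo openai
  elif [ -n "${MODEL_BACKEND_URL:-}" ]; then
    echo ollama
  else
    echo none
  fi
}

detect_provider
```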
## Anthropic Claude (Recommended)
Claude is the recommended provider for OpenClaw: it offers the strongest reasoning for complex, multi-step tasks and reliable tool calling.
### Configuration
Edit `~/.openclaw/openclaw.json5`:
```json5
{
  "models": {
    "defaults": {
      "provider": "anthropic",
      "model": "claude-sonnet-4-20250514"
    },
    "providers": {
      "anthropic": {
        "apiKey": "$ANTHROPIC_API_KEY"
      }
    }
  }
}
```
Use Sonnet for daily tasks (email triage, bookmarks, quick questions) and reserve Opus for deep thinking (research, complex analysis, content strategy). This balances quality and cost.
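Before saving the config, a quick format check can catch a key that was truncated when pasted. This helper is illustrative only and assumes Anthropic keys start with `sk-ant-`, as in the Quick Setup example above:

```shell
#!/bin/sh
# Illustrative sanity check: keys shown in this guide start with "sk-ant-".
check_anthropic_key() {
  case "${ANTHROPIC_API_KEY:-}" in
    sk-ant-*) echo ok ;;
    "")       echo missing ;;
    *)        echo suspicious ;;
  esac
}

check_anthropic_key
```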
## OpenAI GPT

### Configuration
```shell
openclaw config set llm.provider "openai"
openclaw config set llm.openai.apiKey "sk-xxxxxxxxxxxx"
```
Or via config file:
```json5
{
  "models": {
    "defaults": {
      "provider": "openai",
      "model": "gpt-5.1-codex"
    },
    "providers": {
      "openai": {
        "apiKey": "$OPENAI_API_KEY"
      }
    }
  }
}
```
## Ollama (Local Models)
Ollama lets you run open-source models locally -- no API costs, full privacy.
### Step 1: Install Ollama
```shell
# macOS/Linux
curl -fsSL https://ollama.ai/install.sh | sh

# Pull a model
ollama pull llama3:70b
```
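To confirm the server is running before wiring it into OpenClaw, you can probe Ollama's `/api/tags` endpoint. This assumes the default port 11434; the `check_ollama` helper is illustrative, not part of OpenClaw:

```shell
#!/bin/sh
# Probe Ollama's tag-listing endpoint; a 200 response means the server
# is up. The default URL assumes the standard Ollama port.
check_ollama() {
  url="${1:-http://localhost:11434}"
  if curl -sf "$url/api/tags" >/dev/null 2>&1; then
    echo reachable
  else
    echo unreachable
  fi
}

check_ollama
```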
### Step 2: Configure OpenClaw
```json5
{
  "models": {
    "defaults": {
      "provider": "ollama",
      "model": "llama3:70b"
    },
    "providers": {
      "ollama": {
        "baseUrl": "http://localhost:11434",
        "apiKey": "ollama-local"
      }
    }
  }
}
```
Do not use the `/v1` OpenAI-compatible URL with OpenClaw -- this breaks tool calling. Use the native Ollama API URL: `http://localhost:11434` (no `/v1`).
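If you copied a base URL from an OpenAI-compatible client, you can normalize it before pasting it into the config. A small sketch (the `strip_v1` helper is hypothetical, not an OpenClaw command):

```shell
#!/bin/sh
# Strip a trailing /v1 (and any trailing slash) from a base URL so the
# native Ollama endpoint is used.
strip_v1() {
  printf '%s\n' "$1" | sed -e 's#/v1/*$##' -e 's#/$##'
}

strip_v1 "http://localhost:11434/v1"   # -> http://localhost:11434
```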
## Model Fallbacks

Configure a primary model with automatic fallbacks:
```json5
{
  "models": {
    "defaults": {
      "provider": "anthropic",
      "model": "claude-sonnet-4-20250514"
    },
    "fallbacks": [
      { "provider": "openai", "model": "gpt-5" },
      { "provider": "ollama", "model": "llama3:70b" }
    ]
  }
}
```
If the primary model fails (rate limit, error, timeout), OpenClaw automatically tries the next model in the list.
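The try-in-order behavior can be pictured as a loop. This is a sketch only; `try_model` is a hypothetical stand-in for a provider request, not an OpenClaw API:

```shell
#!/bin/sh
# Try each model in order and stop at the first one that succeeds.
# try_model is a placeholder for an actual provider call.
ask_with_fallback() {
  for model in "$@"; do
    if try_model "$model"; then
      echo "answered by $model"
      return 0
    fi
  done
  echo "all models failed" >&2
  return 1
}
```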
## Per-Channel Model Routing
Route different channels to different providers:
```json5
{
  "channels": {
    "whatsapp": {
      "model": {
        "provider": "anthropic",
        "model": "claude-sonnet-4-20250514"
      }
    },
    "discord": {
      "model": {
        "provider": "ollama",
        "model": "llama3:70b"
      }
    }
  }
}
```
This lets you use a powerful cloud model for important conversations and a free local model for casual channels.
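The routing above amounts to a channel-to-model lookup, something like the following sketch (illustrative only; `model_for_channel` is not an OpenClaw function):

```shell
#!/bin/sh
# Map a channel name to a provider/model pair, mirroring the config
# above; unknown channels fall back to the default model.
model_for_channel() {
  case "$1" in
    whatsapp) echo "anthropic/claude-sonnet-4-20250514" ;;
    discord)  echo "ollama/llama3:70b" ;;
    *)        echo "anthropic/claude-sonnet-4-20250514" ;;
  esac
}

model_for_channel discord
```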
## Cost Management
Protect your API budget:
```shell
openclaw config set llm.limits.dailySpend 10
```
When the limit is reached, OpenClaw can fall back to a cheaper model or pause until the next day.
## Choosing the Right Provider
Four factors to consider:
- Budget -- Cloud models cost $0.50-75 per million tokens. Local models are free but need hardware.
- Hardware -- Local models need a GPU with 8GB+ VRAM for decent performance.
- Privacy -- If data cannot leave your network, use Ollama or vLLM.
- Task complexity -- Use the strongest model for tool-enabled agents. Older or smaller models are less robust against prompt injection.
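For the hardware point, you can check GPU memory before committing to a local setup. This sketch assumes an NVIDIA GPU with `nvidia-smi` on the PATH; the 8 GiB threshold mirrors the guideline above:

```shell
#!/bin/sh
# Report total VRAM in MiB; prints 0 when no NVIDIA tooling is found.
vram_mib() {
  if command -v nvidia-smi >/dev/null 2>&1; then
    nvidia-smi --query-gpu=memory.total --format=csv,noheader,nounits | head -n 1
  else
    echo 0
  fi
}

if [ "$(vram_mib)" -ge 8192 ]; then
  echo "local models are viable"
else
  echo "consider a cloud provider"
fi
```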
## Next Steps
- Build your first agent -- Put your LLM to work
- LLM providers deep-dive -- Detailed comparison of all providers