# Codex Configuration

Reference · Intermediate · 5 min read · Verified Mar 8, 2026

Complete reference for configuring OpenAI Codex: config.toml settings, AGENTS.md project instructions, MCP servers, safety modes, and profiles.

Tags: codex, configuration, config, settings, toml


All Codex surfaces — CLI, IDE extension, and desktop app — share a single configuration file at ~/.codex/config.toml. This means you configure once and your settings apply everywhere.

## Configuration File Locations

| Location | Scope | Notes |
|----------|-------|-------|
| ~/.codex/config.toml | Global (user-level) | Primary configuration |
| .codex/config.toml | Project-scoped | Overrides global settings (trusted projects only) |
| requirements.toml | Organization-enforced | Admin restrictions on managed machines |

## Precedence Order (Highest First)

  1. CLI flags (-c key=value)
  2. Project config (.codex/config.toml, closest to CWD wins)
  3. Global config (~/.codex/config.toml)
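
To illustrate the precedence rules, suppose the global config pins one model and a trusted project overrides it. This is a sketch; the model values are reused from the examples in this document, not required names:

```toml
# ~/.codex/config.toml (global)
model = "gpt-5.4"
```

```toml
# .codex/config.toml (project-scoped, trusted project)
model = "gpt-5.3-codex"  # wins over the global value
```

A CLI flag such as `codex -c model="gpt-5.3-codex-spark" ...` would in turn override both files, since flags sit at the top of the precedence order.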

## Essential Settings

```toml
# Model selection
model = "gpt-5.4"

# Safety settings
approval_policy = "on-request"
sandbox_mode = "workspace-write"

# Web search
web_search = "cached"  # "cached" (default) or "live"

# Review model (separate from main model)
review_model = "gpt-5.3-codex"
```

## Key Configuration Areas

| Area | Description | Guide |
|------|-------------|-------|
| AGENTS.md | Project instructions and conventions | How to teach Codex your codebase |
| MCP Servers | External tool integrations | Connect Figma, databases, docs |
| Safety Modes | Approval and sandbox policies | Control what Codex can do |

## Profiles

Define named profiles for different workflows:

```toml
[profiles.deep-review]
model = "gpt-5.3-codex"
model_reasoning_effort = "high"
approval_policy = "on-request"

[profiles.quick-fix]
model = "gpt-5.3-codex-spark"
approval_policy = "never"
sandbox_mode = "workspace-write"
```

Switch profiles from the command line:

```shell
codex --profile deep-review "Review the auth module"
codex --profile quick-fix "Fix the typo in the README"
```

## Local and Open-Source Models

Codex supports local models via Ollama and any provider that supports the Chat Completions or Responses APIs:

```shell
# Use a local model via Ollama
codex -c model_provider="oss" -c model="codellama:34b"
```

```toml
# config.toml for custom provider
model_provider = "custom"
model = "your-model-name"
base_url = "http://localhost:11434/v1"
```
> **Info:** Local models may have reduced capabilities compared to GPT-5.x models. For best results, choose open-source models with strong instruction-following abilities.

## Project Trust

Project-scoped config files (.codex/config.toml) are only loaded when you trust the project. This prevents untrusted repositories from overriding your safety settings.
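
As an illustration, a trusted repository might ship a project-scoped config that applies the safety settings shown earlier to everyone working in it. This is a sketch; the values are illustrative, not a recommended policy:

```toml
# .codex/config.toml — loaded only after you trust this project
approval_policy = "on-request"
sandbox_mode = "workspace-write"
```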

## Environment Variables

| Variable | Purpose |
|----------|---------|
| CODEX_API_KEY | API key for authentication |
| CODEX_HOME | Override the config directory (default: ~/.codex) |
| OPENAI_API_KEY | Alternative API key variable |
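
A minimal sketch of setting these variables in a shell session; the directory and key value here are placeholders, not real credentials:

```shell
# Point Codex at an alternate config directory, e.g. for experiments
export CODEX_HOME="$HOME/.codex-test"

# Provide the API key via the environment instead of stored credentials
export CODEX_API_KEY="sk-example-placeholder"
```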

## Sample Configuration

For a complete annotated sample, see the official sample config.

## Next Steps