# =============================================================================
# OpenClaude Environment Configuration
# =============================================================================
# Copy this file to .env and fill in your values:
#   cp .env.example .env
#
# Only set the variables for the provider you want to use.
# All other sections can be left commented out.
# =============================================================================

# =============================================================================
# SYSTEM-WIDE SETUP (OPTIONAL)
# =============================================================================
# Instead of using a .env file per project, you can set these variables
# system-wide so OpenClaude works from any directory on your machine.
#
# STEP 1: Pick your provider variables from the list below.
# STEP 2: Set them using the method for your OS (see further down).
#
# ── Provider variables ───────────────────────────────────────────────
#
# Option 1 — Anthropic:
#   ANTHROPIC_API_KEY=sk-ant-your-key-here
#   ANTHROPIC_MODEL=claude-sonnet-4-5                          (optional)
#   ANTHROPIC_BASE_URL=https://api.anthropic.com               (optional)
#
# Option 2 — OpenAI:
#   CLAUDE_CODE_USE_OPENAI=1
#   OPENAI_API_KEY=sk-your-key-here
#   OPENAI_MODEL=gpt-4o
#   OPENAI_BASE_URL=https://api.openai.com/v1                  (optional)
#
# Option 3 — Google Gemini:
#   CLAUDE_CODE_USE_GEMINI=1
#   GEMINI_API_KEY=your-gemini-key-here
#   GEMINI_MODEL=gemini-2.0-flash
#   GEMINI_BASE_URL=https://generativelanguage.googleapis.com  (optional)
#
# Option 4 — GitHub Models:
#   CLAUDE_CODE_USE_GITHUB=1
#   GITHUB_TOKEN=ghp_your-token-here
#
# Option 5 — Ollama (local):
#   CLAUDE_CODE_USE_OPENAI=1
#   OPENAI_BASE_URL=http://localhost:11434/v1
#   OPENAI_API_KEY=ollama
#   OPENAI_MODEL=llama3.2
#
# Option 6 — LM Studio (local):
#   CLAUDE_CODE_USE_OPENAI=1
#   OPENAI_BASE_URL=http://localhost:1234/v1
#   OPENAI_MODEL=your-model-id-here
#   OPENAI_API_KEY=lmstudio                                    (optional)
#
# Option 7 — AWS Bedrock (may also need: aws configure):
#   CLAUDE_CODE_USE_BEDROCK=1
#   AWS_REGION=us-east-1
#   AWS_DEFAULT_REGION=us-east-1
#   AWS_BEARER_TOKEN_BEDROCK=your-bearer-token-here
#   ANTHROPIC_BEDROCK_BASE_URL=https://bedrock-runtime.us-east-1.amazonaws.com
#
# Option 8 — Google Vertex AI:
#   CLAUDE_CODE_USE_VERTEX=1
#   ANTHROPIC_VERTEX_PROJECT_ID=your-gcp-project-id
#   CLOUD_ML_REGION=us-east5
#   GOOGLE_CLOUD_PROJECT=your-gcp-project-id
#
# ── How to set variables on each OS ──────────────────────────────────
#
# macOS (zsh):
#   1. Open: nano ~/.zshrc
#   2. Add each variable as: export VAR_NAME=value
#   3. Save and reload: source ~/.zshrc
#
# Linux (bash):
#   1. Open: nano ~/.bashrc
#   2. Add each variable as: export VAR_NAME=value
#   3. Save and reload: source ~/.bashrc
#
# Windows (PowerShell):
#   Run for each variable:
#   [System.Environment]::SetEnvironmentVariable('VAR_NAME', 'value', 'User')
#   Then restart your terminal.
#
# Windows (Command Prompt):
#   Run for each variable:
#   setx VAR_NAME value
#   Then restart your terminal.
#
# Windows (GUI):
#   Settings > System > About > Advanced System Settings >
#   Environment Variables > under "User variables" click New,
#   then add each variable.
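# For example, on macOS the Anthropic variables from Option 1 can be
# persisted in one go (the key below is a placeholder — substitute your own):

```shell
# Append the exports to ~/.zshrc, then reload the shell config.
echo 'export ANTHROPIC_API_KEY=sk-ant-your-key-here' >> ~/.zshrc
echo 'export ANTHROPIC_MODEL=claude-sonnet-4-5' >> ~/.zshrc
source ~/.zshrc
```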
#
# ── Important notes ──────────────────────────────────────────────────
#
# LOCAL SERVERS: If using LM Studio or Ollama, the server MUST be
# running with a model loaded before you launch OpenClaude —
# otherwise you'll get connection errors.
#
# SWITCHING PROVIDERS: To temporarily switch, unset the relevant
# variables in your current terminal session:
#
#   macOS / Linux:
#     unset VAR_NAME
#     # e.g.: unset CLAUDE_CODE_USE_OPENAI OPENAI_BASE_URL OPENAI_MODEL
#
#   Windows (PowerShell — current session only):
#     Remove-Item Env:VAR_NAME
#
#   To permanently remove a variable on Windows:
#     [System.Environment]::SetEnvironmentVariable('VAR_NAME', $null, 'User')
#
# LOAD ORDER:
#   Shell and system environment variables are inherited by the process.
#   Project .env files are only used if your launcher or shell loads them
#   before starting OpenClaude.
#
# COMPATIBILITY:
#   System-wide variables work regardless of how you run OpenClaude:
#   npx, global npm install, bun run, or node directly. Any process
#   launched from your terminal inherits your shell's environment.
#
# REMINDER: Make sure .env is in your .gitignore to avoid committing secrets.
# =============================================================================

# =============================================================================
# PROVIDER SELECTION — uncomment ONE block below
# =============================================================================

# -----------------------------------------------------------------------------
# Option 1: Anthropic (default — no provider flag needed)
# -----------------------------------------------------------------------------
ANTHROPIC_API_KEY=sk-ant-your-key-here

# Override the default model (optional)
# ANTHROPIC_MODEL=claude-sonnet-4-5

# Use a custom Anthropic-compatible endpoint (optional)
# ANTHROPIC_BASE_URL=https://api.anthropic.com

# -----------------------------------------------------------------------------
# Option 2: OpenAI
# -----------------------------------------------------------------------------
# CLAUDE_CODE_USE_OPENAI=1
# OPENAI_API_KEY=sk-your-key-here
# OPENAI_MODEL=gpt-4o

# For DeepSeek, set:
# OPENAI_BASE_URL=https://api.deepseek.com/v1
# OPENAI_MODEL=deepseek-v4-flash
# Optional: OPENAI_MODEL=deepseek-v4-pro
# Legacy aliases also work: deepseek-chat and deepseek-reasoner

# For Z.AI GLM Coding Plan, set:
# OPENAI_BASE_URL=https://api.z.ai/api/coding/paas/v4
# OPENAI_MODEL=GLM-5.1
# Optional: OPENAI_MODEL=GLM-5-Turbo, GLM-4.7, or GLM-4.5-Air

# Use a custom OpenAI-compatible endpoint (optional — defaults to api.openai.com)
# OPENAI_BASE_URL=https://api.openai.com/v1

# Choose the OpenAI-compatible API surface (optional — defaults to chat_completions)
# Supported: chat_completions, responses
# OPENAI_API_FORMAT=chat_completions

# Choose a custom auth header for OpenAI-compatible providers (optional).
# Authorization defaults to Bearer; custom headers default to the raw API key.
# Set OPENAI_AUTH_HEADER_VALUE when the header value differs from OPENAI_API_KEY.
# OPENAI_AUTH_HEADER=api-key
# OPENAI_AUTH_SCHEME=raw
# OPENAI_AUTH_HEADER_VALUE=your-header-value-here
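# For instance, a gateway that expects the raw key in an "api-key" header
# (the Azure OpenAI convention) instead of "Authorization: Bearer ..." could
# be wired up as below — the base URL is a placeholder, not a real endpoint:

```shell
# Hypothetical OpenAI-compatible gateway using a raw "api-key" header.
# CLAUDE_CODE_USE_OPENAI=1
# OPENAI_BASE_URL=https://my-gateway.example.com/v1
# OPENAI_API_KEY=your-key-here
# OPENAI_AUTH_HEADER=api-key
# OPENAI_AUTH_SCHEME=raw
```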

# Fallback context window size (tokens) when the model is not found in the
# built-in table (default: 128000). Increase this for models with larger
# context windows (e.g. 200000 for Claude-sized contexts).
# CLAUDE_CODE_OPENAI_FALLBACK_CONTEXT_WINDOW=128000

# Per-model context window overrides as a JSON object.
# Takes precedence over the built-in table, so you can register new or
# custom models without patching source.
# Example: CLAUDE_CODE_OPENAI_CONTEXT_WINDOWS={"my-corp/llm-v3":262144,"gpt-4o-mini":128000}
# CLAUDE_CODE_OPENAI_CONTEXT_WINDOWS=

# Per-model maximum output token overrides as a JSON object.
# Use this alongside CLAUDE_CODE_OPENAI_CONTEXT_WINDOWS when your model
# supports a different output limit than what the built-in table specifies.
# Example: CLAUDE_CODE_OPENAI_MAX_OUTPUT_TOKENS={"my-corp/llm-v3":8192}
# CLAUDE_CODE_OPENAI_MAX_OUTPUT_TOKENS=
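# Both values must be a JSON object on a single line, and a malformed value is
# easy to miss. One way to sanity-check it before pasting (assumes python3 is
# on your PATH):

```shell
# Pretty-prints and exits 0 if the JSON parses; errors out otherwise.
echo '{"my-corp/llm-v3":262144,"gpt-4o-mini":128000}' | python3 -m json.tool
```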

# -----------------------------------------------------------------------------
# Option 3: Google Gemini
# -----------------------------------------------------------------------------
# CLAUDE_CODE_USE_GEMINI=1
# GEMINI_API_KEY=your-gemini-key-here
# GEMINI_MODEL=gemini-2.0-flash

# Use a custom Gemini endpoint (optional)
# GEMINI_BASE_URL=https://generativelanguage.googleapis.com/v1beta/openai

# -----------------------------------------------------------------------------
# Option 4: GitHub Models
# -----------------------------------------------------------------------------
# CLAUDE_CODE_USE_GITHUB=1
# GITHUB_TOKEN=ghp_your-token-here

# -----------------------------------------------------------------------------
# Option 5: Ollama (local models)
# -----------------------------------------------------------------------------
# CLAUDE_CODE_USE_OPENAI=1
# OPENAI_BASE_URL=http://localhost:11434/v1
# OPENAI_API_KEY=ollama
# OPENAI_MODEL=llama3.2

# -----------------------------------------------------------------------------
# Option 6: LM Studio (local models)
# -----------------------------------------------------------------------------
# LM Studio exposes an OpenAI-compatible API, so we use the OpenAI provider.
# Make sure LM Studio is running with the Developer server enabled
# (Developer tab > toggle server ON).
#
# Steps:
#   1. Download and install LM Studio from https://lmstudio.ai
#   2. Search for and download a model (e.g. any coding or instruct model)
#   3. Load the model and start the Developer server
#   4. Set OPENAI_MODEL to the model ID shown in LM Studio's Developer tab
#
# The default server URL is http://localhost:1234 — change the port below
# if you've configured a different one in LM Studio.
#
# OPENAI_API_KEY is optional — LM Studio runs locally and ignores it.
# Some clients require a non-empty value; if you get auth errors, set it
# to any dummy value (e.g. "lmstudio").
#
# CLAUDE_CODE_USE_OPENAI=1
# OPENAI_BASE_URL=http://localhost:1234/v1
# OPENAI_MODEL=your-model-id-here
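# Before launching OpenClaude against a local server, you can check that it is
# reachable and see the exact model IDs it exposes — both LM Studio and Ollama
# serve the standard OpenAI-style models endpoint (use port 11434 for Ollama):

```shell
# Lists available model IDs from a local OpenAI-compatible server.
curl -s http://localhost:1234/v1/models
```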

# -----------------------------------------------------------------------------
# Option 7: AWS Bedrock
# -----------------------------------------------------------------------------
# You may also need AWS CLI credentials configured (run: aws configure)
# or have AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY set in your
# environment in addition to the variables below.
#
# CLAUDE_CODE_USE_BEDROCK=1
# AWS_REGION=us-east-1
# AWS_DEFAULT_REGION=us-east-1
# AWS_BEARER_TOKEN_BEDROCK=your-bearer-token-here
# ANTHROPIC_BEDROCK_BASE_URL=https://bedrock-runtime.us-east-1.amazonaws.com

# -----------------------------------------------------------------------------
# Option 8: Google Vertex AI
# -----------------------------------------------------------------------------
# CLAUDE_CODE_USE_VERTEX=1
# ANTHROPIC_VERTEX_PROJECT_ID=your-gcp-project-id
# CLOUD_ML_REGION=us-east5
# GOOGLE_CLOUD_PROJECT=your-gcp-project-id

# -----------------------------------------------------------------------------
# Option 9: NVIDIA NIM
# -----------------------------------------------------------------------------
# NVIDIA NIM provides hosted inference endpoints for NVIDIA models.
# Get your API key from https://build.nvidia.com/
#
# CLAUDE_CODE_USE_OPENAI=1
# NVIDIA_API_KEY=nvapi-your-key-here
# OPENAI_BASE_URL=https://integrate.api.nvidia.com/v1
# OPENAI_MODEL=nvidia/llama-3.1-nemotron-70b-instruct

# -----------------------------------------------------------------------------
# Option 10: MiniMax
# -----------------------------------------------------------------------------
# MiniMax API provides text generation models.
# Get your API key from https://platform.minimax.io/
#
# CLAUDE_CODE_USE_OPENAI=1
# MINIMAX_API_KEY=your-minimax-key-here
# OPENAI_BASE_URL=https://api.minimax.io/v1
# OPENAI_MODEL=MiniMax-M2.5

# =============================================================================
# OPTIONAL TUNING
# =============================================================================

# Max number of API retries on failure (default: 10)
# CLAUDE_CODE_MAX_RETRIES=10

# Enable persistent retry mode for unattended/CI sessions.
# Retries 429/529 indefinitely with smart backoff.
# CLAUDE_CODE_UNATTENDED_RETRY=1

# Enable extended key reporting (Kitty keyboard protocol).
# Useful for iTerm2, WezTerm, Ghostty if modifier keys feel off.
# OPENCLAUDE_ENABLE_EXTENDED_KEYS=1

# Disable "Co-authored-by" line in git commits made by OpenClaude.
# OPENCLAUDE_DISABLE_CO_AUTHORED_BY=1

# Disable strict tool schema normalization for non-Gemini providers.
# Useful when MCP tools with complex optional params (e.g. list[dict])
# trigger "Extra required key ... supplied" errors from OpenAI-compatible endpoints.
# OPENCLAUDE_DISABLE_STRICT_TOOLS=1

# Disable hidden <system-reminder> messages injected into tool output.
# Suppresses the file-read cyber-risk reminder and the todo/task tool nudges.
# Useful for users who want full transparency over what the model sees.
# OPENCLAUDE_DISABLE_TOOL_REMINDERS=1

# Log structured per-request token usage (including cache metrics) to stderr.
# Useful for auditing cache hit rate / debugging cost spikes outside the REPL.
# Any truthy value enables it ("verbose", "1", "true").
#
# Complements (does NOT replace) CLAUDE_CODE_ENABLE_TOKEN_USAGE_ATTACHMENT —
# they serve different audiences:
#   - OPENCLAUDE_LOG_TOKEN_USAGE is user-facing: one JSON line per API
#     request on stderr, intended for humans inspecting cost/caching.
#   - CLAUDE_CODE_ENABLE_TOKEN_USAGE_ATTACHMENT is model-facing: injects
#     a context-usage attachment INTO the prompt so the model can reason
#     about its own remaining context. Does not touch stderr.
# Turn on whichever audience you're debugging; both can run together.
# OPENCLAUDE_LOG_TOKEN_USAGE=verbose

# Custom timeout for API requests in milliseconds (default: varies)
# API_TIMEOUT_MS=60000

# Enable debug logging
# CLAUDE_DEBUG=1

# =============================================================================
# WEB SEARCH (OPTIONAL)
# =============================================================================
# OpenClaude includes a web search tool. By default it uses DuckDuckGo (free)
# or the provider's native search (Anthropic firstParty / vertex).
#
# Set one API key below to enable a provider. That's it.

# ── Provider API keys — set ONE of these ────────────────────────────

# Tavily (AI-optimized search, recommended)
# TAVILY_API_KEY=tvly-your-key-here

# Exa (neural/semantic search)
# EXA_API_KEY=your-exa-key-here

# You.com (RAG-ready snippets)
# YOU_API_KEY=your-you-key-here

# Jina (s.jina.ai endpoint)
# JINA_API_KEY=your-jina-key-here

# Bing Web Search
# BING_API_KEY=your-bing-key-here

# Mojeek (privacy-focused)
# MOJEEK_API_KEY=your-mojeek-key-here

# Linkup
# LINKUP_API_KEY=your-linkup-key-here

# Firecrawl (premium, uses @mendable/firecrawl-js)
# FIRECRAWL_API_KEY=fc-your-key-here

# ── Provider selection mode ─────────────────────────────────────────
#
# WEB_SEARCH_PROVIDER controls fallback behavior:
#
#   "auto" (default) — try all configured providers, fall through on failure
#   "custom"         — custom API only, throw on failure (NOT in auto chain)
#   "firecrawl"      — firecrawl only
#   "tavily"         — tavily only
#   "exa"            — exa only
#   "you"            — you.com only
#   "jina"           — jina only
#   "bing"           — bing only
#   "mojeek"         — mojeek only
#   "linkup"         — linkup only
#   "ddg"            — duckduckgo only
#   "native"         — anthropic native / codex only
#
# Auto mode priority: firecrawl → tavily → exa → you → jina → bing → mojeek →
# linkup → ddg
# Note: "custom" is NOT in the auto chain. To use the custom API provider,
# you must explicitly set WEB_SEARCH_PROVIDER=custom.
#
# WEB_SEARCH_PROVIDER=auto

# ── Built-in custom API presets ─────────────────────────────────────
#
# Use with WEB_KEY for the API key:
# WEB_PROVIDER=searxng|google|brave|serpapi
# WEB_KEY=your-api-key-here

# ── Custom API endpoint (advanced) ──────────────────────────────────
#
# WEB_SEARCH_API    — base URL of your search endpoint
# WEB_QUERY_PARAM   — query parameter name (default: "q")
# WEB_METHOD        — GET or POST (default: GET)
# WEB_PARAMS        — extra static query params as JSON: {"lang":"en","count":"10"}
# WEB_URL_TEMPLATE  — URL template with {query} for path embedding
# WEB_BODY_TEMPLATE — custom POST body with {query} placeholder
# WEB_AUTH_HEADER   — header name for API key (default: "Authorization")
# WEB_AUTH_SCHEME   — prefix before key (default: "Bearer")
# WEB_HEADERS       — extra headers as "Name: value; Name2: value2"
# WEB_JSON_PATH     — dot-path to results array in response

# ── Custom API security guardrails ──────────────────────────────────
#
# The custom provider enforces security guardrails by default.
# Override these only if you understand the risks.
#
# WEB_CUSTOM_TIMEOUT_SEC=15                — request timeout in seconds (default 15)
# WEB_CUSTOM_MAX_BODY_KB=300               — max POST body size in KB (default 300)
# WEB_CUSTOM_ALLOW_ARBITRARY_HEADERS=false — set "true" to use non-standard headers
# WEB_CUSTOM_ALLOW_HTTP=false              — set "true" to allow http:// URLs
# WEB_CUSTOM_ALLOW_PRIVATE=false           — set "true" to target localhost/private IPs
#                                            (needed for self-hosted SearXNG)