* Added instructions to the env example so OpenClaude can be used system-wide
* Applied the .env.example changes suggested earlier in the PR thread
# =============================================================================
# OpenClaude Environment Configuration
# =============================================================================
# Copy this file to .env and fill in your values:
#   cp .env.example .env
#
# Only set the variables for the provider you want to use.
# All other sections can be left commented out.
# =============================================================================

# =============================================================================
# SYSTEM-WIDE SETUP (OPTIONAL)
# =============================================================================
# Instead of using a .env file per project, you can set these variables
# system-wide so OpenClaude works from any directory on your machine.
#
# STEP 1: Pick your provider variables from the list below.
# STEP 2: Set them using the method for your OS (see further down).
#
# ── Provider variables ───────────────────────────────────────────────
#
# Option 1 — Anthropic:
#   ANTHROPIC_API_KEY=sk-ant-your-key-here
#   ANTHROPIC_MODEL=claude-sonnet-4-5              (optional)
#   ANTHROPIC_BASE_URL=https://api.anthropic.com   (optional)
#
# Option 2 — OpenAI:
#   CLAUDE_CODE_USE_OPENAI=1
#   OPENAI_API_KEY=sk-your-key-here
#   OPENAI_MODEL=gpt-4o
#   OPENAI_BASE_URL=https://api.openai.com/v1      (optional)
#
# Option 3 — Google Gemini:
#   CLAUDE_CODE_USE_GEMINI=1
#   GEMINI_API_KEY=your-gemini-key-here
#   GEMINI_MODEL=gemini-2.0-flash
#   GEMINI_BASE_URL=https://generativelanguage.googleapis.com   (optional)
#
# Option 4 — GitHub Models:
#   CLAUDE_CODE_USE_GITHUB=1
#   GITHUB_TOKEN=ghp_your-token-here
#
# Option 5 — Ollama (local):
#   CLAUDE_CODE_USE_OPENAI=1
#   OPENAI_BASE_URL=http://localhost:11434/v1
#   OPENAI_API_KEY=ollama
#   OPENAI_MODEL=llama3.2
#
# Option 6 — LM Studio (local):
#   CLAUDE_CODE_USE_OPENAI=1
#   OPENAI_BASE_URL=http://localhost:1234/v1
#   OPENAI_MODEL=your-model-id-here
#   OPENAI_API_KEY=lmstudio   (optional)
#
# Option 7 — AWS Bedrock (may also need: aws configure):
#   CLAUDE_CODE_USE_BEDROCK=1
#   AWS_REGION=us-east-1
#   AWS_DEFAULT_REGION=us-east-1
#   AWS_BEARER_TOKEN_BEDROCK=your-bearer-token-here
#   ANTHROPIC_BEDROCK_BASE_URL=https://bedrock-runtime.us-east-1.amazonaws.com
#
# Option 8 — Google Vertex AI:
#   CLAUDE_CODE_USE_VERTEX=1
#   ANTHROPIC_VERTEX_PROJECT_ID=your-gcp-project-id
#   CLOUD_ML_REGION=us-east5
#   GOOGLE_CLOUD_PROJECT=your-gcp-project-id
#
# ── How to set variables on each OS ──────────────────────────────────
#
# macOS (zsh):
#   1. Open: nano ~/.zshrc
#   2. Add each variable as: export VAR_NAME=value
#   3. Save and reload: source ~/.zshrc
#
# Linux (bash):
#   1. Open: nano ~/.bashrc
#   2. Add each variable as: export VAR_NAME=value
#   3. Save and reload: source ~/.bashrc
#
# Windows (PowerShell):
#   Run for each variable:
#     [System.Environment]::SetEnvironmentVariable('VAR_NAME', 'value', 'User')
#   Then restart your terminal.
#
# Windows (Command Prompt):
#   Run for each variable:
#     setx VAR_NAME value
#   Then restart your terminal.
#
# Windows (GUI):
#   Settings > System > About > Advanced System Settings >
#   Environment Variables > under "User variables" click New,
#   then add each variable.
#
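The macOS/Linux flow above can be sketched as a short shell session. A temporary file stands in for ~/.zshrc or ~/.bashrc so nothing on your machine is modified, and the API key is the placeholder value from this file:

```shell
# Append an export line to the rc file (a temp file here, standing in
# for ~/.zshrc or ~/.bashrc), then reload it so the variable is
# available in the current session.
rc="$(mktemp)"
printf 'export ANTHROPIC_API_KEY=sk-ant-your-key-here\n' >> "$rc"
. "$rc"    # same effect as: source ~/.zshrc
echo "ANTHROPIC_API_KEY is now: $ANTHROPIC_API_KEY"
rm -f "$rc"
```

After sourcing, the variable is exported, so every process started from that shell inherits it.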
# ── Important notes ──────────────────────────────────────────────────
#
# LOCAL SERVERS: If using LM Studio or Ollama, the server MUST be
# running with a model loaded before you launch OpenClaude —
# otherwise you'll get connection errors.
#
# SWITCHING PROVIDERS: To temporarily switch, unset the relevant
# variables in your current terminal session:
#
#   macOS / Linux:
#     unset VAR_NAME
#     # e.g.: unset CLAUDE_CODE_USE_OPENAI OPENAI_BASE_URL OPENAI_MODEL
#
#   Windows (PowerShell — current session only):
#     Remove-Item Env:VAR_NAME
#
#   To permanently remove a variable on Windows:
#     [System.Environment]::SetEnvironmentVariable('VAR_NAME', $null, 'User')
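The session-only switch described above can be demonstrated end to end; this is safe to run since it only touches the current shell, and the values are placeholders:

```shell
# Simulate an active OpenAI-provider session...
export CLAUDE_CODE_USE_OPENAI=1
export OPENAI_MODEL=gpt-4o

# ...then switch away by unsetting the flags (session-only, nothing persisted).
unset CLAUDE_CODE_USE_OPENAI OPENAI_MODEL
echo "flag:  ${CLAUDE_CODE_USE_OPENAI:-<unset>}"
echo "model: ${OPENAI_MODEL:-<unset>}"
```

Opening a new terminal (or re-sourcing your rc file) restores whatever is set system-wide.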
#
# LOAD ORDER:
#   Shell and system environment variables are inherited by the process.
#   Project .env files are only used if your launcher or shell loads them
#   before starting OpenClaude.
#
# COMPATIBILITY:
#   System-wide variables work regardless of how you run OpenClaude:
#   npx, global npm install, bun run, or node directly. Any process
#   launched from your terminal inherits your shell's environment.
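The inheritance behaviour described under COMPATIBILITY is easy to verify: any child process started from the shell sees exported variables, the same way npx, bun, or node would.

```shell
# Export a variable, then read it back from a freshly spawned child process.
export OPENAI_MODEL=llama3.2
child_view="$(sh -c 'printf %s "$OPENAI_MODEL"')"
echo "child process sees: $child_view"
```

This is why system-wide exports need no per-launcher configuration.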
#
# REMINDER: Make sure .env is in your .gitignore to avoid committing secrets.
# =============================================================================

# =============================================================================
# PROVIDER SELECTION — uncomment ONE block below
# =============================================================================
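Before uncommenting a block, it can help to know whether a provider flag is already set system-wide. A hypothetical audit loop (the loop and variable names are mine, not part of OpenClaude); the demo export at the top exists only so the loop finds something, and should be removed when auditing a real setup:

```shell
# Demo only: pretend the Gemini flag is already set system-wide.
export CLAUDE_CODE_USE_GEMINI=1

# Collect every provider flag present in the environment.
active=""
for flag in CLAUDE_CODE_USE_OPENAI CLAUDE_CODE_USE_GEMINI CLAUDE_CODE_USE_GITHUB \
            CLAUDE_CODE_USE_BEDROCK CLAUDE_CODE_USE_VERTEX; do
  if [ -n "$(printenv "$flag")" ]; then
    active="$active $flag"
  fi
done
echo "active provider flags:${active:- none}"
```

If a flag shows up that you did not expect, unset it (or remove it from your rc file) before enabling a different block below.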
# -----------------------------------------------------------------------------
# Option 1: Anthropic (default — no provider flag needed)
# -----------------------------------------------------------------------------
ANTHROPIC_API_KEY=sk-ant-your-key-here

# Override the default model (optional)
# ANTHROPIC_MODEL=claude-sonnet-4-5

# Use a custom Anthropic-compatible endpoint (optional)
# ANTHROPIC_BASE_URL=https://api.anthropic.com


# -----------------------------------------------------------------------------
# Option 2: OpenAI
# -----------------------------------------------------------------------------
# CLAUDE_CODE_USE_OPENAI=1
# OPENAI_API_KEY=sk-your-key-here
# OPENAI_MODEL=gpt-4o

# Use a custom OpenAI-compatible endpoint (optional — defaults to api.openai.com)
# OPENAI_BASE_URL=https://api.openai.com/v1


# -----------------------------------------------------------------------------
# Option 3: Google Gemini
# -----------------------------------------------------------------------------
# CLAUDE_CODE_USE_GEMINI=1
# GEMINI_API_KEY=your-gemini-key-here
# GEMINI_MODEL=gemini-2.0-flash

# Use a custom Gemini endpoint (optional)
# GEMINI_BASE_URL=https://generativelanguage.googleapis.com/v1beta/openai


# -----------------------------------------------------------------------------
# Option 4: GitHub Models
# -----------------------------------------------------------------------------
# CLAUDE_CODE_USE_GITHUB=1
# GITHUB_TOKEN=ghp_your-token-here


# -----------------------------------------------------------------------------
# Option 5: Ollama (local models)
# -----------------------------------------------------------------------------
# CLAUDE_CODE_USE_OPENAI=1
# OPENAI_BASE_URL=http://localhost:11434/v1
# OPENAI_API_KEY=ollama
# OPENAI_MODEL=llama3.2
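Per the LOCAL SERVERS note above, Ollama must be running before launch. A pre-flight sketch, assuming Ollama's OpenAI-compatible /v1/models listing endpoint (swap the port for LM Studio); the HEALTH_URL name is mine:

```shell
# Build a health-check URL from OPENAI_BASE_URL, falling back to
# Ollama's default, then probe it. Prints a hint instead of failing hard.
OPENAI_BASE_URL="${OPENAI_BASE_URL:-http://localhost:11434/v1}"
HEALTH_URL="$OPENAI_BASE_URL/models"

if command -v curl >/dev/null 2>&1 && curl -fsS "$HEALTH_URL" >/dev/null 2>&1; then
  echo "server reachable at $HEALTH_URL"
else
  echo "no server at $HEALTH_URL; start it first (e.g. 'ollama serve')"
fi
```

Running this before launching OpenClaude turns a vague connection error into an actionable message.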

# -----------------------------------------------------------------------------
# Option 6: LM Studio (local models)
# -----------------------------------------------------------------------------
# LM Studio exposes an OpenAI-compatible API, so we use the OpenAI provider.
# Make sure LM Studio is running with the Developer server enabled
# (Developer tab > toggle server ON).
#
# Steps:
#   1. Download and install LM Studio from https://lmstudio.ai
#   2. Search for and download a model (e.g. any coding or instruct model)
#   3. Load the model and start the Developer server
#   4. Set OPENAI_MODEL to the model ID shown in LM Studio's Developer tab
#
# The default server URL is http://localhost:1234 — change the port below
# if you've configured a different one in LM Studio.
#
# OPENAI_API_KEY is optional — LM Studio runs locally and ignores it.
# Some clients require a non-empty value; if you get auth errors, set it
# to any dummy value (e.g. "lmstudio").
#
# CLAUDE_CODE_USE_OPENAI=1
# OPENAI_BASE_URL=http://localhost:1234/v1
# OPENAI_MODEL=your-model-id-here


# -----------------------------------------------------------------------------
# Option 7: AWS Bedrock
# -----------------------------------------------------------------------------
# You may also need AWS CLI credentials configured (run: aws configure)
# or have AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY set in your
# environment in addition to the variables below.
#
# CLAUDE_CODE_USE_BEDROCK=1
# AWS_REGION=us-east-1
# AWS_DEFAULT_REGION=us-east-1
# AWS_BEARER_TOKEN_BEDROCK=your-bearer-token-here
# ANTHROPIC_BEDROCK_BASE_URL=https://bedrock-runtime.us-east-1.amazonaws.com


# -----------------------------------------------------------------------------
# Option 8: Google Vertex AI
# -----------------------------------------------------------------------------
# CLAUDE_CODE_USE_VERTEX=1
# ANTHROPIC_VERTEX_PROJECT_ID=your-gcp-project-id
# CLOUD_ML_REGION=us-east5
# GOOGLE_CLOUD_PROJECT=your-gcp-project-id


# =============================================================================
# OPTIONAL TUNING
# =============================================================================

# Max number of API retries on failure (default: 10)
# CLAUDE_CODE_MAX_RETRIES=10

# Enable persistent retry mode for unattended/CI sessions
# Retries 429/529 indefinitely with smart backoff
# CLAUDE_CODE_UNATTENDED_RETRY=1

# Enable extended key reporting (Kitty keyboard protocol)
# Useful for iTerm2, WezTerm, Ghostty if modifier keys feel off
# OPENCLAUDE_ENABLE_EXTENDED_KEYS=1

# Disable "Co-authored-by" line in git commits made by OpenClaude
# OPENCLAUDE_DISABLE_CO_AUTHORED_BY=1

# Custom timeout for API requests in milliseconds (default: varies)
# API_TIMEOUT_MS=60000

# Enable debug logging
# CLAUDE_DEBUG=1