orcs-code/docs/advanced-setup.md
gnanam1990 aac326fa3f docs(setup): add beginner and advanced guides
Split the setup documentation into a simple beginner path and a separate advanced path. Add OS-specific quick starts for Windows and macOS/Linux so non-technical users can copy and paste the right commands without sorting through Bun and source-build instructions.
2026-04-02 18:09:04 +05:30


OpenClaude Advanced Setup

This guide is for users who want source builds, Bun workflows, provider profiles, diagnostics, or more control over runtime behavior.

Install Options

Option A: npm

npm install -g @gitlawb/openclaude

Option B: From source with Bun

Use Bun 1.3.11 or newer for source builds on Windows. Older Bun versions can fail during bun run build.
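If you want to enforce that minimum before attempting a build, a small shell guard (our own sketch, not part of OpenClaude) can compare versions with sort -V:

```shell
# version_ok MIN CURRENT — succeeds when CURRENT >= MIN.
# sort -V orders version strings; if MIN sorts first (or ties),
# the current version meets the minimum.
version_ok() {
  [ "$(printf '%s\n' "$1" "$2" | sort -V | head -n1)" = "$1" ]
}

# Check against the documented minimum for Windows source builds:
version_ok 1.3.11 "$(bun --version 2>/dev/null || echo 0)" \
  && echo "Bun is new enough" \
  || echo "Bun too old or missing; upgrade with: bun upgrade"
```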

git clone https://node.gitlawb.com/z6MkqDnb7Siv3Cwj7pGJq4T5EsUisECqR8KpnDLwcaZq5TPr/openclaude.git
cd openclaude

bun install
bun run build
npm link

Option C: Run directly with Bun

git clone https://node.gitlawb.com/z6MkqDnb7Siv3Cwj7pGJq4T5EsUisECqR8KpnDLwcaZq5TPr/openclaude.git
cd openclaude

bun install
bun run dev

Provider Examples

OpenAI

export CLAUDE_CODE_USE_OPENAI=1
export OPENAI_API_KEY=sk-...
export OPENAI_MODEL=gpt-4o

Codex via ChatGPT auth

codexplan maps to GPT-5.4 on the Codex backend with high reasoning. codexspark maps to GPT-5.3 Codex Spark for faster loops.

If you already use the Codex CLI, OpenClaude reads ~/.codex/auth.json automatically. You can also point it elsewhere with CODEX_AUTH_JSON_PATH or override the token directly with CODEX_API_KEY.
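The lookup order implied above can be sketched with ordinary shell defaulting (an illustration of the documented behavior, not OpenClaude's actual code): a direct CODEX_API_KEY wins, then CODEX_AUTH_JSON_PATH, then the Codex CLI default location.

```shell
# Illustration only: resolve the auth source the way the docs describe.
CODEX_API_KEY=""                              # no direct token override
CODEX_AUTH_JSON_PATH="/tmp/codex-auth.json"   # hypothetical custom path
default_auth="$HOME/.codex/auth.json"         # Codex CLI default

auth_source="${CODEX_API_KEY:-${CODEX_AUTH_JSON_PATH:-$default_auth}}"
echo "resolved auth source: $auth_source"
```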

export CLAUDE_CODE_USE_OPENAI=1
export OPENAI_MODEL=codexplan

# optional if you do not already have ~/.codex/auth.json
export CODEX_API_KEY=...

openclaude

DeepSeek

export CLAUDE_CODE_USE_OPENAI=1
export OPENAI_API_KEY=sk-...
export OPENAI_BASE_URL=https://api.deepseek.com/v1
export OPENAI_MODEL=deepseek-chat

Google Gemini via OpenRouter

export CLAUDE_CODE_USE_OPENAI=1
export OPENAI_API_KEY=sk-or-...
export OPENAI_BASE_URL=https://openrouter.ai/api/v1
export OPENAI_MODEL=google/gemini-2.0-flash-001

OpenRouter model availability changes over time. If a model stops working, try another current OpenRouter model before assuming the integration is broken.

Ollama

ollama pull llama3.3:70b

export CLAUDE_CODE_USE_OPENAI=1
export OPENAI_BASE_URL=http://localhost:11434/v1
export OPENAI_MODEL=llama3.3:70b

LM Studio

export CLAUDE_CODE_USE_OPENAI=1
export OPENAI_BASE_URL=http://localhost:1234/v1
export OPENAI_MODEL=your-model-name

Together AI

export CLAUDE_CODE_USE_OPENAI=1
export OPENAI_API_KEY=...
export OPENAI_BASE_URL=https://api.together.xyz/v1
export OPENAI_MODEL=meta-llama/Llama-3.3-70B-Instruct-Turbo

Groq

export CLAUDE_CODE_USE_OPENAI=1
export OPENAI_API_KEY=gsk_...
export OPENAI_BASE_URL=https://api.groq.com/openai/v1
export OPENAI_MODEL=llama-3.3-70b-versatile

Mistral

export CLAUDE_CODE_USE_OPENAI=1
export OPENAI_API_KEY=...
export OPENAI_BASE_URL=https://api.mistral.ai/v1
export OPENAI_MODEL=mistral-large-latest

Azure OpenAI

export CLAUDE_CODE_USE_OPENAI=1
export OPENAI_API_KEY=your-azure-key
export OPENAI_BASE_URL=https://your-resource.openai.azure.com/openai/deployments/your-deployment/v1
export OPENAI_MODEL=gpt-4o

Environment Variables

Variable                             Required     Description
CLAUDE_CODE_USE_OPENAI               Yes          Set to 1 to enable the OpenAI provider
OPENAI_API_KEY                       Yes*         Your API key (* not needed for local models like Ollama)
OPENAI_MODEL                         Yes          Model name such as gpt-4o, deepseek-chat, or llama3.3:70b
OPENAI_BASE_URL                      No           API endpoint; defaults to https://api.openai.com/v1
CODEX_API_KEY                        Codex only   Codex or ChatGPT access token override
CODEX_AUTH_JSON_PATH                 Codex only   Path to a Codex CLI auth.json file
CODEX_HOME                           Codex only   Alternative Codex home directory
OPENCLAUDE_DISABLE_CO_AUTHORED_BY    No           Suppress the default Co-Authored-By trailer in generated git commits

You can also set ANTHROPIC_MODEL to override the model name; when both are set, OPENAI_MODEL takes priority.
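That precedence matches ordinary shell defaulting (our sketch; the placeholder model names are illustrative):

```shell
# OPENAI_MODEL takes priority; ANTHROPIC_MODEL is the fallback override.
ANTHROPIC_MODEL="your-anthropic-model"   # illustrative placeholder
OPENAI_MODEL="gpt-4o"

effective_model="${OPENAI_MODEL:-$ANTHROPIC_MODEL}"
echo "effective model: $effective_model"
```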

Runtime Hardening

Use these commands to validate your setup and catch mistakes early:

# quick startup sanity check
bun run smoke

# validate provider env + reachability
bun run doctor:runtime

# print machine-readable runtime diagnostics
bun run doctor:runtime:json

# persist a diagnostics report to reports/doctor-runtime.json
bun run doctor:report

# full local hardening check (smoke + runtime doctor)
bun run hardening:check

# strict hardening (includes project-wide typecheck)
bun run hardening:strict

Notes:

  • doctor:runtime fails fast when CLAUDE_CODE_USE_OPENAI=1 is set with a placeholder key, or with no key at all for a non-local provider.
  • Local providers such as http://localhost:11434/v1 can run without OPENAI_API_KEY.
  • Codex profiles validate CODEX_API_KEY or the Codex CLI auth file and probe POST /responses instead of GET /models.
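To see what the reachability check targets, you can build the probe URL by hand (a sketch based on the notes above; the path is the standard OpenAI-compatible list-models endpoint, not OpenClaude's actual implementation):

```shell
# Illustration: the endpoint doctor:runtime is described as probing.
# Non-Codex providers get GET /models; Codex profiles are probed
# via POST /responses instead.
OPENAI_BASE_URL="http://localhost:11434/v1"   # example: local Ollama
base="${OPENAI_BASE_URL:-https://api.openai.com/v1}"

probe_url="$base/models"
echo "would probe: $probe_url"
```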

Provider Launch Profiles

Use profile launchers to avoid repeated environment setup:

# one-time profile bootstrap (prefer viable local Ollama, otherwise OpenAI)
bun run profile:init

# preview the best provider/model for your goal
bun run profile:recommend -- --goal coding --benchmark

# auto-apply the best available local/openai provider/model for your goal
bun run profile:auto -- --goal latency

# codex bootstrap (defaults to codexplan and ~/.codex/auth.json)
bun run profile:codex

# openai bootstrap with explicit key
bun run profile:init -- --provider openai --api-key sk-...

# ollama bootstrap with custom model
bun run profile:init -- --provider ollama --model llama3.1:8b

# ollama bootstrap with intelligent model auto-selection
bun run profile:init -- --provider ollama --goal coding

# codex bootstrap with a fast model alias
bun run profile:init -- --provider codex --model codexspark

# launch using persisted profile (.openclaude-profile.json)
bun run dev:profile

# codex profile (uses CODEX_API_KEY or ~/.codex/auth.json)
bun run dev:codex

# OpenAI profile (requires OPENAI_API_KEY in your shell)
bun run dev:openai

# Ollama profile (defaults: localhost:11434, llama3.1:8b)
bun run dev:ollama

profile:recommend ranks installed Ollama models for the latency, balanced, or coding goals, and profile:auto can persist the recommendation directly.

If no profile exists yet, dev:profile uses the same goal-aware defaults when picking the initial model.

Use --provider ollama when you want a local-only path. Auto mode falls back to OpenAI when no viable local chat model is installed.

Use profile:codex or --provider codex when you want the ChatGPT Codex backend.

dev:openai, dev:ollama, and dev:codex run doctor:runtime first and only launch the app if checks pass.

For dev:ollama, make sure Ollama is running locally before launch.
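A quick pre-flight probe (our own helper, not something dev:ollama provides) strips the /v1 API suffix off the base URL to get the server root, which a running Ollama instance answers on:

```shell
# Derive the Ollama server root from an OpenAI-compatible base URL.
OPENAI_BASE_URL="http://localhost:11434/v1"
ollama_root="${OPENAI_BASE_URL%/v1}"   # drop the /v1 API suffix
echo "server root: $ollama_root"

# Usage (commented out because it needs a live server):
# curl -sf "$ollama_root" >/dev/null || echo "start Ollama with: ollama serve"
```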