Merge pull request #149 from gnanam1990/docs/non-technical-setup-guide

docs: split beginner and advanced setup guides

This commit is contained in:

README.md (325 lines changed)
@@ -8,308 +8,99 @@ All of Claude Code's tools work — bash, file read/write/edit, grep, glob, agen
## Start Here

If you are new to terminals or just want the easiest path, start with the beginner guides:

- [Non-Technical Setup](docs/non-technical-setup.md)
- [Windows Quick Start](docs/quick-start-windows.md)
- [macOS / Linux Quick Start](docs/quick-start-mac-linux.md)

If you want source builds, Bun workflows, profile launchers, or full provider examples, use:

- [Advanced Setup](docs/advanced-setup.md)

---

## Beginner Install

For most users, install the npm package:

```bash
npm install -g @gitlawb/openclaude
```

The package name is `@gitlawb/openclaude`, but the command you run is:

```bash
openclaude
```

If you install via npm and later see `ripgrep not found`, install ripgrep system-wide and confirm `rg --version` works in the same terminal before starting OpenClaude.

---

## Fastest Setup

### Windows PowerShell

```powershell
npm install -g @gitlawb/openclaude

$env:CLAUDE_CODE_USE_OPENAI="1"
$env:OPENAI_API_KEY="sk-your-key-here"
$env:OPENAI_MODEL="gpt-4o"

openclaude
```

### macOS / Linux

```bash
npm install -g @gitlawb/openclaude

export CLAUDE_CODE_USE_OPENAI=1
export OPENAI_API_KEY=sk-your-key-here
export OPENAI_MODEL=gpt-4o

openclaude
```

That is enough to start with OpenAI.

The npm package name is `@gitlawb/openclaude`, but the installed CLI command is still `openclaude`.
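Before launching, you can sanity-check that the three variables are actually set in the current shell. This is a hypothetical helper, not part of OpenClaude; adjust the variable list if your provider differs:

```shell
#!/bin/sh
# Hypothetical pre-flight check: confirm the variables OpenClaude needs
# are non-empty in this shell before running `openclaude`.
check_env() {
  missing=""
  for var in CLAUDE_CODE_USE_OPENAI OPENAI_API_KEY OPENAI_MODEL; do
    eval "val=\${$var}"
    [ -n "$val" ] || missing="$missing $var"
  done
  if [ -n "$missing" ]; then
    echo "missing:$missing"
    return 1
  fi
  echo "ok"
}
```

Typical use: `check_env && openclaude`.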
---

## Choose Your Guide

### Beginner

- Want the easiest setup with copy-paste steps: [Non-Technical Setup](docs/non-technical-setup.md)
- On Windows: [Windows Quick Start](docs/quick-start-windows.md)
- On macOS or Linux: [macOS / Linux Quick Start](docs/quick-start-mac-linux.md)

### Advanced

- Want source builds, Bun, local profiles, runtime checks, or more provider choices: [Advanced Setup](docs/advanced-setup.md)

---

## Common Beginner Choices

### OpenAI

Best default if you already have an OpenAI API key.

### Ollama

Best if you want to run models locally on your own machine.

### Codex

Best if you already use the Codex CLI or ChatGPT Codex backend.

### Atomic Chat

Best if you want local inference on Apple Silicon with Atomic Chat. See [Advanced Setup](docs/advanced-setup.md).

---
docs/advanced-setup.md (new file, 262 lines)
@@ -0,0 +1,262 @@
# OpenClaude Advanced Setup
This guide is for users who want source builds, Bun workflows, provider profiles, diagnostics, or more control over runtime behavior.

## Install Options

### Option A: npm

```bash
npm install -g @gitlawb/openclaude
```

### Option B: From source with Bun

Use Bun `1.3.11` or newer for source builds on Windows. Older Bun versions can fail during `bun run build`.

```bash
git clone https://node.gitlawb.com/z6MkqDnb7Siv3Cwj7pGJq4T5EsUisECqR8KpnDLwcaZq5TPr/openclaude.git
cd openclaude

bun install
bun run build
npm link
```

### Option C: Run directly with Bun

```bash
git clone https://node.gitlawb.com/z6MkqDnb7Siv3Cwj7pGJq4T5EsUisECqR8KpnDLwcaZq5TPr/openclaude.git
cd openclaude

bun install
bun run dev
```

## Provider Examples

### OpenAI

```bash
export CLAUDE_CODE_USE_OPENAI=1
export OPENAI_API_KEY=sk-...
export OPENAI_MODEL=gpt-4o
```

### Codex via ChatGPT auth

`codexplan` maps to GPT-5.4 on the Codex backend with high reasoning. `codexspark` maps to GPT-5.3 Codex Spark for faster loops.

If you already use the Codex CLI, OpenClaude reads `~/.codex/auth.json` automatically. You can also point it elsewhere with `CODEX_AUTH_JSON_PATH` or override the token directly with `CODEX_API_KEY`.

```bash
export CLAUDE_CODE_USE_OPENAI=1
export OPENAI_MODEL=codexplan

# optional if you do not already have ~/.codex/auth.json
export CODEX_API_KEY=...

openclaude
```
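The credential sources above can be pictured as a small resolution chain. This is an illustrative sketch only: the relative ordering shown here (token override first, then explicit path, then `CODEX_HOME`, then the default file) is an assumption, not OpenClaude's actual implementation:

```shell
# Illustrative only: one plausible precedence for the documented
# Codex credential sources. Not OpenClaude's real lookup code.
resolve_codex_auth() {
  if [ -n "$CODEX_API_KEY" ]; then
    echo "token:CODEX_API_KEY"
  elif [ -n "$CODEX_AUTH_JSON_PATH" ] && [ -f "$CODEX_AUTH_JSON_PATH" ]; then
    echo "file:$CODEX_AUTH_JSON_PATH"
  elif [ -n "$CODEX_HOME" ] && [ -f "$CODEX_HOME/auth.json" ]; then
    echo "file:$CODEX_HOME/auth.json"
  elif [ -f "$HOME/.codex/auth.json" ]; then
    echo "file:$HOME/.codex/auth.json"
  else
    echo "none"
    return 1
  fi
}
```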
### DeepSeek

```bash
export CLAUDE_CODE_USE_OPENAI=1
export OPENAI_API_KEY=sk-...
export OPENAI_BASE_URL=https://api.deepseek.com/v1
export OPENAI_MODEL=deepseek-chat
```

### Google Gemini via OpenRouter

```bash
export CLAUDE_CODE_USE_OPENAI=1
export OPENAI_API_KEY=sk-or-...
export OPENAI_BASE_URL=https://openrouter.ai/api/v1
export OPENAI_MODEL=google/gemini-2.0-flash-001
```

OpenRouter model availability changes over time. If a model stops working, try another current OpenRouter model before assuming the integration is broken.

### Ollama

```bash
ollama pull llama3.3:70b

export CLAUDE_CODE_USE_OPENAI=1
export OPENAI_BASE_URL=http://localhost:11434/v1
export OPENAI_MODEL=llama3.3:70b
```

### Atomic Chat (local, Apple Silicon)

```bash
export CLAUDE_CODE_USE_OPENAI=1
export OPENAI_BASE_URL=http://127.0.0.1:1337/v1
export OPENAI_MODEL=your-model-name
```

No API key is needed for Atomic Chat local models.

Or use the profile launcher:

```bash
bun run dev:atomic-chat
```

Download Atomic Chat from [atomic.chat](https://atomic.chat/). The app must be running with a model loaded before launching.

### LM Studio

```bash
export CLAUDE_CODE_USE_OPENAI=1
export OPENAI_BASE_URL=http://localhost:1234/v1
export OPENAI_MODEL=your-model-name
```

### Together AI

```bash
export CLAUDE_CODE_USE_OPENAI=1
export OPENAI_API_KEY=...
export OPENAI_BASE_URL=https://api.together.xyz/v1
export OPENAI_MODEL=meta-llama/Llama-3.3-70B-Instruct-Turbo
```

### Groq

```bash
export CLAUDE_CODE_USE_OPENAI=1
export OPENAI_API_KEY=gsk_...
export OPENAI_BASE_URL=https://api.groq.com/openai/v1
export OPENAI_MODEL=llama-3.3-70b-versatile
```

### Mistral

```bash
export CLAUDE_CODE_USE_OPENAI=1
export OPENAI_API_KEY=...
export OPENAI_BASE_URL=https://api.mistral.ai/v1
export OPENAI_MODEL=mistral-large-latest
```

### Azure OpenAI

```bash
export CLAUDE_CODE_USE_OPENAI=1
export OPENAI_API_KEY=your-azure-key
export OPENAI_BASE_URL=https://your-resource.openai.azure.com/openai/deployments/your-deployment/v1
export OPENAI_MODEL=gpt-4o
```

## Environment Variables

| Variable | Required | Description |
|----------|----------|-------------|
| `CLAUDE_CODE_USE_OPENAI` | Yes | Set to `1` to enable the OpenAI provider |
| `OPENAI_API_KEY` | Yes* | Your API key (`*` not needed for local models like Ollama or Atomic Chat) |
| `OPENAI_MODEL` | Yes | Model name such as `gpt-4o`, `deepseek-chat`, or `llama3.3:70b` |
| `OPENAI_BASE_URL` | No | API endpoint, defaulting to `https://api.openai.com/v1` |
| `CODEX_API_KEY` | Codex only | Codex or ChatGPT access token override |
| `CODEX_AUTH_JSON_PATH` | Codex only | Path to a Codex CLI `auth.json` file |
| `CODEX_HOME` | Codex only | Alternative Codex home directory (`auth.json` is read from here) |
| `OPENCLAUDE_DISABLE_CO_AUTHORED_BY` | No | Suppress the default `Co-Authored-By` trailer in generated git commits |

You can also use `ANTHROPIC_MODEL` to override the model name. `OPENAI_MODEL` takes priority.
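That precedence can be expressed as a one-liner. This sketch mirrors the rule above (`OPENAI_MODEL` wins when both are set) rather than quoting OpenClaude's source:

```shell
# OPENAI_MODEL takes priority; fall back to ANTHROPIC_MODEL if it is
# unset or empty. Illustrative of the documented rule only.
effective_model() {
  echo "${OPENAI_MODEL:-$ANTHROPIC_MODEL}"
}
```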
## Runtime Hardening

Use these commands to validate your setup and catch mistakes early:

```bash
# quick startup sanity check
bun run smoke

# validate provider env + reachability
bun run doctor:runtime

# print machine-readable runtime diagnostics
bun run doctor:runtime:json

# persist a diagnostics report to reports/doctor-runtime.json
bun run doctor:report

# full local hardening check (smoke + runtime doctor)
bun run hardening:check

# strict hardening (includes project-wide typecheck)
bun run hardening:strict
```

Notes:

- `doctor:runtime` fails fast if `CLAUDE_CODE_USE_OPENAI=1` with a placeholder key or a missing key for non-local providers.
- Local providers such as `http://localhost:11434/v1` and `http://127.0.0.1:1337/v1` can run without `OPENAI_API_KEY`.
- Codex profiles validate `CODEX_API_KEY` or the Codex CLI auth file and probe `POST /responses` instead of `GET /models`.

## Provider Launch Profiles

Use profile launchers to avoid repeated environment setup:

```bash
# one-time profile bootstrap (prefer viable local Ollama, otherwise OpenAI)
bun run profile:init

# preview the best provider/model for your goal
bun run profile:recommend -- --goal coding --benchmark

# auto-apply the best available local/openai provider/model for your goal
bun run profile:auto -- --goal latency

# codex bootstrap (defaults to codexplan and ~/.codex/auth.json)
bun run profile:codex

# openai bootstrap with explicit key
bun run profile:init -- --provider openai --api-key sk-...

# ollama bootstrap with custom model
bun run profile:init -- --provider ollama --model llama3.1:8b

# ollama bootstrap with intelligent model auto-selection
bun run profile:init -- --provider ollama --goal coding

# atomic-chat bootstrap (auto-detects running model)
bun run profile:init -- --provider atomic-chat

# codex bootstrap with a fast model alias
bun run profile:init -- --provider codex --model codexspark

# launch using persisted profile (.openclaude-profile.json)
bun run dev:profile

# codex profile (uses CODEX_API_KEY or ~/.codex/auth.json)
bun run dev:codex

# OpenAI profile (requires OPENAI_API_KEY in your shell)
bun run dev:openai

# Ollama profile (defaults: localhost:11434, llama3.1:8b)
bun run dev:ollama

# Atomic Chat profile (Apple Silicon local LLMs at 127.0.0.1:1337)
bun run dev:atomic-chat
```

`profile:recommend` ranks installed Ollama models for `latency`, `balanced`, or `coding`, and `profile:auto` can persist the recommendation directly.

If no profile exists yet, `dev:profile` uses the same goal-aware defaults when picking the initial model.

Use `--provider ollama` when you want a local-only path. Auto mode falls back to OpenAI when no viable local chat model is installed. Goal-based Ollama selection only recommends among models that are already installed and reachable from Ollama.

Use `--provider atomic-chat` when you want Atomic Chat as the local Apple Silicon provider.

Use `profile:codex` or `--provider codex` when you want the ChatGPT Codex backend.

`dev:openai`, `dev:ollama`, `dev:atomic-chat`, and `dev:codex` run `doctor:runtime` first and only launch the app if checks pass.

For `dev:ollama`, make sure Ollama is running locally before launch.

For `dev:atomic-chat`, make sure Atomic Chat is running with a model loaded before launch.
docs/non-technical-setup.md (new file, 116 lines)
@@ -0,0 +1,116 @@
# OpenClaude for Non-Technical Users
This guide is for people who want the easiest setup path.

You do not need to build from source. You do not need Bun. You do not need to understand the full codebase.

If you can copy and paste commands into a terminal, you can set this up.

## What OpenClaude Does

OpenClaude lets you use an AI coding assistant with different model providers such as:

- OpenAI
- DeepSeek
- Gemini
- Ollama
- Codex

For most first-time users, OpenAI is the easiest option.

## Before You Start

You need:

1. Node.js 20 or newer installed
2. A terminal window
3. An API key from your provider, unless you are using a local model like Ollama
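You can confirm the first requirement by pasting one check into your terminal. A small sketch, assuming `node --version` prints something like `v20.11.1`:

```shell
# Check a Node.js version string (as printed by `node --version`)
# against the minimum required major version, 20.
node_ok() {
  major=$(printf '%s' "$1" | sed 's/^v//' | cut -d. -f1)
  [ "$major" -ge 20 ]
}

# Typical use:
#   node_ok "$(node --version)" || echo "Please install Node.js 20 or newer"
```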
## Fastest Path

1. Install OpenClaude with npm
2. Set 3 environment variables
3. Run `openclaude`

## Choose Your Operating System

- Windows: [Windows Quick Start](quick-start-windows.md)
- macOS / Linux: [macOS / Linux Quick Start](quick-start-mac-linux.md)

## Which Provider Should You Choose?

### OpenAI

Choose this if:

- you want the easiest setup
- you already have an OpenAI API key

### Ollama

Choose this if:

- you want to run models locally
- you do not want to depend on a cloud API for testing

### Codex

Choose this if:

- you already use the Codex CLI
- you already have Codex or ChatGPT auth configured

## What Success Looks Like

After you run `openclaude`, the CLI should start and wait for your prompt.

At that point, you can ask it to:

- explain code
- edit files
- run commands
- review changes

## Common Problems

### `openclaude` command not found

Cause:

- npm installed the package, but your terminal has not refreshed yet

Fix:

1. Close the terminal
2. Open a new terminal
3. Run `openclaude` again

### Invalid API key

Cause:

- the key is wrong, expired, or copied incorrectly

Fix:

1. Get a fresh key from your provider
2. Paste it again carefully
3. Re-run `openclaude`

### Ollama not working

Cause:

- Ollama is not installed or not running

Fix:

1. Install Ollama from `https://ollama.com/download`
2. Start Ollama
3. Try again

## Want More Control?

If you want source builds, advanced provider profiles, diagnostics, or Bun-based workflows, use:

- [Advanced Setup](advanced-setup.md)
docs/quick-start-mac-linux.md (new file, 108 lines)
@@ -0,0 +1,108 @@
# OpenClaude Quick Start for macOS and Linux
This guide uses a standard shell such as Terminal, iTerm, bash, or zsh.

## 1. Install Node.js

Install Node.js 20 or newer from:

- `https://nodejs.org/`

Then check it:

```bash
node --version
npm --version
```

## 2. Install OpenClaude

```bash
npm install -g @gitlawb/openclaude
```

## 3. Pick One Provider

### Option A: OpenAI

Replace `sk-your-key-here` with your real key.

```bash
export CLAUDE_CODE_USE_OPENAI=1
export OPENAI_API_KEY=sk-your-key-here
export OPENAI_MODEL=gpt-4o

openclaude
```
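The `export` lines above only last for the current terminal. If you do not want to retype them, you could append them to your shell profile. A sketch, assuming zsh (use `~/.bashrc` for bash); note the key is then stored in plain text, so keep the file private:

```shell
# One-time: append the OpenClaude provider settings to your zsh profile.
# Replace sk-your-key-here with your real key first.
cat >> ~/.zshrc <<'EOF'
export CLAUDE_CODE_USE_OPENAI=1
export OPENAI_API_KEY=sk-your-key-here
export OPENAI_MODEL=gpt-4o
EOF
```

Open a new terminal (or run `. ~/.zshrc`) and the variables will be set automatically.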
### Option B: DeepSeek

```bash
export CLAUDE_CODE_USE_OPENAI=1
export OPENAI_API_KEY=sk-your-key-here
export OPENAI_BASE_URL=https://api.deepseek.com/v1
export OPENAI_MODEL=deepseek-chat

openclaude
```

### Option C: Ollama

Install Ollama first from:

- `https://ollama.com/download`

Then run:

```bash
ollama pull llama3.1:8b

export CLAUDE_CODE_USE_OPENAI=1
export OPENAI_BASE_URL=http://localhost:11434/v1
export OPENAI_MODEL=llama3.1:8b

openclaude
```

No API key is needed for Ollama local models.
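Before pointing OpenClaude at Ollama, you can confirm the local server is actually listening. A quick check, assuming `curl` is installed and Ollama's default port:

```shell
# Return success if an OpenAI-compatible endpoint answers at the base URL.
endpoint_up() {
  curl -sf --max-time 2 "$1/models" >/dev/null 2>&1
}

# Typical use before launching:
#   endpoint_up http://localhost:11434/v1 || echo "start Ollama first"
```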
## 4. If `openclaude` Is Not Found

Close the terminal, open a new one, and try again:

```bash
openclaude
```
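If a fresh terminal still cannot find the command, check whether npm's global bin directory is on your `PATH`. A diagnostic sketch (`npm config get prefix` prints where npm installs global packages; binaries live in its `bin/`):

```shell
# Check whether a directory is already on a PATH-style list.
on_path() {
  case ":$2:" in
    *":$1:"*) return 0 ;;
    *) return 1 ;;
  esac
}

# Typical use:
#   on_path "$(npm config get prefix)/bin" "$PATH" \
#     || echo "add npm's global bin directory to PATH in your shell profile"
```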
## 5. If Your Provider Fails

Check the basics:

### For OpenAI or DeepSeek

- make sure the key is real
- make sure you copied it fully

### For Ollama

- make sure Ollama is installed
- make sure Ollama is running
- make sure the model was pulled successfully

## 6. Updating OpenClaude

```bash
npm install -g @gitlawb/openclaude@latest
```

## 7. Uninstalling OpenClaude

```bash
npm uninstall -g @gitlawb/openclaude
```

## Need Advanced Setup?

Use:

- [Advanced Setup](advanced-setup.md)
docs/quick-start-windows.md (new file, 108 lines)
@@ -0,0 +1,108 @@
# OpenClaude Quick Start for Windows
This guide uses Windows PowerShell.

## 1. Install Node.js

Install Node.js 20 or newer from:

- `https://nodejs.org/`

Then open PowerShell and check it:

```powershell
node --version
npm --version
```

## 2. Install OpenClaude

```powershell
npm install -g @gitlawb/openclaude
```

## 3. Pick One Provider

### Option A: OpenAI

Replace `sk-your-key-here` with your real key.

```powershell
$env:CLAUDE_CODE_USE_OPENAI="1"
$env:OPENAI_API_KEY="sk-your-key-here"
$env:OPENAI_MODEL="gpt-4o"

openclaude
```

### Option B: DeepSeek

```powershell
$env:CLAUDE_CODE_USE_OPENAI="1"
$env:OPENAI_API_KEY="sk-your-key-here"
$env:OPENAI_BASE_URL="https://api.deepseek.com/v1"
$env:OPENAI_MODEL="deepseek-chat"

openclaude
```

### Option C: Ollama

Install Ollama first from:

- `https://ollama.com/download/windows`

Then run:

```powershell
ollama pull llama3.1:8b

$env:CLAUDE_CODE_USE_OPENAI="1"
$env:OPENAI_BASE_URL="http://localhost:11434/v1"
$env:OPENAI_MODEL="llama3.1:8b"

openclaude
```

No API key is needed for Ollama local models.

## 4. If `openclaude` Is Not Found

Close PowerShell, open a new one, and try again:

```powershell
openclaude
```

## 5. If Your Provider Fails

Check the basics:

### For OpenAI or DeepSeek

- make sure the key is real
- make sure you copied it fully

### For Ollama

- make sure Ollama is installed
- make sure Ollama is running
- make sure the model was pulled successfully

## 6. Updating OpenClaude

```powershell
npm install -g @gitlawb/openclaude@latest
```

## 7. Uninstalling OpenClaude

```powershell
npm uninstall -g @gitlawb/openclaude
```

## Need Advanced Setup?

Use:

- [Advanced Setup](advanced-setup.md)