Merge upstream/main into fix/anthropic-schema-format

This commit is contained in:

6  .github/workflows/pr-checks.yml  (vendored)
@@ -12,15 +12,15 @@ jobs:
     steps:
       - name: Check out repository
-        uses: actions/checkout@v4
+        uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683 # v4.2.2

       - name: Set up Node.js
-        uses: actions/setup-node@v4
+        uses: actions/setup-node@49933ea5288caeca8642d1e84afbd3f7d6820020 # v4.4.0
         with:
           node-version: 22

       - name: Set up Bun
-        uses: oven-sh/setup-bun@v2
+        uses: oven-sh/setup-bun@4bc047ad259df6fc24a6c9b0f9a0cb08cf17fbe5 # v2.0.1
         with:
           bun-version: 1.3.11
303  README.md

@@ -2,290 +2,105 @@

Use Claude Code with **any LLM** — not just Claude.

OpenClaude is a fork of the [Claude Code source leak](https://gitlawb.com/node/repos/z6MkgKkb/instructkr-claude-code) (exposed via npm source maps on March 31, 2026). We added an OpenAI-compatible provider shim so you can plug in GPT-4o, DeepSeek, Gemini, Llama, Mistral, or any model that speaks the OpenAI chat completions API. It now also supports the ChatGPT Codex backend for `codexplan` and `codexspark`.

OpenClaude is a fork of the [Claude Code source leak](https://gitlawb.com/node/repos/z6MkgKkb/instructkr-claude-code) (exposed via npm source maps on March 31, 2026). We added an OpenAI-compatible provider shim so you can plug in GPT-4o, DeepSeek, Gemini, Llama, Mistral, or any model that speaks the OpenAI chat completions API. It now also supports the ChatGPT Codex backend for `codexplan` and `codexspark`, and local inference via [Atomic Chat](https://atomic.chat/) on Apple Silicon.

All of Claude Code's tools work — bash, file read/write/edit, grep, glob, agents, tasks, MCP — just powered by whatever model you choose.

---

## Install

## Start Here

### Option A: npm (recommended)

If you are new to terminals or just want the easiest path, start with the beginner guides:

- [Non-Technical Setup](docs/non-technical-setup.md)
- [Windows Quick Start](docs/quick-start-windows.md)
- [macOS / Linux Quick Start](docs/quick-start-mac-linux.md)

If you want source builds, Bun workflows, profile launchers, or full provider examples, use:

- [Advanced Setup](docs/advanced-setup.md)

---

## Beginner Install

For most users, install the npm package:

```bash
npm install -g @gitlawb/openclaude
```

If you install via npm and later see `ripgrep not found`, install ripgrep system-wide and confirm `rg --version` works in the same terminal before starting OpenClaude.
### Option B: From source (requires Bun)

Use Bun `1.3.11` or newer for source builds on Windows. Older Bun versions such as `1.3.4` can fail with a large batch of unresolved module errors during `bun run build`.

The package name is `@gitlawb/openclaude`, but the command you run is `openclaude`:

```bash
# Clone from gitlawb
git clone https://node.gitlawb.com/z6MkqDnb7Siv3Cwj7pGJq4T5EsUisECqR8KpnDLwcaZq5TPr/openclaude.git
cd openclaude

# Install dependencies
bun install

# Build
bun run build

# Link globally (optional)
npm link
openclaude
```

### Option C: Run directly with Bun (no build step)

```bash
git clone https://node.gitlawb.com/z6MkqDnb7Siv3Cwj7pGJq4T5EsUisECqR8KpnDLwcaZq5TPr/openclaude.git
cd openclaude
bun install
bun run dev
```

If you install via npm and later see `ripgrep not found`, install ripgrep system-wide and confirm `rg --version` works in the same terminal before starting OpenClaude.

---

## Quick Start

## Fastest Setup

### 1. Set 3 environment variables

### Windows PowerShell

```powershell
npm install -g @gitlawb/openclaude

$env:CLAUDE_CODE_USE_OPENAI="1"
$env:OPENAI_API_KEY="sk-your-key-here"
$env:OPENAI_MODEL="gpt-4o"

openclaude
```

### macOS / Linux

```bash
npm install -g @gitlawb/openclaude

export CLAUDE_CODE_USE_OPENAI=1
export OPENAI_API_KEY=sk-your-key-here
export OPENAI_MODEL=gpt-4o
```

### 2. Run it

```bash
# If installed via npm
openclaude

# If built from source
bun run dev
# or after build:
node dist/cli.mjs
```
That's it. The tool system, streaming, file editing, multi-step reasoning — everything works through the model you picked.

The npm package name is `@gitlawb/openclaude`, but the installed CLI command is still `openclaude`.

That is enough to start with OpenAI.

---

## Provider Examples

## Choose Your Guide

### Beginner

- Want the easiest setup with copy-paste steps: [Non-Technical Setup](docs/non-technical-setup.md)
- On Windows: [Windows Quick Start](docs/quick-start-windows.md)
- On macOS or Linux: [macOS / Linux Quick Start](docs/quick-start-mac-linux.md)

### Advanced

- Want source builds, Bun, local profiles, runtime checks, or more provider choices: [Advanced Setup](docs/advanced-setup.md)

---

## Common Beginner Choices

### OpenAI

```bash
export CLAUDE_CODE_USE_OPENAI=1
export OPENAI_API_KEY=sk-...
export OPENAI_MODEL=gpt-4o
```

Best default if you already have an OpenAI API key.

### Ollama

Best if you want to run models locally on your own machine.

### Codex via ChatGPT auth

`codexplan` maps to GPT-5.4 on the Codex backend with high reasoning.
`codexspark` maps to GPT-5.3 Codex Spark for faster loops.

If you already use the Codex CLI, OpenClaude will read `~/.codex/auth.json` automatically. You can also point it elsewhere with `CODEX_AUTH_JSON_PATH` or override the token directly with `CODEX_API_KEY`.

### Codex

Best if you already use the Codex CLI or ChatGPT Codex backend.

```bash
export CLAUDE_CODE_USE_OPENAI=1
export OPENAI_MODEL=codexplan

# optional if you do not already have ~/.codex/auth.json
export CODEX_API_KEY=...

openclaude
```

### Atomic Chat

Best if you want local inference on Apple Silicon with Atomic Chat. See [Advanced Setup](docs/advanced-setup.md).
### DeepSeek

```bash
export CLAUDE_CODE_USE_OPENAI=1
export OPENAI_API_KEY=sk-...
export OPENAI_BASE_URL=https://api.deepseek.com/v1
export OPENAI_MODEL=deepseek-chat
```

### Google Gemini (via OpenRouter)

```bash
export CLAUDE_CODE_USE_OPENAI=1
export OPENAI_API_KEY=sk-or-...
export OPENAI_BASE_URL=https://openrouter.ai/api/v1
export OPENAI_MODEL=google/gemini-2.0-flash-001
```

OpenRouter model availability changes over time. If a model stops working, pick another currently available OpenRouter model before assuming the OpenAI-compatible setup is broken.

### Ollama (local, free)

```bash
ollama pull llama3.3:70b

export CLAUDE_CODE_USE_OPENAI=1
export OPENAI_BASE_URL=http://localhost:11434/v1
export OPENAI_MODEL=llama3.3:70b
# no API key needed for local models
```

### LM Studio (local)

```bash
export CLAUDE_CODE_USE_OPENAI=1
export OPENAI_BASE_URL=http://localhost:1234/v1
export OPENAI_MODEL=your-model-name
```

### Together AI

```bash
export CLAUDE_CODE_USE_OPENAI=1
export OPENAI_API_KEY=...
export OPENAI_BASE_URL=https://api.together.xyz/v1
export OPENAI_MODEL=meta-llama/Llama-3.3-70B-Instruct-Turbo
```

### Groq

```bash
export CLAUDE_CODE_USE_OPENAI=1
export OPENAI_API_KEY=gsk_...
export OPENAI_BASE_URL=https://api.groq.com/openai/v1
export OPENAI_MODEL=llama-3.3-70b-versatile
```

### Mistral

```bash
export CLAUDE_CODE_USE_OPENAI=1
export OPENAI_API_KEY=...
export OPENAI_BASE_URL=https://api.mistral.ai/v1
export OPENAI_MODEL=mistral-large-latest
```

### Azure OpenAI

```bash
export CLAUDE_CODE_USE_OPENAI=1
export OPENAI_API_KEY=your-azure-key
export OPENAI_BASE_URL=https://your-resource.openai.azure.com/openai/deployments/your-deployment/v1
export OPENAI_MODEL=gpt-4o
```
---

## Environment Variables

| Variable | Required | Description |
|----------|----------|-------------|
| `CLAUDE_CODE_USE_OPENAI` | Yes | Set to `1` to enable the OpenAI provider |
| `OPENAI_API_KEY` | Yes* | Your API key (*not needed for local models like Ollama) |
| `OPENAI_MODEL` | Yes | Model name (e.g. `gpt-4o`, `deepseek-chat`, `llama3.3:70b`) |
| `OPENAI_BASE_URL` | No | API endpoint (defaults to `https://api.openai.com/v1`) |
| `CODEX_API_KEY` | Codex only | Codex/ChatGPT access token override |
| `CODEX_AUTH_JSON_PATH` | Codex only | Path to a Codex CLI `auth.json` file |
| `CODEX_HOME` | Codex only | Alternative Codex home directory (`auth.json` will be read from here) |
| `OPENCLAUDE_DISABLE_CO_AUTHORED_BY` | No | Set to `1` to suppress the default `Co-Authored-By` trailer in generated git commit messages |

You can also use `ANTHROPIC_MODEL` to override the model name. `OPENAI_MODEL` takes priority.

OpenClaude PR bodies use OpenClaude branding by default. `OPENCLAUDE_DISABLE_CO_AUTHORED_BY` only affects the commit trailer, not PR attribution text.
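For a concrete picture of that precedence, a minimal sketch (illustrative only; the real resolution happens inside OpenClaude's TypeScript config, not in Python):

```python
import os

# Illustrative precedence only: OPENAI_MODEL beats ANTHROPIC_MODEL when both are set.
model = os.getenv("OPENAI_MODEL") or os.getenv("ANTHROPIC_MODEL")
if model is None:
    raise SystemExit("Set OPENAI_MODEL (or ANTHROPIC_MODEL) before starting OpenClaude.")
```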
---
## Runtime Hardening

Use these commands to keep the CLI stable and catch environment mistakes early:

```bash
# quick startup sanity check
bun run smoke

# validate provider env + reachability
bun run doctor:runtime

# print machine-readable runtime diagnostics
bun run doctor:runtime:json

# persist a diagnostics report to reports/doctor-runtime.json
bun run doctor:report

# full local hardening check (smoke + runtime doctor)
bun run hardening:check

# strict hardening (includes project-wide typecheck)
bun run hardening:strict
```

Notes:
- `doctor:runtime` fails fast if `CLAUDE_CODE_USE_OPENAI=1` with a placeholder key (`SUA_CHAVE`) or a missing key for non-local providers.
- Local providers (for example `http://localhost:11434/v1`) can run without `OPENAI_API_KEY`.
- Codex profiles validate `CODEX_API_KEY` or the Codex CLI auth file and probe `POST /responses` instead of `GET /models`.
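The "local provider" exemption boils down to a hostname check. A rough Python equivalent of that rule (the shipped check is the TypeScript `isLocalBaseUrl`; this sketch only approximates it):

```python
from urllib.parse import urlparse

def is_local_base_url(base_url: str) -> bool:
    # Approximation of the doctor's local-provider rule: loopback hosts
    # (e.g. Ollama at localhost:11434) may run without OPENAI_API_KEY.
    host = urlparse(base_url).hostname or ""
    return host in ("localhost", "127.0.0.1", "::1")
```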
### Provider Launch Profiles

Use profile launchers to avoid repeated environment setup:

```bash
# one-time profile bootstrap (prefer viable local Ollama, otherwise OpenAI)
bun run profile:init

# preview the best provider/model for your goal
bun run profile:recommend -- --goal coding --benchmark

# auto-apply the best available local/openai provider/model for your goal
bun run profile:auto -- --goal latency

# codex bootstrap (defaults to codexplan and ~/.codex/auth.json)
bun run profile:codex

# openai bootstrap with explicit key
bun run profile:init -- --provider openai --api-key sk-...

# ollama bootstrap with custom model
bun run profile:init -- --provider ollama --model llama3.1:8b

# ollama bootstrap with intelligent model auto-selection
bun run profile:init -- --provider ollama --goal coding

# codex bootstrap with a fast model alias
bun run profile:init -- --provider codex --model codexspark

# launch using persisted profile (.openclaude-profile.json)
bun run dev:profile

# codex profile (uses CODEX_API_KEY or ~/.codex/auth.json)
bun run dev:codex

# OpenAI profile (requires OPENAI_API_KEY in your shell)
bun run dev:openai

# Ollama profile (defaults: localhost:11434, llama3.1:8b)
bun run dev:ollama
```

`profile:recommend` ranks installed Ollama models for `latency`, `balanced`, or `coding`, and `profile:auto` can persist the recommendation directly.
If no profile exists yet, `dev:profile` now uses the same goal-aware defaults when picking the initial model.

Use `--provider ollama` when you want a local-only path. Auto mode falls back to OpenAI when no viable local chat model is installed.
Goal-based Ollama selection only recommends among models that are already installed and reachable from Ollama.

Use `profile:codex` or `--provider codex` when you want the ChatGPT Codex backend.

`dev:openai`, `dev:ollama`, and `dev:codex` run `doctor:runtime` first and only launch the app if checks pass.
For `dev:ollama`, make sure Ollama is running locally before launch.

---
146  atomic_chat_provider.py  (Normal file)

@@ -0,0 +1,146 @@
"""
|
||||
atomic_chat_provider.py
|
||||
-----------------------
|
||||
Adds native Atomic Chat support to openclaude.
|
||||
Lets Claude Code route requests to any locally-running model via
|
||||
Atomic Chat (Apple Silicon only) at 127.0.0.1:1337.
|
||||
|
||||
Atomic Chat exposes an OpenAI-compatible API, so messages are forwarded
|
||||
directly without translation.
|
||||
|
||||
Usage (.env):
|
||||
PREFERRED_PROVIDER=atomic-chat
|
||||
ATOMIC_CHAT_BASE_URL=http://127.0.0.1:1337
|
||||
"""
|
||||
|
||||
import httpx
|
||||
import json
|
||||
import logging
|
||||
import os
|
||||
from typing import AsyncIterator
|
||||
|
||||
logger = logging.getLogger(__name__)
|
||||
ATOMIC_CHAT_BASE_URL = os.getenv("ATOMIC_CHAT_BASE_URL", "http://127.0.0.1:1337")
|
||||
|
||||
|
||||
def _api_url(path: str) -> str:
|
||||
return f"{ATOMIC_CHAT_BASE_URL}/v1{path}"
|
||||
|
||||
|
||||
async def check_atomic_chat_running() -> bool:
|
||||
try:
|
||||
async with httpx.AsyncClient(timeout=3.0) as client:
|
||||
resp = await client.get(_api_url("/models"))
|
||||
return resp.status_code == 200
|
||||
except Exception:
|
||||
return False
|
||||
|
||||
|
||||
async def list_atomic_chat_models() -> list[str]:
|
||||
try:
|
||||
async with httpx.AsyncClient(timeout=5.0) as client:
|
||||
resp = await client.get(_api_url("/models"))
|
||||
resp.raise_for_status()
|
||||
data = resp.json()
|
||||
return [m["id"] for m in data.get("data", [])]
|
||||
except Exception as e:
|
||||
logger.warning(f"Could not list Atomic Chat models: {e}")
|
||||
return []
|
||||
|
||||
|
||||
async def atomic_chat(
|
||||
model: str,
|
||||
messages: list[dict],
|
||||
system: str | None = None,
|
||||
max_tokens: int = 4096,
|
||||
temperature: float = 1.0,
|
||||
) -> dict:
|
||||
chat_messages = list(messages)
|
||||
if system:
|
||||
chat_messages.insert(0, {"role": "system", "content": system})
|
||||
|
||||
payload = {
|
||||
"model": model,
|
||||
"messages": chat_messages,
|
||||
"max_tokens": max_tokens,
|
||||
"temperature": temperature,
|
||||
"stream": False,
|
||||
}
|
||||
|
||||
async with httpx.AsyncClient(timeout=120.0) as client:
|
||||
resp = await client.post(_api_url("/chat/completions"), json=payload)
|
||||
resp.raise_for_status()
|
||||
data = resp.json()
|
||||
|
||||
choice = data.get("choices", [{}])[0]
|
||||
assistant_text = choice.get("message", {}).get("content", "")
|
||||
usage = data.get("usage", {})
|
||||
|
||||
return {
|
||||
"id": data.get("id", "msg_atomic_chat"),
|
||||
"type": "message",
|
||||
"role": "assistant",
|
||||
"content": [{"type": "text", "text": assistant_text}],
|
||||
"model": model,
|
||||
"stop_reason": "end_turn",
|
||||
"stop_sequence": None,
|
||||
"usage": {
|
||||
"input_tokens": usage.get("prompt_tokens", 0),
|
||||
"output_tokens": usage.get("completion_tokens", 0),
|
||||
},
|
||||
}
|
||||
|
||||
|
||||
async def atomic_chat_stream(
|
||||
model: str,
|
||||
messages: list[dict],
|
||||
system: str | None = None,
|
||||
max_tokens: int = 4096,
|
||||
temperature: float = 1.0,
|
||||
) -> AsyncIterator[str]:
|
||||
chat_messages = list(messages)
|
||||
if system:
|
||||
chat_messages.insert(0, {"role": "system", "content": system})
|
||||
|
||||
payload = {
|
||||
"model": model,
|
||||
"messages": chat_messages,
|
||||
"max_tokens": max_tokens,
|
||||
"temperature": temperature,
|
||||
"stream": True,
|
||||
}
|
||||
|
||||
yield "event: message_start\n"
|
||||
yield f'data: {json.dumps({"type": "message_start", "message": {"id": "msg_atomic_chat_stream", "type": "message", "role": "assistant", "content": [], "model": model, "stop_reason": None, "usage": {"input_tokens": 0, "output_tokens": 0}}})}\n\n'
|
||||
yield "event: content_block_start\n"
|
||||
yield f'data: {json.dumps({"type": "content_block_start", "index": 0, "content_block": {"type": "text", "text": ""}})}\n\n'
|
||||
|
||||
async with httpx.AsyncClient(timeout=120.0) as client:
|
||||
async with client.stream("POST", _api_url("/chat/completions"), json=payload) as resp:
|
||||
resp.raise_for_status()
|
||||
async for line in resp.aiter_lines():
|
||||
if not line or not line.startswith("data: "):
|
||||
continue
|
||||
raw = line[len("data: "):]
|
||||
if raw.strip() == "[DONE]":
|
||||
break
|
||||
try:
|
||||
chunk = json.loads(raw)
|
||||
delta = chunk.get("choices", [{}])[0].get("delta", {})
|
||||
delta_text = delta.get("content", "")
|
||||
if delta_text:
|
||||
yield "event: content_block_delta\n"
|
||||
yield f'data: {json.dumps({"type": "content_block_delta", "index": 0, "delta": {"type": "text_delta", "text": delta_text}})}\n\n'
|
||||
|
||||
finish_reason = chunk.get("choices", [{}])[0].get("finish_reason")
|
||||
if finish_reason:
|
||||
usage = chunk.get("usage", {})
|
||||
yield "event: content_block_stop\n"
|
||||
yield f'data: {json.dumps({"type": "content_block_stop", "index": 0})}\n\n'
|
||||
yield "event: message_delta\n"
|
||||
yield f'data: {json.dumps({"type": "message_delta", "delta": {"stop_reason": "end_turn", "stop_sequence": None}, "usage": {"output_tokens": usage.get("completion_tokens", 0)}})}\n\n'
|
||||
yield "event: message_stop\n"
|
||||
yield f'data: {json.dumps({"type": "message_stop"})}\n\n'
|
||||
break
|
||||
except json.JSONDecodeError:
|
||||
continue
|
||||
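For orientation (not part of the diff): a minimal sketch of driving the module above end to end, assuming Atomic Chat is running locally with at least one model loaded.

```python
import asyncio

from atomic_chat_provider import (
    atomic_chat,
    atomic_chat_stream,
    check_atomic_chat_running,
    list_atomic_chat_models,
)


async def demo() -> None:
    if not await check_atomic_chat_running():
        raise SystemExit("Atomic Chat is not reachable on 127.0.0.1:1337")

    models = await list_atomic_chat_models()
    model = models[0]  # assumes at least one model is loaded

    # One-shot completion, returned in the Anthropic-style message shape.
    reply = await atomic_chat(model, [{"role": "user", "content": "Say hi."}])
    print(reply["content"][0]["text"])

    # Streaming: yields Anthropic-style SSE lines, ready to forward to a client.
    async for sse_line in atomic_chat_stream(model, [{"role": "user", "content": "Count to 3."}]):
        print(sse_line, end="")


asyncio.run(demo())
```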
262  docs/advanced-setup.md  (Normal file)

@@ -0,0 +1,262 @@
# OpenClaude Advanced Setup

This guide is for users who want source builds, Bun workflows, provider profiles, diagnostics, or more control over runtime behavior.

## Install Options

### Option A: npm

```bash
npm install -g @gitlawb/openclaude
```

### Option B: From source with Bun

Use Bun `1.3.11` or newer for source builds on Windows. Older Bun versions can fail during `bun run build`.

```bash
git clone https://node.gitlawb.com/z6MkqDnb7Siv3Cwj7pGJq4T5EsUisECqR8KpnDLwcaZq5TPr/openclaude.git
cd openclaude

bun install
bun run build
npm link
```

### Option C: Run directly with Bun

```bash
git clone https://node.gitlawb.com/z6MkqDnb7Siv3Cwj7pGJq4T5EsUisECqR8KpnDLwcaZq5TPr/openclaude.git
cd openclaude

bun install
bun run dev
```

## Provider Examples

### OpenAI

```bash
export CLAUDE_CODE_USE_OPENAI=1
export OPENAI_API_KEY=sk-...
export OPENAI_MODEL=gpt-4o
```

### Codex via ChatGPT auth

`codexplan` maps to GPT-5.4 on the Codex backend with high reasoning.
`codexspark` maps to GPT-5.3 Codex Spark for faster loops.

If you already use the Codex CLI, OpenClaude reads `~/.codex/auth.json` automatically. You can also point it elsewhere with `CODEX_AUTH_JSON_PATH` or override the token directly with `CODEX_API_KEY`.

```bash
export CLAUDE_CODE_USE_OPENAI=1
export OPENAI_MODEL=codexplan

# optional if you do not already have ~/.codex/auth.json
export CODEX_API_KEY=...

openclaude
```

### DeepSeek

```bash
export CLAUDE_CODE_USE_OPENAI=1
export OPENAI_API_KEY=sk-...
export OPENAI_BASE_URL=https://api.deepseek.com/v1
export OPENAI_MODEL=deepseek-chat
```

### Google Gemini via OpenRouter

```bash
export CLAUDE_CODE_USE_OPENAI=1
export OPENAI_API_KEY=sk-or-...
export OPENAI_BASE_URL=https://openrouter.ai/api/v1
export OPENAI_MODEL=google/gemini-2.0-flash-001
```

OpenRouter model availability changes over time. If a model stops working, try another current OpenRouter model before assuming the integration is broken.

### Ollama

```bash
ollama pull llama3.3:70b

export CLAUDE_CODE_USE_OPENAI=1
export OPENAI_BASE_URL=http://localhost:11434/v1
export OPENAI_MODEL=llama3.3:70b
```

### Atomic Chat (local, Apple Silicon)

```bash
export CLAUDE_CODE_USE_OPENAI=1
export OPENAI_BASE_URL=http://127.0.0.1:1337/v1
export OPENAI_MODEL=your-model-name
```

No API key is needed for Atomic Chat local models.

Or use the profile launcher:

```bash
bun run dev:atomic-chat
```

Download Atomic Chat from [atomic.chat](https://atomic.chat/). The app must be running with a model loaded before launching.
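If you want to confirm the endpoint is up before launching, a quick check like the following works (a sketch, not part of this repo; assumes `httpx` is installed, and probes the same `GET /v1/models` route the provider shim uses):

```python
import httpx

# Probe Atomic Chat's OpenAI-compatible models endpoint.
resp = httpx.get("http://127.0.0.1:1337/v1/models", timeout=3.0)
resp.raise_for_status()
print([m["id"] for m in resp.json().get("data", [])])  # currently loaded models
```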
### LM Studio

```bash
export CLAUDE_CODE_USE_OPENAI=1
export OPENAI_BASE_URL=http://localhost:1234/v1
export OPENAI_MODEL=your-model-name
```

### Together AI

```bash
export CLAUDE_CODE_USE_OPENAI=1
export OPENAI_API_KEY=...
export OPENAI_BASE_URL=https://api.together.xyz/v1
export OPENAI_MODEL=meta-llama/Llama-3.3-70B-Instruct-Turbo
```

### Groq

```bash
export CLAUDE_CODE_USE_OPENAI=1
export OPENAI_API_KEY=gsk_...
export OPENAI_BASE_URL=https://api.groq.com/openai/v1
export OPENAI_MODEL=llama-3.3-70b-versatile
```

### Mistral

```bash
export CLAUDE_CODE_USE_OPENAI=1
export OPENAI_API_KEY=...
export OPENAI_BASE_URL=https://api.mistral.ai/v1
export OPENAI_MODEL=mistral-large-latest
```

### Azure OpenAI

```bash
export CLAUDE_CODE_USE_OPENAI=1
export OPENAI_API_KEY=your-azure-key
export OPENAI_BASE_URL=https://your-resource.openai.azure.com/openai/deployments/your-deployment/v1
export OPENAI_MODEL=gpt-4o
```

## Environment Variables

| Variable | Required | Description |
|----------|----------|-------------|
| `CLAUDE_CODE_USE_OPENAI` | Yes | Set to `1` to enable the OpenAI provider |
| `OPENAI_API_KEY` | Yes* | Your API key (`*` not needed for local models like Ollama or Atomic Chat) |
| `OPENAI_MODEL` | Yes | Model name such as `gpt-4o`, `deepseek-chat`, or `llama3.3:70b` |
| `OPENAI_BASE_URL` | No | API endpoint, defaulting to `https://api.openai.com/v1` |
| `CODEX_API_KEY` | Codex only | Codex or ChatGPT access token override |
| `CODEX_AUTH_JSON_PATH` | Codex only | Path to a Codex CLI `auth.json` file |
| `CODEX_HOME` | Codex only | Alternative Codex home directory |
| `OPENCLAUDE_DISABLE_CO_AUTHORED_BY` | No | Suppress the default `Co-Authored-By` trailer in generated git commits |

You can also use `ANTHROPIC_MODEL` to override the model name. `OPENAI_MODEL` takes priority.

## Runtime Hardening

Use these commands to validate your setup and catch mistakes early:

```bash
# quick startup sanity check
bun run smoke

# validate provider env + reachability
bun run doctor:runtime

# print machine-readable runtime diagnostics
bun run doctor:runtime:json

# persist a diagnostics report to reports/doctor-runtime.json
bun run doctor:report

# full local hardening check (smoke + runtime doctor)
bun run hardening:check

# strict hardening (includes project-wide typecheck)
bun run hardening:strict
```

Notes:

- `doctor:runtime` fails fast if `CLAUDE_CODE_USE_OPENAI=1` with a placeholder key or a missing key for non-local providers.
- Local providers such as `http://localhost:11434/v1` and `http://127.0.0.1:1337/v1` can run without `OPENAI_API_KEY`.
- Codex profiles validate `CODEX_API_KEY` or the Codex CLI auth file and probe `POST /responses` instead of `GET /models`.

## Provider Launch Profiles

Use profile launchers to avoid repeated environment setup:

```bash
# one-time profile bootstrap (prefer viable local Ollama, otherwise OpenAI)
bun run profile:init

# preview the best provider/model for your goal
bun run profile:recommend -- --goal coding --benchmark

# auto-apply the best available local/openai provider/model for your goal
bun run profile:auto -- --goal latency

# codex bootstrap (defaults to codexplan and ~/.codex/auth.json)
bun run profile:codex

# openai bootstrap with explicit key
bun run profile:init -- --provider openai --api-key sk-...

# ollama bootstrap with custom model
bun run profile:init -- --provider ollama --model llama3.1:8b

# ollama bootstrap with intelligent model auto-selection
bun run profile:init -- --provider ollama --goal coding

# atomic-chat bootstrap (auto-detects running model)
bun run profile:init -- --provider atomic-chat

# codex bootstrap with a fast model alias
bun run profile:init -- --provider codex --model codexspark

# launch using persisted profile (.openclaude-profile.json)
bun run dev:profile

# codex profile (uses CODEX_API_KEY or ~/.codex/auth.json)
bun run dev:codex

# OpenAI profile (requires OPENAI_API_KEY in your shell)
bun run dev:openai

# Ollama profile (defaults: localhost:11434, llama3.1:8b)
bun run dev:ollama

# Atomic Chat profile (Apple Silicon local LLMs at 127.0.0.1:1337)
bun run dev:atomic-chat
```

`profile:recommend` ranks installed Ollama models for `latency`, `balanced`, or `coding`, and `profile:auto` can persist the recommendation directly.

If no profile exists yet, `dev:profile` uses the same goal-aware defaults when picking the initial model.

Use `--provider ollama` when you want a local-only path. Auto mode falls back to OpenAI when no viable local chat model is installed.

Use `--provider atomic-chat` when you want Atomic Chat as the local Apple Silicon provider.

Use `profile:codex` or `--provider codex` when you want the ChatGPT Codex backend.

`dev:openai`, `dev:ollama`, `dev:atomic-chat`, and `dev:codex` run `doctor:runtime` first and only launch the app if checks pass.

For `dev:ollama`, make sure Ollama is running locally before launch.

For `dev:atomic-chat`, make sure Atomic Chat is running with a model loaded before launch.
116  docs/non-technical-setup.md  (Normal file)

@@ -0,0 +1,116 @@
# OpenClaude for Non-Technical Users

This guide is for people who want the easiest setup path.

You do not need to build from source. You do not need Bun. You do not need to understand the full codebase.

If you can copy and paste commands into a terminal, you can set this up.

## What OpenClaude Does

OpenClaude lets you use an AI coding assistant with different model providers such as:

- OpenAI
- DeepSeek
- Gemini
- Ollama
- Codex

For most first-time users, OpenAI is the easiest option.

## Before You Start

You need:

1. Node.js 20 or newer installed
2. A terminal window
3. An API key from your provider, unless you are using a local model like Ollama

## Fastest Path

1. Install OpenClaude with npm
2. Set 3 environment variables
3. Run `openclaude`

## Choose Your Operating System

- Windows: [Windows Quick Start](quick-start-windows.md)
- macOS / Linux: [macOS / Linux Quick Start](quick-start-mac-linux.md)

## Which Provider Should You Choose?

### OpenAI

Choose this if:

- you want the easiest setup
- you already have an OpenAI API key

### Ollama

Choose this if:

- you want to run models locally
- you do not want to depend on a cloud API for testing

### Codex

Choose this if:

- you already use the Codex CLI
- you already have Codex or ChatGPT auth configured

## What Success Looks Like

After you run `openclaude`, the CLI should start and wait for your prompt.

At that point, you can ask it to:

- explain code
- edit files
- run commands
- review changes

## Common Problems

### `openclaude` command not found

Cause:

- npm installed the package, but your terminal has not refreshed yet

Fix:

1. Close the terminal
2. Open a new terminal
3. Run `openclaude` again

### Invalid API key

Cause:

- the key is wrong, expired, or copied incorrectly

Fix:

1. Get a fresh key from your provider
2. Paste it again carefully
3. Re-run `openclaude`

### Ollama not working

Cause:

- Ollama is not installed or not running

Fix:

1. Install Ollama from `https://ollama.com/download`
2. Start Ollama
3. Try again

## Want More Control?

If you want source builds, advanced provider profiles, diagnostics, or Bun-based workflows, use:

- [Advanced Setup](advanced-setup.md)
108  docs/quick-start-mac-linux.md  (Normal file)

@@ -0,0 +1,108 @@
# OpenClaude Quick Start for macOS and Linux

This guide uses a standard shell such as Terminal, iTerm, bash, or zsh.

## 1. Install Node.js

Install Node.js 20 or newer from:

- `https://nodejs.org/`

Then check it:

```bash
node --version
npm --version
```

## 2. Install OpenClaude

```bash
npm install -g @gitlawb/openclaude
```

## 3. Pick One Provider

### Option A: OpenAI

Replace `sk-your-key-here` with your real key.

```bash
export CLAUDE_CODE_USE_OPENAI=1
export OPENAI_API_KEY=sk-your-key-here
export OPENAI_MODEL=gpt-4o

openclaude
```

### Option B: DeepSeek

```bash
export CLAUDE_CODE_USE_OPENAI=1
export OPENAI_API_KEY=sk-your-key-here
export OPENAI_BASE_URL=https://api.deepseek.com/v1
export OPENAI_MODEL=deepseek-chat

openclaude
```

### Option C: Ollama

Install Ollama first from:

- `https://ollama.com/download`

Then run:

```bash
ollama pull llama3.1:8b

export CLAUDE_CODE_USE_OPENAI=1
export OPENAI_BASE_URL=http://localhost:11434/v1
export OPENAI_MODEL=llama3.1:8b

openclaude
```

No API key is needed for Ollama local models.

## 4. If `openclaude` Is Not Found

Close the terminal, open a new one, and try again:

```bash
openclaude
```

## 5. If Your Provider Fails

Check the basics:

### For OpenAI or DeepSeek

- make sure the key is real
- make sure you copied it fully

### For Ollama

- make sure Ollama is installed
- make sure Ollama is running
- make sure the model was pulled successfully

## 6. Updating OpenClaude

```bash
npm install -g @gitlawb/openclaude@latest
```

## 7. Uninstalling OpenClaude

```bash
npm uninstall -g @gitlawb/openclaude
```

## Need Advanced Setup?

Use:

- [Advanced Setup](advanced-setup.md)
108  docs/quick-start-windows.md  (Normal file)

@@ -0,0 +1,108 @@
# OpenClaude Quick Start for Windows

This guide uses Windows PowerShell.

## 1. Install Node.js

Install Node.js 20 or newer from:

- `https://nodejs.org/`

Then open PowerShell and check it:

```powershell
node --version
npm --version
```

## 2. Install OpenClaude

```powershell
npm install -g @gitlawb/openclaude
```

## 3. Pick One Provider

### Option A: OpenAI

Replace `sk-your-key-here` with your real key.

```powershell
$env:CLAUDE_CODE_USE_OPENAI="1"
$env:OPENAI_API_KEY="sk-your-key-here"
$env:OPENAI_MODEL="gpt-4o"

openclaude
```

### Option B: DeepSeek

```powershell
$env:CLAUDE_CODE_USE_OPENAI="1"
$env:OPENAI_API_KEY="sk-your-key-here"
$env:OPENAI_BASE_URL="https://api.deepseek.com/v1"
$env:OPENAI_MODEL="deepseek-chat"

openclaude
```

### Option C: Ollama

Install Ollama first from:

- `https://ollama.com/download/windows`

Then run:

```powershell
ollama pull llama3.1:8b

$env:CLAUDE_CODE_USE_OPENAI="1"
$env:OPENAI_BASE_URL="http://localhost:11434/v1"
$env:OPENAI_MODEL="llama3.1:8b"

openclaude
```

No API key is needed for Ollama local models.

## 4. If `openclaude` Is Not Found

Close PowerShell, open a new one, and try again:

```powershell
openclaude
```

## 5. If Your Provider Fails

Check the basics:

### For OpenAI or DeepSeek

- make sure the key is real
- make sure you copied it fully

### For Ollama

- make sure Ollama is installed
- make sure Ollama is running
- make sure the model was pulled successfully

## 6. Updating OpenClaude

```powershell
npm install -g @gitlawb/openclaude@latest
```

## 7. Uninstalling OpenClaude

```powershell
npm uninstall -g @gitlawb/openclaude
```

## Need Advanced Setup?

Use:

- [Advanced Setup](advanced-setup.md)
@@ -21,6 +21,7 @@
     "dev:gemini": "bun run scripts/provider-launch.ts gemini",
     "dev:ollama": "bun run scripts/provider-launch.ts ollama",
     "dev:ollama:fast": "bun run scripts/provider-launch.ts ollama --fast --bare",
+    "dev:atomic-chat": "bun run scripts/provider-launch.ts atomic-chat",
     "profile:init": "bun run scripts/provider-bootstrap.ts",
     "profile:recommend": "bun run scripts/provider-recommend.ts",
     "profile:auto": "bun run scripts/provider-recommend.ts --apply",
@@ -30,7 +31,7 @@
     "dev:fast": "bun run profile:fast && bun run dev:ollama:fast",
     "dev:code": "bun run profile:code && bun run dev:profile",
     "start": "node dist/cli.mjs",
-    "test:provider-recommendation": "node --test --experimental-strip-types src/utils/providerRecommendation.test.ts src/utils/providerProfile.test.ts",
+    "test:provider-recommendation": "bun test src/utils/providerRecommendation.test.ts src/utils/providerProfile.test.ts",
     "typecheck": "tsc --noEmit",
     "smoke": "bun run build && node dist/cli.mjs --version",
     "test:provider": "bun test src/services/api/*.test.ts src/utils/context.test.ts",
@@ -1,6 +1,4 @@
-// @ts-nocheck
-import { writeFileSync } from 'node:fs'
 import { resolve } from 'node:path'
 import {
   resolveCodexApiCredentials,
 } from '../src/services/api/providerConfig.js'
@@ -10,18 +8,23 @@ import {
   recommendOllamaModel,
 } from '../src/utils/providerRecommendation.ts'
 import {
+  buildAtomicChatProfileEnv,
   buildCodexProfileEnv,
   buildGeminiProfileEnv,
   buildOllamaProfileEnv,
   buildOpenAIProfileEnv,
   createProfileFile,
+  saveProfileFile,
   selectAutoProfile,
   type ProfileFile,
   type ProviderProfile,
 } from '../src/utils/providerProfile.ts'
 import {
+  getAtomicChatChatBaseUrl,
   getOllamaChatBaseUrl,
+  hasLocalAtomicChat,
   hasLocalOllama,
+  listAtomicChatModels,
   listOllamaModels,
 } from './provider-discovery.ts'

@@ -34,7 +37,7 @@ function parseArg(name: string): string | null {

 function parseProviderArg(): ProviderProfile | 'auto' {
   const p = parseArg('--provider')?.toLowerCase()
-  if (p === 'openai' || p === 'ollama' || p === 'codex' || p === 'gemini') return p
+  if (p === 'openai' || p === 'ollama' || p === 'codex' || p === 'gemini' || p === 'atomic-chat') return p
   return 'auto'
 }

@@ -102,6 +105,21 @@ async function main(): Promise<void> {
         getOllamaChatBaseUrl,
       },
     )
+  } else if (selected === 'atomic-chat') {
+    const model = argModel || (await listAtomicChatModels(argBaseUrl || undefined))[0]
+    if (!model) {
+      if (!(await hasLocalAtomicChat(argBaseUrl || undefined))) {
+        console.error('Atomic Chat is not running (could not connect to 127.0.0.1:1337).\n  Download from https://atomic.chat/ and launch the application.')
+      } else {
+        console.error('Atomic Chat is running but no model is loaded. Open Atomic Chat and download or start a model first.')
+      }
+      process.exit(1)
+    }
+
+    env = buildAtomicChatProfileEnv(model, {
+      baseUrl: argBaseUrl,
+      getAtomicChatChatBaseUrl,
+    })
   } else if (selected === 'codex') {
     const builtEnv = buildCodexProfileEnv({
       model: argModel,
@@ -147,8 +165,7 @@ async function main(): Promise<void> {

   const profile = createProfileFile(selected, env)

-  const outputPath = resolve(process.cwd(), '.openclaude-profile.json')
-  writeFileSync(outputPath, JSON.stringify(profile, null, 2), { encoding: 'utf8', mode: 0o600 })
+  const outputPath = saveProfileFile(profile)

   console.log(`Saved profile: ${selected}`)
   console.log(`Goal: ${goal}`)
@@ -1,129 +1,13 @@
import type { OllamaModelDescriptor } from '../src/utils/providerRecommendation.ts'

export const DEFAULT_OLLAMA_BASE_URL = 'http://localhost:11434'

function withTimeoutSignal(timeoutMs: number): {
  signal: AbortSignal
  clear: () => void
} {
  const controller = new AbortController()
  const timeout = setTimeout(() => controller.abort(), timeoutMs)
  return {
    signal: controller.signal,
    clear: () => clearTimeout(timeout),
  }
}

function trimTrailingSlash(value: string): string {
  return value.replace(/\/+$/, '')
}

export function getOllamaApiBaseUrl(baseUrl?: string): string {
  const parsed = new URL(
    baseUrl || process.env.OLLAMA_BASE_URL || DEFAULT_OLLAMA_BASE_URL,
  )
  const pathname = trimTrailingSlash(parsed.pathname)
  parsed.pathname = pathname.endsWith('/v1')
    ? pathname.slice(0, -3) || '/'
    : pathname || '/'
  parsed.search = ''
  parsed.hash = ''
  return trimTrailingSlash(parsed.toString())
}

export function getOllamaChatBaseUrl(baseUrl?: string): string {
  return `${getOllamaApiBaseUrl(baseUrl)}/v1`
}

export async function hasLocalOllama(baseUrl?: string): Promise<boolean> {
  const { signal, clear } = withTimeoutSignal(1200)
  try {
    const response = await fetch(`${getOllamaApiBaseUrl(baseUrl)}/api/tags`, {
      method: 'GET',
      signal,
    })
    return response.ok
  } catch {
    return false
  } finally {
    clear()
  }
}

export async function listOllamaModels(
  baseUrl?: string,
): Promise<OllamaModelDescriptor[]> {
  const { signal, clear } = withTimeoutSignal(5000)
  try {
    const response = await fetch(`${getOllamaApiBaseUrl(baseUrl)}/api/tags`, {
      method: 'GET',
      signal,
    })
    if (!response.ok) {
      return []
    }

    const data = await response.json() as {
      models?: Array<{
        name?: string
        size?: number
        details?: {
          family?: string
          families?: string[]
          parameter_size?: string
          quantization_level?: string
        }
      }>
    }

    return (data.models ?? [])
      .filter(model => Boolean(model.name))
      .map(model => ({
        name: model.name!,
        sizeBytes: typeof model.size === 'number' ? model.size : null,
        family: model.details?.family ?? null,
        families: model.details?.families ?? [],
        parameterSize: model.details?.parameter_size ?? null,
        quantizationLevel: model.details?.quantization_level ?? null,
      }))
  } catch {
    return []
  } finally {
    clear()
  }
}

export async function benchmarkOllamaModel(
  modelName: string,
  baseUrl?: string,
): Promise<number | null> {
  const start = Date.now()
  const { signal, clear } = withTimeoutSignal(20000)
  try {
    const response = await fetch(`${getOllamaApiBaseUrl(baseUrl)}/api/chat`, {
      method: 'POST',
      headers: {
        'Content-Type': 'application/json',
      },
      signal,
      body: JSON.stringify({
        model: modelName,
        stream: false,
        messages: [{ role: 'user', content: 'Reply with OK.' }],
        options: {
          temperature: 0,
          num_predict: 8,
        },
      }),
    })
    if (!response.ok) {
      return null
    }
    await response.json()
    return Date.now() - start
  } catch {
    return null
  } finally {
    clear()
  }
}

export {
  benchmarkOllamaModel,
  DEFAULT_ATOMIC_CHAT_BASE_URL,
  DEFAULT_OLLAMA_BASE_URL,
  getAtomicChatApiBaseUrl,
  getAtomicChatChatBaseUrl,
  getOllamaApiBaseUrl,
  getOllamaChatBaseUrl,
  hasLocalAtomicChat,
  hasLocalOllama,
  listAtomicChatModels,
  listOllamaModels,
} from '../src/utils/providerDiscovery.ts'
@@ -1,7 +1,5 @@
-// @ts-nocheck
 import { spawn } from 'node:child_process'
-import { existsSync, readFileSync } from 'node:fs'
 import { resolve } from 'node:path'
 import {
   resolveCodexApiCredentials,
 } from '../src/services/api/providerConfig.js'
@@ -11,13 +9,17 @@ import {
 } from '../src/utils/providerRecommendation.ts'
 import {
   buildLaunchEnv,
+  loadProfileFile,
   selectAutoProfile,
   type ProfileFile,
   type ProviderProfile,
 } from '../src/utils/providerProfile.ts'
 import {
+  getAtomicChatChatBaseUrl,
   getOllamaChatBaseUrl,
+  hasLocalAtomicChat,
   hasLocalOllama,
+  listAtomicChatModels,
   listOllamaModels,
 } from './provider-discovery.ts'

@@ -48,7 +50,7 @@ function parseLaunchOptions(argv: string[]): LaunchOptions {
       continue
     }

-    if ((lower === 'auto' || lower === 'openai' || lower === 'ollama' || lower === 'codex' || lower === 'gemini') && requestedProfile === 'auto') {
+    if ((lower === 'auto' || lower === 'openai' || lower === 'ollama' || lower === 'codex' || lower === 'gemini' || lower === 'atomic-chat') && requestedProfile === 'auto') {
       requestedProfile = lower as ProviderProfile | 'auto'
       continue
     }
@@ -75,17 +77,7 @@ function parseLaunchOptions(argv: string[]): LaunchOptions {
 }

 function loadPersistedProfile(): ProfileFile | null {
-  const path = resolve(process.cwd(), '.openclaude-profile.json')
-  if (!existsSync(path)) return null
-  try {
-    const parsed = JSON.parse(readFileSync(path, 'utf8')) as ProfileFile
-    if (parsed.profile === 'openai' || parsed.profile === 'ollama' || parsed.profile === 'codex' || parsed.profile === 'gemini') {
-      return parsed
-    }
-    return null
-  } catch {
-    return null
-  }
+  return loadProfileFile()
 }

@@ -96,6 +88,11 @@ async function resolveOllamaDefaultModel(
   return recommended?.name ?? null
 }

+async function resolveAtomicChatDefaultModel(): Promise<string | null> {
+  const models = await listAtomicChatModels()
+  return models[0] ?? null
+}
+
 function runCommand(command: string, env: NodeJS.ProcessEnv): Promise<number> {
   return runProcess(command, [], env)
 }
@@ -132,6 +129,10 @@ function printSummary(profile: ProviderProfile, env: NodeJS.ProcessEnv): void {
     console.log(`OPENAI_BASE_URL=${env.OPENAI_BASE_URL}`)
     console.log(`OPENAI_MODEL=${env.OPENAI_MODEL}`)
     console.log(`CODEX_API_KEY_SET=${Boolean(resolveCodexApiCredentials(env).apiKey)}`)
+  } else if (profile === 'atomic-chat') {
+    console.log(`OPENAI_BASE_URL=${env.OPENAI_BASE_URL}`)
+    console.log(`OPENAI_MODEL=${env.OPENAI_MODEL}`)
+    console.log('OPENAI_API_KEY_SET=false (local provider, no key required)')
   } else {
     console.log(`OPENAI_BASE_URL=${env.OPENAI_BASE_URL}`)
     console.log(`OPENAI_MODEL=${env.OPENAI_MODEL}`)
@@ -143,7 +144,7 @@ async function main(): Promise<void> {
   const options = parseLaunchOptions(process.argv.slice(2))
   const requestedProfile = options.requestedProfile
   if (!requestedProfile) {
-    console.error('Usage: bun run scripts/provider-launch.ts [openai|ollama|codex|gemini|auto] [--fast] [--goal <latency|balanced|coding>] [-- <cli args>]')
+    console.error('Usage: bun run scripts/provider-launch.ts [openai|ollama|codex|gemini|atomic-chat|auto] [--fast] [--goal <latency|balanced|coding>] [-- <cli args>]')
     process.exit(1)
   }

@@ -175,12 +176,30 @@ async function main(): Promise<void> {
     }
   }

+  let resolvedAtomicChatModel: string | null = null
+  if (
+    profile === 'atomic-chat' &&
+    (persisted?.profile !== 'atomic-chat' || !persisted?.env?.OPENAI_MODEL)
+  ) {
+    if (!(await hasLocalAtomicChat())) {
+      console.error('Atomic Chat is not running (could not connect to 127.0.0.1:1337).\n  Download from https://atomic.chat/ and launch the application.')
+      process.exit(1)
+    }
+    resolvedAtomicChatModel = await resolveAtomicChatDefaultModel()
+    if (!resolvedAtomicChatModel) {
+      console.error('Atomic Chat is running but no model is loaded. Open Atomic Chat and download or start a model first.')
+      process.exit(1)
+    }
+  }
+
   const env = await buildLaunchEnv({
     profile,
     persisted,
     goal: options.goal,
     getOllamaChatBaseUrl,
     resolveOllamaDefaultModel: async () => resolvedOllamaModel || 'llama3.1:8b',
+    getAtomicChatChatBaseUrl,
+    resolveAtomicChatDefaultModel: async () => resolvedAtomicChatModel,
   })
   if (options.fast) {
     applyFastFlags(env)

@@ -1,6 +1,4 @@
-// @ts-nocheck
-import { writeFileSync } from 'node:fs'
 import { resolve } from 'node:path'

 import {
   applyBenchmarkLatency,
@@ -16,6 +14,7 @@ import {
   buildOllamaProfileEnv,
   buildOpenAIProfileEnv,
   createProfileFile,
+  saveProfileFile,
   sanitizeApiKey,
   type ProfileFile,
   type ProviderProfile,
@@ -153,11 +152,7 @@ async function maybeApplyProfile(

   const profileFile = createProfileFile(profile, env)

-  writeFileSync(
-    resolve(process.cwd(), '.openclaude-profile.json'),
-    JSON.stringify(profileFile, null, 2),
-    'utf8',
-  )
+  saveProfileFile(profileFile)
   return true
 }
@@ -93,11 +93,15 @@ function isLocalBaseUrl(baseUrl: string): boolean {
 }

 const GEMINI_DEFAULT_BASE_URL = 'https://generativelanguage.googleapis.com/v1beta/openai'
+const GITHUB_MODELS_DEFAULT_BASE = 'https://models.github.ai/inference'

 function currentBaseUrl(): string {
   if (isTruthy(process.env.CLAUDE_CODE_USE_GEMINI)) {
     return process.env.GEMINI_BASE_URL ?? GEMINI_DEFAULT_BASE_URL
   }
+  if (isTruthy(process.env.CLAUDE_CODE_USE_GITHUB)) {
+    return process.env.OPENAI_BASE_URL ?? GITHUB_MODELS_DEFAULT_BASE
+  }
   return process.env.OPENAI_BASE_URL ?? 'https://api.openai.com/v1'
 }

@@ -126,15 +130,47 @@ function checkGeminiEnv(): CheckResult[] {
   return results
 }

+function checkGithubEnv(): CheckResult[] {
+  const results: CheckResult[] = []
+  const baseUrl = process.env.OPENAI_BASE_URL ?? GITHUB_MODELS_DEFAULT_BASE
+  results.push(pass('Provider mode', 'GitHub Models provider enabled.'))
+
+  const token = process.env.GITHUB_TOKEN ?? process.env.GH_TOKEN
+  if (!token?.trim()) {
+    results.push(fail('GITHUB_TOKEN', 'Missing. Set GITHUB_TOKEN or GH_TOKEN.'))
+  } else {
+    results.push(pass('GITHUB_TOKEN', 'Configured.'))
+  }
+
+  if (!process.env.OPENAI_MODEL) {
+    results.push(
+      pass(
+        'OPENAI_MODEL',
+        'Not set. Default github:copilot → openai/gpt-4.1 at runtime.',
+      ),
+    )
+  } else {
+    results.push(pass('OPENAI_MODEL', process.env.OPENAI_MODEL))
+  }
+
+  results.push(pass('OPENAI_BASE_URL', baseUrl))
+  return results
+}
+
 function checkOpenAIEnv(): CheckResult[] {
   const results: CheckResult[] = []
   const useGemini = isTruthy(process.env.CLAUDE_CODE_USE_GEMINI)
+  const useGithub = isTruthy(process.env.CLAUDE_CODE_USE_GITHUB)
   const useOpenAI = isTruthy(process.env.CLAUDE_CODE_USE_OPENAI)

   if (useGemini) {
     return checkGeminiEnv()
   }

+  if (useGithub && !useOpenAI) {
+    return checkGithubEnv()
+  }
+
   if (!useOpenAI) {
     results.push(pass('Provider mode', 'Anthropic login flow enabled (CLAUDE_CODE_USE_OPENAI is off).'))
     return results
@@ -181,12 +217,21 @@ function checkOpenAIEnv(): CheckResult[] {
   }

   const key = process.env.OPENAI_API_KEY
+  const githubToken = process.env.GITHUB_TOKEN ?? process.env.GH_TOKEN
   if (key === 'SUA_CHAVE') {
     results.push(fail('OPENAI_API_KEY', 'Placeholder value detected: SUA_CHAVE.'))
-  } else if (!key && !isLocalBaseUrl(request.baseUrl)) {
+  } else if (
+    !key &&
+    !isLocalBaseUrl(request.baseUrl) &&
+    !(useGithub && githubToken?.trim())
+  ) {
     results.push(fail('OPENAI_API_KEY', 'Missing key for non-local provider URL.'))
+  } else if (!key && useGithub && githubToken?.trim()) {
+    results.push(
+      pass('OPENAI_API_KEY', 'Not set; GITHUB_TOKEN/GH_TOKEN will be used for GitHub Models.'),
+    )
   } else if (!key) {
-    results.push(pass('OPENAI_API_KEY', 'Not set (allowed for local providers like Ollama/LM Studio).'))
+    results.push(pass('OPENAI_API_KEY', 'Not set (allowed for local providers like Atomic Chat/Ollama/LM Studio).'))
   } else {
     results.push(pass('OPENAI_API_KEY', 'Configured.'))
   }
@@ -197,11 +242,19 @@ function checkOpenAIEnv(): CheckResult[] {

 async function checkBaseUrlReachability(): Promise<CheckResult> {
   const useGemini = isTruthy(process.env.CLAUDE_CODE_USE_GEMINI)
   const useOpenAI = isTruthy(process.env.CLAUDE_CODE_USE_OPENAI)
+  const useGithub = isTruthy(process.env.CLAUDE_CODE_USE_GITHUB)

-  if (!useGemini && !useOpenAI) {
+  if (!useGemini && !useOpenAI && !useGithub) {
     return pass('Provider reachability', 'Skipped (OpenAI-compatible mode disabled).')
   }

+  if (useGithub) {
+    return pass(
+      'Provider reachability',
+      'Skipped for GitHub Models (inference endpoint differs from OpenAI /models probe).',
+    )
+  }
+
   const geminiBaseUrl = 'https://generativelanguage.googleapis.com/v1beta/openai'
   const resolvedBaseUrl = useGemini
     ? (process.env.GEMINI_BASE_URL ?? geminiBaseUrl)
@@ -271,8 +324,21 @@ async function checkBaseUrlReachability(): Promise<CheckResult> {
   }
 }

+function isAtomicChatUrl(baseUrl: string): boolean {
+  try {
+    const parsed = new URL(baseUrl)
+    return parsed.port === '1337' && isLocalBaseUrl(baseUrl)
+  } catch {
+    return false
+  }
+}
+
 function checkOllamaProcessorMode(): CheckResult {
-  if (!isTruthy(process.env.CLAUDE_CODE_USE_OPENAI) || isTruthy(process.env.CLAUDE_CODE_USE_GEMINI)) {
+  if (
+    !isTruthy(process.env.CLAUDE_CODE_USE_OPENAI) ||
+    isTruthy(process.env.CLAUDE_CODE_USE_GEMINI) ||
+    isTruthy(process.env.CLAUDE_CODE_USE_GITHUB)
+  ) {
     return pass('Ollama processor mode', 'Skipped (OpenAI-compatible mode disabled).')
   }

@@ -281,6 +347,10 @@ function checkOllamaProcessorMode(): CheckResult {
     return pass('Ollama processor mode', 'Skipped (provider URL is not local).')
   }

+  if (isAtomicChatUrl(baseUrl)) {
+    return pass('Ollama processor mode', 'Skipped (Atomic Chat local provider detected, not Ollama).')
+  }
+
   const result = spawnSync('ollama', ['ps'], {
     cwd: process.cwd(),
     encoding: 'utf8',
@@ -319,6 +389,22 @@ function serializeSafeEnvSummary(): Record<string, string | boolean> {
       GEMINI_API_KEY_SET: Boolean(process.env.GEMINI_API_KEY ?? process.env.GOOGLE_API_KEY),
     }
   }
+  if (
+    isTruthy(process.env.CLAUDE_CODE_USE_GITHUB) &&
+    !isTruthy(process.env.CLAUDE_CODE_USE_OPENAI)
+  ) {
+    return {
+      CLAUDE_CODE_USE_GITHUB: true,
+      OPENAI_MODEL:
+        process.env.OPENAI_MODEL ??
+        '(unset, default: github:copilot → openai/gpt-4.1)',
+      OPENAI_BASE_URL:
+        process.env.OPENAI_BASE_URL ?? GITHUB_MODELS_DEFAULT_BASE,
+      GITHUB_TOKEN_SET: Boolean(
+        process.env.GITHUB_TOKEN ?? process.env.GH_TOKEN,
+      ),
+    }
+  }
   const request = resolveProviderRequest({
     model: process.env.OPENAI_MODEL,
     baseUrl: process.env.OPENAI_BASE_URL,
@@ -374,6 +460,13 @@ async function main(): Promise<void> {
   const options = parseOptions(process.argv.slice(2))
   const results: CheckResult[] = []

+  const { enableConfigs } = await import('../src/utils/config.js')
+  enableConfigs()
+  const { applySafeConfigEnvironmentVariables } = await import('../src/utils/managedEnv.js')
+  applySafeConfigEnvironmentVariables()
+  const { hydrateGithubModelsTokenFromSecureStorage } = await import('../src/utils/githubModelsCredentials.js')
+  hydrateGithubModelsTokenFromSecureStorage()
+
   results.push(checkNodeVersion())
   results.push(checkBunRuntime())
   results.push(checkBuildArtifacts())
@@ -57,8 +57,8 @@ class Provider:
    @property
    def is_configured(self) -> bool:
        """True if the provider has an API key set."""
        if self.name == "ollama":
            return True  # Ollama needs no API key
        if self.name in ("ollama", "atomic-chat"):
            return True  # Local providers need no API key
        return bool(self.api_key)

    @property
@@ -93,6 +93,7 @@ def build_default_providers() -> list[Provider]:
    big = os.getenv("BIG_MODEL", "gpt-4.1")
    small = os.getenv("SMALL_MODEL", "gpt-4.1-mini")
    ollama_url = os.getenv("OLLAMA_BASE_URL", "http://localhost:11434")
    atomic_chat_url = os.getenv("ATOMIC_CHAT_BASE_URL", "http://127.0.0.1:1337")
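    # 127.0.0.1:1337 is assumed to be Atomic Chat's default local endpoint,
    # kept in sync with the port-1337 heuristic used by the doctor script.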

    return [
        Provider(
@@ -119,6 +120,14 @@ def build_default_providers() -> list[Provider]:
            big_model=big if "gemini" not in big and "gpt" not in big else "llama3:8b",
            small_model=small if "gemini" not in small and "gpt" not in small else "llama3:8b",
        ),
        Provider(
            name="atomic-chat",
            ping_url=f"{atomic_chat_url}/v1/models",
            api_key_env="",
            cost_per_1k_tokens=0.0,  # free — local (Apple Silicon)
            big_model=big if "gemini" not in big and "gpt" not in big else "llama3:8b",
            small_model=small if "gemini" not in small and "gpt" not in small else "llama3:8b",
        ),
    ]

File diff suppressed because one or more lines are too long
@@ -19,6 +19,7 @@ import cost from './commands/cost/index.js'
import diff from './commands/diff/index.js'
import ctx_viz from './commands/ctx_viz/index.js'
import doctor from './commands/doctor/index.js'
import onboardGithub from './commands/onboard-github/index.js'
import memory from './commands/memory/index.js'
import help from './commands/help/index.js'
import ide from './commands/ide/index.js'
@@ -128,6 +129,7 @@ import plan from './commands/plan/index.js'
import fast from './commands/fast/index.js'
import passes from './commands/passes/index.js'
import privacySettings from './commands/privacy-settings/index.js'
import provider from './commands/provider/index.js'
import hooks from './commands/hooks/index.js'
import files from './commands/files/index.js'
import branch from './commands/branch/index.js'
@@ -288,9 +290,11 @@ const COMMANDS = memoize((): Command[] => [
  memory,
  mobile,
  model,
  onboardGithub,
  outputStyle,
  remoteEnv,
  plugin,
  provider,
  pr_comments,
  releaseNotes,
  reloadPlugins,

File diff suppressed because one or more lines are too long
19
src/commands/mcp/doctorCommand.test.ts
Normal file
@@ -0,0 +1,19 @@
import assert from 'node:assert/strict'
import test from 'node:test'

import { Command } from '@commander-js/extra-typings'

import { registerMcpDoctorCommand } from './doctorCommand.js'

test('registerMcpDoctorCommand adds the doctor subcommand with expected options', () => {
  const mcp = new Command('mcp')

  registerMcpDoctorCommand(mcp)

  const doctor = mcp.commands.find(command => command.name() === 'doctor')
  assert.ok(doctor)
  assert.equal(doctor?.usage(), '[options] [name]')

  const optionFlags = doctor?.options.map(option => option.long)
  assert.deepEqual(optionFlags, ['--scope', '--config-only', '--json'])
})
25
src/commands/mcp/doctorCommand.ts
Normal file
@@ -0,0 +1,25 @@
/**
 * MCP doctor CLI subcommand.
 */
import { type Command } from '@commander-js/extra-typings'

export function registerMcpDoctorCommand(mcp: Command): void {
  mcp
    .command('doctor [name]')
    .description(
      'Diagnose MCP configuration, precedence, disabled/pending state, and connection health. ' +
        'Note: unless --config-only is used, stdio servers may be spawned and remote servers may be contacted. ' +
        'Only use this command in directories you trust.',
    )
    .option('-s, --scope <scope>', 'Restrict config analysis to a specific scope (local, project, user, or enterprise)')
    .option('--config-only', 'Skip live connection checks and only analyze configuration state')
    .option('--json', 'Output the diagnostics report as JSON')
    .action(async (name: string | undefined, options: {
      scope?: string
      configOnly?: boolean
      json?: boolean
    }) => {
      const { mcpDoctorHandler } = await import('../../cli/handlers/mcp.js')
      await mcpDoctorHandler(name, options)
    })
}
11
src/commands/onboard-github/index.ts
Normal file
@@ -0,0 +1,11 @@
import type { Command } from '../../commands.js'

const onboardGithub: Command = {
  name: 'onboard-github',
  description:
    'Interactive setup for GitHub Models: device login or PAT, saved to secure storage',
  type: 'local-jsx',
  load: () => import('./onboard-github.js'),
}

export default onboardGithub
237
src/commands/onboard-github/onboard-github.tsx
Normal file
@@ -0,0 +1,237 @@
import * as React from 'react'
import { useCallback, useState } from 'react'
import { Select } from '../../components/CustomSelect/select.js'
import { Spinner } from '../../components/Spinner.js'
import TextInput from '../../components/TextInput.js'
import { Box, Text } from '../../ink.js'
import {
  openVerificationUri,
  pollAccessToken,
  requestDeviceCode,
} from '../../services/github/deviceFlow.js'
import type { LocalJSXCommandCall } from '../../types/command.js'
import {
  hydrateGithubModelsTokenFromSecureStorage,
  saveGithubModelsToken,
} from '../../utils/githubModelsCredentials.js'
import { updateSettingsForSource } from '../../utils/settings/settings.js'

const DEFAULT_MODEL = 'github:copilot'

type Step =
  | 'menu'
  | 'device-busy'
  | 'pat'
  | 'error'

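// Writes GitHub Models settings and clears competing provider flags. Passing
// `undefined as any` assumes updateSettingsForSource drops keys whose value is
// undefined rather than persisting them (assumed behavior of the merger).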
function mergeUserSettingsEnv(model: string): { ok: boolean; detail?: string } {
  const { error } = updateSettingsForSource('userSettings', {
    env: {
      CLAUDE_CODE_USE_GITHUB: '1',
      OPENAI_MODEL: model,
      CLAUDE_CODE_USE_OPENAI: undefined as any,
      CLAUDE_CODE_USE_GEMINI: undefined as any,
      CLAUDE_CODE_USE_BEDROCK: undefined as any,
      CLAUDE_CODE_USE_VERTEX: undefined as any,
      CLAUDE_CODE_USE_FOUNDRY: undefined as any,
    },
  })
  if (error) {
    return { ok: false, detail: error.message }
  }
  return { ok: true }
}

function OnboardGithub(props: {
  onDone: Parameters<LocalJSXCommandCall>[0]
  onChangeAPIKey: () => void
}): React.ReactNode {
  const { onDone, onChangeAPIKey } = props
  const [step, setStep] = useState<Step>('menu')
  const [errorMsg, setErrorMsg] = useState<string | null>(null)
  const [deviceHint, setDeviceHint] = useState<{
    user_code: string
    verification_uri: string
  } | null>(null)
  const [patDraft, setPatDraft] = useState('')
  const [cursorOffset, setCursorOffset] = useState(0)

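  // Order matters here: persist the token, then update user settings, and only
  // then mutate process.env, so a failed step never leaves a half-switched provider.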
  const finalize = useCallback(
    async (token: string, model: string = DEFAULT_MODEL) => {
      const saved = saveGithubModelsToken(token)
      if (!saved.success) {
        setErrorMsg(saved.warning ?? 'Could not save token to secure storage.')
        setStep('error')
        return
      }
      const merged = mergeUserSettingsEnv(model.trim() || DEFAULT_MODEL)
      if (!merged.ok) {
        setErrorMsg(
          `Token saved, but settings were not updated: ${merged.detail ?? 'unknown error'}. ` +
            `Add env CLAUDE_CODE_USE_GITHUB=1 and OPENAI_MODEL to ~/.claude/settings.json manually.`,
        )
        setStep('error')
        return
      }
      process.env.CLAUDE_CODE_USE_GITHUB = '1'
      process.env.OPENAI_MODEL = model.trim() || DEFAULT_MODEL
      hydrateGithubModelsTokenFromSecureStorage()
      onChangeAPIKey()
      onDone(
        'GitHub Models onboard complete. Token stored in secure storage; user settings updated. Restart if the model does not switch.',
        { display: 'user' },
      )
    },
    [onChangeAPIKey, onDone],
  )

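  // Standard OAuth device flow: request a device code, open the verification
  // page, then poll GitHub until the user authorizes or the code expires.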
  const runDeviceFlow = useCallback(async () => {
    setStep('device-busy')
    setErrorMsg(null)
    setDeviceHint(null)
    try {
      const device = await requestDeviceCode()
      setDeviceHint({
        user_code: device.user_code,
        verification_uri: device.verification_uri,
      })
      await openVerificationUri(device.verification_uri)
      const token = await pollAccessToken(device.device_code, {
        initialInterval: device.interval,
        timeoutSeconds: device.expires_in,
      })
      await finalize(token, DEFAULT_MODEL)
    } catch (e) {
      setErrorMsg(e instanceof Error ? e.message : String(e))
      setStep('error')
    }
  }, [finalize])

  if (step === 'error' && errorMsg) {
    const options = [
      {
        label: 'Back to menu',
        value: 'back' as const,
      },
      {
        label: 'Exit',
        value: 'exit' as const,
      },
    ]
    return (
      <Box flexDirection="column" gap={1}>
        <Text color="red">{errorMsg}</Text>
        <Select
          options={options}
          onChange={(v: string) => {
            if (v === 'back') {
              setStep('menu')
              setErrorMsg(null)
            } else {
              onDone('GitHub onboard cancelled', { display: 'system' })
            }
          }}
        />
      </Box>
    )
  }

  if (step === 'device-busy') {
    return (
      <Box flexDirection="column" gap={1}>
        <Text>GitHub device login</Text>
        {deviceHint ? (
          <>
            <Text>
              Enter code <Text bold>{deviceHint.user_code}</Text> at{' '}
              {deviceHint.verification_uri}
            </Text>
            <Text dimColor>
              A browser window may have opened. Waiting for authorization…
            </Text>
          </>
        ) : (
          <Text dimColor>Requesting device code from GitHub…</Text>
        )}
        <Spinner />
      </Box>
    )
  }

  if (step === 'pat') {
    return (
      <Box flexDirection="column" gap={1}>
        <Text>Paste a GitHub personal access token with access to GitHub Models.</Text>
        <Text dimColor>Input is masked. Enter to submit; Esc to go back.</Text>
        <TextInput
          value={patDraft}
          mask="*"
          onChange={setPatDraft}
          onSubmit={async (value: string) => {
            const t = value.trim()
            if (!t) {
              return
            }
            await finalize(t, DEFAULT_MODEL)
          }}
          onExit={() => {
            setStep('menu')
            setPatDraft('')
          }}
          columns={80}
          cursorOffset={cursorOffset}
          onChangeCursorOffset={setCursorOffset}
        />
      </Box>
    )
  }

  const menuOptions = [
    {
      label: 'Sign in with browser (device code)',
      value: 'device' as const,
    },
    {
      label: 'Paste personal access token',
      value: 'pat' as const,
    },
    {
      label: 'Cancel',
      value: 'cancel' as const,
    },
  ]

  return (
    <Box flexDirection="column" gap={1}>
      <Text bold>GitHub Models setup</Text>
      <Text dimColor>
        Stores your token in the OS credential store (macOS Keychain when available)
        and enables CLAUDE_CODE_USE_GITHUB in your user settings — no export
        GITHUB_TOKEN needed for future runs.
      </Text>
      <Select
        options={menuOptions}
        onChange={(v: string) => {
          if (v === 'cancel') {
            onDone('GitHub onboard cancelled', { display: 'system' })
            return
          }
          if (v === 'pat') {
            setStep('pat')
            return
          }
          void runDeviceFlow()
        }}
      />
    </Box>
  )
}

export const call: LocalJSXCommandCall = async (onDone, context) => {
  return (
    <OnboardGithub
      onDone={onDone}
      onChangeAPIKey={context.onChangeAPIKey}
    />
  )
}
12
src/commands/provider/index.ts
Normal file
@@ -0,0 +1,12 @@
import type { Command } from '../../commands.js'
import { shouldInferenceConfigCommandBeImmediate } from '../../utils/immediateCommand.js'

export default {
  type: 'local-jsx',
  name: 'provider',
  description: 'Set up and save a third-party provider profile for OpenClaude',
  get immediate() {
    return shouldInferenceConfigCommandBeImmediate()
  },
  load: () => import('./provider.js'),
} satisfies Command
228
src/commands/provider/provider.test.tsx
Normal file
@@ -0,0 +1,228 @@
import { PassThrough } from 'node:stream'

import { expect, test } from 'bun:test'
import React from 'react'
import stripAnsi from 'strip-ansi'

import { createRoot, render, useApp } from '../../ink.js'
import { AppStateProvider } from '../../state/AppState.js'
import {
  buildCurrentProviderSummary,
  buildProfileSaveMessage,
  getProviderWizardDefaults,
  TextEntryDialog,
} from './provider.js'

const SYNC_START = '\x1B[?2026h'
const SYNC_END = '\x1B[?2026l'

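// Ink wraps each rendered frame in DEC mode 2026 synchronized-update markers;
// extractLastFrame scans for the last non-empty frame between those markers.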
function extractLastFrame(output: string): string {
  let lastFrame: string | null = null
  let cursor = 0

  while (cursor < output.length) {
    const start = output.indexOf(SYNC_START, cursor)
    if (start === -1) {
      break
    }

    const contentStart = start + SYNC_START.length
    const end = output.indexOf(SYNC_END, contentStart)
    if (end === -1) {
      break
    }

    const frame = output.slice(contentStart, end)
    if (frame.trim().length > 0) {
      lastFrame = frame
    }
    cursor = end + SYNC_END.length
  }

  return lastFrame ?? output
}

async function renderFinalFrame(node: React.ReactNode): Promise<string> {
  const { stdout, stdin, getOutput } = createTestStreams()

  const instance = await render(node, {
    stdout: stdout as unknown as NodeJS.WriteStream,
    stdin: stdin as unknown as NodeJS.ReadStream,
    patchConsole: false,
  })

  await instance.waitUntilExit()
  return stripAnsi(extractLastFrame(getOutput()))
}

function createTestStreams(): {
  stdout: PassThrough
  stdin: PassThrough & {
    isTTY: boolean
    setRawMode: (mode: boolean) => void
    ref: () => void
    unref: () => void
  }
  getOutput: () => string
} {
  let output = ''
  const stdout = new PassThrough()
  const stdin = new PassThrough() as PassThrough & {
    isTTY: boolean
    setRawMode: (mode: boolean) => void
    ref: () => void
    unref: () => void
  }
  stdin.isTTY = true
  stdin.setRawMode = () => {}
  stdin.ref = () => {}
  stdin.unref = () => {}
  ;(stdout as unknown as { columns: number }).columns = 120
  stdout.on('data', chunk => {
    output += chunk.toString()
  })

  return {
    stdout,
    stdin,
    getOutput: () => output,
  }
}

function StepChangeHarness(): React.ReactNode {
  const { exit } = useApp()
  const [step, setStep] = React.useState<'api' | 'model'>('api')

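  // The first render shows the API-key step, then immediately flips to the
  // model step; exiting on a zero-delay timer lets Ink paint the final frame.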
  React.useLayoutEffect(() => {
    if (step === 'api') {
      setStep('model')
      return
    }

    const timer = setTimeout(exit, 0)
    return () => clearTimeout(timer)
  }, [exit, step])

  return (
    <AppStateProvider>
      <TextEntryDialog
        title="Provider"
        subtitle={step === 'api' ? 'API key step' : 'Model step'}
        description="Enter the next value"
        initialValue={step === 'api' ? 'stale-secret-key' : 'fresh-model-name'}
        mask={step === 'api' ? '*' : undefined}
        onSubmit={() => {}}
        onCancel={() => {}}
      />
    </AppStateProvider>
  )
}

test('TextEntryDialog resets its input state when initialValue changes', async () => {
  const output = await renderFinalFrame(<StepChangeHarness />)

  expect(output).toContain('Model step')
  expect(output).toContain('fresh-model-name')
  expect(output).not.toContain('stale-secret-key')
})

test('wizard step remount prevents a typed API key from leaking into the next field', async () => {
  const { stdout, stdin, getOutput } = createTestStreams()
  const root = await createRoot({
    stdout: stdout as unknown as NodeJS.WriteStream,
    stdin: stdin as unknown as NodeJS.ReadStream,
    patchConsole: false,
  })

  root.render(
    <AppStateProvider>
      <TextEntryDialog
        resetStateKey="api"
        title="Provider"
        subtitle="API key step"
        description="Enter the API key"
        initialValue=""
        mask="*"
        onSubmit={() => {}}
        onCancel={() => {}}
      />
    </AppStateProvider>,
  )

  await Bun.sleep(25)
  stdin.write('sk-secret-12345678')
  await Bun.sleep(25)

  root.render(
    <AppStateProvider>
      <TextEntryDialog
        resetStateKey="model"
        title="Provider"
        subtitle="Model step"
        description="Enter the model"
        initialValue=""
        onSubmit={() => {}}
        onCancel={() => {}}
      />
    </AppStateProvider>,
  )

  await Bun.sleep(25)
  root.unmount()
  stdin.end()
  stdout.end()
  await Bun.sleep(25)

  const output = stripAnsi(extractLastFrame(getOutput()))
  expect(output).toContain('Model step')
  expect(output).not.toContain('sk-secret-12345678')
})

test('buildProfileSaveMessage maps provider fields without echoing secrets', () => {
  const message = buildProfileSaveMessage(
    'openai',
    {
      OPENAI_API_KEY: 'sk-secret-12345678',
      OPENAI_MODEL: 'gpt-4o',
      OPENAI_BASE_URL: 'https://api.openai.com/v1',
    },
    'D:/codings/Opensource/openclaude/.openclaude-profile.json',
  )

  expect(message).toContain('Saved OpenAI-compatible profile.')
  expect(message).toContain('Model: gpt-4o')
  expect(message).toContain('Endpoint: https://api.openai.com/v1')
  expect(message).toContain('Credentials: configured')
  expect(message).not.toContain('sk-secret-12345678')
})

test('buildCurrentProviderSummary redacts poisoned model and endpoint values', () => {
  const summary = buildCurrentProviderSummary({
    processEnv: {
      CLAUDE_CODE_USE_OPENAI: '1',
      OPENAI_API_KEY: 'sk-secret-12345678',
      OPENAI_MODEL: 'sk-secret-12345678',
      OPENAI_BASE_URL: 'sk-secret-12345678',
    },
    persisted: null,
  })

  expect(summary.providerLabel).toBe('OpenAI-compatible')
  expect(summary.modelLabel).toBe('sk-...5678')
  expect(summary.endpointLabel).toBe('sk-...5678')
})

test('getProviderWizardDefaults ignores poisoned current provider values', () => {
  const defaults = getProviderWizardDefaults({
    OPENAI_API_KEY: 'sk-secret-12345678',
    OPENAI_MODEL: 'sk-secret-12345678',
    OPENAI_BASE_URL: 'sk-secret-12345678',
    GEMINI_API_KEY: 'AIzaSecret12345678',
    GEMINI_MODEL: 'AIzaSecret12345678',
  })

  expect(defaults.openAIModel).toBe('gpt-4o')
  expect(defaults.openAIBaseUrl).toBe('https://api.openai.com/v1')
  expect(defaults.geminiModel).toBe('gemini-2.0-flash')
})
1148
src/commands/provider/provider.tsx
Normal file
File diff suppressed because it is too large
File diff suppressed because one or more lines are too long
@@ -84,44 +84,44 @@ const reducer = <T>(state: State<T>, action: Action<T>): State<T> => {
      return state
    }

    // Wrap to first item if at the end
    const next = item.next || state.optionMap.first
    // If there's a next item in the list, go to it
    if (item.next) {
      const needsToScroll = item.next.index >= state.visibleToIndex

    if (!next) {
      if (!needsToScroll) {
        return {
          ...state,
          focusedValue: item.next.value,
        }
      }

      const nextVisibleToIndex = Math.min(
        state.optionMap.size,
        state.visibleToIndex + 1,
      )

      const nextVisibleFromIndex = nextVisibleToIndex - state.visibleOptionCount

      return {
        ...state,
        focusedValue: item.next.value,
        visibleFromIndex: nextVisibleFromIndex,
        visibleToIndex: nextVisibleToIndex,
      }
    }

    // No next item - wrap to first item
    const firstItem = state.optionMap.first
    if (!firstItem) {
      return state
    }

    // When wrapping to first, reset viewport to start
    if (!item.next && next === state.optionMap.first) {
      return {
        ...state,
        focusedValue: next.value,
        visibleFromIndex: 0,
        visibleToIndex: state.visibleOptionCount,
      }
    }

    const needsToScroll = next.index >= state.visibleToIndex

    if (!needsToScroll) {
      return {
        ...state,
        focusedValue: next.value,
      }
    }

    const nextVisibleToIndex = Math.min(
      state.optionMap.size,
      state.visibleToIndex + 1,
    )

    const nextVisibleFromIndex = nextVisibleToIndex - state.visibleOptionCount

    return {
      ...state,
      focusedValue: next.value,
      visibleFromIndex: nextVisibleFromIndex,
      visibleToIndex: nextVisibleToIndex,
      focusedValue: firstItem.value,
      visibleFromIndex: 0,
      visibleToIndex: state.visibleOptionCount,
    }
  }

@@ -136,44 +136,43 @@ const reducer = <T>(state: State<T>, action: Action<T>): State<T> => {
      return state
    }

    // Wrap to last item if at the beginning
    const previous = item.previous || state.optionMap.last
    // If there's a previous item in the list, go to it
    if (item.previous) {
      const needsToScroll = item.previous.index < state.visibleFromIndex

    if (!previous) {
      return state
    }
      if (!needsToScroll) {
        return {
          ...state,
          focusedValue: item.previous.value,
        }
      }

      const nextVisibleFromIndex = Math.max(0, state.visibleFromIndex - 1)
      const nextVisibleToIndex = nextVisibleFromIndex + state.visibleOptionCount

    // When wrapping to last, reset viewport to end
    if (!item.previous && previous === state.optionMap.last) {
      const nextVisibleToIndex = state.optionMap.size
      const nextVisibleFromIndex = Math.max(
        0,
        nextVisibleToIndex - state.visibleOptionCount,
      )
      return {
        ...state,
        focusedValue: previous.value,
        focusedValue: item.previous.value,
        visibleFromIndex: nextVisibleFromIndex,
        visibleToIndex: nextVisibleToIndex,
      }
    }

    const needsToScroll = previous.index <= state.visibleFromIndex

    if (!needsToScroll) {
      return {
        ...state,
        focusedValue: previous.value,
      }
    // No previous item - wrap to last item
    const lastItem = state.optionMap.last
    if (!lastItem) {
      return state
    }

    const nextVisibleFromIndex = Math.max(0, state.visibleFromIndex - 1)

    const nextVisibleToIndex = nextVisibleFromIndex + state.visibleOptionCount

    // When wrapping to last, reset viewport to end
    const nextVisibleToIndex = state.optionMap.size
    const nextVisibleFromIndex = Math.max(
      0,
      nextVisibleToIndex - state.visibleOptionCount,
    )
    return {
      ...state,
      focusedValue: previous.value,
      focusedValue: lastItem.value,
      visibleFromIndex: nextVisibleFromIndex,
      visibleToIndex: nextVisibleToIndex,
    }

152
src/components/EffortPicker.tsx
Normal file
@@ -0,0 +1,152 @@
import React, { useState } from 'react'
import { Box, Text } from '../ink.js'
import { useMainLoopModel } from '../hooks/useMainLoopModel.js'
import { useAppState, useSetAppState } from '../state/AppState.js'
import type { EffortLevel, OpenAIEffortLevel } from '../utils/effort.js'
import {
  getAvailableEffortLevels,
  getDisplayedEffortLevel,
  getEffortLevelDescription,
  getEffortLevelLabel,
  getEffortValueDescription,
  modelSupportsEffort,
  modelUsesOpenAIEffort,
  standardEffortToOpenAI,
  isOpenAIEffortLevel,
} from '../utils/effort.js'
import { getAPIProvider } from '../utils/model/providers.js'
import { getReasoningEffortForModel } from '../services/api/providerConfig.js'
import { Select } from './CustomSelect/select.js'
import { effortLevelToSymbol } from './EffortIndicator.js'
import { KeyboardShortcutHint } from './design-system/KeyboardShortcutHint.js'
import { Byline } from './design-system/Byline.js'

type EffortOption = {
  label: React.ReactNode
  value: string
  description: string
  isAvailable: boolean
}

type Props = {
  onSelect: (effort: EffortLevel | undefined) => void
  onCancel?: () => void
}

export function EffortPicker({ onSelect, onCancel }: Props) {
  const model = useMainLoopModel()
  const appStateEffort = useAppState((s: any) => s.effortValue)
  const setAppState = useSetAppState()
  const provider = getAPIProvider()
  const usesOpenAIEffort = modelUsesOpenAIEffort(model)
  const availableLevels = getAvailableEffortLevels(model)
  const currentDisplayedLevel = getDisplayedEffortLevel(model, appStateEffort)

  // For OpenAI/Codex, get the model's default reasoning effort
  const modelReasoningEffort = usesOpenAIEffort ? getReasoningEffortForModel(model) : undefined
  const defaultEffortForModel = modelReasoningEffort || currentDisplayedLevel

  const options: EffortOption[] = [
    {
      label: <EffortOptionLabel level="auto" text="Auto" isCurrent={false} />,
      value: 'auto',
      description: 'Use the default effort level for your model',
      isAvailable: true,
    },
    ...availableLevels.map(level => {
      const displayLevel = usesOpenAIEffort
        ? (level === 'xhigh' ? 'max' : level)
        : level
      const isCurrent = currentDisplayedLevel === displayLevel
      return {
        label: (
          <EffortOptionLabel
            level={level as EffortLevel}
            text={getEffortLevelLabel(level as EffortLevel)}
            isCurrent={isCurrent}
          />
        ),
        value: level,
        description: getEffortLevelDescription(level as EffortLevel),
        isAvailable: true,
      }
    }),
  ]

  function handleSelect(value: string) {
    if (value === 'auto') {
      setAppState(prev => ({
        ...prev,
        effortValue: undefined,
      }))
      onSelect(undefined)
    } else {
      const effortLevel = value as EffortLevel
      setAppState(prev => ({
        ...prev,
        effortValue: effortLevel,
      }))
      onSelect(effortLevel)
    }
  }

  function handleCancel() {
    onCancel?.()
  }

  const supportsEffort = modelSupportsEffort(model)
  // For OpenAI/Codex, use the model's default reasoning effort as initial focus
  // For Claude, use the displayed effort level or 'auto'
  const initialFocus = usesOpenAIEffort
    ? (modelReasoningEffort || 'auto')
    : (appStateEffort ? String(appStateEffort) : 'auto')

  return (
    <Box flexDirection="column">
      <Box marginBottom={1} flexDirection="column">
        <Text color="remember" bold={true}>Set effort level</Text>
        <Text dimColor={true}>
          {usesOpenAIEffort
            ? `OpenAI/Codex provider (${provider})`
            : supportsEffort
              ? `Claude model · ${provider} provider`
              : `Effort not supported for this model`
          }
        </Text>
      </Box>

      <Box marginBottom={1}>
        <Select
          options={options}
          defaultValue={initialFocus}
          onChange={handleSelect}
          onCancel={handleCancel}
          visibleOptionCount={Math.min(6, options.length)}
          inlineDescriptions={true}
        />
      </Box>

      <Box marginBottom={1}>
        <Text dimColor={true} italic={true}>
          <Byline>
            <KeyboardShortcutHint shortcut="Enter" action="confirm" />
            <KeyboardShortcutHint shortcut="Esc" action="cancel" />
          </Byline>
        </Text>
      </Box>
    </Box>
  )
}

function EffortOptionLabel({ level, text, isCurrent }: { level: EffortLevel | 'auto', text: string, isCurrent: boolean }) {
  const symbol = level === 'auto' ? '⊘' : effortLevelToSymbol(level as EffortLevel)
  const color = isCurrent ? 'remember' : level === 'auto' ? 'subtle' : 'suggestion'

  return (
    <>
      <Text color={color}>{symbol} </Text>
      <Text bold={isCurrent}>{text}</Text>
      {isCurrent && <Text dimColor={true}> (current)</Text>}
    </>
  )
}
@@ -0,0 +1,36 @@
import figures from 'figures'
import React from 'react'
import { describe, expect, it } from 'bun:test'
import { renderToString } from '../../utils/staticRender.js'
import {
  PromptInputFooterSuggestions,
  type SuggestionItem,
} from './PromptInputFooterSuggestions.js'

describe('PromptInputFooterSuggestions', () => {
  it('renders a visible marker for the selected suggestion', async () => {
    const suggestions: SuggestionItem[] = [
      {
        id: 'command-help',
        displayText: '/help',
        description: 'Show help',
      },
      {
        id: 'command-doctor',
        displayText: '/doctor',
        description: 'Run diagnostics',
      },
    ]

    const output = await renderToString(
      <PromptInputFooterSuggestions
        suggestions={suggestions}
        selectedSuggestion={1}
      />,
      80,
    )

    expect(output).toContain(`${figures.pointer} /doctor`)
    expect(output).toContain(' /help')
  })
})
File diff suppressed because one or more lines are too long
@@ -80,6 +80,7 @@ const LOGO_CLAUDE = [

function detectProvider(): { name: string; model: string; baseUrl: string; isLocal: boolean } {
  const useGemini = process.env.CLAUDE_CODE_USE_GEMINI === '1' || process.env.CLAUDE_CODE_USE_GEMINI === 'true'
  const useGithub = process.env.CLAUDE_CODE_USE_GITHUB === '1' || process.env.CLAUDE_CODE_USE_GITHUB === 'true'
  const useOpenAI = process.env.CLAUDE_CODE_USE_OPENAI === '1' || process.env.CLAUDE_CODE_USE_OPENAI === 'true'

  if (useGemini) {
@@ -88,22 +89,53 @@ function detectProvider(): { name: string; model: string; baseUrl: string; isLoc
    return { name: 'Google Gemini', model, baseUrl, isLocal: false }
  }

  if (useGithub) {
    const model = process.env.OPENAI_MODEL || 'github:copilot'
    const baseUrl =
      process.env.OPENAI_BASE_URL || 'https://models.github.ai/inference'
    return { name: 'GitHub Models', model, baseUrl, isLocal: false }
  }

  if (useOpenAI) {
    const model = process.env.OPENAI_MODEL || 'gpt-4o'
    const rawModel = process.env.OPENAI_MODEL || 'gpt-4o'
    const baseUrl = process.env.OPENAI_BASE_URL || 'https://api.openai.com/v1'
    const isLocal = /localhost|127\.0\.0\.1|0\.0\.0\.0/.test(baseUrl)
    let name = 'OpenAI'
    if (/deepseek/i.test(baseUrl) || /deepseek/i.test(model)) name = 'DeepSeek'
    if (/deepseek/i.test(baseUrl) || /deepseek/i.test(rawModel)) name = 'DeepSeek'
    else if (/openrouter/i.test(baseUrl)) name = 'OpenRouter'
    else if (/together/i.test(baseUrl)) name = 'Together AI'
    else if (/groq/i.test(baseUrl)) name = 'Groq'
    else if (/mistral/i.test(baseUrl) || /mistral/i.test(model)) name = 'Mistral'
    else if (/mistral/i.test(baseUrl) || /mistral/i.test(rawModel)) name = 'Mistral'
    else if (/azure/i.test(baseUrl)) name = 'Azure OpenAI'
    else if (/localhost:11434/i.test(baseUrl)) name = 'Ollama'
    else if (/localhost:1234/i.test(baseUrl)) name = 'LM Studio'
    else if (/llama/i.test(model)) name = 'Meta Llama'
    else if (/llama/i.test(rawModel)) name = 'Meta Llama'
    else if (isLocal) name = 'Local'
    return { name, model, baseUrl, isLocal }

    // Resolve model alias to actual model name + reasoning effort
    let displayModel = rawModel
    const codexAliases: Record<string, { model: string; reasoningEffort?: string }> = {
      codexplan: { model: 'gpt-5.4', reasoningEffort: 'high' },
      'gpt-5.4': { model: 'gpt-5.4', reasoningEffort: 'high' },
      'gpt-5.3-codex': { model: 'gpt-5.3-codex', reasoningEffort: 'high' },
      'gpt-5.3-codex-spark': { model: 'gpt-5.3-codex-spark' },
      codexspark: { model: 'gpt-5.3-codex-spark' },
      'gpt-5.2-codex': { model: 'gpt-5.2-codex', reasoningEffort: 'high' },
      'gpt-5.1-codex-max': { model: 'gpt-5.1-codex-max', reasoningEffort: 'high' },
      'gpt-5.1-codex-mini': { model: 'gpt-5.1-codex-mini' },
      'gpt-5.4-mini': { model: 'gpt-5.4-mini', reasoningEffort: 'medium' },
      'gpt-5.2': { model: 'gpt-5.2', reasoningEffort: 'medium' },
    }
    const alias = rawModel.toLowerCase()
    if (alias in codexAliases) {
      const resolved = codexAliases[alias]
      displayModel = resolved.model
      if (resolved.reasoningEffort) {
        displayModel = `${displayModel} (${resolved.reasoningEffort})`
      }
    }

    return { name, model: displayModel, baseUrl, isLocal }
  }

  // Default: Anthropic

File diff suppressed because one or more lines are too long
@@ -3,6 +3,11 @@ import {
  resolveCodexApiCredentials,
  resolveProviderRequest,
} from '../services/api/providerConfig.js'
import {
  applyProfileEnvToProcessEnv,
  buildStartupEnvFromProfile,
  redactSecretValueForDisplay,
} from '../utils/providerProfile.js'

// Bugfix for corepack auto-pinning, which adds yarnpkg to peoples' package.jsons
// eslint-disable-next-line custom-rules/no-top-level-side-effects
@@ -45,39 +50,72 @@ function isLocalProviderUrl(baseUrl: string | undefined): boolean {
  }
}

function validateProviderEnvOrExit(): void {
  if (!isEnvTruthy(process.env.CLAUDE_CODE_USE_OPENAI)) {
    return
function getProviderValidationError(
  env: NodeJS.ProcessEnv = process.env,
): string | null {
  const useOpenAI = isEnvTruthy(env.CLAUDE_CODE_USE_OPENAI)
  const useGithub = isEnvTruthy(env.CLAUDE_CODE_USE_GITHUB)

  if (isEnvTruthy(env.CLAUDE_CODE_USE_GEMINI)) {
    if (!(env.GEMINI_API_KEY ?? env.GOOGLE_API_KEY)) {
      return 'GEMINI_API_KEY is required when CLAUDE_CODE_USE_GEMINI=1.'
    }
    return null
  }

  if (useGithub && !useOpenAI) {
    const token = (env.GITHUB_TOKEN?.trim() || env.GH_TOKEN?.trim()) ?? ''
    if (!token) {
      return 'GITHUB_TOKEN or GH_TOKEN is required when CLAUDE_CODE_USE_GITHUB=1.'
    }
    return null
  }

  if (!useOpenAI) {
    return null
  }

  const request = resolveProviderRequest({
    model: process.env.OPENAI_MODEL,
    baseUrl: process.env.OPENAI_BASE_URL,
    model: env.OPENAI_MODEL,
    baseUrl: env.OPENAI_BASE_URL,
  })

  if (process.env.OPENAI_API_KEY === 'SUA_CHAVE') {
    console.error('Invalid OPENAI_API_KEY: placeholder value SUA_CHAVE detected. Set a real key or unset for local providers.')
    process.exit(1)
  if (env.OPENAI_API_KEY === 'SUA_CHAVE') {
    return 'Invalid OPENAI_API_KEY: placeholder value SUA_CHAVE detected. Set a real key or unset for local providers.'
  }

  if (request.transport === 'codex_responses') {
    const credentials = resolveCodexApiCredentials()
    const credentials = resolveCodexApiCredentials(env)
    if (!credentials.apiKey) {
      const authHint = credentials.authPath
        ? ` or put auth.json at ${credentials.authPath}`
        : ''
      console.error(`Codex auth is required for ${request.requestedModel}. Set CODEX_API_KEY${authHint}.`)
      process.exit(1)
      const safeModel =
        redactSecretValueForDisplay(request.requestedModel, env) ??
        'the requested model'
      return `Codex auth is required for ${safeModel}. Set CODEX_API_KEY${authHint}.`
    }
    if (!credentials.accountId) {
      console.error('Codex auth is missing chatgpt_account_id. Re-login with Codex or set CHATGPT_ACCOUNT_ID/CODEX_ACCOUNT_ID.')
      process.exit(1)
      return 'Codex auth is missing chatgpt_account_id. Re-login with Codex or set CHATGPT_ACCOUNT_ID/CODEX_ACCOUNT_ID.'
    }
    return
    return null
  }

  if (!process.env.OPENAI_API_KEY && !isLocalProviderUrl(request.baseUrl)) {
    console.error('OPENAI_API_KEY is required when CLAUDE_CODE_USE_OPENAI=1 and OPENAI_BASE_URL is not local.')
  if (!env.OPENAI_API_KEY && !isLocalProviderUrl(request.baseUrl)) {
    const hasGithubToken = !!(env.GITHUB_TOKEN?.trim() || env.GH_TOKEN?.trim())
    if (useGithub && hasGithubToken) {
      return null
    }
    return 'OPENAI_API_KEY is required when CLAUDE_CODE_USE_OPENAI=1 and OPENAI_BASE_URL is not local.'
  }

  return null
}

function validateProviderEnvOrExit(): void {
  const error = getProviderValidationError()
  if (error) {
    console.error(error)
    process.exit(1)
  }
}
@@ -98,6 +136,29 @@ async function main(): Promise<void> {
    return;
  }

  {
    const { enableConfigs } = await import('../utils/config.js')
    enableConfigs()
    const { applySafeConfigEnvironmentVariables } = await import('../utils/managedEnv.js')
    applySafeConfigEnvironmentVariables()
    const { hydrateGithubModelsTokenFromSecureStorage } = await import('../utils/githubModelsCredentials.js')
    hydrateGithubModelsTokenFromSecureStorage()
  }

  const startupEnv = await buildStartupEnvFromProfile({
    processEnv: process.env,
  })
  if (startupEnv !== process.env) {
    const startupProfileError = getProviderValidationError(startupEnv)
    if (startupProfileError) {
      console.error(
        `Warning: ignoring saved provider profile. ${startupProfileError}`,
      )
    } else {
      applyProfileEnvToProcessEnv(process.env, startupEnv)
    }
  }

  validateProviderEnvOrExit()

  // Print the gradient startup screen before the Ink UI loads

@@ -1242,17 +1242,25 @@ export function useTypeahead({
  const handleAutocompletePrevious = useCallback(() => {
    setSuggestionsState(prev => ({
      ...prev,
      selectedSuggestion: prev.selectedSuggestion <= 0 ? suggestions.length - 1 : prev.selectedSuggestion - 1
      selectedSuggestion: prev.suggestions.length === 0
        ? -1
        : prev.selectedSuggestion <= 0
          ? prev.suggestions.length - 1
          : Math.min(prev.selectedSuggestion - 1, prev.suggestions.length - 1)
    }));
  }, [suggestions.length, setSuggestionsState]);
  }, [setSuggestionsState]);

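  // Reading the length from prev.suggestions (not the captured `suggestions`
  // prop) avoids acting on a stale closure when the list changes mid-render.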
  // Handler for autocomplete:next - selects next suggestion
  const handleAutocompleteNext = useCallback(() => {
    setSuggestionsState(prev => ({
      ...prev,
      selectedSuggestion: prev.selectedSuggestion >= suggestions.length - 1 ? 0 : prev.selectedSuggestion + 1
      selectedSuggestion: prev.suggestions.length === 0
        ? -1
        : prev.selectedSuggestion >= prev.suggestions.length - 1
          ? 0
          : Math.max(0, prev.selectedSuggestion + 1)
    }));
  }, [suggestions.length, setSuggestionsState]);
  }, [setSuggestionsState]);

  // Autocomplete context keybindings - only active when suggestions are visible
  const autocompleteHandlers = useMemo(() => ({

@@ -139,6 +139,7 @@ import { validateUuid } from './utils/uuid.js';
// Plugin startup checks are now handled non-blockingly in REPL.tsx

import { registerMcpAddCommand } from 'src/commands/mcp/addCommand.js';
import { registerMcpDoctorCommand } from 'src/commands/mcp/doctorCommand.js';
import { registerMcpXaaIdpCommand } from 'src/commands/mcp/xaaIdpCommand.js';
import { logPermissionContextForAnts } from 'src/services/internalLogging.js';
import { fetchClaudeAIMcpConfigsIfEligible } from 'src/services/mcp/claudeai.js';
@@ -2313,7 +2314,11 @@ async function run(): Promise<CommanderCommand> {
    errors
  } = getSettingsWithErrors();
  const nonMcpErrors = errors.filter(e => !e.mcpErrorMetadata);
  if (nonMcpErrors.length > 0 && !isEnvTruthy(process.env.CLAUDE_CODE_USE_OPENAI)) {
  if (
    nonMcpErrors.length > 0 &&
    !isEnvTruthy(process.env.CLAUDE_CODE_USE_OPENAI) &&
    !isEnvTruthy(process.env.CLAUDE_CODE_USE_GITHUB)
  ) {
    await launchInvalidSettingsDialog(root, {
      settingsErrors: nonMcpErrors,
      onExit: () => gracefulShutdownSync(1)
@@ -3887,6 +3892,7 @@ async function run(): Promise<CommanderCommand> {

  // Register the mcp add subcommand (extracted for testability)
  registerMcpAddCommand(mcp);
  registerMcpDoctorCommand(mcp);
  if (isXaaEnabled()) {
    registerMcpXaaIdpCommand(mcp);
  }

@@ -3,6 +3,7 @@ import {
  setMainLoopModelOverride,
} from '../bootstrap/state.js'
import { getGlobalConfig, saveGlobalConfig } from '../utils/config.js'
import { getAPIProvider } from '../utils/model/providers.js'
import {
  getSettingsForSource,
  updateSettingsForSource,
@@ -23,6 +24,10 @@ import {
 * tracked by a completion flag in global config.
 */
export function migrateSonnet1mToSonnet45(): void {
  if (getAPIProvider() !== 'firstParty') {
    return
  }

  const config = getGlobalConfig()
  if (config.sonnet1m45MigrationComplete) {
    return

@@ -154,7 +154,10 @@ export async function getAnthropicClient({
      fetch: resolvedFetch,
    }),
  }
  if (isEnvTruthy(process.env.CLAUDE_CODE_USE_OPENAI)) {
  if (
    isEnvTruthy(process.env.CLAUDE_CODE_USE_OPENAI) ||
    isEnvTruthy(process.env.CLAUDE_CODE_USE_GITHUB)
  ) {
    const { createOpenAIShimClient } = await import('./openaiShim.js')
    return createOpenAIShimClient({
      defaultHeaders,

@@ -144,6 +144,42 @@ describe('Codex request translation', () => {
    ])
  })

  test('removes unsupported uri format from strict Responses schemas', () => {
    const tools = convertToolsToResponsesTools([
      {
        name: 'WebFetch',
        description: 'Fetch a URL',
        input_schema: {
          type: 'object',
          properties: {
            url: { type: 'string', format: 'uri' },
            prompt: { type: 'string' },
          },
          required: ['url', 'prompt'],
          additionalProperties: false,
        },
      },
    ])

    expect(tools).toEqual([
      {
        type: 'function',
        name: 'WebFetch',
        description: 'Fetch a URL',
        parameters: {
          type: 'object',
          properties: {
            url: { type: 'string' },
            prompt: { type: 'string' },
          },
          required: ['url', 'prompt'],
          additionalProperties: false,
        },
        strict: true,
      },
    ])
  })

  test('converts assistant tool use and user tool result into Responses items', () => {
    const items = convertAnthropicMessagesToResponsesInput([
      {

@@ -1,3 +1,4 @@
import { APIError } from '@anthropic-ai/sdk'
import type {
  ResolvedCodexCredentials,
  ResolvedProviderRequest,
@@ -234,7 +235,10 @@ export function convertAnthropicMessagesToResponsesInput(
      items.push({
        type: 'function_call_output',
        call_id: callId,
        output: convertToolResultToText(toolResult.content),
        output: (() => {
          const out = convertToolResultToText(toolResult.content)
          return toolResult.is_error ? `Error: ${out}` : out
        })(),
      })
    }

@@ -311,6 +315,11 @@ function enforceStrictSchema(schema: unknown): Record<string, unknown> {
  // Codex API strict schemas reject these JSON schema keywords
  delete record.$schema
  delete record.propertyNames
  // Codex Responses rejects JSON Schema's standard `uri` string format.
  // Keep URL validation in the tool layer and send a plain string here.
  if (record.format === 'uri') {
    delete record.format
  }

  if (record.type === 'object') {
    // OpenAI structured outputs completely forbid dynamic additionalProperties.
@@ -453,6 +462,7 @@ function convertToolChoice(toolChoice: unknown): unknown {
  if (!choice?.type) return undefined
  if (choice.type === 'auto') return 'auto'
  if (choice.type === 'any') return 'required'
  if (choice.type === 'none') return 'none'
  if (choice.type === 'tool' && choice.name) {
    return {
      type: 'function',
@@ -553,7 +563,13 @@ export async function performCodexRequest(options: {

  if (!response.ok) {
    const errorBody = await response.text().catch(() => 'unknown error')
    throw new Error(`Codex API error ${response.status}: ${errorBody}`)
    let errorResponse: object | undefined
    try { errorResponse = JSON.parse(errorBody) } catch { /* raw text */ }
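    // Surfacing failures as Anthropic SDK APIErrors lets the existing retry
    // and error-classification paths treat Codex errors like first-party ones.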
    throw APIError.generate(
      response.status, errorResponse,
      `Codex API error ${response.status}: ${errorBody}`,
      response.headers as unknown as Record<string, string>,
    )
  }

  return response
@@ -633,11 +649,9 @@ export async function collectCodexCompletedResponse(

  for await (const event of readSseEvents(response)) {
    if (event.event === 'response.failed') {
      throw new Error(
        event.data?.response?.error?.message ??
          event.data?.error?.message ??
          'Codex response failed',
      )
      const msg = event.data?.response?.error?.message ??
        event.data?.error?.message ?? 'Codex response failed'
      throw APIError.generate(500, undefined, msg, {} as Record<string, string>)
    }

    if (
@@ -650,7 +664,10 @@ export async function collectCodexCompletedResponse(
  }

  if (!completedResponse) {
    throw new Error('Codex response ended without a completed payload')
    throw APIError.generate(
      500, undefined, 'Codex response ended without a completed payload',
      {} as Record<string, string>,
    )
  }

  return completedResponse
@@ -806,11 +823,9 @@ export async function* codexStreamToAnthropic(
  }

  if (event.event === 'response.failed') {
    throw new Error(
      payload?.response?.error?.message ??
        payload?.error?.message ??
        'Codex response failed',
    )
    const msg = payload?.response?.error?.message ??
      payload?.error?.message ?? 'Codex response failed'
    throw APIError.generate(500, undefined, msg, {} as Record<string, string>)
  }
}

@@ -14,8 +14,16 @@
 * OPENAI_BASE_URL=http://... — base URL (default: https://api.openai.com/v1)
 * OPENAI_MODEL=gpt-4o — default model override
 * CODEX_API_KEY / ~/.codex/auth.json — Codex auth for codexplan/codexspark
 *
 * GitHub Models (models.github.ai), OpenAI-compatible:
 * CLAUDE_CODE_USE_GITHUB=1 — enable GitHub inference (no need for USE_OPENAI)
 * GITHUB_TOKEN or GH_TOKEN — PAT with models access (mapped to Bearer auth)
 * OPENAI_MODEL — optional; use github:copilot or openai/gpt-4.1 style IDs
 */

import { APIError } from '@anthropic-ai/sdk'
import { isEnvTruthy } from '../../utils/envUtils.js'
import { hydrateGithubModelsTokenFromSecureStorage } from '../../utils/githubModelsCredentials.js'
import {
  codexStreamToAnthropic,
  collectCodexCompletedResponse,
@@ -26,10 +34,31 @@ import {
  type ShimCreateParams,
} from './codexShim.js'
import {
  isLocalProviderUrl,
  resolveCodexApiCredentials,
  resolveProviderRequest,
} from './providerConfig.js'
import { stripIncompatibleSchemaKeywords } from '../../utils/schemaSanitizer.js'
import { redactSecretValueForDisplay } from '../../utils/providerProfile.js'

const GITHUB_MODELS_DEFAULT_BASE = 'https://models.github.ai/inference'
const GITHUB_API_VERSION = '2022-11-28'
const GITHUB_429_MAX_RETRIES = 3
const GITHUB_429_BASE_DELAY_SEC = 1
const GITHUB_429_MAX_DELAY_SEC = 32
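// Retry delays grow as base * 2^attempt (1s, 2s, 4s, ...) and are capped at
// GITHUB_429_MAX_DELAY_SEC; only 429 responses from GitHub Models are retried.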

function isGithubModelsMode(): boolean {
  return isEnvTruthy(process.env.CLAUDE_CODE_USE_GITHUB)
}

function formatRetryAfterHint(response: Response): string {
  const ra = response.headers.get('retry-after')
  return ra ? ` (Retry-After: ${ra})` : ''
}

function sleepMs(ms: number): Promise<void> {
  return new Promise(resolve => setTimeout(resolve, ms))
}

// ---------------------------------------------------------------------------
// Types — minimal subset of Anthropic SDK types we need to produce
@@ -188,7 +217,10 @@ function convertMessages(

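      // Chat-completions assistant messages carry plain-string content, so any
      // structured text blocks are flattened to a single string before sending.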
      const assistantMsg: OpenAIMessage = {
        role: 'assistant',
        content: convertContentBlocks(textContent) as string,
        content: (() => {
          const c = convertContentBlocks(textContent)
          return typeof c === 'string' ? c : Array.isArray(c) ? c.map((p: { text?: string }) => p.text ?? '').join('') : ''
        })(),
      }

      if (toolUses.length > 0) {
@@ -217,7 +249,10 @@ function convertMessages(
    } else {
      result.push({
        role: 'assistant',
        content: convertContentBlocks(content) as string,
        content: (() => {
          const c = convertContentBlocks(content)
          return typeof c === 'string' ? c : Array.isArray(c) ? c.map((p: { text?: string }) => p.text ?? '').join('') : ''
        })(),
      })
    }
  }
@@ -296,9 +331,7 @@ function normalizeSchemaForOpenAI(
function convertTools(
  tools: Array<{ name: string; description?: string; input_schema?: Record<string, unknown> }>,
): OpenAITool[] {
  const isGemini =
    process.env.CLAUDE_CODE_USE_GEMINI === '1' ||
    process.env.CLAUDE_CODE_USE_GEMINI === 'true'
  const isGemini = isEnvTruthy(process.env.CLAUDE_CODE_USE_GEMINI)

  return tools
    .filter(t => t.name !== 'ToolSearchTool') // Not relevant for OpenAI
@@ -595,7 +628,8 @@ async function* openaiStreamToAnthropic(
      if (
        !hasEmittedFinalUsage &&
        chunkUsage &&
        (chunk.choices?.length ?? 0) === 0
        (chunk.choices?.length ?? 0) === 0 &&
        lastStopReason !== null
      ) {
        yield {
          type: 'message_delta',
@@ -633,9 +667,11 @@ class OpenAIShimStream {

class OpenAIShimMessages {
  private defaultHeaders: Record<string, string>
  private reasoningEffort?: 'low' | 'medium' | 'high' | 'xhigh'

  constructor(defaultHeaders: Record<string, string>) {
  constructor(defaultHeaders: Record<string, string>, reasoningEffort?: 'low' | 'medium' | 'high' | 'xhigh') {
    this.defaultHeaders = defaultHeaders
    this.reasoningEffort = reasoningEffort
  }

  create(
@@ -644,9 +680,12 @@ class OpenAIShimMessages {
  ) {
    const self = this

    let httpResponse: Response | undefined

    const promise = (async () => {
      const request = resolveProviderRequest({ model: params.model })
      const request = resolveProviderRequest({ model: params.model, reasoningEffortOverride: self.reasoningEffort })
      const response = await self._doRequest(request, params, options)
      httpResponse = response

      if (params.stream) {
        return new OpenAIShimStream(
@@ -673,8 +712,9 @@ class OpenAIShimMessages {
    const data = await promise
    return {
      data,
      response: new Response(),
      request_id: makeMessageId(),
      response: httpResponse ?? new Response(),
      request_id:
        httpResponse?.headers.get('x-request-id') ?? makeMessageId(),
    }
  }

@@ -692,8 +732,11 @@ class OpenAIShimMessages {
      const authHint = credentials.authPath
        ? ` or place a Codex auth.json at ${credentials.authPath}`
        : ''
      const safeModel =
        redactSecretValueForDisplay(request.requestedModel, process.env) ??
        'the requested model'
      throw new Error(
        `Codex auth is required for ${request.requestedModel}. Set CODEX_API_KEY${authHint}.`,
        `Codex auth is required for ${safeModel}. Set CODEX_API_KEY${authHint}.`,
      )
    }
    if (!credentials.accountId) {
@@ -752,10 +795,16 @@ class OpenAIShimMessages {
      body.max_completion_tokens = maxCompletionTokensValue
    }

    if (params.stream) {
    if (params.stream && !isLocalProviderUrl(request.baseUrl)) {
      body.stream_options = { include_usage: true }
    }

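    // GitHub Models' OpenAI-compatible endpoint is assumed to expect the older
    // max_tokens field, so max_completion_tokens is renamed before dispatch.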
const isGithub = isGithubModelsMode()
|
||||
if (isGithub && body.max_completion_tokens !== undefined) {
|
||||
body.max_tokens = body.max_completion_tokens
|
||||
delete body.max_completion_tokens
|
||||
}
|
||||
|
||||
if (params.temperature !== undefined) body.temperature = params.temperature
|
||||
if (params.top_p !== undefined) body.top_p = params.top_p
|
||||
|
||||
@@ -805,6 +854,11 @@ class OpenAIShimMessages {
|
||||
}
|
||||
}
|
||||
|
||||
if (isGithub) {
|
||||
headers.Accept = 'application/vnd.github.v3+json'
|
||||
headers['X-GitHub-Api-Version'] = GITHUB_API_VERSION
|
||||
}
|
||||
|
||||
// Build the chat completions URL
|
||||
// Azure Cognitive Services / Azure OpenAI require a deployment-specific path
|
||||
// and an api-version query parameter.
|
||||
@@ -827,19 +881,50 @@ class OpenAIShimMessages {
|
||||
chatCompletionsUrl = `${request.baseUrl}/chat/completions`
|
||||
}
|
||||
|
||||
const response = await fetch(chatCompletionsUrl, {
|
||||
method: 'POST',
|
||||
const fetchInit = {
|
||||
method: 'POST' as const,
|
||||
headers,
|
||||
body: JSON.stringify(body),
|
||||
signal: options?.signal,
|
||||
})
|
||||
|
||||
if (!response.ok) {
|
||||
const errorBody = await response.text().catch(() => 'unknown error')
|
||||
throw new Error(`OpenAI API error ${response.status}: ${errorBody}`)
|
||||
}
|
||||
|
||||
return response
|
||||
const maxAttempts = isGithub ? GITHUB_429_MAX_RETRIES : 1
|
||||
let response: Response | undefined
|
||||
for (let attempt = 0; attempt < maxAttempts; attempt++) {
|
||||
response = await fetch(chatCompletionsUrl, fetchInit)
|
||||
if (response.ok) {
|
||||
return response
|
||||
}
|
||||
if (
|
||||
isGithub &&
|
||||
response.status === 429 &&
|
||||
attempt < maxAttempts - 1
|
||||
) {
|
||||
await response.text().catch(() => {})
|
||||
const delaySec = Math.min(
|
||||
GITHUB_429_BASE_DELAY_SEC * 2 ** attempt,
|
||||
GITHUB_429_MAX_DELAY_SEC,
|
||||
)
|
||||
await sleepMs(delaySec * 1000)
|
||||
continue
|
||||
}
|
||||
const errorBody = await response.text().catch(() => 'unknown error')
|
||||
const rateHint =
|
||||
isGithub && response.status === 429 ? formatRetryAfterHint(response) : ''
|
||||
let errorResponse: object | undefined
|
||||
try { errorResponse = JSON.parse(errorBody) } catch { /* raw text */ }
|
||||
throw APIError.generate(
|
||||
response.status,
|
||||
errorResponse,
|
||||
`OpenAI API error ${response.status}: ${errorBody}${rateHint}`,
|
||||
response.headers as unknown as Record<string, string>,
|
||||
)
|
||||
}
|
||||
|
||||
throw APIError.generate(
|
||||
500, undefined, 'OpenAI shim: request loop exited unexpectedly',
|
||||
{} as Record<string, string>,
|
||||
)
|
||||
}
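The retry loop above backs off exponentially on GitHub Models 429 responses. A minimal sketch of the same schedule, using placeholder values for the GITHUB_429_* constants (their real definitions live elsewhere in this file and are not part of this hunk):

```ts
// Placeholder values for illustration only; the real GITHUB_429_*
// constants are defined elsewhere in openaiShim.ts.
const GITHUB_429_MAX_RETRIES = 4
const GITHUB_429_BASE_DELAY_SEC = 1
const GITHUB_429_MAX_DELAY_SEC = 30

// The delay before each retry doubles: 1s, 2s, 4s, ... capped at 30s.
for (let attempt = 0; attempt < GITHUB_429_MAX_RETRIES - 1; attempt++) {
  const delaySec = Math.min(
    GITHUB_429_BASE_DELAY_SEC * 2 ** attempt,
    GITHUB_429_MAX_DELAY_SEC,
  )
  console.log(`429 on attempt ${attempt}: waiting ${delaySec}s before retrying`)
}
```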

  private _convertNonStreamingResponse(
@@ -849,7 +934,10 @@ class OpenAIShimMessages {
    choices?: Array<{
      message?: {
        role?: string
        content?: string | null
        content?:
          | string
          | null
          | Array<{ type?: string; text?: string }>
        tool_calls?: Array<{
          id: string
          function: { name: string; arguments: string }
@@ -868,8 +956,25 @@ class OpenAIShimMessages {
    const choice = data.choices?.[0]
    const content: Array<Record<string, unknown>> = []

    if (choice?.message?.content) {
      content.push({ type: 'text', text: choice.message.content })
    const rawContent = choice?.message?.content
    if (typeof rawContent === 'string' && rawContent) {
      content.push({ type: 'text', text: rawContent })
    } else if (Array.isArray(rawContent) && rawContent.length > 0) {
      const parts: string[] = []
      for (const part of rawContent) {
        if (
          part &&
          typeof part === 'object' &&
          part.type === 'text' &&
          typeof part.text === 'string'
        ) {
          parts.push(part.text)
        }
      }
      const joined = parts.join('\n')
      if (joined) {
        content.push({ type: 'text', text: joined })
      }
    }
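For clarity, the array branch above flattens OpenAI-style content parts into a single text block, silently dropping non-text parts. Roughly:

```ts
// Some OpenAI-compatible providers return message.content as typed parts
// rather than a plain string. The shim keeps only the text parts:
const parts: Array<{ type?: string; text?: string; url?: string }> = [
  { type: 'text', text: 'First paragraph.' },
  { type: 'image_url', url: 'https://example.test/x.png' }, // not text: skipped
  { type: 'text', text: 'Second paragraph.' },
]
const joined = parts
  .filter(p => p.type === 'text' && typeof p.text === 'string')
  .map(p => p.text!)
  .join('\n')
// joined === 'First paragraph.\nSecond paragraph.'
```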

    if (choice?.message?.tool_calls) {
@@ -917,9 +1022,11 @@ class OpenAIShimMessages {

class OpenAIShimBeta {
  messages: OpenAIShimMessages
  reasoningEffort?: 'low' | 'medium' | 'high' | 'xhigh'

  constructor(defaultHeaders: Record<string, string>) {
    this.messages = new OpenAIShimMessages(defaultHeaders)
  constructor(defaultHeaders: Record<string, string>, reasoningEffort?: 'low' | 'medium' | 'high' | 'xhigh') {
    this.messages = new OpenAIShimMessages(defaultHeaders, reasoningEffort)
    this.reasoningEffort = reasoningEffort
  }
}

@@ -927,13 +1034,13 @@ export function createOpenAIShimClient(options: {
  defaultHeaders?: Record<string, string>
  maxRetries?: number
  timeout?: number
  reasoningEffort?: 'low' | 'medium' | 'high' | 'xhigh'
}): unknown {
  hydrateGithubModelsTokenFromSecureStorage()

  // When Gemini provider is active, map Gemini env vars to OpenAI-compatible ones
  // so the existing providerConfig.ts infrastructure picks them up correctly.
  if (
    process.env.CLAUDE_CODE_USE_GEMINI === '1' ||
    process.env.CLAUDE_CODE_USE_GEMINI === 'true'
  ) {
  if (isEnvTruthy(process.env.CLAUDE_CODE_USE_GEMINI)) {
    process.env.OPENAI_BASE_URL ??=
      process.env.GEMINI_BASE_URL ??
      'https://generativelanguage.googleapis.com/v1beta/openai'
@@ -942,11 +1049,15 @@ export function createOpenAIShimClient(options: {
    if (process.env.GEMINI_MODEL && !process.env.OPENAI_MODEL) {
      process.env.OPENAI_MODEL = process.env.GEMINI_MODEL
    }
  } else if (isEnvTruthy(process.env.CLAUDE_CODE_USE_GITHUB)) {
    process.env.OPENAI_BASE_URL ??= GITHUB_MODELS_DEFAULT_BASE
    process.env.OPENAI_API_KEY ??=
      process.env.GITHUB_TOKEN ?? process.env.GH_TOKEN ?? ''
  }

  const beta = new OpenAIShimBeta({
    ...(options.defaultHeaders ?? {}),
  })
  }, options.reasoningEffort)

  return {
    beta,
41
src/services/api/providerConfig.github.test.ts
Normal file
@@ -0,0 +1,41 @@
import { afterEach, expect, test } from 'bun:test'

import {
  DEFAULT_GITHUB_MODELS_API_MODEL,
  normalizeGithubModelsApiModel,
  resolveProviderRequest,
} from './providerConfig.js'

const originalUseGithub = process.env.CLAUDE_CODE_USE_GITHUB

afterEach(() => {
  if (originalUseGithub === undefined) {
    delete process.env.CLAUDE_CODE_USE_GITHUB
  } else {
    process.env.CLAUDE_CODE_USE_GITHUB = originalUseGithub
  }
})

test.each([
  ['copilot', DEFAULT_GITHUB_MODELS_API_MODEL],
  ['github:copilot', DEFAULT_GITHUB_MODELS_API_MODEL],
  ['', DEFAULT_GITHUB_MODELS_API_MODEL],
  ['github:gpt-4o', 'gpt-4o'],
  ['gpt-4o', 'gpt-4o'],
  ['github:copilot?reasoning=high', DEFAULT_GITHUB_MODELS_API_MODEL],
] as const)('normalizeGithubModelsApiModel(%s) -> %s', (input, expected) => {
  expect(normalizeGithubModelsApiModel(input)).toBe(expected)
})

test('resolveProviderRequest applies GitHub normalization when CLAUDE_CODE_USE_GITHUB=1', () => {
  process.env.CLAUDE_CODE_USE_GITHUB = '1'
  const r = resolveProviderRequest({ model: 'github:gpt-4o' })
  expect(r.resolvedModel).toBe('gpt-4o')
  expect(r.transport).toBe('chat_completions')
})

test('resolveProviderRequest leaves model unchanged without GitHub flag', () => {
  delete process.env.CLAUDE_CODE_USE_GITHUB
  const r = resolveProviderRequest({ model: 'github:gpt-4o' })
  expect(r.resolvedModel).toBe('github:gpt-4o')
})
@@ -2,8 +2,12 @@ import { existsSync, readFileSync } from 'node:fs'
import { homedir } from 'node:os'
import { join } from 'node:path'

import { isEnvTruthy } from '../../utils/envUtils.js'

export const DEFAULT_OPENAI_BASE_URL = 'https://api.openai.com/v1'
export const DEFAULT_CODEX_BASE_URL = 'https://chatgpt.com/backend-api/codex'
/** Default GitHub Models API model when user selects copilot / github:copilot */
export const DEFAULT_GITHUB_MODELS_API_MODEL = 'openai/gpt-4.1'

const CODEX_ALIAS_MODELS: Record<
  string,
@@ -16,13 +20,43 @@ const CODEX_ALIAS_MODELS: Record<
    model: 'gpt-5.4',
    reasoningEffort: 'high',
  },
  'gpt-5.4': {
    model: 'gpt-5.4',
    reasoningEffort: 'high',
  },
  'gpt-5.3-codex': {
    model: 'gpt-5.3-codex',
    reasoningEffort: 'high',
  },
  'gpt-5.3-codex-spark': {
    model: 'gpt-5.3-codex-spark',
  },
  codexspark: {
    model: 'gpt-5.3-codex-spark',
  },
  'gpt-5.2-codex': {
    model: 'gpt-5.2-codex',
    reasoningEffort: 'high',
  },
  'gpt-5.1-codex-max': {
    model: 'gpt-5.1-codex-max',
    reasoningEffort: 'high',
  },
  'gpt-5.1-codex-mini': {
    model: 'gpt-5.1-codex-mini',
  },
  'gpt-5.4-mini': {
    model: 'gpt-5.4-mini',
    reasoningEffort: 'medium',
  },
  'gpt-5.2': {
    model: 'gpt-5.2',
    reasoningEffort: 'medium',
  },
} as const

type CodexAlias = keyof typeof CODEX_ALIAS_MODELS
type ReasoningEffort = 'low' | 'medium' | 'high'
type ReasoningEffort = 'low' | 'medium' | 'high' | 'xhigh'

export type ProviderTransport = 'chat_completions' | 'codex_responses'

@@ -98,7 +132,7 @@ function decodeJwtPayload(token: string): Record<string, unknown> | undefined {
function parseReasoningEffort(value: string | undefined): ReasoningEffort | undefined {
  if (!value) return undefined
  const normalized = value.trim().toLowerCase()
  if (normalized === 'low' || normalized === 'medium' || normalized === 'high') {
  if (normalized === 'low' || normalized === 'medium' || normalized === 'high' || normalized === 'xhigh') {
    return normalized
  }
  return undefined
@@ -171,16 +205,32 @@ export function isCodexBaseUrl(baseUrl: string | undefined): boolean {
  }
}

/**
 * Normalize user model string for GitHub Models inference (models.github.ai).
 * Mirrors runtime devsper `github._normalize_model_id`.
 */
export function normalizeGithubModelsApiModel(requestedModel: string): string {
  const noQuery = requestedModel.split('?', 1)[0] ?? requestedModel
  const segment =
    noQuery.includes(':') ? noQuery.split(':', 2)[1]!.trim() : noQuery.trim()
  if (!segment || segment.toLowerCase() === 'copilot') {
    return DEFAULT_GITHUB_MODELS_API_MODEL
  }
  return segment
}
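Per the implementation above (and the test table in providerConfig.github.test.ts), the query string is dropped, an optional prefix before `:` is stripped, and the bare copilot alias maps to the default model:

```ts
normalizeGithubModelsApiModel('github:gpt-4o')                 // 'gpt-4o'
normalizeGithubModelsApiModel('copilot')                       // 'openai/gpt-4.1' (default)
normalizeGithubModelsApiModel('github:copilot?reasoning=high') // 'openai/gpt-4.1' (query dropped)
```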

export function resolveProviderRequest(options?: {
  model?: string
  baseUrl?: string
  fallbackModel?: string
  reasoningEffortOverride?: ReasoningEffort
}): ResolvedProviderRequest {
  const isGithubMode = isEnvTruthy(process.env.CLAUDE_CODE_USE_GITHUB)
  const requestedModel =
    options?.model?.trim() ||
    process.env.OPENAI_MODEL?.trim() ||
    options?.fallbackModel?.trim() ||
    'gpt-4o'
    (isGithubMode ? 'github:copilot' : 'gpt-4o')
  const descriptor = parseModelDescriptor(requestedModel)
  const rawBaseUrl =
    options?.baseUrl ??
@@ -192,17 +242,28 @@ export function resolveProviderRequest(options?: {
      ? 'codex_responses'
      : 'chat_completions'

  const resolvedModel =
    transport === 'chat_completions' &&
    isEnvTruthy(process.env.CLAUDE_CODE_USE_GITHUB)
      ? normalizeGithubModelsApiModel(requestedModel)
      : descriptor.baseModel

  const reasoning = options?.reasoningEffortOverride
    ? { effort: options.reasoningEffortOverride }
    : descriptor.reasoning

  return {
    transport,
    requestedModel,
    resolvedModel: descriptor.baseModel,
    resolvedModel,
    baseUrl:
      (rawBaseUrl ??
        (transport === 'codex_responses'
          ? DEFAULT_CODEX_BASE_URL
          : DEFAULT_OPENAI_BASE_URL)
      ).replace(/\/+$/, ''),
    reasoning: descriptor.reasoning,
    reasoning,
  }
}
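A quick sketch of the two changes above, assuming the `?reasoning=` descriptor syntax that parseReasoningEffort suggests, and with CLAUDE_CODE_USE_GITHUB unset so no GitHub normalization applies:

```ts
// An explicit override wins over any reasoning parsed from the descriptor.
const r = resolveProviderRequest({
  model: 'gpt-5.2?reasoning=low',
  reasoningEffortOverride: 'high',
})
// r.reasoning     -> { effort: 'high' }
// r.resolvedModel -> 'gpt-5.2' (descriptor base model)
// r.baseUrl       -> 'https://api.openai.com/v1' unless OPENAI_BASE_URL is set
```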

@@ -311,3 +372,11 @@ export function resolveCodexApiCredentials(
    source: 'auth.json',
  }
}

export function getReasoningEffortForModel(model: string): ReasoningEffort | undefined {
  const normalized = model.trim().toLowerCase()
  const base = normalized.split('?', 1)[0] ?? normalized
  const alias = base as CodexAlias
  const aliasConfig = CODEX_ALIAS_MODELS[alias]
  return aliasConfig?.reasoningEffort
}
94
src/services/github/deviceFlow.test.ts
Normal file
@@ -0,0 +1,94 @@
import { afterEach, describe, expect, mock, test } from 'bun:test'

import {
  GitHubDeviceFlowError,
  pollAccessToken,
  requestDeviceCode,
} from './deviceFlow.js'

describe('requestDeviceCode', () => {
  const originalFetch = globalThis.fetch

  afterEach(() => {
    globalThis.fetch = originalFetch
  })

  test('parses successful device code response', async () => {
    globalThis.fetch = mock(() =>
      Promise.resolve(
        new Response(
          JSON.stringify({
            device_code: 'abc',
            user_code: 'ABCD-1234',
            verification_uri: 'https://github.com/login/device',
            expires_in: 600,
            interval: 5,
          }),
          { status: 200 },
        ),
      ),
    )

    const r = await requestDeviceCode({
      clientId: 'test-client',
      fetchImpl: globalThis.fetch,
    })
    expect(r.device_code).toBe('abc')
    expect(r.user_code).toBe('ABCD-1234')
    expect(r.verification_uri).toBe('https://github.com/login/device')
    expect(r.expires_in).toBe(600)
    expect(r.interval).toBe(5)
  })

  test('throws on HTTP error', async () => {
    globalThis.fetch = mock(() =>
      Promise.resolve(new Response('bad', { status: 500 })),
    )
    await expect(
      requestDeviceCode({ clientId: 'x', fetchImpl: globalThis.fetch }),
    ).rejects.toThrow(GitHubDeviceFlowError)
  })
})

describe('pollAccessToken', () => {
  const originalFetch = globalThis.fetch

  afterEach(() => {
    globalThis.fetch = originalFetch
  })

  test('returns token when GitHub responds with access_token immediately', async () => {
    let calls = 0
    globalThis.fetch = mock(() => {
      calls++
      return Promise.resolve(
        new Response(JSON.stringify({ access_token: 'tok-xyz' }), {
          status: 200,
        }),
      )
    })

    const token = await pollAccessToken('dev-code', {
      clientId: 'cid',
      fetchImpl: globalThis.fetch,
    })
    expect(token).toBe('tok-xyz')
    expect(calls).toBe(1)
  })

  test('throws on access_denied', async () => {
    globalThis.fetch = mock(() =>
      Promise.resolve(
        new Response(JSON.stringify({ error: 'access_denied' }), {
          status: 200,
        }),
      ),
    )
    await expect(
      pollAccessToken('dc', {
        clientId: 'c',
        fetchImpl: globalThis.fetch,
      }),
    ).rejects.toThrow(/denied/)
  })
})
174
src/services/github/deviceFlow.ts
Normal file
@@ -0,0 +1,174 @@
/**
 * GitHub OAuth device flow for CLI login (https://docs.github.com/en/apps/oauth-apps/building-oauth-apps/authorizing-oauth-apps#device-flow).
 */

import { execFileNoThrow } from '../../utils/execFileNoThrow.js'

export const DEFAULT_GITHUB_DEVICE_FLOW_CLIENT_ID = 'Ov23liXjWSSui6QIahPl'

export const GITHUB_DEVICE_CODE_URL = 'https://github.com/login/device/code'
export const GITHUB_DEVICE_ACCESS_TOKEN_URL =
  'https://github.com/login/oauth/access_token'

/** Match runtime devsper github_oauth DEFAULT_SCOPE */
export const DEFAULT_GITHUB_DEVICE_SCOPE = 'read:user,models:read'

export class GitHubDeviceFlowError extends Error {
  constructor(message: string) {
    super(message)
    this.name = 'GitHubDeviceFlowError'
  }
}

export type DeviceCodeResult = {
  device_code: string
  user_code: string
  verification_uri: string
  expires_in: number
  interval: number
}

export function getGithubDeviceFlowClientId(): string {
  return (
    process.env.GITHUB_DEVICE_FLOW_CLIENT_ID?.trim() ||
    DEFAULT_GITHUB_DEVICE_FLOW_CLIENT_ID
  )
}

function sleep(ms: number): Promise<void> {
  return new Promise(resolve => setTimeout(resolve, ms))
}

export async function requestDeviceCode(options?: {
  clientId?: string
  scope?: string
  fetchImpl?: typeof fetch
}): Promise<DeviceCodeResult> {
  const clientId = options?.clientId ?? getGithubDeviceFlowClientId()
  if (!clientId) {
    throw new GitHubDeviceFlowError(
      'No OAuth client ID: set GITHUB_DEVICE_FLOW_CLIENT_ID or paste a PAT instead.',
    )
  }
  const fetchFn = options?.fetchImpl ?? fetch
  const res = await fetchFn(GITHUB_DEVICE_CODE_URL, {
    method: 'POST',
    headers: { Accept: 'application/json' },
    body: new URLSearchParams({
      client_id: clientId,
      scope: options?.scope ?? DEFAULT_GITHUB_DEVICE_SCOPE,
    }),
  })
  if (!res.ok) {
    const text = await res.text().catch(() => '')
    throw new GitHubDeviceFlowError(
      `Device code request failed: ${res.status} ${text}`,
    )
  }
  const data = (await res.json()) as Record<string, unknown>
  const device_code = data.device_code
  const user_code = data.user_code
  const verification_uri = data.verification_uri
  if (
    typeof device_code !== 'string' ||
    typeof user_code !== 'string' ||
    typeof verification_uri !== 'string'
  ) {
    throw new GitHubDeviceFlowError('Malformed device code response from GitHub')
  }
  return {
    device_code,
    user_code,
    verification_uri,
    expires_in: typeof data.expires_in === 'number' ? data.expires_in : 900,
    interval: typeof data.interval === 'number' ? data.interval : 5,
  }
}

export type PollOptions = {
  clientId?: string
  initialInterval?: number
  timeoutSeconds?: number
  fetchImpl?: typeof fetch
}

export async function pollAccessToken(
  deviceCode: string,
  options?: PollOptions,
): Promise<string> {
  const clientId = options?.clientId ?? getGithubDeviceFlowClientId()
  if (!clientId) {
    throw new GitHubDeviceFlowError('client_id required for polling')
  }
  let interval = Math.max(1, options?.initialInterval ?? 5)
  const timeoutSeconds = options?.timeoutSeconds ?? 900
  const fetchFn = options?.fetchImpl ?? fetch
  const start = Date.now()

  while ((Date.now() - start) / 1000 < timeoutSeconds) {
    const res = await fetchFn(GITHUB_DEVICE_ACCESS_TOKEN_URL, {
      method: 'POST',
      headers: { Accept: 'application/json' },
      body: new URLSearchParams({
        client_id: clientId,
        device_code: deviceCode,
        grant_type: 'urn:ietf:params:oauth:grant-type:device_code',
      }),
    })
    if (!res.ok) {
      const text = await res.text().catch(() => '')
      throw new GitHubDeviceFlowError(
        `Token request failed: ${res.status} ${text}`,
      )
    }
    const data = (await res.json()) as Record<string, unknown>
    const err = data.error as string | undefined
    if (err == null) {
      const token = data.access_token
      if (typeof token === 'string' && token) {
        return token
      }
      throw new GitHubDeviceFlowError('No access_token in response')
    }
    if (err === 'authorization_pending') {
      await sleep(interval * 1000)
      continue
    }
    if (err === 'slow_down') {
      interval =
        typeof data.interval === 'number' ? data.interval : interval + 5
      await sleep(interval * 1000)
      continue
    }
    if (err === 'expired_token') {
      throw new GitHubDeviceFlowError(
        'Device code expired. Start the login flow again.',
      )
    }
    if (err === 'access_denied') {
      throw new GitHubDeviceFlowError('Authorization was denied or cancelled.')
    }
    throw new GitHubDeviceFlowError(`GitHub OAuth error: ${err}`)
  }
  throw new GitHubDeviceFlowError('Timed out waiting for authorization.')
}

/**
 * Best-effort open browser / OS handler for the verification URL.
 */
export async function openVerificationUri(uri: string): Promise<void> {
  try {
    if (process.platform === 'darwin') {
      await execFileNoThrow('open', [uri], { useCwd: false, timeout: 5000 })
    } else if (process.platform === 'win32') {
      await execFileNoThrow('cmd', ['/c', 'start', '', uri], {
        useCwd: false,
        timeout: 5000,
      })
    } else {
      await execFileNoThrow('xdg-open', [uri], { useCwd: false, timeout: 5000 })
    }
  } catch {
    // User can open the URL manually
  }
}
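Taken together, the exported helpers support a complete CLI login sequence. A sketch (not code from this diff) of how a caller might wire them up:

```ts
async function loginWithGitHubDeviceFlow(): Promise<string> {
  // 1. Ask GitHub for a device/user code pair.
  const code = await requestDeviceCode()

  // 2. Show the one-time code and open the verification page for the user.
  console.log(`Enter ${code.user_code} at ${code.verification_uri}`)
  await openVerificationUri(code.verification_uri)

  // 3. Poll until the user authorizes, the code expires, or we time out.
  return pollAccessToken(code.device_code, {
    initialInterval: code.interval,
    timeoutSeconds: code.expires_in,
  })
}
```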
48
src/services/mcp/client.test.ts
Normal file
@@ -0,0 +1,48 @@
import assert from 'node:assert/strict'
import test from 'node:test'

import { cleanupFailedConnection } from './client.js'

test('cleanupFailedConnection awaits transport close before resolving', async () => {
  let closed = false
  let resolveClose: (() => void) | undefined

  const transport = {
    close: async () =>
      await new Promise<void>(resolve => {
        resolveClose = () => {
          closed = true
          resolve()
        }
      }),
  }

  const cleanupPromise = cleanupFailedConnection(transport)

  assert.equal(closed, false)
  resolveClose?.()
  await cleanupPromise
  assert.equal(closed, true)
})

test('cleanupFailedConnection closes in-process server and transport', async () => {
  let inProcessClosed = false
  let transportClosed = false

  const inProcessServer = {
    close: async () => {
      inProcessClosed = true
    },
  }

  const transport = {
    close: async () => {
      transportClosed = true
    },
  }

  await cleanupFailedConnection(transport, inProcessServer)

  assert.equal(inProcessClosed, true)
  assert.equal(transportClosed, true)
})
@@ -560,6 +560,22 @@ function getRemoteMcpServerConnectionBatchSize(): number {
  )
}

type InProcessMcpServer = {
  connect(t: Transport): Promise<void>
  close(): Promise<void>
}

export async function cleanupFailedConnection(
  transport: Pick<Transport, 'close'>,
  inProcessServer?: Pick<InProcessMcpServer, 'close'>,
): Promise<void> {
  if (inProcessServer) {
    await inProcessServer.close().catch(() => {})
  }

  await transport.close().catch(() => {})
}

function isLocalMcpServer(config: ScopedMcpServerConfig): boolean {
  return !config.type || config.type === 'stdio' || config.type === 'sdk'
}
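cleanupFailedConnection deliberately closes the in-process server before the transport and swallows errors from both, so a failed connection attempt cannot leak either half. Because the parameters are `Pick<..., 'close'>`, any object with a close() method satisfies them; a minimal caller sketch with hypothetical stand-ins:

```ts
// Hypothetical stand-ins for illustration; real callers pass the actual
// MCP transport and optional in-process server.
const transport = { close: async () => {} }
const inProcessServer = {
  close: async () => {
    throw new Error('ignored') // swallowed by the helper's .catch()
  },
}

await cleanupFailedConnection(transport, inProcessServer) // still resolves
```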
@@ -606,9 +622,7 @@ export const connectToServer = memoize(
    },
  ): Promise<MCPServerConnection> => {
    const connectStartTime = Date.now()
    let inProcessServer:
      | { connect(t: Transport): Promise<void>; close(): Promise<void> }
      | undefined
    let inProcessServer: InProcessMcpServer | undefined
    try {
      let transport

@@ -1145,9 +1159,10 @@ export const connectToServer = memoize(
        })
      }
      if (inProcessServer) {
        inProcessServer.close().catch(() => { })
        await cleanupFailedConnection(transport, inProcessServer)
      } else {
        await cleanupFailedConnection(transport)
      }
      transport.close().catch(() => { })
      if (stderrOutput) {
        logMCPError(name, `Server stderr: ${stderrOutput}`)
      }
540
src/services/mcp/doctor.test.ts
Normal file
@@ -0,0 +1,540 @@
import assert from 'node:assert/strict'
import test from 'node:test'

import type { ValidationError } from '../../utils/settings/validation.js'

import {
  buildEmptyDoctorReport,
  doctorAllServers,
  doctorServer,
  findingsFromValidationErrors,
  type McpDoctorDependencies,
} from './doctor.js'

function stdioConfig(scope: 'local' | 'project' | 'user' | 'enterprise', command: string) {
  return {
    type: 'stdio' as const,
    command,
    args: [],
    scope,
  }
}

function makeDependencies(overrides: Partial<McpDoctorDependencies> = {}): McpDoctorDependencies {
  return {
    getAllMcpConfigs: async () => ({ servers: {}, errors: [] }),
    getMcpConfigsByScope: () => ({ servers: {}, errors: [] }),
    getProjectMcpServerStatus: () => 'approved',
    isMcpServerDisabled: () => false,
    describeMcpConfigFilePath: scope => `scope://${scope}`,
    clearServerCache: async () => {},
    connectToServer: async (name, config) => ({
      name,
      type: 'connected',
      capabilities: {},
      config,
      cleanup: async () => {},
    }),
    ...overrides,
  }
}

test('buildEmptyDoctorReport returns zeroed summary', () => {
  const report = buildEmptyDoctorReport({
    configOnly: true,
    scopeFilter: 'project',
    targetName: 'filesystem',
  })

  assert.equal(report.targetName, 'filesystem')
  assert.equal(report.scopeFilter, 'project')
  assert.equal(report.configOnly, true)
  assert.deepEqual(report.summary, {
    totalReports: 0,
    healthy: 0,
    warnings: 0,
    blocking: 0,
  })
  assert.deepEqual(report.findings, [])
  assert.deepEqual(report.servers, [])
})

test('findingsFromValidationErrors maps missing env warnings into doctor findings', () => {
  const validationErrors: ValidationError[] = [
    {
      file: '.mcp.json',
      path: 'mcpServers.filesystem',
      message: 'Missing environment variables: API_KEY, API_URL',
      suggestion: 'Set the following environment variables: API_KEY, API_URL',
      mcpErrorMetadata: {
        scope: 'project',
        serverName: 'filesystem',
        severity: 'warning',
      },
    },
  ]

  const findings = findingsFromValidationErrors(validationErrors)

  assert.equal(findings.length, 1)
  assert.deepEqual(findings[0], {
    blocking: false,
    code: 'config.missing_env_vars',
    message: 'Missing environment variables: API_KEY, API_URL',
    remediation: 'Set the following environment variables: API_KEY, API_URL',
    scope: 'project',
    serverName: 'filesystem',
    severity: 'warn',
    sourcePath: '.mcp.json',
  })
})

test('findingsFromValidationErrors maps Windows npx warnings into doctor findings', () => {
  const validationErrors: ValidationError[] = [
    {
      file: '.mcp.json',
      path: 'mcpServers.node-tools',
      message: "Windows requires 'cmd /c' wrapper to execute npx",
      suggestion:
        'Change command to "cmd" with args ["/c", "npx", ...]. See: https://code.claude.com/docs/en/mcp#configure-mcp-servers',
      mcpErrorMetadata: {
        scope: 'project',
        serverName: 'node-tools',
        severity: 'warning',
      },
    },
  ]

  const findings = findingsFromValidationErrors(validationErrors)

  assert.equal(findings.length, 1)
  assert.equal(findings[0]?.code, 'config.windows_npx_wrapper_required')
  assert.equal(findings[0]?.serverName, 'node-tools')
  assert.equal(findings[0]?.severity, 'warn')
  assert.equal(findings[0]?.blocking, false)
})

test('findingsFromValidationErrors maps fatal parse errors into blocking findings', () => {
  const validationErrors: ValidationError[] = [
    {
      file: 'C:/repo/.mcp.json',
      path: '',
      message: 'MCP config is not a valid JSON',
      suggestion: 'Fix the JSON syntax errors in the file',
      mcpErrorMetadata: {
        scope: 'project',
        severity: 'fatal',
      },
    },
  ]

  const findings = findingsFromValidationErrors(validationErrors)

  assert.equal(findings.length, 1)
  assert.equal(findings[0]?.code, 'config.invalid_json')
  assert.equal(findings[0]?.severity, 'error')
  assert.equal(findings[0]?.blocking, true)
})

test('doctorAllServers reports global validation findings once without duplicating them into every server', async () => {
  const localConfig = stdioConfig('local', 'node-local')
  const deps = makeDependencies({
    getAllMcpConfigs: async () => ({
      servers: { filesystem: localConfig },
      errors: [],
    }),
    getMcpConfigsByScope: scope =>
      scope === 'project'
        ? {
            servers: {},
            errors: [
              {
                file: '.mcp.json',
                path: '',
                message: 'MCP config is not a valid JSON',
                suggestion: 'Fix the JSON syntax errors in the file',
                mcpErrorMetadata: {
                  scope: 'project',
                  severity: 'fatal',
                },
              },
            ],
          }
        : scope === 'local'
          ? { servers: { filesystem: localConfig }, errors: [] }
          : { servers: {}, errors: [] },
  })

  const report = await doctorAllServers({ configOnly: true }, deps)

  assert.equal(report.summary.totalReports, 1)
  assert.equal(report.summary.blocking, 1)
  assert.equal(report.findings.length, 1)
  assert.equal(report.findings[0]?.code, 'config.invalid_json')
  assert.deepEqual(report.servers[0]?.findings, [])
})

test('doctorServer explains same-name shadowing across scopes', async () => {
  const localConfig = stdioConfig('local', 'node-local')
  const userConfig = stdioConfig('user', 'node-user')
  const deps = makeDependencies({
    getAllMcpConfigs: async () => ({
      servers: {
        filesystem: localConfig,
      },
      errors: [],
    }),
    getMcpConfigsByScope: scope => {
      switch (scope) {
        case 'local':
          return { servers: { filesystem: localConfig }, errors: [] }
        case 'user':
          return { servers: { filesystem: userConfig }, errors: [] }
        default:
          return { servers: {}, errors: [] }
      }
    },
  })

  const report = await doctorServer('filesystem', { configOnly: true }, deps)
  assert.equal(report.servers.length, 1)
  assert.equal(report.servers[0]?.definitions.length, 2)
  assert.equal(report.servers[0]?.definitions.find(def => def.sourceType === 'local')?.runtimeActive, true)
  assert.equal(report.servers[0]?.definitions.find(def => def.sourceType === 'user')?.runtimeActive, false)
  assert.deepEqual(
    report.servers[0]?.findings.map(finding => finding.code).sort(),
    ['duplicate.same_name_multiple_scopes', 'scope.shadowed'],
  )
})

test('doctorServer reports project servers pending approval', async () => {
  const projectConfig = stdioConfig('project', 'node-project')
  const deps = makeDependencies({
    getMcpConfigsByScope: scope =>
      scope === 'project'
        ? { servers: { sentry: projectConfig }, errors: [] }
        : { servers: {}, errors: [] },
    getProjectMcpServerStatus: name => (name === 'sentry' ? 'pending' : 'approved'),
  })

  const report = await doctorServer('sentry', { configOnly: true }, deps)
  assert.equal(report.servers.length, 1)
  assert.equal(report.servers[0]?.definitions[0]?.pendingApproval, true)
  assert.equal(report.servers[0]?.definitions[0]?.runtimeActive, false)
  assert.equal(report.servers[0]?.definitions[0]?.runtimeVisible, false)
  assert.equal(
    report.servers[0]?.findings.some(finding => finding.code === 'state.pending_project_approval'),
    true,
  )
})

test('doctorServer does not treat disabled servers as runtime-active or live-check targets', async () => {
  let connectCalls = 0
  const localConfig = stdioConfig('local', 'node-local')
  const deps = makeDependencies({
    getAllMcpConfigs: async () => ({
      servers: { github: localConfig },
      errors: [],
    }),
    getMcpConfigsByScope: scope =>
      scope === 'local'
        ? { servers: { github: localConfig }, errors: [] }
        : { servers: {}, errors: [] },
    isMcpServerDisabled: name => name === 'github',
    connectToServer: async (name, config) => {
      connectCalls += 1
      return {
        name,
        type: 'failed',
        config,
        error: 'should not connect',
      }
    },
  })

  const report = await doctorServer('github', { configOnly: false }, deps)

  assert.equal(connectCalls, 0)
  assert.equal(report.summary.blocking, 0)
  assert.equal(report.summary.warnings, 1)
  assert.equal(report.servers[0]?.definitions[0]?.disabled, true)
  assert.equal(report.servers[0]?.definitions[0]?.runtimeActive, false)
  assert.equal(report.servers[0]?.definitions[0]?.runtimeVisible, false)
  assert.equal(report.servers[0]?.liveCheck.result, 'disabled')
  assert.equal(
    report.servers[0]?.findings.some(finding => finding.code === 'state.disabled' && finding.severity === 'warn'),
    true,
  )
})

test('doctorAllServers skips live checks in config-only mode', async () => {
  let connectCalls = 0
  const localConfig = stdioConfig('local', 'node-local')
  const deps = makeDependencies({
    getAllMcpConfigs: async () => ({
      servers: { linear: localConfig },
      errors: [],
    }),
    getMcpConfigsByScope: scope =>
      scope === 'local'
        ? { servers: { linear: localConfig }, errors: [] }
        : { servers: {}, errors: [] },
    connectToServer: async (name, config) => {
      connectCalls += 1
      return {
        name,
        type: 'connected',
        capabilities: {},
        config,
        cleanup: async () => {},
      }
    },
  })

  const report = await doctorAllServers({ configOnly: true }, deps)
  assert.equal(connectCalls, 0)
  assert.equal(report.servers[0]?.liveCheck.attempted, false)
  assert.equal(report.servers[0]?.liveCheck.result, 'skipped')
})

test('doctorAllServers honors scopeFilter when collecting names', async () => {
  const pluginConfig = {
    type: 'http' as const,
    url: 'https://example.test/mcp',
    scope: 'dynamic' as const,
    pluginSource: 'plugin:github@official',
  }
  const deps = makeDependencies({
    getAllMcpConfigs: async () => ({
      servers: { 'plugin:github:github': pluginConfig },
      errors: [],
    }),
  })

  const report = await doctorAllServers({ configOnly: true, scopeFilter: 'user' }, deps)

  assert.equal(report.summary.totalReports, 0)
  assert.deepEqual(report.servers, [])
})

test('doctorAllServers honors scopeFilter when collecting validation errors', async () => {
  const userConfig = stdioConfig('user', 'node-user')
  const deps = makeDependencies({
    getAllMcpConfigs: async () => ({
      servers: { filesystem: userConfig },
      errors: [],
    }),
    getMcpConfigsByScope: scope => {
      switch (scope) {
        case 'project':
          return {
            servers: {},
            errors: [
              {
                file: '.mcp.json',
                path: '',
                message: 'MCP config is not a valid JSON',
                suggestion: 'Fix the JSON syntax errors in the file',
                mcpErrorMetadata: {
                  scope: 'project',
                  severity: 'fatal',
                },
              },
            ],
          }
        case 'user':
          return { servers: { filesystem: userConfig }, errors: [] }
        default:
          return { servers: {}, errors: [] }
      }
    },
  })

  const report = await doctorAllServers({ configOnly: true, scopeFilter: 'user' }, deps)

  assert.equal(report.summary.totalReports, 1)
  assert.equal(report.summary.blocking, 0)
  assert.deepEqual(report.findings, [])
  assert.deepEqual(report.servers[0]?.findings, [])
})

test('doctorAllServers includes observed runtime definitions for plugin-only servers', async () => {
  const pluginConfig = {
    type: 'http' as const,
    url: 'https://example.test/mcp',
    scope: 'dynamic' as const,
    pluginSource: 'plugin:github@official',
  }
  const deps = makeDependencies({
    getAllMcpConfigs: async () => ({
      servers: { 'plugin:github:github': pluginConfig },
      errors: [],
    }),
  })

  const report = await doctorAllServers({ configOnly: true }, deps)

  assert.equal(report.summary.totalReports, 1)
  assert.equal(report.servers[0]?.definitions.length, 1)
  assert.equal(report.servers[0]?.definitions[0]?.sourceType, 'plugin')
  assert.equal(report.servers[0]?.definitions[0]?.runtimeActive, true)
})

test('doctorAllServers reports disabled plugin servers as disabled, not not-found', async () => {
  const pluginConfig = {
    type: 'http' as const,
    url: 'https://example.test/mcp',
    scope: 'dynamic' as const,
    pluginSource: 'plugin:github@official',
  }
  const deps = makeDependencies({
    getAllMcpConfigs: async () => ({
      servers: { 'plugin:github:github': pluginConfig },
      errors: [],
    }),
    isMcpServerDisabled: name => name === 'plugin:github:github',
  })

  const report = await doctorAllServers({ configOnly: true }, deps)

  assert.equal(report.summary.totalReports, 1)
  assert.equal(report.summary.warnings, 1)
  assert.equal(report.summary.blocking, 0)
  assert.equal(report.servers[0]?.definitions.length, 1)
  assert.equal(report.servers[0]?.definitions[0]?.sourceType, 'plugin')
  assert.equal(report.servers[0]?.definitions[0]?.disabled, true)
  assert.equal(report.servers[0]?.definitions[0]?.runtimeActive, false)
  assert.equal(
    report.servers[0]?.findings.some(finding => finding.code === 'state.disabled' && !finding.blocking),
    true,
  )
  assert.equal(
    report.servers[0]?.findings.some(finding => finding.code === 'state.not_found'),
    false,
  )
})

test('doctorServer converts failed live checks into blocking findings', async () => {
  const localConfig = stdioConfig('local', 'node-local')
  const deps = makeDependencies({
    getAllMcpConfigs: async () => ({
      servers: { github: localConfig },
      errors: [],
    }),
    getMcpConfigsByScope: scope =>
      scope === 'local'
        ? { servers: { github: localConfig }, errors: [] }
        : { servers: {}, errors: [] },
    connectToServer: async (name, config) => ({
      name,
      type: 'failed',
      config,
      error: 'command not found: node-local',
    }),
  })

  const report = await doctorServer('github', { configOnly: false }, deps)

  assert.equal(report.summary.blocking, 1)
  assert.equal(report.servers[0]?.liveCheck.result, 'failed')
  assert.equal(
    report.servers[0]?.findings.some(
      finding => finding.code === 'stdio.command_not_found' && finding.blocking,
    ),
    true,
  )
})

test('doctorServer converts needs-auth live checks into warning findings', async () => {
  const localConfig = stdioConfig('local', 'node-local')
  const deps = makeDependencies({
    getAllMcpConfigs: async () => ({
      servers: { sentry: localConfig },
      errors: [],
    }),
    getMcpConfigsByScope: scope =>
      scope === 'local'
        ? { servers: { sentry: localConfig }, errors: [] }
        : { servers: {}, errors: [] },
    connectToServer: async (name, config) => ({
      name,
      type: 'needs-auth',
      config,
    }),
  })

  const report = await doctorServer('sentry', { configOnly: false }, deps)

  assert.equal(report.summary.warnings, 1)
  assert.equal(report.summary.blocking, 0)
  assert.equal(
    report.servers[0]?.findings.some(finding => finding.code === 'auth.needs_auth' && finding.severity === 'warn'),
    true,
  )
})

test('doctorServer includes observed runtime definition for plugin-only targets', async () => {
  const pluginConfig = {
    type: 'http' as const,
    url: 'https://example.test/mcp',
    scope: 'dynamic' as const,
    pluginSource: 'plugin:github@official',
  }
  const deps = makeDependencies({
    getAllMcpConfigs: async () => ({
      servers: { 'plugin:github:github': pluginConfig },
      errors: [],
    }),
  })

  const report = await doctorServer('plugin:github:github', { configOnly: true }, deps)

  assert.equal(report.summary.totalReports, 1)
  assert.equal(report.servers[0]?.definitions.length, 1)
  assert.equal(report.servers[0]?.definitions[0]?.sourceType, 'plugin')
  assert.equal(report.servers[0]?.definitions[0]?.runtimeActive, true)
})

test('doctorServer with scopeFilter does not leak runtime definition from another scope when target is absent', async () => {
  let connectCalls = 0
  const localConfig = stdioConfig('local', 'node-local')
  const deps = makeDependencies({
    getAllMcpConfigs: async () => ({
      servers: { github: localConfig },
      errors: [],
    }),
    getMcpConfigsByScope: scope =>
      scope === 'local'
        ? { servers: { github: localConfig }, errors: [] }
        : { servers: {}, errors: [] },
    connectToServer: async (name, config) => {
      connectCalls += 1
      return {
        name,
        type: 'connected',
        capabilities: {},
        config,
        cleanup: async () => {},
      }
    },
  })

  const report = await doctorServer('github', { configOnly: false, scopeFilter: 'user' }, deps)

  assert.equal(connectCalls, 0)
  assert.equal(report.summary.totalReports, 1)
  assert.equal(report.summary.blocking, 1)
  assert.deepEqual(report.servers[0]?.definitions, [])
  assert.equal(report.servers[0]?.liveCheck.result, 'skipped')
  assert.equal(
    report.servers[0]?.findings.some(finding => finding.code === 'state.not_found' && finding.blocking),
    true,
  )
})

test('doctorServer reports blocking not-found state when no definition exists', async () => {
  const report = await doctorServer('missing-server', { configOnly: true }, makeDependencies())

  assert.equal(report.summary.blocking, 1)
  assert.equal(report.servers[0]?.findings.some(finding => finding.code === 'state.not_found' && finding.blocking), true)
})
695
src/services/mcp/doctor.ts
Normal file
@@ -0,0 +1,695 @@
|
||||
import type { ValidationError } from '../../utils/settings/validation.js'
|
||||
import { clearServerCache, connectToServer } from './client.js'
|
||||
import {
|
||||
getAllMcpConfigs,
|
||||
getMcpConfigsByScope,
|
||||
isMcpServerDisabled,
|
||||
} from './config.js'
|
||||
import type {
|
||||
ConfigScope,
|
||||
ScopedMcpServerConfig,
|
||||
} from './types.js'
|
||||
import { describeMcpConfigFilePath, getProjectMcpServerStatus } from './utils.js'
|
||||
|
||||
export type McpDoctorSeverity = 'info' | 'warn' | 'error'
|
||||
export type McpDoctorScopeFilter = 'local' | 'project' | 'user' | 'enterprise'
|
||||
|
||||
export type McpDoctorFinding = {
|
||||
blocking: boolean
|
||||
code: string
|
||||
message: string
|
||||
remediation?: string
|
||||
scope?: string
|
||||
serverName?: string
|
||||
severity: McpDoctorSeverity
|
||||
sourcePath?: string
|
||||
}
|
||||
|
||||
export type McpDoctorLiveCheck = {
|
||||
attempted: boolean
|
||||
durationMs?: number
|
||||
error?: string
|
||||
result?: 'connected' | 'needs-auth' | 'failed' | 'pending' | 'disabled' | 'skipped'
|
||||
}
|
||||
|
||||
export type McpDoctorDefinition = {
|
||||
name: string
|
||||
sourceType:
|
||||
| 'local'
|
||||
| 'project'
|
||||
| 'user'
|
||||
| 'enterprise'
|
||||
| 'managed'
|
||||
| 'plugin'
|
||||
| 'claudeai'
|
||||
| 'dynamic'
|
||||
| 'internal'
|
||||
sourcePath?: string
|
||||
transport?: string
|
||||
runtimeVisible: boolean
|
||||
runtimeActive: boolean
|
||||
pendingApproval?: boolean
|
||||
disabled?: boolean
|
||||
}
|
||||
|
||||
export type McpDoctorServerReport = {
|
||||
serverName: string
|
||||
requestedByUser: boolean
|
||||
definitions: McpDoctorDefinition[]
|
||||
liveCheck: McpDoctorLiveCheck
|
||||
findings: McpDoctorFinding[]
|
||||
}
|
||||
|
||||
export type McpDoctorDependencies = {
|
||||
getAllMcpConfigs: typeof getAllMcpConfigs
|
||||
getMcpConfigsByScope: typeof getMcpConfigsByScope
|
||||
getProjectMcpServerStatus: typeof getProjectMcpServerStatus
|
||||
isMcpServerDisabled: typeof isMcpServerDisabled
|
||||
describeMcpConfigFilePath: typeof describeMcpConfigFilePath
|
||||
connectToServer: typeof connectToServer
|
||||
clearServerCache: typeof clearServerCache
|
||||
}
|
||||
|
||||
export type McpDoctorReport = {
|
||||
generatedAt: string
|
||||
targetName?: string
|
||||
scopeFilter?: McpDoctorScopeFilter
|
||||
configOnly: boolean
|
||||
summary: {
|
||||
totalReports: number
|
||||
healthy: number
|
||||
warnings: number
|
||||
blocking: number
|
||||
}
|
||||
findings: McpDoctorFinding[]
|
||||
servers: McpDoctorServerReport[]
|
||||
}
|
||||
|
||||
const DEFAULT_DEPENDENCIES: McpDoctorDependencies = {
|
||||
getAllMcpConfigs,
|
||||
getMcpConfigsByScope,
|
||||
getProjectMcpServerStatus,
|
||||
isMcpServerDisabled,
|
||||
describeMcpConfigFilePath,
|
||||
connectToServer,
|
||||
clearServerCache,
|
||||
}
|
||||
|
||||
export function buildEmptyDoctorReport(options: {
|
||||
configOnly: boolean
|
||||
scopeFilter?: McpDoctorScopeFilter
|
||||
targetName?: string
|
||||
}): McpDoctorReport {
|
||||
return {
|
||||
generatedAt: new Date().toISOString(),
|
||||
targetName: options.targetName,
|
||||
scopeFilter: options.scopeFilter,
|
||||
configOnly: options.configOnly,
|
||||
summary: {
|
||||
totalReports: 0,
|
||||
healthy: 0,
|
||||
warnings: 0,
|
||||
blocking: 0,
|
||||
},
|
||||
findings: [],
|
||||
servers: [],
|
||||
}
|
||||
}
|
||||
|
||||
function getFindingCode(error: ValidationError): string {
|
||||
if (error.message === 'MCP config is not a valid JSON') {
|
||||
return 'config.invalid_json'
|
||||
}
|
||||
if (error.message.startsWith('Missing environment variables:')) {
|
||||
return 'config.missing_env_vars'
|
||||
}
|
||||
if (error.message.includes("Windows requires 'cmd /c' wrapper to execute npx")) {
|
||||
return 'config.windows_npx_wrapper_required'
|
||||
}
|
||||
if (error.message === 'Does not adhere to MCP server configuration schema') {
|
||||
return 'config.invalid_schema'
|
||||
}
|
||||
return 'config.validation_error'
|
||||
}
|
||||
|
||||
function getSeverity(error: ValidationError): McpDoctorSeverity {
|
||||
const severity = error.mcpErrorMetadata?.severity
|
||||
if (severity === 'fatal') {
|
||||
return 'error'
|
||||
}
|
||||
if (severity === 'warning') {
|
||||
return 'warn'
|
||||
}
|
||||
return 'warn'
|
||||
}
|
||||
|
||||
export function findingsFromValidationErrors(
|
||||
validationErrors: ValidationError[],
|
||||
): McpDoctorFinding[] {
|
||||
return validationErrors.map(error => {
|
||||
const severity = getSeverity(error)
|
||||
return {
|
||||
blocking: severity === 'error',
|
||||
code: getFindingCode(error),
|
||||
message: error.message,
|
||||
remediation: error.suggestion,
|
||||
scope: error.mcpErrorMetadata?.scope,
|
||||
serverName: error.mcpErrorMetadata?.serverName,
|
||||
severity,
|
||||
sourcePath: error.file,
|
||||
}
|
||||
})
|
||||
}
|
||||
|
||||
function splitValidationFindings(validationFindings: McpDoctorFinding[]): {
|
||||
globalFindings: McpDoctorFinding[]
|
||||
serverFindingsByName: Map<string, McpDoctorFinding[]>
|
||||
} {
|
||||
const globalFindings: McpDoctorFinding[] = []
|
||||
const serverFindingsByName = new Map<string, McpDoctorFinding[]>()
|
||||
|
||||
for (const finding of validationFindings) {
|
||||
if (!finding.serverName) {
|
||||
globalFindings.push(finding)
|
||||
continue
|
||||
}
|
||||
|
||||
const findings = serverFindingsByName.get(finding.serverName) ?? []
|
||||
findings.push(finding)
|
||||
serverFindingsByName.set(finding.serverName, findings)
|
||||
}
|
||||
|
||||
return {
|
||||
globalFindings,
|
||||
serverFindingsByName,
|
||||
}
|
||||
}
|
||||
|
||||
function getSourceType(config: ScopedMcpServerConfig): McpDoctorDefinition['sourceType'] {
|
||||
if (config.scope === 'claudeai') {
|
||||
return 'claudeai'
|
||||
}
|
||||
if (config.scope === 'dynamic') {
|
||||
return config.pluginSource ? 'plugin' : 'dynamic'
|
||||
}
|
||||
if (config.scope === 'managed') {
|
||||
return 'managed'
|
||||
}
|
||||
return config.scope
|
||||
}
|
||||
|
||||
function getTransport(config: ScopedMcpServerConfig): string {
|
||||
return config.type ?? 'stdio'
|
||||
}
|
||||
|
||||
function getConfigSignature(config: ScopedMcpServerConfig): string {
|
||||
switch (config.type) {
|
||||
case 'sse':
|
||||
case 'http':
|
||||
case 'ws':
|
||||
case 'claudeai-proxy':
|
||||
return `${config.scope}:${config.type}:${config.url}`
|
||||
case 'sdk':
|
||||
return `${config.scope}:${config.type}:${config.name}`
|
||||
default:
|
||||
return `${config.scope}:${config.type ?? 'stdio'}:${config.command}:${JSON.stringify(config.args ?? [])}`
|
||||
}
|
||||
}
|
||||
|
||||
function isSameDefinition(
|
||||
config: ScopedMcpServerConfig,
|
||||
activeConfig: ScopedMcpServerConfig | undefined,
|
||||
): boolean {
|
||||
if (!activeConfig) {
|
||||
return false
|
||||
}
|
||||
return getSourceType(config) === getSourceType(activeConfig) && getConfigSignature(config) === getConfigSignature(activeConfig)
|
||||
}
|
||||
|
||||
function buildScopeDefinitions(
|
||||
name: string,
|
||||
scope: ConfigScope,
|
||||
servers: Record<string, ScopedMcpServerConfig>,
|
||||
activeConfig: ScopedMcpServerConfig | undefined,
|
||||
deps: McpDoctorDependencies,
|
||||
): McpDoctorDefinition[] {
|
||||
const config = servers[name]
|
||||
if (!config) {
|
||||
return []
|
||||
}
|
||||
|
||||
const pendingApproval =
|
||||
scope === 'project' ? deps.getProjectMcpServerStatus(name) === 'pending' : false
|
||||
const disabled = deps.isMcpServerDisabled(name)
|
||||
const runtimeActive = !disabled && isSameDefinition(config, activeConfig)
|
||||
|
||||
return [
|
||||
{
|
||||
name,
|
||||
sourceType: getSourceType(config),
|
||||
sourcePath: deps.describeMcpConfigFilePath(scope),
|
||||
transport: getTransport(config),
|
||||
runtimeVisible: runtimeActive,
|
||||
runtimeActive,
|
||||
pendingApproval,
|
||||
disabled,
|
||||
},
|
||||
]
|
||||
}
|
||||
|
||||
function shouldIncludeScope(
|
||||
scope: ConfigScope,
|
||||
scopeFilter: McpDoctorScopeFilter | undefined,
|
||||
): boolean {
|
||||
if (!scopeFilter) {
|
||||
return scope === 'enterprise' || scope === 'local' || scope === 'project' || scope === 'user'
|
||||
}
|
||||
return scope === scopeFilter
|
||||
}
|
||||
|
||||
function getValidationErrorsForSelectedScopes(
|
||||
scopeResults: {
|
||||
enterprise: ReturnType<McpDoctorDependencies['getMcpConfigsByScope']>
|
||||
local: ReturnType<McpDoctorDependencies['getMcpConfigsByScope']>
|
||||
project: ReturnType<McpDoctorDependencies['getMcpConfigsByScope']>
|
||||
user: ReturnType<McpDoctorDependencies['getMcpConfigsByScope']>
|
||||
},
|
||||
scopeFilter: McpDoctorScopeFilter | undefined,
|
||||
): ValidationError[] {
|
||||
return [
|
||||
...(shouldIncludeScope('enterprise', scopeFilter) ? scopeResults.enterprise.errors : []),
|
||||
...(shouldIncludeScope('local', scopeFilter) ? scopeResults.local.errors : []),
|
||||
...(shouldIncludeScope('project', scopeFilter) ? scopeResults.project.errors : []),
|
||||
...(shouldIncludeScope('user', scopeFilter) ? scopeResults.user.errors : []),
|
||||
]
|
||||
}
|
||||
|
||||
function buildObservedDefinition(
|
||||
name: string,
|
||||
activeConfig: ScopedMcpServerConfig,
|
||||
options?: {
|
||||
disabled?: boolean
|
||||
runtimeActive?: boolean
|
||||
runtimeVisible?: boolean
|
||||
},
|
||||
): McpDoctorDefinition {
|
||||
return {
|
||||
name,
|
||||
sourceType: getSourceType(activeConfig),
|
||||
sourcePath:
|
||||
getSourceType(activeConfig) === 'plugin'
|
||||
? `plugin:${activeConfig.pluginSource ?? 'unknown'}`
|
||||
: getSourceType(activeConfig) === 'claudeai'
|
||||
? 'claude.ai'
|
||||
      : activeConfig.scope,
    transport: getTransport(activeConfig),
    runtimeVisible: options?.runtimeVisible ?? true,
    runtimeActive: options?.runtimeActive ?? true,
    disabled: options?.disabled ?? false,
  }
}

function hasDefinitionForRuntimeSource(
  definitions: McpDoctorDefinition[],
  runtimeConfig: ScopedMcpServerConfig,
  deps: McpDoctorDependencies,
): boolean {
  const runtimeSourceType = getSourceType(runtimeConfig)
  const runtimeSourcePath =
    runtimeSourceType === 'plugin'
      ? `plugin:${runtimeConfig.pluginSource ?? 'unknown'}`
      : runtimeSourceType === 'claudeai'
        ? 'claude.ai'
        : deps.describeMcpConfigFilePath(runtimeConfig.scope)

  return definitions.some(
    definition =>
      definition.sourceType === runtimeSourceType &&
      definition.sourcePath === runtimeSourcePath &&
      definition.transport === getTransport(runtimeConfig),
  )
}

function buildShadowingFindings(definitions: McpDoctorDefinition[]): McpDoctorFinding[] {
  const userEditable = definitions.filter(definition =>
    definition.sourceType === 'local' ||
    definition.sourceType === 'project' ||
    definition.sourceType === 'user' ||
    definition.sourceType === 'enterprise',
  )

  if (userEditable.length <= 1) {
    return []
  }

  const active = userEditable.find(definition => definition.runtimeActive) ?? userEditable[0]
  return [
    {
      blocking: false,
      code: 'duplicate.same_name_multiple_scopes',
      message: `Server is defined in multiple config scopes; active source is ${active.sourceType}`,
      remediation: 'Remove or rename one of the duplicate definitions to avoid confusion.',
      serverName: active.name,
      severity: 'warn',
    },
    {
      blocking: false,
      code: 'scope.shadowed',
      message: `${active.name} has shadowed definitions in lower-precedence config scopes.`,
      remediation: 'Inspect the other definitions and remove the ones you no longer want to keep.',
      serverName: active.name,
      severity: 'warn',
    },
  ]
}

function buildStateFindings(definitions: McpDoctorDefinition[]): McpDoctorFinding[] {
  const findings: McpDoctorFinding[] = []

  for (const definition of definitions) {
    if (definition.pendingApproval) {
      findings.push({
        blocking: false,
        code: 'state.pending_project_approval',
        message: `${definition.name} is declared in project config but pending project approval.`,
        remediation: 'Approve the server in the project MCP approval flow before expecting it to become active.',
        scope: 'project',
        serverName: definition.name,
        severity: 'warn',
        sourcePath: definition.sourcePath,
      })
    }

    if (definition.disabled) {
      findings.push({
        blocking: false,
        code: 'state.disabled',
        message: `${definition.name} is currently disabled.`,
        remediation: 'Re-enable the server before expecting it to be available at runtime.',
        serverName: definition.name,
        severity: 'warn',
        sourcePath: definition.sourcePath,
      })
    }
  }

  return findings
}

function summarizeReport(report: McpDoctorReport): McpDoctorReport {
  const allFindings = [...report.findings, ...report.servers.flatMap(server => server.findings)]
  const blocking = allFindings.filter(finding => finding.blocking).length
  const warnings = allFindings.filter(finding => finding.severity === 'warn').length
  const healthy = report.servers.filter(
    server =>
      server.liveCheck.result === 'connected' &&
      server.findings.every(finding => !finding.blocking && finding.severity !== 'warn'),
  ).length

  return {
    ...report,
    summary: {
      totalReports: report.servers.length,
      healthy,
      warnings,
      blocking,
    },
  }
}
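
// For example, a report covering three servers (one connected with no
// findings, one with a single warn finding, one failing a blocking check)
// summarizes to { totalReports: 3, healthy: 1, warnings: 1, blocking: 1 }.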

async function getLiveCheck(
  name: string,
  activeConfig: ScopedMcpServerConfig | undefined,
  configOnly: boolean,
  definitions: McpDoctorDefinition[],
  deps: McpDoctorDependencies,
): Promise<McpDoctorLiveCheck> {
  if (configOnly) {
    return { attempted: false, result: 'skipped' }
  }

  if (!activeConfig) {
    if (definitions.some(definition => definition.pendingApproval)) {
      return { attempted: false, result: 'pending' }
    }
    if (definitions.some(definition => definition.disabled)) {
      return { attempted: false, result: 'disabled' }
    }
    return { attempted: false, result: 'skipped' }
  }

  const startedAt = Date.now()
  const connection = await deps.connectToServer(name, activeConfig)
  const durationMs = Date.now() - startedAt

  try {
    switch (connection.type) {
      case 'connected':
        return { attempted: true, result: 'connected', durationMs }
      case 'needs-auth':
        return { attempted: true, result: 'needs-auth', durationMs }
      case 'disabled':
        return { attempted: true, result: 'disabled', durationMs }
      case 'pending':
        return { attempted: true, result: 'pending', durationMs }
      case 'failed':
        return {
          attempted: true,
          result: 'failed',
          durationMs,
          error: connection.error,
        }
    }
  } finally {
    await deps.clearServerCache(name, activeConfig).catch(() => {
      // Best-effort cleanup for diagnostic connections.
    })
  }
}
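
// A successful probe resolves to something like
// { attempted: true, result: 'connected', durationMs: 142 }; config-only runs
// and servers without an active config short-circuit with attempted: false.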

function buildLiveFindings(
  name: string,
  definitions: McpDoctorDefinition[],
  liveCheck: McpDoctorLiveCheck,
): McpDoctorFinding[] {
  const activeDefinition = definitions.find(definition => definition.runtimeActive)

  if (liveCheck.result === 'needs-auth') {
    return [
      {
        blocking: false,
        code: 'auth.needs_auth',
        message: `${name} requires authentication before it can be used.`,
        remediation: 'Authenticate the server and then rerun the doctor command.',
        serverName: name,
        severity: 'warn',
        sourcePath: activeDefinition?.sourcePath,
      },
    ]
  }

  if (liveCheck.result === 'failed') {
    const commandNotFound =
      activeDefinition?.transport === 'stdio' &&
      typeof liveCheck.error === 'string' &&
      liveCheck.error.toLowerCase().includes('not found')

    return [
      {
        blocking: true,
        code: commandNotFound ? 'stdio.command_not_found' : 'health.failed',
        message: liveCheck.error
          ? `${name} failed its live health check: ${liveCheck.error}`
          : `${name} failed its live health check.`,
        remediation: commandNotFound
          ? 'Verify the configured executable exists on PATH or use a full executable path.'
          : 'Inspect the server configuration and retry the connection once the underlying problem is fixed.',
        serverName: name,
        severity: 'error',
        sourcePath: activeDefinition?.sourcePath,
      },
    ]
  }

  return []
}

async function buildServerReport(
  name: string,
  options: {
    configOnly: boolean
    requestedByUser: boolean
    scopeFilter?: McpDoctorScopeFilter
  },
  validationFindingsByName: Map<string, McpDoctorFinding[]>,
  deps: McpDoctorDependencies,
): Promise<McpDoctorServerReport> {
  const scopeResults = {
    enterprise: deps.getMcpConfigsByScope('enterprise'),
    local: deps.getMcpConfigsByScope('local'),
    project: deps.getMcpConfigsByScope('project'),
    user: deps.getMcpConfigsByScope('user'),
  }
  const { servers: activeServers } = await deps.getAllMcpConfigs()
  const serverDisabled = deps.isMcpServerDisabled(name)
  const runtimeConfig = activeServers[name] ?? undefined
  const activeConfig = serverDisabled ? undefined : runtimeConfig

  const definitions = [
    ...(shouldIncludeScope('enterprise', options.scopeFilter)
      ? buildScopeDefinitions(name, 'enterprise', scopeResults.enterprise.servers, activeConfig, deps)
      : []),
    ...(shouldIncludeScope('local', options.scopeFilter)
      ? buildScopeDefinitions(name, 'local', scopeResults.local.servers, activeConfig, deps)
      : []),
    ...(shouldIncludeScope('project', options.scopeFilter)
      ? buildScopeDefinitions(name, 'project', scopeResults.project.servers, activeConfig, deps)
      : []),
    ...(shouldIncludeScope('user', options.scopeFilter)
      ? buildScopeDefinitions(name, 'user', scopeResults.user.servers, activeConfig, deps)
      : []),
  ]

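  // The runtime can expose a server that none of the selected scope files
  // defines (for example, one injected by a plugin or synced from claude.ai).
  // Record it as an "observed" definition so the report can still say where
  // the active config came from.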
  const shouldAddObservedDefinition =
    !!runtimeConfig &&
    !hasDefinitionForRuntimeSource(definitions, runtimeConfig, deps) &&
    ((definitions.length === 0 && !options.scopeFilter) ||
      (definitions.length > 0 && definitions.every(definition => !definition.runtimeActive)))

  if (runtimeConfig && shouldAddObservedDefinition) {
    definitions.push(
      buildObservedDefinition(name, runtimeConfig, {
        disabled: serverDisabled,
        runtimeActive: !serverDisabled,
        runtimeVisible: !serverDisabled,
      }),
    )
  }

  const visibleRuntimeConfig =
    definitions.some(definition => definition.runtimeActive) || shouldAddObservedDefinition
      ? activeConfig
      : undefined

  const findings: McpDoctorFinding[] = [
    ...(validationFindingsByName.get(name) ?? []),
    ...buildShadowingFindings(definitions),
    ...buildStateFindings(definitions),
  ]

  if (definitions.length === 0 && !shouldAddObservedDefinition) {
    findings.push({
      blocking: true,
      code: 'state.not_found',
      message: `${name} was not found in the selected MCP configuration sources.`,
      remediation: 'Check the server name and scope, or add the MCP server before retrying.',
      serverName: name,
      severity: 'error',
    })
  }

  const liveCheck = await getLiveCheck(name, visibleRuntimeConfig, options.configOnly, definitions, deps)
  findings.push(...buildLiveFindings(name, definitions, liveCheck))

  return {
    serverName: name,
    requestedByUser: options.requestedByUser,
    definitions,
    liveCheck,
    findings,
  }
}

function getServerNames(
  scopeServers: Array<Record<string, ScopedMcpServerConfig>>,
  activeServers: Record<string, ScopedMcpServerConfig>,
  includeActiveServers: boolean,
): string[] {
  const names = new Set<string>(includeActiveServers ? Object.keys(activeServers) : [])
  for (const servers of scopeServers) {
    for (const name of Object.keys(servers)) {
      names.add(name)
    }
  }
  return [...names].sort()
}

export async function doctorAllServers(
  options: { configOnly: boolean; scopeFilter?: McpDoctorScopeFilter } = {
    configOnly: false,
  },
  deps: McpDoctorDependencies = DEFAULT_DEPENDENCIES,
): Promise<McpDoctorReport> {
  const report = buildEmptyDoctorReport(options)
  const scopeResults = {
    enterprise: deps.getMcpConfigsByScope('enterprise'),
    local: deps.getMcpConfigsByScope('local'),
    project: deps.getMcpConfigsByScope('project'),
    user: deps.getMcpConfigsByScope('user'),
  }
  const validationFindings = findingsFromValidationErrors(
    getValidationErrorsForSelectedScopes(scopeResults, options.scopeFilter),
  )
  const { globalFindings, serverFindingsByName } = splitValidationFindings(validationFindings)
  const { servers: activeServers } = await deps.getAllMcpConfigs()
  const names = getServerNames(
    [
      ...(shouldIncludeScope('enterprise', options.scopeFilter) ? [scopeResults.enterprise.servers] : []),
      ...(shouldIncludeScope('local', options.scopeFilter) ? [scopeResults.local.servers] : []),
      ...(shouldIncludeScope('project', options.scopeFilter) ? [scopeResults.project.servers] : []),
      ...(shouldIncludeScope('user', options.scopeFilter) ? [scopeResults.user.servers] : []),
    ],
    activeServers,
    !options.scopeFilter,
  )

  const servers = await Promise.all(
    names.map(name =>
      buildServerReport(
        name,
        {
          configOnly: options.configOnly,
          requestedByUser: false,
          scopeFilter: options.scopeFilter,
        },
        serverFindingsByName,
        deps,
      ),
    ),
  )

  report.servers = servers
  report.findings = globalFindings
  return summarizeReport(report)
}

export async function doctorServer(
  name: string,
  options: { configOnly: boolean; scopeFilter?: McpDoctorScopeFilter },
  deps: McpDoctorDependencies = DEFAULT_DEPENDENCIES,
): Promise<McpDoctorReport> {
  const report = buildEmptyDoctorReport({ ...options, targetName: name })
  const scopeResults = {
    enterprise: deps.getMcpConfigsByScope('enterprise'),
    local: deps.getMcpConfigsByScope('local'),
    project: deps.getMcpConfigsByScope('project'),
    user: deps.getMcpConfigsByScope('user'),
  }
  const validationFindings = findingsFromValidationErrors(
    getValidationErrorsForSelectedScopes(scopeResults, options.scopeFilter),
  )
  const { globalFindings, serverFindingsByName } = splitValidationFindings(validationFindings)
  const server = await buildServerReport(
    name,
    {
      configOnly: options.configOnly,
      requestedByUser: true,
      scopeFilter: options.scopeFilter,
    },
    serverFindingsByName,
    deps,
  )
  report.servers = [server]
  report.findings = globalFindings
  return summarizeReport(report)
}
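
// A minimal call sketch; 'my-server' is a placeholder name and the default
// dependency wiring is assumed:
//
//   const report = await doctorServer('my-server', { configOnly: false })
//   for (const finding of report.servers[0].findings) {
//     console.log(finding.severity, finding.code, finding.message)
//   }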
File diff suppressed because one or more lines are too long
@@ -7,6 +7,11 @@ import type { PermissionResult } from 'src/utils/permissions/PermissionResult.js'
import { z } from 'zod/v4'
import { getFeatureValue_CACHED_MAY_BE_STALE } from '../../services/analytics/growthbook.js'
import { queryModelWithStreaming } from '../../services/api/claude.js'
import { collectCodexCompletedResponse } from '../../services/api/codexShim.js'
import {
  resolveCodexApiCredentials,
  resolveProviderRequest,
} from '../../services/api/providerConfig.js'
import { buildTool, type ToolDef } from '../../Tool.js'
import { lazySchema } from '../../utils/lazySchema.js'
import { logError } from '../../utils/log.js'
@@ -83,6 +88,213 @@ function makeToolSchema(input: Input): BetaWebSearchTool20250305 {
  }
}

function isCodexResponsesWebSearchEnabled(): boolean {
  if (getAPIProvider() !== 'openai') {
    return false
  }

  const request = resolveProviderRequest({
    model: getMainLoopModel(),
    baseUrl: process.env.OPENAI_BASE_URL,
  })
  return request.transport === 'codex_responses'
}

function makeCodexWebSearchTool(input: Input): Record<string, unknown> {
  const tool: Record<string, unknown> = {
    type: 'web_search',
  }

  if (input.allowed_domains?.length) {
    tool.filters = {
      allowed_domains: input.allowed_domains,
    }
  }

  const timezone = Intl.DateTimeFormat().resolvedOptions().timeZone
  if (timezone) {
    tool.user_location = {
      type: 'approximate',
      timezone,
    }
  }

  return tool
}
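
// With allowed_domains ['docs.example.com'] and a resolvable timezone, this
// yields roughly (domain and timezone here are illustrative):
//   {
//     type: 'web_search',
//     filters: { allowed_domains: ['docs.example.com'] },
//     user_location: { type: 'approximate', timezone: 'Europe/Berlin' },
//   }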

function buildCodexWebSearchInputText(input: Input): string {
  if (!input.blocked_domains?.length) {
    return input.query
  }

  // Responses web_search supports allowed_domains filters but not blocked domains.
  // Convert blocked domains into common search-engine exclusion operators so the
  // constraint still affects ranking and candidate selection.
  const excludedSites = input.blocked_domains.map(domain => `-site:${domain}`)
  return `${input.query} ${excludedSites.join(' ')}`
}
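
// e.g. query 'rust async executors' with blocked_domains
// ['example.com', 'example.org'] becomes
// 'rust async executors -site:example.com -site:example.org'.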

function buildCodexWebSearchInput(input: Input): Array<Record<string, unknown>> {
  return [
    {
      type: 'message',
      role: 'user',
      content: [
        {
          type: 'input_text',
          text: buildCodexWebSearchInputText(input),
        },
      ],
    },
  ]
}

function buildCodexWebSearchInstructions(): string {
  return [
    'You are the OpenClaude web search tool.',
    'Search the web for the user query and return a concise factual answer.',
    'Include source URLs in the response.',
  ].join(' ')
}

function makeOutputFromCodexWebSearchResponse(
  response: Record<string, unknown>,
  query: string,
  durationSeconds: number,
): Output {
  const results: (SearchResult | string)[] = []
  const sourceMap = new Map<string, { title: string; url: string }>()
  const output = Array.isArray(response.output) ? response.output : []

  for (const item of output) {
    if (item?.type === 'web_search_call') {
      const sources = Array.isArray(item.action?.sources)
        ? item.action.sources
        : []
      for (const source of sources) {
        if (typeof source?.url !== 'string' || !source.url) continue
        sourceMap.set(source.url, {
          title:
            typeof source.title === 'string' && source.title
              ? source.title
              : source.url,
          url: source.url,
        })
      }
      continue
    }

    if (item?.type !== 'message' || !Array.isArray(item.content)) {
      continue
    }

    for (const part of item.content) {
      if (part?.type === 'output_text' && typeof part.text === 'string') {
        const trimmed = part.text.trim()
        if (trimmed) {
          results.push(trimmed)
        }
      }

      const annotations = Array.isArray(part?.annotations)
        ? part.annotations
        : []
      for (const annotation of annotations) {
        if (annotation?.type !== 'url_citation') continue
        if (typeof annotation.url !== 'string' || !annotation.url) continue
        sourceMap.set(annotation.url, {
          title:
            typeof annotation.title === 'string' && annotation.title
              ? annotation.title
              : annotation.url,
          url: annotation.url,
        })
      }
    }
  }

  if (results.length === 0 && typeof response.output_text === 'string') {
    const trimmed = response.output_text.trim()
    if (trimmed) {
      results.push(trimmed)
    }
  }

  if (sourceMap.size > 0) {
    results.push({
      tool_use_id: 'codex-web-search',
      content: Array.from(sourceMap.values()),
    })
  }

  return {
    query,
    results,
    durationSeconds,
  }
}

async function runCodexWebSearch(
  input: Input,
  signal: AbortSignal,
): Promise<Output> {
  const startTime = performance.now()
  const request = resolveProviderRequest({
    model: getMainLoopModel(),
    baseUrl: process.env.OPENAI_BASE_URL,
  })
  const credentials = resolveCodexApiCredentials()

  if (!credentials.apiKey) {
    throw new Error('Codex web search requires CODEX_API_KEY or a valid auth.json.')
  }
  if (!credentials.accountId) {
    throw new Error(
      'Codex web search requires CHATGPT_ACCOUNT_ID or an auth.json with chatgpt_account_id.',
    )
  }

  const body: Record<string, unknown> = {
    model: request.resolvedModel,
    input: buildCodexWebSearchInput(input),
    instructions: buildCodexWebSearchInstructions(),
    tools: [makeCodexWebSearchTool(input)],
    tool_choice: 'required',
    include: ['web_search_call.action.sources'],
    store: false,
    stream: true,
  }

  if (request.reasoning) {
    body.reasoning = request.reasoning
  }

  const response = await fetch(`${request.baseUrl}/responses`, {
    method: 'POST',
    headers: {
      'Content-Type': 'application/json',
      Authorization: `Bearer ${credentials.apiKey}`,
      'chatgpt-account-id': credentials.accountId,
      originator: 'openclaude',
    },
    body: JSON.stringify(body),
    signal,
  })

  if (!response.ok) {
    const errorBody = await response.text().catch(() => 'unknown error')
    throw new Error(`Codex web search error ${response.status}: ${errorBody}`)
  }

  const payload = await collectCodexCompletedResponse(response)
  const endTime = performance.now()
  return makeOutputFromCodexWebSearchResponse(
    payload,
    input.query,
    (endTime - startTime) / 1000,
  )
}
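
// The request is issued with stream: true; collectCodexCompletedResponse is
// expected to drain the SSE stream and return the final completed response
// payload that makeOutputFromCodexWebSearchResponse then walks.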

function makeOutputFromSearchResponse(
  result: BetaContentBlock[],
  query: string,
@@ -169,6 +381,10 @@ export const WebSearchTool = buildTool({
    const provider = getAPIProvider()
    const model = getMainLoopModel()

    if (isCodexResponsesWebSearchEnabled()) {
      return true
    }

    // Enable for firstParty
    if (provider === 'firstParty') {
      return true
@@ -221,6 +437,12 @@ export const WebSearchTool = buildTool({
    }
  },
  async prompt() {
    if (isCodexResponsesWebSearchEnabled()) {
      return getWebSearchPrompt().replace(
        /\n\s*-\s*Web search is only available in the US/,
        '',
      )
    }
    return getWebSearchPrompt()
  },
  renderToolUseMessage,
@@ -252,6 +474,12 @@ export const WebSearchTool = buildTool({
    return { result: true }
  },
  async call(input, context, _canUseTool, _parentMessage, onProgress) {
    if (isCodexResponsesWebSearchEnabled()) {
      return {
        data: await runCodexWebSearch(input, context.abortController.signal),
      }
    }

    const startTime = performance.now()
    const { query } = input
    const userMessage = createUserMessage({

@@ -117,7 +117,8 @@ export function isAnthropicAuthEnabled(): boolean {
    isEnvTruthy(process.env.CLAUDE_CODE_USE_VERTEX) ||
    isEnvTruthy(process.env.CLAUDE_CODE_USE_FOUNDRY) ||
    isEnvTruthy(process.env.CLAUDE_CODE_USE_OPENAI) ||
    isEnvTruthy(process.env.CLAUDE_CODE_USE_GEMINI)
    isEnvTruthy(process.env.CLAUDE_CODE_USE_GEMINI) ||
    isEnvTruthy(process.env.CLAUDE_CODE_USE_GITHUB)

  // Check if user has configured an external API key source
  // This allows externally-provided API keys to work (without requiring proxy configuration)
@@ -1731,14 +1732,15 @@ export function getSubscriptionName(): string {
  }
}

/** Check if using third-party services (Bedrock or Vertex or Foundry or OpenAI-compatible or Gemini) */
/** Check if using third-party services (Bedrock or Vertex or Foundry or OpenAI-compatible or Gemini or GitHub Models) */
export function isUsing3PServices(): boolean {
  return !!(
    isEnvTruthy(process.env.CLAUDE_CODE_USE_BEDROCK) ||
    isEnvTruthy(process.env.CLAUDE_CODE_USE_VERTEX) ||
    isEnvTruthy(process.env.CLAUDE_CODE_USE_FOUNDRY) ||
    isEnvTruthy(process.env.CLAUDE_CODE_USE_OPENAI) ||
    isEnvTruthy(process.env.CLAUDE_CODE_USE_GEMINI)
    isEnvTruthy(process.env.CLAUDE_CODE_USE_GEMINI) ||
    isEnvTruthy(process.env.CLAUDE_CODE_USE_GITHUB)
  )
}

@@ -9,6 +9,7 @@ import {
  logEvent,
} from 'src/services/analytics/index.js'
import { type ReleaseChannel, saveGlobalConfig } from './config.js'
import { getAPIProvider } from './model/providers.js'
import { logForDebugging } from './debug.js'
import { env } from './env.js'
import { getClaudeConfigHomeDir } from './envUtils.js'
@@ -72,6 +73,12 @@ export async function assertMinVersion(): Promise<void> {
    return
  }

  // Skip version check for third-party providers — the min version
  // kill-switch is Anthropic-specific and should not block 3P users
  if (getAPIProvider() !== 'firstParty') {
    return
  }

  try {
    const versionConfig = await getDynamicConfig_BLOCKS_ON_INIT<{
      minVersion: string

@@ -74,10 +74,9 @@ export function getContextWindowForModel(

  // OpenAI-compatible provider — use known context windows for the model
  if (
    process.env.CLAUDE_CODE_USE_OPENAI === '1' ||
    process.env.CLAUDE_CODE_USE_OPENAI === 'true' ||
    process.env.CLAUDE_CODE_USE_GEMINI === '1' ||
    process.env.CLAUDE_CODE_USE_GEMINI === 'true'
    isEnvTruthy(process.env.CLAUDE_CODE_USE_OPENAI) ||
    isEnvTruthy(process.env.CLAUDE_CODE_USE_GEMINI) ||
    isEnvTruthy(process.env.CLAUDE_CODE_USE_GITHUB)
  ) {
    const openaiWindow = getOpenAIContextWindow(model)
    if (openaiWindow !== undefined) {
@@ -178,10 +177,9 @@ export function getModelMaxOutputTokens(model: string): {

  // OpenAI-compatible provider — use known output limits to avoid 400 errors
  if (
    process.env.CLAUDE_CODE_USE_OPENAI === '1' ||
    process.env.CLAUDE_CODE_USE_OPENAI === 'true' ||
    process.env.CLAUDE_CODE_USE_GEMINI === '1' ||
    process.env.CLAUDE_CODE_USE_GEMINI === 'true'
    isEnvTruthy(process.env.CLAUDE_CODE_USE_OPENAI) ||
    isEnvTruthy(process.env.CLAUDE_CODE_USE_GEMINI) ||
    isEnvTruthy(process.env.CLAUDE_CODE_USE_GITHUB)
  ) {
    const openaiMax = getOpenAIMaxOutputTokens(model)
    if (openaiMax !== undefined) {

@@ -17,6 +17,14 @@ export const EFFORT_LEVELS = [
  'max',
] as const satisfies readonly EffortLevel[]

export const OPENAI_EFFORT_LEVELS = [
  'low',
  'medium',
  'high',
  'xhigh',
] as const

export type OpenAIEffortLevel = typeof OPENAI_EFFORT_LEVELS[number]
export type EffortValue = EffortLevel | number

// @[MODEL LAUNCH]: Add the new model to the allowlist if it supports the effort parameter.
@@ -68,6 +76,46 @@ export function isEffortLevel(value: string): value is EffortLevel {
  return (EFFORT_LEVELS as readonly string[]).includes(value)
}

export function isOpenAIEffortLevel(value: string): value is OpenAIEffortLevel {
  return (OPENAI_EFFORT_LEVELS as readonly string[]).includes(value)
}

export function modelUsesOpenAIEffort(model: string): boolean {
  const provider = getAPIProvider()
  return provider === 'openai' || provider === 'codex'
}

export function getAvailableEffortLevels(model: string): EffortLevel[] | OpenAIEffortLevel[] {
  if (modelUsesOpenAIEffort(model)) {
    return [...OPENAI_EFFORT_LEVELS] as OpenAIEffortLevel[]
  }
  const levels: EffortLevel[] = ['low', 'medium', 'high']
  if (modelSupportsMaxEffort(model)) {
    levels.push('max')
  }
  return levels
}

export function getEffortLevelLabel(level: EffortLevel | OpenAIEffortLevel): string {
  if (level === 'xhigh') return 'Extra High'
  if (level === 'max') return 'Max'
  return capitalize(level)
}

export function openAIEffortToStandard(level: OpenAIEffortLevel): EffortLevel {
  if (level === 'xhigh') return 'max'
  return level
}

export function standardEffortToOpenAI(level: EffortLevel): OpenAIEffortLevel {
  if (level === 'max') return 'xhigh'
  return level as OpenAIEffortLevel
}
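
// The two scales line up one-to-one except at the top ('max' <-> 'xhigh'),
// so openAIEffortToStandard(standardEffortToOpenAI(level)) returns the
// original level for every EffortLevel.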

function capitalize(s: string): string {
  return s.charAt(0).toUpperCase() + s.slice(1)
}

export function parseEffortValue(value: unknown): EffortValue | undefined {
  if (value === undefined || value === null || value === '') {
    return undefined
@@ -221,7 +269,7 @@ export function convertEffortValueToLevel(value: EffortValue): EffortLevel {
 * @param level The effort level to describe
 * @returns Human-readable description
 */
export function getEffortLevelDescription(level: EffortLevel): string {
export function getEffortLevelDescription(level: EffortLevel | OpenAIEffortLevel): string {
  switch (level) {
    case 'low':
      return 'Quick, straightforward implementation with minimal overhead'
@@ -231,6 +279,8 @@ export function getEffortLevelDescription(level: EffortLevel): string {
      return 'Comprehensive implementation with extensive testing and documentation'
    case 'max':
      return 'Maximum capability with deepest reasoning (Opus 4.6 only)'
    case 'xhigh':
      return 'Extra high reasoning effort for complex tasks (OpenAI/Codex)'
  }
}

66
src/utils/githubModelsCredentials.hydrate.test.ts
Normal file
@@ -0,0 +1,66 @@
/**
 * Hydrate tests live in a separate file with no static import of
 * githubModelsCredentials so Bun's mock.module can replace secureStorage
 * before that module is first loaded.
 */
import { afterEach, describe, expect, mock, test } from 'bun:test'

describe('hydrateGithubModelsTokenFromSecureStorage', () => {
  const orig = {
    CLAUDE_CODE_USE_GITHUB: process.env.CLAUDE_CODE_USE_GITHUB,
    GITHUB_TOKEN: process.env.GITHUB_TOKEN,
    GH_TOKEN: process.env.GH_TOKEN,
    CLAUDE_CODE_SIMPLE: process.env.CLAUDE_CODE_SIMPLE,
  }

  afterEach(() => {
    mock.restore()
    for (const [k, v] of Object.entries(orig)) {
      if (v === undefined) {
        delete process.env[k as keyof typeof orig]
      } else {
        process.env[k as keyof typeof orig] = v
      }
    }
  })

  test('sets GITHUB_TOKEN from secure storage when USE_GITHUB and env token empty', async () => {
    process.env.CLAUDE_CODE_USE_GITHUB = '1'
    delete process.env.GITHUB_TOKEN
    delete process.env.GH_TOKEN
    delete process.env.CLAUDE_CODE_SIMPLE

    mock.module('./secureStorage/index.js', () => ({
      getSecureStorage: () => ({
        read: () => ({
          githubModels: { accessToken: 'stored-secret' },
        }),
      }),
    }))

    const { hydrateGithubModelsTokenFromSecureStorage } = await import(
      './githubModelsCredentials.js'
    )
    hydrateGithubModelsTokenFromSecureStorage()
    expect(process.env.GITHUB_TOKEN).toBe('stored-secret')
  })

  test('does not override existing GITHUB_TOKEN', async () => {
    process.env.CLAUDE_CODE_USE_GITHUB = '1'
    process.env.GITHUB_TOKEN = 'already'

    mock.module('./secureStorage/index.js', () => ({
      getSecureStorage: () => ({
        read: () => ({
          githubModels: { accessToken: 'stored-secret' },
        }),
      }),
    }))

    const { hydrateGithubModelsTokenFromSecureStorage } = await import(
      './githubModelsCredentials.js'
    )
    hydrateGithubModelsTokenFromSecureStorage()
    expect(process.env.GITHUB_TOKEN).toBe('already')
  })
})
47
src/utils/githubModelsCredentials.test.ts
Normal file
@@ -0,0 +1,47 @@
import { describe, expect, test } from 'bun:test'

import {
  clearGithubModelsToken,
  readGithubModelsToken,
  saveGithubModelsToken,
} from './githubModelsCredentials.js'

describe('readGithubModelsToken', () => {
  test('returns undefined in bare mode', () => {
    const prev = process.env.CLAUDE_CODE_SIMPLE
    process.env.CLAUDE_CODE_SIMPLE = '1'
    expect(readGithubModelsToken()).toBeUndefined()
    if (prev === undefined) {
      delete process.env.CLAUDE_CODE_SIMPLE
    } else {
      process.env.CLAUDE_CODE_SIMPLE = prev
    }
  })
})

describe('saveGithubModelsToken / clearGithubModelsToken', () => {
  test('save returns failure in bare mode', () => {
    const prev = process.env.CLAUDE_CODE_SIMPLE
    process.env.CLAUDE_CODE_SIMPLE = '1'
    const r = saveGithubModelsToken('abc')
    expect(r.success).toBe(false)
    expect(r.warning).toContain('Bare mode')
    if (prev === undefined) {
      delete process.env.CLAUDE_CODE_SIMPLE
    } else {
      process.env.CLAUDE_CODE_SIMPLE = prev
    }
  })

  test('clear succeeds in bare mode', () => {
    const prev = process.env.CLAUDE_CODE_SIMPLE
    process.env.CLAUDE_CODE_SIMPLE = '1'
    expect(clearGithubModelsToken().success).toBe(true)
    if (prev === undefined) {
      delete process.env.CLAUDE_CODE_SIMPLE
    } else {
      process.env.CLAUDE_CODE_SIMPLE = prev
    }
  })
})

73
src/utils/githubModelsCredentials.ts
Normal file
@@ -0,0 +1,73 @@
import { isBareMode, isEnvTruthy } from './envUtils.js'
import { getSecureStorage } from './secureStorage/index.js'

/** JSON key in the shared OpenClaude secure storage blob. */
export const GITHUB_MODELS_STORAGE_KEY = 'githubModels' as const

export type GithubModelsCredentialBlob = {
  accessToken: string
}

export function readGithubModelsToken(): string | undefined {
  if (isBareMode()) return undefined
  try {
    const data = getSecureStorage().read() as
      | ({ githubModels?: GithubModelsCredentialBlob } & Record<string, unknown>)
      | null
    const t = data?.githubModels?.accessToken?.trim()
    return t || undefined
  } catch {
    return undefined
  }
}

/**
 * If GitHub Models mode is on and no token is in the environment, copy the
 * stored token into process.env so the OpenAI shim and validation see it.
 */
export function hydrateGithubModelsTokenFromSecureStorage(): void {
  if (!isEnvTruthy(process.env.CLAUDE_CODE_USE_GITHUB)) {
    return
  }
  if (process.env.GITHUB_TOKEN?.trim() || process.env.GH_TOKEN?.trim()) {
    return
  }
  if (isBareMode()) {
    return
  }
  const t = readGithubModelsToken()
  if (t) {
    process.env.GITHUB_TOKEN = t
  }
}
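
// Intended to run during startup, before provider validation reads
// GITHUB_TOKEN; it is a no-op when a token is already set, when GitHub
// Models mode is off, or in bare mode.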

export function saveGithubModelsToken(token: string): {
  success: boolean
  warning?: string
} {
  if (isBareMode()) {
    return { success: false, warning: 'Bare mode: secure storage is disabled.' }
  }
  const trimmed = token.trim()
  if (!trimmed) {
    return { success: false, warning: 'Token is empty.' }
  }
  const secureStorage = getSecureStorage()
  const prev = secureStorage.read() || {}
  const merged = {
    ...(prev as Record<string, unknown>),
    [GITHUB_MODELS_STORAGE_KEY]: { accessToken: trimmed },
  }
  return secureStorage.update(merged as typeof prev)
}

export function clearGithubModelsToken(): { success: boolean; warning?: string } {
  if (isBareMode()) {
    return { success: true }
  }
  const secureStorage = getSecureStorage()
  const prev = secureStorage.read() || {}
  const next = { ...(prev as Record<string, unknown>) }
  delete next[GITHUB_MODELS_STORAGE_KEY]
  return secureStorage.update(next as typeof prev)
}
@@ -18,6 +18,7 @@ const PROVIDER_MANAGED_ENV_VARS = new Set([
  'CLAUDE_CODE_USE_BEDROCK',
  'CLAUDE_CODE_USE_VERTEX',
  'CLAUDE_CODE_USE_FOUNDRY',
  'CLAUDE_CODE_USE_GITHUB',
  // Endpoint config (base URLs, project/resource identifiers)
  'ANTHROPIC_BASE_URL',
  'ANTHROPIC_BEDROCK_BASE_URL',
@@ -147,6 +148,7 @@ export const SAFE_ENV_VARS = new Set([
  'CLAUDE_CODE_SUBAGENT_MODEL',
  'CLAUDE_CODE_USE_BEDROCK',
  'CLAUDE_CODE_USE_FOUNDRY',
  'CLAUDE_CODE_USE_GITHUB',
  'CLAUDE_CODE_USE_VERTEX',
  'DISABLE_AUTOUPDATER',
  'DISABLE_BUG_COMMAND',

@@ -6,8 +6,6 @@ export const MODEL_ALIASES = [
  'sonnet[1m]',
  'opus[1m]',
  'opusplan',
  'codexplan',
  'codexspark',
] as const
export type ModelAlias = (typeof MODEL_ALIASES)[number]

@@ -123,6 +123,10 @@ export function getDefaultOpusModel(): ModelName {
  if (getAPIProvider() === 'openai') {
    return process.env.OPENAI_MODEL || 'gpt-4o'
  }
  // Codex provider: use user-specified model or default to gpt-5.4
  if (getAPIProvider() === 'codex') {
    return process.env.OPENAI_MODEL || 'gpt-5.4'
  }
  // 3P providers (Bedrock, Vertex, Foundry) — kept as a separate branch
  // even when values match, since 3P availability lags firstParty and
  // these will diverge again at the next model launch.
@@ -145,6 +149,10 @@ export function getDefaultSonnetModel(): ModelName {
  if (getAPIProvider() === 'openai') {
    return process.env.OPENAI_MODEL || 'gpt-4o'
  }
  // Codex provider
  if (getAPIProvider() === 'codex') {
    return process.env.OPENAI_MODEL || 'gpt-5.4'
  }
  // Default to Sonnet 4.5 for 3P since they may not have 4.6 yet
  if (getAPIProvider() !== 'firstParty') {
    return getModelStrings().sonnet45
@@ -165,6 +173,10 @@ export function getDefaultHaikuModel(): ModelName {
  if (getAPIProvider() === 'openai') {
    return process.env.OPENAI_MODEL || 'gpt-4o-mini'
  }
  // Codex provider
  if (getAPIProvider() === 'codex') {
    return process.env.OPENAI_MODEL || 'gpt-5.4'
  }

  // Haiku 4.5 is available on all platforms (first-party, Foundry, Bedrock, Vertex)
  return getModelStrings().haiku45
@@ -217,6 +229,10 @@ export function getDefaultMainLoopModelSetting(): ModelName | ModelAlias {
  if (getAPIProvider() === 'openai') {
    return process.env.OPENAI_MODEL || 'gpt-4o'
  }
  // Codex provider: always use the configured Codex model (default gpt-5.4)
  if (getAPIProvider() === 'codex') {
    return process.env.OPENAI_MODEL || 'gpt-5.4'
  }

  // Ants default to defaultModel from flag config, or Opus 1M if not configured
  if (process.env.USER_TYPE === 'ant') {
@@ -343,12 +359,6 @@ export function renderDefaultModelSetting(
  if (setting === 'opusplan') {
    return 'Opus 4.6 in plan mode, else Sonnet 4.6'
  }
  if (setting === 'codexplan') {
    return 'Codex Plan (GPT-5.4 high reasoning)'
  }
  if (setting === 'codexspark') {
    return 'Codex Spark (GPT-5.3 Codex Spark)'
  }
  return renderModelName(parseUserSpecifiedModel(setting))
}

@@ -383,11 +393,12 @@ export function renderModelSetting(setting: ModelName | ModelAlias): string {
  if (setting === 'opusplan') {
    return 'Opus Plan'
  }
  // Handle Codex models - show actual model name + resolved model
  if (setting === 'codexplan') {
    return 'Codex Plan'
    return 'codexplan (gpt-5.4)'
  }
  if (setting === 'codexspark') {
    return 'Codex Spark'
    return 'codexspark (gpt-5.3-codex-spark)'
  }
  if (isModelAlias(setting)) {
    return capitalize(setting)
@@ -401,8 +412,8 @@ export function renderModelSetting(setting: ModelName | ModelAlias): string {
 * if the model is not recognized as a public model.
 */
export function getPublicModelDisplayName(model: ModelName): string | null {
  // For OpenAI/Gemini providers, show the actual model name not a Claude alias
  if (getAPIProvider() === 'openai' || getAPIProvider() === 'gemini') {
  // For OpenAI/Gemini/Codex providers, show the actual model name not a Claude alias
  if (getAPIProvider() === 'openai' || getAPIProvider() === 'gemini' || getAPIProvider() === 'codex') {
    return null
  }
  switch (model) {
@@ -517,10 +528,6 @@ export function parseUserSpecifiedModel(

  if (isModelAlias(modelString)) {
    switch (modelString) {
      case 'codexplan':
        return modelInputTrimmed
      case 'codexspark':
        return modelInputTrimmed
      case 'opusplan':
        return getDefaultSonnetModel() + (has1mTag ? '[1m]' : '') // Sonnet is default, Opus in plan mode
      case 'sonnet':
@@ -535,6 +542,14 @@ export function parseUserSpecifiedModel(
    }
  }

  // Handle Codex aliases - map to actual model names
  if (modelString === 'codexplan') {
    return 'gpt-5.4'
  }
  if (modelString === 'codexspark') {
    return 'gpt-5.3-codex-spark'
  }

  // Opus 4/4.1 are no longer available on the first-party API (same as
  // Claude.ai) — silently remap to the current Opus default. The 'opus'
  // alias already resolves to 4.6, so the only users on these explicit

@@ -268,20 +268,65 @@ function getOpusPlanOption(): ModelOption {

function getCodexPlanOption(): ModelOption {
  return {
    value: 'codexplan',
    label: 'Codex Plan',
    value: 'gpt-5.4',
    label: 'gpt-5.4',
    description: 'GPT-5.4 on the Codex backend with high reasoning',
  }
}

function getCodexSparkOption(): ModelOption {
  return {
    value: 'codexspark',
    label: 'Codex Spark',
    value: 'gpt-5.3-codex-spark',
    label: 'gpt-5.3-codex-spark',
    description: 'GPT-5.3 Codex Spark on the Codex backend for fast tool loops',
  }
}

function getCodexModelOptions(): ModelOption[] {
  return [
    {
      value: 'gpt-5.4',
      label: 'gpt-5.4',
      description: 'GPT-5.4 with high reasoning',
    },
    {
      value: 'gpt-5.3-codex',
      label: 'gpt-5.3-codex',
      description: 'GPT-5.3 Codex with high reasoning',
    },
    {
      value: 'gpt-5.3-codex-spark',
      label: 'gpt-5.3-codex-spark',
      description: 'GPT-5.3 Codex Spark for fast tool loops',
    },
    {
      value: 'codexspark',
      label: 'codexspark',
      description: 'GPT-5.3 Codex Spark alias for fast tool loops',
    },
    {
      value: 'gpt-5.2-codex',
      label: 'gpt-5.2-codex',
      description: 'GPT-5.2 Codex with high reasoning',
    },
    {
      value: 'gpt-5.1-codex-max',
      label: 'gpt-5.1-codex-max',
      description: 'GPT-5.1 Codex Max for deep reasoning',
    },
    {
      value: 'gpt-5.1-codex-mini',
      label: 'gpt-5.1-codex-mini',
      description: 'GPT-5.1 Codex Mini - faster, cheaper',
    },
    {
      value: 'gpt-5.4-mini',
      label: 'gpt-5.4-mini',
      description: 'GPT-5.4 Mini - faster, cheaper',
    },
  ]
}

// @[MODEL LAUNCH]: Update the model picker lists below to include/reorder options for the new model.
// Each user tier (ant, Max/Team Premium, Pro/Team Standard/Enterprise, PAYG 1P, PAYG 3P) has its own list.
function getModelOptionsBase(fastMode = false): ModelOption[] {
@@ -360,8 +405,9 @@ function getModelOptionsBase(fastMode = false): ModelOption[] {
  // PAYG 3P: Default (Sonnet 4.5) + Sonnet (3P custom) or Sonnet 4.6/1M + Opus (3P custom) or Opus 4.1/Opus 4.6/Opus1M + Haiku + Opus 4.1
  const payg3pOptions = [getDefaultOptionForUser(fastMode)]

  if (getAPIProvider() === 'openai') {
    payg3pOptions.push(getCodexPlanOption(), getCodexSparkOption())
  // Add Codex models for openai and codex providers
  if (getAPIProvider() === 'openai' || getAPIProvider() === 'codex') {
    payg3pOptions.push(...getCodexModelOptions())
  }

  const customSonnet = getCustomSonnetOption()
@@ -517,9 +563,9 @@ export function getModelOptions(fastMode = false): ModelOption[] {
    return filterModelOptionsByAllowlist(options)
  } else if (customModel === 'opusplan') {
    return filterModelOptionsByAllowlist([...options, getOpusPlanOption()])
  } else if (customModel === 'codexplan') {
  } else if (customModel === 'gpt-5.4') {
    return filterModelOptionsByAllowlist([...options, getCodexPlanOption()])
  } else if (customModel === 'codexspark') {
  } else if (customModel === 'gpt-5.3-codex-spark') {
    return filterModelOptionsByAllowlist([...options, getCodexSparkOption()])
  } else if (customModel === 'opus' && getAPIProvider() === 'firstParty') {
    return filterModelOptionsByAllowlist([
@@ -554,11 +600,23 @@ export function getModelOptions(fastMode = false): ModelOption[] {
 */
function filterModelOptionsByAllowlist(options: ModelOption[]): ModelOption[] {
  const settings = getSettings_DEPRECATED() || {}
  if (!settings.availableModels) {
    return options // No restrictions
  }
  return options.filter(
  const filtered = !settings.availableModels
    ? options // No restrictions
    : options.filter(
        opt =>
          opt.value === null || (opt.value !== null && isModelAllowed(opt.value)),
      )

  // Select state uses option values as identity keys. If two entries share the
  // same value (e.g. provider-specific aliases collapsing to one model ID),
  // navigation/focus can become inconsistent and appear as duplicate rendering.
  const seen = new Set<string>()
  return filtered.filter(opt => {
    const key = String(opt.value)
    if (seen.has(key)) {
      return false
    }
    seen.add(key)
    return true
  })
}

@@ -23,9 +23,12 @@ export type ModelStrings = Record<ModelKey, string>
const MODEL_KEYS = Object.keys(ALL_MODEL_CONFIGS) as ModelKey[]

function getBuiltinModelStrings(provider: APIProvider): ModelStrings {
  // Codex piggybacks on the OpenAI provider transport for Anthropic tier aliases.
  // Reuse OpenAI mappings so model string lookups never return undefined.
  const providerKey = provider === 'codex' ? 'openai' : provider
  const out = {} as ModelStrings
  for (const key of MODEL_KEYS) {
    out[key] = ALL_MODEL_CONFIGS[key][provider]
    out[key] = ALL_MODEL_CONFIGS[key][providerKey]
  }
  return out
}

@@ -44,6 +44,11 @@ const OPENAI_CONTEXT_WINDOWS: Record<string, number> = {
  'google/gemini-2.0-flash': 1_048_576,
  'google/gemini-2.5-pro': 1_048_576,

  // Google (native via CLAUDE_CODE_USE_GEMINI)
  'gemini-2.0-flash': 1_048_576,
  'gemini-2.5-pro': 1_048_576,
  'gemini-2.5-flash': 1_048_576,

  // Ollama local models
  'llama3.3:70b': 8_192,
  'llama3.1:8b': 8_192,
@@ -94,7 +99,12 @@ const OPENAI_MAX_OUTPUT_TOKENS: Record<string, number> = {

  // Google (via OpenRouter)
  'google/gemini-2.0-flash': 8_192,
  'google/gemini-2.5-pro': 32_768,
  'google/gemini-2.5-pro': 65_536,

  // Google (native via CLAUDE_CODE_USE_GEMINI)
  'gemini-2.0-flash': 8_192,
  'gemini-2.5-pro': 65_536,
  'gemini-2.5-flash': 65_536,

  // Ollama local models (conservative safe defaults)
  'llama3.3:70b': 4_096,

@@ -7,6 +7,7 @@ import {

const originalEnv = {
  CLAUDE_CODE_USE_GEMINI: process.env.CLAUDE_CODE_USE_GEMINI,
  CLAUDE_CODE_USE_GITHUB: process.env.CLAUDE_CODE_USE_GITHUB,
  CLAUDE_CODE_USE_OPENAI: process.env.CLAUDE_CODE_USE_OPENAI,
  CLAUDE_CODE_USE_BEDROCK: process.env.CLAUDE_CODE_USE_BEDROCK,
  CLAUDE_CODE_USE_VERTEX: process.env.CLAUDE_CODE_USE_VERTEX,
@@ -15,6 +16,7 @@ const originalEnv = {

afterEach(() => {
  process.env.CLAUDE_CODE_USE_GEMINI = originalEnv.CLAUDE_CODE_USE_GEMINI
  process.env.CLAUDE_CODE_USE_GITHUB = originalEnv.CLAUDE_CODE_USE_GITHUB
  process.env.CLAUDE_CODE_USE_OPENAI = originalEnv.CLAUDE_CODE_USE_OPENAI
  process.env.CLAUDE_CODE_USE_BEDROCK = originalEnv.CLAUDE_CODE_USE_BEDROCK
  process.env.CLAUDE_CODE_USE_VERTEX = originalEnv.CLAUDE_CODE_USE_VERTEX
@@ -23,6 +25,7 @@ afterEach(() => {

function clearProviderEnv(): void {
  delete process.env.CLAUDE_CODE_USE_GEMINI
  delete process.env.CLAUDE_CODE_USE_GITHUB
  delete process.env.CLAUDE_CODE_USE_OPENAI
  delete process.env.CLAUDE_CODE_USE_BEDROCK
  delete process.env.CLAUDE_CODE_USE_VERTEX
@@ -38,6 +41,7 @@ test('first-party provider keeps Anthropic account setup flow enabled', () => {

test.each([
  ['CLAUDE_CODE_USE_OPENAI', 'openai'],
  ['CLAUDE_CODE_USE_GITHUB', 'github'],
  ['CLAUDE_CODE_USE_GEMINI', 'gemini'],
  ['CLAUDE_CODE_USE_BEDROCK', 'bedrock'],
  ['CLAUDE_CODE_USE_VERTEX', 'vertex'],
@@ -52,3 +56,11 @@ test.each([
    expect(usesAnthropicAccountFlow()).toBe(false)
  },
)

test('GEMINI takes precedence over GitHub when both are set', () => {
  clearProviderEnv()
  process.env.CLAUDE_CODE_USE_GEMINI = '1'
  process.env.CLAUDE_CODE_USE_GITHUB = '1'

  expect(getAPIProvider()).toBe('gemini')
})

@@ -1,25 +1,50 @@
import type { AnalyticsMetadata_I_VERIFIED_THIS_IS_NOT_CODE_OR_FILEPATHS } from '../../services/analytics/index.js'
import { isEnvTruthy } from '../envUtils.js'

export type APIProvider = 'firstParty' | 'bedrock' | 'vertex' | 'foundry' | 'openai' | 'gemini'
export type APIProvider =
  | 'firstParty'
  | 'bedrock'
  | 'vertex'
  | 'foundry'
  | 'openai'
  | 'gemini'
  | 'github'
  | 'codex'

export function getAPIProvider(): APIProvider {
  return isEnvTruthy(process.env.CLAUDE_CODE_USE_GEMINI)
    ? 'gemini'
    : isEnvTruthy(process.env.CLAUDE_CODE_USE_OPENAI)
      ? 'openai'
      : isEnvTruthy(process.env.CLAUDE_CODE_USE_BEDROCK)
        ? 'bedrock'
        : isEnvTruthy(process.env.CLAUDE_CODE_USE_VERTEX)
          ? 'vertex'
          : isEnvTruthy(process.env.CLAUDE_CODE_USE_FOUNDRY)
            ? 'foundry'
            : 'firstParty'
    : isEnvTruthy(process.env.CLAUDE_CODE_USE_GITHUB)
      ? 'github'
      : isEnvTruthy(process.env.CLAUDE_CODE_USE_OPENAI)
        ? isCodexModel()
          ? 'codex'
          : 'openai'
        : isEnvTruthy(process.env.CLAUDE_CODE_USE_BEDROCK)
          ? 'bedrock'
          : isEnvTruthy(process.env.CLAUDE_CODE_USE_VERTEX)
            ? 'vertex'
            : isEnvTruthy(process.env.CLAUDE_CODE_USE_FOUNDRY)
              ? 'foundry'
              : 'firstParty'
}

export function usesAnthropicAccountFlow(): boolean {
  return getAPIProvider() === 'firstParty'
}
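
// Static allowlist of model names that should ride the Codex Responses
// backend. Newly released Codex model names must be added here before
// CLAUDE_CODE_USE_OPENAI=1 routes them through the 'codex' provider;
// unknown names fall back to plain 'openai'.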
function isCodexModel(): boolean {
  const model = (process.env.OPENAI_MODEL || '').toLowerCase()
  return (
    model === 'codexplan' ||
    model === 'codexspark' ||
    model === 'gpt-5.4' ||
    model === 'gpt-5.3-codex' ||
    model === 'gpt-5.3-codex-spark' ||
    model === 'gpt-5.2-codex' ||
    model === 'gpt-5.1-codex-max' ||
    model === 'gpt-5.1-codex-mini'
  )
}

export function getAPIProviderForStatsig(): AnalyticsMetadata_I_VERIFIED_THIS_IS_NOT_CODE_OR_FILEPATHS {
  return getAPIProvider() as AnalyticsMetadata_I_VERIFIED_THIS_IS_NOT_CODE_OR_FILEPATHS

189
src/utils/providerDiscovery.ts
Normal file
@@ -0,0 +1,189 @@
import type { OllamaModelDescriptor } from './providerRecommendation.ts'

export const DEFAULT_OLLAMA_BASE_URL = 'http://localhost:11434'
export const DEFAULT_ATOMIC_CHAT_BASE_URL = 'http://127.0.0.1:1337'

function withTimeoutSignal(timeoutMs: number): {
  signal: AbortSignal
  clear: () => void
} {
  const controller = new AbortController()
  const timeout = setTimeout(() => controller.abort(), timeoutMs)
  return {
    signal: controller.signal,
    clear: () => clearTimeout(timeout),
  }
}

function trimTrailingSlash(value: string): string {
  return value.replace(/\/+$/, '')
}

export function getOllamaApiBaseUrl(baseUrl?: string): string {
  const parsed = new URL(
    baseUrl || process.env.OLLAMA_BASE_URL || DEFAULT_OLLAMA_BASE_URL,
  )
  const pathname = trimTrailingSlash(parsed.pathname)
  parsed.pathname = pathname.endsWith('/v1')
    ? pathname.slice(0, -3) || '/'
    : pathname || '/'
  parsed.search = ''
  parsed.hash = ''
  return trimTrailingSlash(parsed.toString())
}

export function getOllamaChatBaseUrl(baseUrl?: string): string {
  return `${getOllamaApiBaseUrl(baseUrl)}/v1`
}
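
// Normalization examples:
//   getOllamaApiBaseUrl('http://localhost:11434/v1') -> 'http://localhost:11434'
//   getOllamaChatBaseUrl()                           -> 'http://localhost:11434/v1'
// One configured base URL therefore serves both the native endpoints
// (/api/tags, /api/chat) and the OpenAI-compatible surface (/v1).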

export function getAtomicChatApiBaseUrl(baseUrl?: string): string {
  const parsed = new URL(
    baseUrl || process.env.ATOMIC_CHAT_BASE_URL || DEFAULT_ATOMIC_CHAT_BASE_URL,
  )
  const pathname = trimTrailingSlash(parsed.pathname)
  parsed.pathname = pathname.endsWith('/v1')
    ? pathname.slice(0, -3) || '/'
    : pathname || '/'
  parsed.search = ''
  parsed.hash = ''
  return trimTrailingSlash(parsed.toString())
}

export function getAtomicChatChatBaseUrl(baseUrl?: string): string {
  return `${getAtomicChatApiBaseUrl(baseUrl)}/v1`
}

export async function hasLocalOllama(baseUrl?: string): Promise<boolean> {
  const { signal, clear } = withTimeoutSignal(1200)
  try {
    const response = await fetch(`${getOllamaApiBaseUrl(baseUrl)}/api/tags`, {
      method: 'GET',
      signal,
    })
    return response.ok
  } catch {
    return false
  } finally {
    clear()
  }
}

export async function listOllamaModels(
  baseUrl?: string,
): Promise<OllamaModelDescriptor[]> {
  const { signal, clear } = withTimeoutSignal(5000)
  try {
    const response = await fetch(`${getOllamaApiBaseUrl(baseUrl)}/api/tags`, {
      method: 'GET',
      signal,
    })
    if (!response.ok) {
      return []
    }

    const data = (await response.json()) as {
      models?: Array<{
        name?: string
        size?: number
        details?: {
          family?: string
          families?: string[]
          parameter_size?: string
          quantization_level?: string
        }
      }>
    }

    return (data.models ?? [])
      .filter(model => Boolean(model.name))
      .map(model => ({
        name: model.name!,
        sizeBytes: typeof model.size === 'number' ? model.size : null,
        family: model.details?.family ?? null,
        families: model.details?.families ?? [],
        parameterSize: model.details?.parameter_size ?? null,
        quantizationLevel: model.details?.quantization_level ?? null,
      }))
  } catch {
    return []
  } finally {
    clear()
  }
}

export async function hasLocalAtomicChat(baseUrl?: string): Promise<boolean> {
  const { signal, clear } = withTimeoutSignal(1200)
  try {
    const response = await fetch(`${getAtomicChatChatBaseUrl(baseUrl)}/models`, {
      method: 'GET',
      signal,
    })
    return response.ok
  } catch {
    return false
  } finally {
    clear()
  }
}

export async function listAtomicChatModels(
  baseUrl?: string,
): Promise<string[]> {
  const { signal, clear } = withTimeoutSignal(5000)
  try {
    const response = await fetch(`${getAtomicChatChatBaseUrl(baseUrl)}/models`, {
      method: 'GET',
      signal,
    })
    if (!response.ok) {
      return []
    }

    const data = (await response.json()) as {
      data?: Array<{ id?: string }>
    }

    return (data.data ?? [])
      .filter(model => Boolean(model.id))
      .map(model => model.id!)
  } catch {
    return []
  } finally {
    clear()
  }
}

export async function benchmarkOllamaModel(
  modelName: string,
  baseUrl?: string,
): Promise<number | null> {
  const start = Date.now()
  const { signal, clear } = withTimeoutSignal(20000)
  try {
    const response = await fetch(`${getOllamaApiBaseUrl(baseUrl)}/api/chat`, {
      method: 'POST',
      headers: {
        'Content-Type': 'application/json',
      },
      signal,
      body: JSON.stringify({
        model: modelName,
        stream: false,
        messages: [{ role: 'user', content: 'Reply with OK.' }],
        options: {
          temperature: 0,
          num_predict: 8,
        },
      }),
    })
    if (!response.ok) {
      return null
    }
    await response.json()
    return Date.now() - start
  } catch {
    return null
  } finally {
    clear()
  }
}
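
// A discovery sketch (imports and call site assumed; model names come from
// the local daemon):
//
//   if (await hasLocalOllama()) {
//     const models = await listOllamaModels()
//     const latencyMs = models[0] ? await benchmarkOllamaModel(models[0].name) : null
//     console.log(models.length, 'local models; first replied in', latencyMs, 'ms')
//   }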
@@ -1,15 +1,24 @@
import assert from 'node:assert/strict'
import { mkdtempSync, rmSync, writeFileSync } from 'node:fs'
import { mkdtempSync, readFileSync, rmSync, writeFileSync } from 'node:fs'
import { tmpdir } from 'node:os'
import { join } from 'node:path'
import test from 'node:test'

import {
  buildStartupEnvFromProfile,
  buildAtomicChatProfileEnv,
  buildCodexProfileEnv,
  buildGeminiProfileEnv,
  buildLaunchEnv,
  buildOllamaProfileEnv,
  buildOpenAIProfileEnv,
  createProfileFile,
  maskSecretForDisplay,
  loadProfileFile,
  PROFILE_FILE_NAME,
  redactSecretValueForDisplay,
  saveProfileFile,
  sanitizeProviderConfigValue,
  selectAutoProfile,
  type ProfileFile,
} from './providerProfile.ts'
@@ -359,6 +368,112 @@ test('gemini profiles require a key', () => {
  assert.equal(env, null)
})

test('saveProfileFile writes a profile that loadProfileFile can read back', () => {
  const cwd = mkdtempSync(join(tmpdir(), 'openclaude-profile-file-'))

  try {
    const persisted = createProfileFile('openai', {
      OPENAI_API_KEY: 'sk-test',
      OPENAI_MODEL: 'gpt-4o',
    })

    const filePath = saveProfileFile(persisted, { cwd })

    assert.equal(filePath, join(cwd, PROFILE_FILE_NAME))
    assert.equal(
      JSON.parse(readFileSync(filePath, 'utf8')).profile,
      'openai',
    )
    assert.deepEqual(loadProfileFile({ cwd }), persisted)
  } finally {
    rmSync(cwd, { recursive: true, force: true })
  }
})

test('buildStartupEnvFromProfile applies persisted gemini settings when no provider is explicitly selected', async () => {
  const env = await buildStartupEnvFromProfile({
    persisted: profile('gemini', {
      GEMINI_API_KEY: 'gem-test',
      GEMINI_MODEL: 'gemini-2.5-flash',
    }),
    processEnv: {},
  })

  assert.equal(env.CLAUDE_CODE_USE_GEMINI, '1')
  assert.equal(env.CLAUDE_CODE_USE_OPENAI, undefined)
  assert.equal(env.GEMINI_API_KEY, 'gem-test')
  assert.equal(env.GEMINI_MODEL, 'gemini-2.5-flash')
})

test('buildStartupEnvFromProfile leaves explicit provider selections untouched', async () => {
  const processEnv = {
    CLAUDE_CODE_USE_GEMINI: '1',
    GEMINI_API_KEY: 'gem-live',
    GEMINI_MODEL: 'gemini-2.0-flash',
  }

  const env = await buildStartupEnvFromProfile({
    persisted: profile('openai', {
      OPENAI_API_KEY: 'sk-persisted',
      OPENAI_MODEL: 'gpt-4o',
    }),
    processEnv,
  })

  assert.equal(env, processEnv)
  assert.equal(env.CLAUDE_CODE_USE_GEMINI, '1')
  assert.equal(env.OPENAI_API_KEY, undefined)
})

test('buildStartupEnvFromProfile treats explicit falsey provider flags as user intent', async () => {
  const processEnv = {
    CLAUDE_CODE_USE_OPENAI: '0',
  }

  const env = await buildStartupEnvFromProfile({
    persisted: profile('gemini', {
      GEMINI_API_KEY: 'gem-persisted',
      GEMINI_MODEL: 'gemini-2.5-flash',
    }),
    processEnv,
  })

  assert.equal(env, processEnv)
  assert.equal(env.CLAUDE_CODE_USE_OPENAI, '0')
  assert.equal(env.GEMINI_API_KEY, undefined)
})

test('maskSecretForDisplay preserves only a short prefix and suffix', () => {
  assert.equal(maskSecretForDisplay('sk-secret-12345678'), 'sk-...5678')
  assert.equal(maskSecretForDisplay('AIzaSecret12345678'), 'AIza...5678')
})

test('redactSecretValueForDisplay masks poisoned display fields that equal configured secrets', () => {
  const apiKey = 'sk-secret-12345678'

  assert.equal(
    redactSecretValueForDisplay(apiKey, { OPENAI_API_KEY: apiKey }),
    'sk-...5678',
  )
  assert.equal(
    redactSecretValueForDisplay('gpt-4o', { OPENAI_API_KEY: apiKey }),
    'gpt-4o',
  )
})

test('sanitizeProviderConfigValue drops secret-like poisoned values', () => {
  const apiKey = 'sk-secret-12345678'

  assert.equal(
    sanitizeProviderConfigValue(apiKey, { OPENAI_API_KEY: apiKey }),
    undefined,
  )
  assert.equal(
    sanitizeProviderConfigValue('gpt-4o', { OPENAI_API_KEY: apiKey }),
    'gpt-4o',
  )
})

test('openai profiles ignore codex shell transport hints', () => {
  const env = buildOpenAIProfileEnv({
    goal: 'balanced',
@@ -377,7 +492,110 @@ test('openai profiles ignore codex shell transport hints', () => {
  })
})

test('openai profiles ignore poisoned shell model and base url values', () => {
  const env = buildOpenAIProfileEnv({
    goal: 'balanced',
    apiKey: 'sk-live',
    processEnv: {
      OPENAI_BASE_URL: 'sk-live',
      OPENAI_MODEL: 'sk-live',
      OPENAI_API_KEY: 'sk-live',
    },
  })

  assert.deepEqual(env, {
    OPENAI_BASE_URL: 'https://api.openai.com/v1',
    OPENAI_MODEL: 'gpt-4o',
    OPENAI_API_KEY: 'sk-live',
  })
})

test('startup env ignores poisoned persisted openai model and base url', async () => {
  const env = await buildStartupEnvFromProfile({
    persisted: profile('openai', {
      OPENAI_API_KEY: 'sk-live',
      OPENAI_MODEL: 'sk-live',
      OPENAI_BASE_URL: 'sk-live',
    }),
    processEnv: {},
  })

  assert.equal(env.CLAUDE_CODE_USE_OPENAI, '1')
  assert.equal(env.OPENAI_API_KEY, 'sk-live')
  assert.equal(env.OPENAI_MODEL, 'gpt-4o')
  assert.equal(env.OPENAI_BASE_URL, 'https://api.openai.com/v1')
})

test('auto profile falls back to openai when no viable ollama model exists', () => {
  assert.equal(selectAutoProfile(null), 'openai')
|
||||
assert.equal(selectAutoProfile('qwen2.5-coder:7b'), 'ollama')
|
||||
})
|
||||
|
||||
// ── Atomic Chat profile tests ────────────────────────────────────────────────
|
||||
|
||||
test('atomic-chat profiles never persist openai api keys', () => {
|
||||
const env = buildAtomicChatProfileEnv('some-local-model', {
|
||||
getAtomicChatChatBaseUrl: () => 'http://127.0.0.1:1337/v1',
|
||||
})
|
||||
|
||||
assert.deepEqual(env, {
|
||||
OPENAI_BASE_URL: 'http://127.0.0.1:1337/v1',
|
||||
OPENAI_MODEL: 'some-local-model',
|
||||
})
|
||||
assert.equal('OPENAI_API_KEY' in env, false)
|
||||
})
|
||||
|
||||
test('atomic-chat profiles respect custom base url', () => {
|
||||
const env = buildAtomicChatProfileEnv('my-model', {
|
||||
baseUrl: 'http://192.168.1.100:1337',
|
||||
getAtomicChatChatBaseUrl: (baseUrl?: string) =>
|
||||
baseUrl ? `${baseUrl}/v1` : 'http://127.0.0.1:1337/v1',
|
||||
})
|
||||
|
||||
assert.equal(env.OPENAI_BASE_URL, 'http://192.168.1.100:1337/v1')
|
||||
assert.equal(env.OPENAI_MODEL, 'my-model')
|
||||
})
|
||||
|
||||
test('matching persisted atomic-chat env is reused for atomic-chat launch', async () => {
|
||||
const env = await buildLaunchEnv({
|
||||
profile: 'atomic-chat',
|
||||
persisted: profile('atomic-chat', {
|
||||
OPENAI_BASE_URL: 'http://127.0.0.1:1337/v1',
|
||||
OPENAI_MODEL: 'llama-3.1-8b',
|
||||
}),
|
||||
goal: 'balanced',
|
||||
processEnv: {},
|
||||
getAtomicChatChatBaseUrl: () => 'http://127.0.0.1:1337/v1',
|
||||
resolveAtomicChatDefaultModel: async () => 'other-model',
|
||||
})
|
||||
|
||||
assert.equal(env.OPENAI_BASE_URL, 'http://127.0.0.1:1337/v1')
|
||||
assert.equal(env.OPENAI_MODEL, 'llama-3.1-8b')
|
||||
assert.equal(env.OPENAI_API_KEY, undefined)
|
||||
assert.equal(env.CODEX_API_KEY, undefined)
|
||||
})
|
||||
|
||||
test('atomic-chat launch ignores mismatched persisted openai env', async () => {
|
||||
const env = await buildLaunchEnv({
|
||||
profile: 'atomic-chat',
|
||||
persisted: profile('openai', {
|
||||
OPENAI_BASE_URL: 'https://api.openai.com/v1',
|
||||
OPENAI_MODEL: 'gpt-4o',
|
||||
OPENAI_API_KEY: 'sk-persisted',
|
||||
}),
|
||||
goal: 'balanced',
|
||||
processEnv: {
|
||||
OPENAI_API_KEY: 'sk-live',
|
||||
CODEX_API_KEY: 'codex-live',
|
||||
CHATGPT_ACCOUNT_ID: 'acct_live',
|
||||
},
|
||||
getAtomicChatChatBaseUrl: () => 'http://127.0.0.1:1337/v1',
|
||||
resolveAtomicChatDefaultModel: async () => 'local-model',
|
||||
})
|
||||
|
||||
assert.equal(env.OPENAI_BASE_URL, 'http://127.0.0.1:1337/v1')
|
||||
assert.equal(env.OPENAI_MODEL, 'local-model')
|
||||
assert.equal(env.OPENAI_API_KEY, undefined)
|
||||
assert.equal(env.CODEX_API_KEY, undefined)
|
||||
assert.equal(env.CHATGPT_ACCOUNT_ID, undefined)
|
||||
})
|
||||
|
||||
@@ -1,3 +1,5 @@
import { existsSync, readFileSync, rmSync, writeFileSync } from 'node:fs'
import { resolve } from 'node:path'
import {
  DEFAULT_CODEX_BASE_URL,
  DEFAULT_OPENAI_BASE_URL,
@@ -7,13 +9,42 @@ import {
} from '../services/api/providerConfig.ts'
import {
  getGoalDefaultOpenAIModel,
  normalizeRecommendationGoal,
  type RecommendationGoal,
} from './providerRecommendation.ts'
import { getOllamaChatBaseUrl } from './providerDiscovery.ts'

const DEFAULT_GEMINI_BASE_URL = 'https://generativelanguage.googleapis.com/v1beta/openai'
const DEFAULT_GEMINI_MODEL = 'gemini-2.0-flash'
export const PROFILE_FILE_NAME = '.openclaude-profile.json'
export const DEFAULT_GEMINI_BASE_URL =
  'https://generativelanguage.googleapis.com/v1beta/openai'
export const DEFAULT_GEMINI_MODEL = 'gemini-2.0-flash'

export type ProviderProfile = 'openai' | 'ollama' | 'codex' | 'gemini'
const PROFILE_ENV_KEYS = [
  'CLAUDE_CODE_USE_OPENAI',
  'CLAUDE_CODE_USE_GEMINI',
  'CLAUDE_CODE_USE_BEDROCK',
  'CLAUDE_CODE_USE_VERTEX',
  'CLAUDE_CODE_USE_FOUNDRY',
  'OPENAI_BASE_URL',
  'OPENAI_MODEL',
  'OPENAI_API_KEY',
  'CODEX_API_KEY',
  'CHATGPT_ACCOUNT_ID',
  'CODEX_ACCOUNT_ID',
  'GEMINI_API_KEY',
  'GEMINI_MODEL',
  'GEMINI_BASE_URL',
  'GOOGLE_API_KEY',
] as const

const SECRET_ENV_KEYS = [
  'OPENAI_API_KEY',
  'CODEX_API_KEY',
  'GEMINI_API_KEY',
  'GOOGLE_API_KEY',
] as const

export type ProviderProfile = 'openai' | 'ollama' | 'codex' | 'gemini' | 'atomic-chat'

export type ProfileEnv = {
  OPENAI_BASE_URL?: string
@@ -33,6 +64,36 @@ export type ProfileFile = {
  createdAt: string
}

type SecretValueSource = Partial<
  Pick<
    NodeJS.ProcessEnv & ProfileEnv,
    (typeof SECRET_ENV_KEYS)[number]
  >
>

type ProfileFileLocation = {
  cwd?: string
  filePath?: string
}

function resolveProfileFilePath(options?: ProfileFileLocation): string {
  if (options?.filePath) {
    return options.filePath
  }

  return resolve(options?.cwd ?? process.cwd(), PROFILE_FILE_NAME)
}

export function isProviderProfile(value: unknown): value is ProviderProfile {
  return (
    value === 'openai' ||
    value === 'ollama' ||
    value === 'codex' ||
    value === 'gemini' ||
    value === 'atomic-chat'
  )
}

export function sanitizeApiKey(
  key: string | null | undefined,
): string | undefined {
@@ -40,6 +101,95 @@ export function sanitizeApiKey(
  return key
}

function looksLikeSecretValue(value: string): boolean {
  const trimmed = value.trim()
  if (!trimmed) return false

  if (trimmed.startsWith('sk-') || trimmed.startsWith('sk-ant-')) {
    return true
  }

  if (trimmed.startsWith('AIza')) {
    return true
  }

  return false
}

function collectSecretValues(
  sources: Array<SecretValueSource | null | undefined>,
): string[] {
  const values = new Set<string>()

  for (const source of sources) {
    if (!source) continue

    for (const key of SECRET_ENV_KEYS) {
      const value = sanitizeApiKey(source[key])
      if (value) {
        values.add(value)
      }
    }
  }

  return [...values]
}

export function maskSecretForDisplay(
  value: string | null | undefined,
): string | undefined {
  const sanitized = sanitizeApiKey(value)
  if (!sanitized) return undefined

  if (sanitized.length <= 8) {
    return 'configured'
  }

  if (sanitized.startsWith('sk-')) {
    return `${sanitized.slice(0, 3)}...${sanitized.slice(-4)}`
  }

  if (sanitized.startsWith('AIza')) {
    return `${sanitized.slice(0, 4)}...${sanitized.slice(-4)}`
  }

  return `${sanitized.slice(0, 2)}...${sanitized.slice(-4)}`
}

export function redactSecretValueForDisplay(
  value: string | null | undefined,
  ...sources: Array<SecretValueSource | null | undefined>
): string | undefined {
  if (!value) return undefined

  const trimmed = value.trim()
  if (!trimmed) return trimmed

  const secretValues = collectSecretValues(sources)
  if (secretValues.includes(trimmed) || looksLikeSecretValue(trimmed)) {
    return maskSecretForDisplay(trimmed) ?? 'configured'
  }

  return trimmed
}

export function sanitizeProviderConfigValue(
  value: string | null | undefined,
  ...sources: Array<SecretValueSource | null | undefined>
): string | undefined {
  if (!value) return undefined

  const trimmed = value.trim()
  if (!trimmed) return undefined

  const secretValues = collectSecretValues(sources)
  if (secretValues.includes(trimmed) || looksLikeSecretValue(trimmed)) {
    return undefined
  }

  return trimmed
}
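Taken together these helpers are the "poisoned env" defense: any value that equals a configured secret, or merely looks like one (`sk-`/`AIza` prefixes), is masked for display and dropped entirely when it shows up where a model name or base URL belongs. A sketch of the behavior the tests above pin down:

```ts
// Illustrative values, mirroring the test expectations earlier in this diff.
maskSecretForDisplay('sk-secret-12345678')
// => 'sk-...5678'

redactSecretValueForDisplay('sk-secret-12345678', { OPENAI_API_KEY: 'sk-secret-12345678' })
// => 'sk-...5678'  (a configured secret is masked, never echoed)

sanitizeProviderConfigValue('sk-secret-12345678', { OPENAI_API_KEY: 'sk-secret-12345678' })
// => undefined     (secret-like values are dropped from config slots)

sanitizeProviderConfigValue('gpt-4o', { OPENAI_API_KEY: 'sk-secret-12345678' })
// => 'gpt-4o'      (ordinary config values pass through unchanged)
```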

export function buildOllamaProfileEnv(
  model: string,
  options: {
@@ -53,6 +203,19 @@ export function buildOllamaProfileEnv(
  }
}

export function buildAtomicChatProfileEnv(
  model: string,
  options: {
    baseUrl?: string | null
    getAtomicChatChatBaseUrl: (baseUrl?: string) => string
  },
): ProfileEnv {
  return {
    OPENAI_BASE_URL: options.getAtomicChatChatBaseUrl(options.baseUrl ?? undefined),
    OPENAI_MODEL: model,
  }
}
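The atomic-chat builder is deliberately minimal: it only ever emits a base URL and a model, never an API key, since a local Atomic Chat server does not need one. A sketch of its output, matching the profile tests earlier in this diff:

```ts
// Mirrors the 'atomic-chat profiles never persist openai api keys' test.
const env = buildAtomicChatProfileEnv('some-local-model', {
  getAtomicChatChatBaseUrl: () => 'http://127.0.0.1:1337/v1',
})
// env => { OPENAI_BASE_URL: 'http://127.0.0.1:1337/v1', OPENAI_MODEL: 'some-local-model' }
// 'OPENAI_API_KEY' in env === false
```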

export function buildGeminiProfileEnv(options: {
  model?: string | null
  baseUrl?: string | null
@@ -71,11 +234,23 @@ export function buildGeminiProfileEnv(options: {

  const env: ProfileEnv = {
    GEMINI_MODEL:
      options.model || processEnv.GEMINI_MODEL || DEFAULT_GEMINI_MODEL,
      sanitizeProviderConfigValue(options.model, { GEMINI_API_KEY: key }, processEnv) ||
      sanitizeProviderConfigValue(
        processEnv.GEMINI_MODEL,
        { GEMINI_API_KEY: key },
        processEnv,
      ) ||
      DEFAULT_GEMINI_MODEL,
    GEMINI_API_KEY: key,
  }

  const baseUrl = options.baseUrl || processEnv.GEMINI_BASE_URL
  const baseUrl =
    sanitizeProviderConfigValue(options.baseUrl, { GEMINI_API_KEY: key }, processEnv) ||
    sanitizeProviderConfigValue(
      processEnv.GEMINI_BASE_URL,
      { GEMINI_API_KEY: key },
      processEnv,
    )
  if (baseUrl) {
    env.GEMINI_BASE_URL = baseUrl
  }
@@ -97,21 +272,39 @@ export function buildOpenAIProfileEnv(options: {
  }

  const defaultModel = getGoalDefaultOpenAIModel(options.goal)
  const shellOpenAIModel = sanitizeProviderConfigValue(
    processEnv.OPENAI_MODEL,
    { OPENAI_API_KEY: key },
    processEnv,
  )
  const shellOpenAIBaseUrl = sanitizeProviderConfigValue(
    processEnv.OPENAI_BASE_URL,
    { OPENAI_API_KEY: key },
    processEnv,
  )
  const shellOpenAIRequest = resolveProviderRequest({
    model: processEnv.OPENAI_MODEL,
    baseUrl: processEnv.OPENAI_BASE_URL,
    model: shellOpenAIModel,
    baseUrl: shellOpenAIBaseUrl,
    fallbackModel: defaultModel,
  })
  const useShellOpenAIConfig = shellOpenAIRequest.transport === 'chat_completions'

  return {
    OPENAI_BASE_URL:
      options.baseUrl ||
      (useShellOpenAIConfig ? processEnv.OPENAI_BASE_URL : undefined) ||
      sanitizeProviderConfigValue(
        options.baseUrl,
        { OPENAI_API_KEY: key },
        processEnv,
      ) ||
      (useShellOpenAIConfig ? shellOpenAIBaseUrl : undefined) ||
      DEFAULT_OPENAI_BASE_URL,
    OPENAI_MODEL:
      options.model ||
      (useShellOpenAIConfig ? processEnv.OPENAI_MODEL : undefined) ||
      sanitizeProviderConfigValue(
        options.model,
        { OPENAI_API_KEY: key },
        processEnv,
      ) ||
      (useShellOpenAIConfig ? shellOpenAIModel : undefined) ||
      defaultModel,
    OPENAI_API_KEY: key,
  }
@@ -158,6 +351,62 @@ export function createProfileFile(
  }
}

export function loadProfileFile(options?: ProfileFileLocation): ProfileFile | null {
  const filePath = resolveProfileFilePath(options)
  if (!existsSync(filePath)) {
    return null
  }

  try {
    const parsed = JSON.parse(readFileSync(filePath, 'utf8')) as Partial<ProfileFile>
    if (!isProviderProfile(parsed.profile) || !parsed.env || typeof parsed.env !== 'object') {
      return null
    }

    return {
      profile: parsed.profile,
      env: parsed.env,
      createdAt:
        typeof parsed.createdAt === 'string'
          ? parsed.createdAt
          : new Date().toISOString(),
    }
  } catch {
    return null
  }
}

export function saveProfileFile(
  profileFile: ProfileFile,
  options?: ProfileFileLocation,
): string {
  const filePath = resolveProfileFilePath(options)
  writeFileSync(filePath, JSON.stringify(profileFile, null, 2), {
    encoding: 'utf8',
    mode: 0o600,
  })
  return filePath
}

export function deleteProfileFile(options?: ProfileFileLocation): string {
  const filePath = resolveProfileFilePath(options)
  rmSync(filePath, { force: true })
  return filePath
}
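Persistence is a single JSON file (`.openclaude-profile.json`) written with owner-only permissions (`0o600`), and a corrupt or unrecognized file loads as `null` rather than throwing. A sketch of the round trip the test above exercises — the `cwd` path is illustrative:

```ts
// Sketch only; '/tmp/demo' is a hypothetical working directory.
const persisted = createProfileFile('openai', {
  OPENAI_API_KEY: 'sk-test',
  OPENAI_MODEL: 'gpt-4o',
})
const filePath = saveProfileFile(persisted, { cwd: '/tmp/demo' })
// => '/tmp/demo/.openclaude-profile.json' (mode 0o600)
loadProfileFile({ cwd: '/tmp/demo' })
// => deep-equal to `persisted`; null if the file is missing or malformed
```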

export function hasExplicitProviderSelection(
  processEnv: NodeJS.ProcessEnv = process.env,
): boolean {
  return (
    processEnv.CLAUDE_CODE_USE_OPENAI !== undefined ||
    processEnv.CLAUDE_CODE_USE_GITHUB !== undefined ||
    processEnv.CLAUDE_CODE_USE_GEMINI !== undefined ||
    processEnv.CLAUDE_CODE_USE_BEDROCK !== undefined ||
    processEnv.CLAUDE_CODE_USE_VERTEX !== undefined ||
    processEnv.CLAUDE_CODE_USE_FOUNDRY !== undefined
  )
}

export function selectAutoProfile(
  recommendedOllamaModel: string | null,
): ProviderProfile {
@@ -171,12 +420,46 @@ export async function buildLaunchEnv(options: {
  processEnv?: NodeJS.ProcessEnv
  getOllamaChatBaseUrl?: (baseUrl?: string) => string
  resolveOllamaDefaultModel?: (goal: RecommendationGoal) => Promise<string>
  getAtomicChatChatBaseUrl?: (baseUrl?: string) => string
  resolveAtomicChatDefaultModel?: () => Promise<string | null>
}): Promise<NodeJS.ProcessEnv> {
  const processEnv = options.processEnv ?? process.env
  const persistedEnv =
    options.persisted?.profile === options.profile
      ? options.persisted.env ?? {}
      : {}
  const persistedOpenAIModel = sanitizeProviderConfigValue(
    persistedEnv.OPENAI_MODEL,
    persistedEnv,
  )
  const persistedOpenAIBaseUrl = sanitizeProviderConfigValue(
    persistedEnv.OPENAI_BASE_URL,
    persistedEnv,
  )
  const shellOpenAIModel = sanitizeProviderConfigValue(
    processEnv.OPENAI_MODEL,
    processEnv,
  )
  const shellOpenAIBaseUrl = sanitizeProviderConfigValue(
    processEnv.OPENAI_BASE_URL,
    processEnv,
  )
  const persistedGeminiModel = sanitizeProviderConfigValue(
    persistedEnv.GEMINI_MODEL,
    persistedEnv,
  )
  const persistedGeminiBaseUrl = sanitizeProviderConfigValue(
    persistedEnv.GEMINI_BASE_URL,
    persistedEnv,
  )
  const shellGeminiModel = sanitizeProviderConfigValue(
    processEnv.GEMINI_MODEL,
    processEnv,
  )
  const shellGeminiBaseUrl = sanitizeProviderConfigValue(
    processEnv.GEMINI_BASE_URL,
    processEnv,
  )

  const shellGeminiKey = sanitizeApiKey(
    processEnv.GEMINI_API_KEY ?? processEnv.GOOGLE_API_KEY,
@@ -190,14 +473,15 @@ export async function buildLaunchEnv(options: {
  }

    delete env.CLAUDE_CODE_USE_OPENAI
    delete env.CLAUDE_CODE_USE_GITHUB

    env.GEMINI_MODEL =
      processEnv.GEMINI_MODEL ||
      persistedEnv.GEMINI_MODEL ||
      shellGeminiModel ||
      persistedGeminiModel ||
      DEFAULT_GEMINI_MODEL
    env.GEMINI_BASE_URL =
      processEnv.GEMINI_BASE_URL ||
      persistedEnv.GEMINI_BASE_URL ||
      shellGeminiBaseUrl ||
      persistedGeminiBaseUrl ||
      DEFAULT_GEMINI_BASE_URL

    const geminiKey = shellGeminiKey || persistedGeminiKey
@@ -224,6 +508,7 @@ export async function buildLaunchEnv(options: {
  }

    delete env.CLAUDE_CODE_USE_GEMINI
    delete env.CLAUDE_CODE_USE_GITHUB
    delete env.GEMINI_API_KEY
    delete env.GEMINI_MODEL
    delete env.GEMINI_BASE_URL
@@ -235,10 +520,30 @@ export async function buildLaunchEnv(options: {
    const resolveOllamaModel =
      options.resolveOllamaDefaultModel ?? (async () => 'llama3.1:8b')

    env.OPENAI_BASE_URL = persistedEnv.OPENAI_BASE_URL || getOllamaBaseUrl()
    env.OPENAI_BASE_URL = persistedOpenAIBaseUrl || getOllamaBaseUrl()
    env.OPENAI_MODEL =
      persistedOpenAIModel ||
      (await resolveOllamaModel(options.goal))

    delete env.OPENAI_API_KEY
    delete env.CODEX_API_KEY
    delete env.CHATGPT_ACCOUNT_ID
    delete env.CODEX_ACCOUNT_ID

    return env
  }

  if (options.profile === 'atomic-chat') {
    const getAtomicChatBaseUrl =
      options.getAtomicChatChatBaseUrl ?? (() => 'http://127.0.0.1:1337/v1')
    const resolveModel =
      options.resolveAtomicChatDefaultModel ?? (async () => null as string | null)

    env.OPENAI_BASE_URL = persistedEnv.OPENAI_BASE_URL || getAtomicChatBaseUrl()
    env.OPENAI_MODEL =
      persistedEnv.OPENAI_MODEL ||
      (await resolveOllamaModel(options.goal))
      (await resolveModel()) ||
      ''

    delete env.OPENAI_API_KEY
    delete env.CODEX_API_KEY
@@ -250,10 +555,10 @@ export async function buildLaunchEnv(options: {

  if (options.profile === 'codex') {
    env.OPENAI_BASE_URL =
      persistedEnv.OPENAI_BASE_URL && isCodexBaseUrl(persistedEnv.OPENAI_BASE_URL)
        ? persistedEnv.OPENAI_BASE_URL
      persistedOpenAIBaseUrl && isCodexBaseUrl(persistedOpenAIBaseUrl)
        ? persistedOpenAIBaseUrl
        : DEFAULT_CODEX_BASE_URL
    env.OPENAI_MODEL = persistedEnv.OPENAI_MODEL || 'codexplan'
    env.OPENAI_MODEL = persistedOpenAIModel || 'codexplan'
    delete env.OPENAI_API_KEY

    const codexKey =
@@ -284,27 +589,27 @@ export async function buildLaunchEnv(options: {

  const defaultOpenAIModel = getGoalDefaultOpenAIModel(options.goal)
  const shellOpenAIRequest = resolveProviderRequest({
    model: processEnv.OPENAI_MODEL,
    baseUrl: processEnv.OPENAI_BASE_URL,
    model: shellOpenAIModel,
    baseUrl: shellOpenAIBaseUrl,
    fallbackModel: defaultOpenAIModel,
  })
  const persistedOpenAIRequest = resolveProviderRequest({
    model: persistedEnv.OPENAI_MODEL,
    baseUrl: persistedEnv.OPENAI_BASE_URL,
    model: persistedOpenAIModel,
    baseUrl: persistedOpenAIBaseUrl,
    fallbackModel: defaultOpenAIModel,
  })
  const useShellOpenAIConfig = shellOpenAIRequest.transport === 'chat_completions'
  const usePersistedOpenAIConfig =
    (!persistedEnv.OPENAI_MODEL && !persistedEnv.OPENAI_BASE_URL) ||
    (!persistedOpenAIModel && !persistedOpenAIBaseUrl) ||
    persistedOpenAIRequest.transport === 'chat_completions'

  env.OPENAI_BASE_URL =
    (useShellOpenAIConfig ? processEnv.OPENAI_BASE_URL : undefined) ||
    (usePersistedOpenAIConfig ? persistedEnv.OPENAI_BASE_URL : undefined) ||
    (useShellOpenAIConfig ? shellOpenAIBaseUrl : undefined) ||
    (usePersistedOpenAIConfig ? persistedOpenAIBaseUrl : undefined) ||
    DEFAULT_OPENAI_BASE_URL
  env.OPENAI_MODEL =
    (useShellOpenAIConfig ? processEnv.OPENAI_MODEL : undefined) ||
    (usePersistedOpenAIConfig ? persistedEnv.OPENAI_MODEL : undefined) ||
    (useShellOpenAIConfig ? shellOpenAIModel : undefined) ||
    (usePersistedOpenAIConfig ? persistedOpenAIModel : undefined) ||
    defaultOpenAIModel
  env.OPENAI_API_KEY = processEnv.OPENAI_API_KEY || persistedEnv.OPENAI_API_KEY
  delete env.CODEX_API_KEY
@@ -312,3 +617,44 @@ export async function buildLaunchEnv(options: {
  delete env.CODEX_ACCOUNT_ID
  return env
}

export async function buildStartupEnvFromProfile(options?: {
  persisted?: ProfileFile | null
  goal?: RecommendationGoal
  processEnv?: NodeJS.ProcessEnv
  getOllamaChatBaseUrl?: (baseUrl?: string) => string
  resolveOllamaDefaultModel?: (goal: RecommendationGoal) => Promise<string>
}): Promise<NodeJS.ProcessEnv> {
  const processEnv = options?.processEnv ?? process.env
  if (hasExplicitProviderSelection(processEnv)) {
    return processEnv
  }

  const persisted = options?.persisted ?? loadProfileFile()
  if (!persisted) {
    return processEnv
  }

  return buildLaunchEnv({
    profile: persisted.profile,
    persisted,
    goal:
      options?.goal ??
      normalizeRecommendationGoal(processEnv.OPENCLAUDE_PROFILE_GOAL),
    processEnv,
    getOllamaChatBaseUrl:
      options?.getOllamaChatBaseUrl ?? getOllamaChatBaseUrl,
    resolveOllamaDefaultModel: options?.resolveOllamaDefaultModel,
  })
}

export function applyProfileEnvToProcessEnv(
  targetEnv: NodeJS.ProcessEnv,
  nextEnv: NodeJS.ProcessEnv,
): void {
  for (const key of PROFILE_ENV_KEYS) {
    delete targetEnv[key]
  }

  Object.assign(targetEnv, nextEnv)
}
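Startup resolution therefore follows a strict precedence, pinned down by the tests earlier in this diff: any explicit `CLAUDE_CODE_USE_*` flag in the shell — even a falsey one like `'0'` — wins and the shell env is returned untouched; only when nothing is selected does the persisted profile expand into env vars. A minimal sketch (the key values are illustrative, and the tests use a local `profile()` helper where this sketch substitutes `createProfileFile`):

```ts
// No explicit CLAUDE_CODE_USE_* flag set, so the persisted profile applies.
const env = await buildStartupEnvFromProfile({
  persisted: createProfileFile('gemini', {
    GEMINI_API_KEY: 'gem-test', // illustrative key
    GEMINI_MODEL: 'gemini-2.5-flash',
  }),
  processEnv: {},
})
// env.CLAUDE_CODE_USE_GEMINI === '1', env.GEMINI_API_KEY === 'gem-test'

// With CLAUDE_CODE_USE_OPENAI: '0' in processEnv, the same call would
// return processEnv unchanged — an explicit opt-out counts as user intent.
```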

File diff suppressed because one or more lines are too long

@@ -286,6 +286,25 @@ function createCommandSuggestionItem(
  }
}

/**
 * Ensure suggestion IDs are unique for React keys and selection logic.
 * If duplicates exist, append a stable numeric suffix to subsequent entries.
 */
function ensureUniqueSuggestionIds(items: SuggestionItem[]): SuggestionItem[] {
  const counts = new Map<string, number>()
  return items.map(item => {
    const seen = counts.get(item.id) ?? 0
    counts.set(item.id, seen + 1)
    if (seen === 0) {
      return item
    }
    return {
      ...item,
      id: `${item.id}#${seen + 1}`,
    }
  })
}
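Only occurrences after the first are rewritten, so an already-unique ID stays stable across renders. A quick sketch of the transformation (items trimmed to just `id` for illustration):

```ts
// Illustrative input; real SuggestionItem objects carry more fields.
ensureUniqueSuggestionIds([
  { id: 'deploy' },
  { id: 'deploy' },
  { id: 'test' },
] as SuggestionItem[])
// => ids come back as 'deploy', 'deploy#2', 'test'
```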

/**
 * Generate command suggestions based on input
 */
@@ -369,14 +388,14 @@ export function generateCommandSuggestions(

  // Combine with built-in commands prioritized after recently used,
  // so they remain visible even when many skills are installed
  return [
  return ensureUniqueSuggestionIds([
    ...recentlyUsed,
    ...builtinCommands,
    ...userCommands,
    ...projectCommands,
    ...policyCommands,
    ...otherCommands,
  ].map(cmd => createCommandSuggestionItem(cmd))
  ].map(cmd => createCommandSuggestionItem(cmd)))
}

// The Fuse index filters isHidden at build time and is keyed on the
@@ -491,10 +510,13 @@ export function generateCommandSuggestions(
  if (hiddenExact) {
    const hiddenId = getCommandId(hiddenExact)
    if (!fuseSuggestions.some(s => s.id === hiddenId)) {
      return [createCommandSuggestionItem(hiddenExact), ...fuseSuggestions]
      return ensureUniqueSuggestionIds([
        createCommandSuggestionItem(hiddenExact),
        ...fuseSuggestions,
      ])
    }
  }
  return fuseSuggestions
  return ensureUniqueSuggestionIds(fuseSuggestions)
}

/**

@@ -99,6 +99,18 @@ const TEAMMATE_ENV_VARS = [
  'CLAUDE_CODE_USE_BEDROCK',
  'CLAUDE_CODE_USE_VERTEX',
  'CLAUDE_CODE_USE_FOUNDRY',
  'CLAUDE_CODE_USE_GITHUB',
  'CLAUDE_CODE_USE_GEMINI',
  'CLAUDE_CODE_USE_OPENAI',
  'GITHUB_TOKEN',
  'GH_TOKEN',
  'OPENAI_API_KEY',
  'OPENAI_BASE_URL',
  'OPENAI_MODEL',
  'GEMINI_API_KEY',
  'GEMINI_BASE_URL',
  'GEMINI_MODEL',
  'GOOGLE_API_KEY',
  // Custom API endpoint
  'ANTHROPIC_BASE_URL',
  // Config directory override

130
test_atomic_chat_provider.py
Normal file
@@ -0,0 +1,130 @@
"""
test_atomic_chat_provider.py
Run: pytest test_atomic_chat_provider.py -v
"""

import pytest
from unittest.mock import AsyncMock, MagicMock, patch
from atomic_chat_provider import (
    atomic_chat,
    list_atomic_chat_models,
    check_atomic_chat_running,
)


@pytest.mark.asyncio
async def test_atomic_chat_running_true():
    mock_response = MagicMock()
    mock_response.status_code = 200
    with patch("atomic_chat_provider.httpx.AsyncClient") as MockClient:
        MockClient.return_value.__aenter__.return_value.get = AsyncMock(return_value=mock_response)
        result = await check_atomic_chat_running()
    assert result is True


@pytest.mark.asyncio
async def test_atomic_chat_running_false_on_exception():
    with patch("atomic_chat_provider.httpx.AsyncClient") as MockClient:
        MockClient.return_value.__aenter__.return_value.get = AsyncMock(side_effect=Exception("refused"))
        result = await check_atomic_chat_running()
    assert result is False


@pytest.mark.asyncio
async def test_list_models_returns_ids():
    mock_response = MagicMock()
    mock_response.status_code = 200
    mock_response.json.return_value = {
        "data": [{"id": "llama-3.1-8b"}, {"id": "mistral-7b"}],
    }
    mock_response.raise_for_status = MagicMock()
    with patch("atomic_chat_provider.httpx.AsyncClient") as MockClient:
        MockClient.return_value.__aenter__.return_value.get = AsyncMock(return_value=mock_response)
        models = await list_atomic_chat_models()
    assert "llama-3.1-8b" in models
    assert "mistral-7b" in models


@pytest.mark.asyncio
async def test_list_models_empty_on_failure():
    with patch("atomic_chat_provider.httpx.AsyncClient") as MockClient:
        MockClient.return_value.__aenter__.return_value.get = AsyncMock(side_effect=Exception("down"))
        models = await list_atomic_chat_models()
    assert models == []


@pytest.mark.asyncio
async def test_atomic_chat_returns_anthropic_format():
    mock_response = MagicMock()
    mock_response.raise_for_status = MagicMock()
    mock_response.json.return_value = {
        "id": "chatcmpl-abc123",
        "choices": [{"message": {"content": "42 is the answer."}}],
        "usage": {"prompt_tokens": 10, "completion_tokens": 8},
    }
    with patch("atomic_chat_provider.httpx.AsyncClient") as MockClient:
        MockClient.return_value.__aenter__.return_value.post = AsyncMock(return_value=mock_response)
        result = await atomic_chat(
            model="llama-3.1-8b",
            messages=[{"role": "user", "content": "What is 6*7?"}],
        )
    assert result["type"] == "message"
    assert result["role"] == "assistant"
    assert "42" in result["content"][0]["text"]
    assert result["usage"]["input_tokens"] == 10
    assert result["usage"]["output_tokens"] == 8


@pytest.mark.asyncio
async def test_atomic_chat_prepends_system():
    captured = {}

    async def mock_post(url, json=None, **kwargs):
        captured.update(json or {})
        m = MagicMock()
        m.raise_for_status = MagicMock()
        m.json.return_value = {
            "id": "chatcmpl-xyz",
            "choices": [{"message": {"content": "ok"}}],
            "usage": {"prompt_tokens": 1, "completion_tokens": 1},
        }
        return m

    with patch("atomic_chat_provider.httpx.AsyncClient") as MockClient:
        MockClient.return_value.__aenter__.return_value.post = mock_post
        await atomic_chat(
            model="llama-3.1-8b",
            messages=[{"role": "user", "content": "Hi"}],
            system="Be helpful.",
        )
    assert captured["messages"][0]["role"] == "system"
    assert "helpful" in captured["messages"][0]["content"]


@pytest.mark.asyncio
async def test_atomic_chat_sends_correct_payload():
    captured = {}

    async def mock_post(url, json=None, **kwargs):
        captured.update(json or {})
        m = MagicMock()
        m.raise_for_status = MagicMock()
        m.json.return_value = {
            "id": "chatcmpl-xyz",
            "choices": [{"message": {"content": "ok"}}],
            "usage": {"prompt_tokens": 1, "completion_tokens": 1},
        }
        return m

    with patch("atomic_chat_provider.httpx.AsyncClient") as MockClient:
        MockClient.return_value.__aenter__.return_value.post = mock_post
        await atomic_chat(
            model="test-model",
            messages=[{"role": "user", "content": "Test"}],
            max_tokens=2048,
            temperature=0.5,
        )
    assert captured["model"] == "test-model"
    assert captured["max_tokens"] == 2048
    assert captured["temperature"] == 0.5
    assert captured["stream"] is False