Merge origin/main into provider-setup-wizard
.github/workflows/pr-checks.yml (vendored, 6 changes)
@@ -12,15 +12,15 @@ jobs:
     steps:
       - name: Check out repository
-        uses: actions/checkout@v4
+        uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683 # v4.2.2

       - name: Set up Node.js
-        uses: actions/setup-node@v4
+        uses: actions/setup-node@49933ea5288caeca8642d1e84afbd3f7d6820020 # v4.4.0
         with:
           node-version: 22

       - name: Set up Bun
-        uses: oven-sh/setup-bun@v2
+        uses: oven-sh/setup-bun@4bc047ad259df6fc24a6c9b0f9a0cb08cf17fbe5 # v2.0.1
         with:
           bun-version: 1.3.11
README.md (36 changes)
@@ -2,7 +2,7 @@

 Use Claude Code with **any LLM** — not just Claude.

-OpenClaude is a fork of the [Claude Code source leak](https://gitlawb.com/node/repos/z6MkgKkb/instructkr-claude-code) (exposed via npm source maps on March 31, 2026). We added an OpenAI-compatible provider shim so you can plug in GPT-4o, DeepSeek, Gemini, Llama, Mistral, or any model that speaks the OpenAI chat completions API. It now also supports the ChatGPT Codex backend for `codexplan` and `codexspark`.
+OpenClaude is a fork of the [Claude Code source leak](https://gitlawb.com/node/repos/z6MkgKkb/instructkr-claude-code) (exposed via npm source maps on March 31, 2026). We added an OpenAI-compatible provider shim so you can plug in GPT-4o, DeepSeek, Gemini, Llama, Mistral, or any model that speaks the OpenAI chat completions API. It now also supports the ChatGPT Codex backend for `codexplan` and `codexspark`, and local inference via [Atomic Chat](https://atomic.chat/) on Apple Silicon.

 All of Claude Code's tools work — bash, file read/write/edit, grep, glob, agents, tasks, MCP — just powered by whatever model you choose.
@@ -122,9 +122,13 @@ export OPENAI_MODEL=deepseek-chat
 export CLAUDE_CODE_USE_OPENAI=1
 export OPENAI_API_KEY=sk-or-...
 export OPENAI_BASE_URL=https://openrouter.ai/api/v1
-export OPENAI_MODEL=google/gemini-2.0-flash
+export OPENAI_MODEL=google/gemini-2.0-flash-001
 ```

+OpenRouter model availability changes over time. If a model stops working,
+pick another currently available OpenRouter model before assuming the
+OpenAI-compatible setup is broken.
+
 ### Ollama (local, free)

 ```bash
@@ -136,6 +140,23 @@ export OPENAI_MODEL=llama3.3:70b
 # no API key needed for local models
 ```

+### Atomic Chat (local, Apple Silicon)
+
+```bash
+export CLAUDE_CODE_USE_OPENAI=1
+export OPENAI_BASE_URL=http://127.0.0.1:1337/v1
+export OPENAI_MODEL=your-model-name
+# no API key needed for local models
+```
+
+Or use the profile launcher:
+
+```bash
+bun run dev:atomic-chat
+```
+
+Download Atomic Chat from [atomic.chat](https://atomic.chat/). The app must be running with a model loaded before launching.
+
 ### LM Studio (local)

 ```bash
@@ -187,7 +208,7 @@ export OPENAI_MODEL=gpt-4o
 | Variable | Required | Description |
 |----------|----------|-------------|
 | `CLAUDE_CODE_USE_OPENAI` | Yes | Set to `1` to enable the OpenAI provider |
-| `OPENAI_API_KEY` | Yes* | Your API key (*not needed for local models like Ollama) |
+| `OPENAI_API_KEY` | Yes* | Your API key (*not needed for local models like Ollama/Atomic Chat) |
 | `OPENAI_MODEL` | Yes | Model name (e.g. `gpt-4o`, `deepseek-chat`, `llama3.3:70b`) |
 | `OPENAI_BASE_URL` | No | API endpoint (defaults to `https://api.openai.com/v1`) |
 | `CODEX_API_KEY` | Codex only | Codex/ChatGPT access token override |
@@ -250,6 +271,9 @@ bun run profile:codex
 # openai bootstrap with explicit key
 bun run profile:init -- --provider openai --api-key sk-...

+# atomic-chat bootstrap (auto-detects running model)
+bun run profile:init -- --provider atomic-chat
+
 # ollama bootstrap with custom model
 bun run profile:init -- --provider ollama --model llama3.1:8b
@@ -270,6 +294,9 @@ bun run dev:openai

 # Ollama profile (defaults: localhost:11434, llama3.1:8b)
 bun run dev:ollama
+
+# Atomic Chat profile (Apple Silicon local LLMs at 127.0.0.1:1337)
+bun run dev:atomic-chat
 ```

 `profile:recommend` ranks installed Ollama models for `latency`, `balanced`, or `coding`, and `profile:auto` can persist the recommendation directly.
@@ -280,8 +307,9 @@ Goal-based Ollama selection only recommends among models that are already installed.

 Use `profile:codex` or `--provider codex` when you want the ChatGPT Codex backend.

-`dev:openai`, `dev:ollama`, and `dev:codex` run `doctor:runtime` first and only launch the app if checks pass.
+`dev:openai`, `dev:ollama`, `dev:atomic-chat`, and `dev:codex` run `doctor:runtime` first and only launch the app if checks pass.
 For `dev:ollama`, make sure Ollama is running locally before launch.
+For `dev:atomic-chat`, make sure Atomic Chat is running with a model loaded.

 ---
atomic_chat_provider.py (new file, 146 lines)
@@ -0,0 +1,146 @@
"""
atomic_chat_provider.py
-----------------------
Adds native Atomic Chat support to openclaude.
Lets Claude Code route requests to any locally-running model via
Atomic Chat (Apple Silicon only) at 127.0.0.1:1337.

Atomic Chat exposes an OpenAI-compatible API, so messages are forwarded
directly without translation.

Usage (.env):
    PREFERRED_PROVIDER=atomic-chat
    ATOMIC_CHAT_BASE_URL=http://127.0.0.1:1337
"""

import httpx
import json
import logging
import os
from typing import AsyncIterator

logger = logging.getLogger(__name__)
ATOMIC_CHAT_BASE_URL = os.getenv("ATOMIC_CHAT_BASE_URL", "http://127.0.0.1:1337")


def _api_url(path: str) -> str:
    return f"{ATOMIC_CHAT_BASE_URL}/v1{path}"


async def check_atomic_chat_running() -> bool:
    try:
        async with httpx.AsyncClient(timeout=3.0) as client:
            resp = await client.get(_api_url("/models"))
            return resp.status_code == 200
    except Exception:
        return False


async def list_atomic_chat_models() -> list[str]:
    try:
        async with httpx.AsyncClient(timeout=5.0) as client:
            resp = await client.get(_api_url("/models"))
            resp.raise_for_status()
            data = resp.json()
            return [m["id"] for m in data.get("data", [])]
    except Exception as e:
        logger.warning(f"Could not list Atomic Chat models: {e}")
        return []


async def atomic_chat(
    model: str,
    messages: list[dict],
    system: str | None = None,
    max_tokens: int = 4096,
    temperature: float = 1.0,
) -> dict:
    chat_messages = list(messages)
    if system:
        chat_messages.insert(0, {"role": "system", "content": system})

    payload = {
        "model": model,
        "messages": chat_messages,
        "max_tokens": max_tokens,
        "temperature": temperature,
        "stream": False,
    }

    async with httpx.AsyncClient(timeout=120.0) as client:
        resp = await client.post(_api_url("/chat/completions"), json=payload)
        resp.raise_for_status()
        data = resp.json()

    choice = data.get("choices", [{}])[0]
    assistant_text = choice.get("message", {}).get("content", "")
    usage = data.get("usage", {})

    return {
        "id": data.get("id", "msg_atomic_chat"),
        "type": "message",
        "role": "assistant",
        "content": [{"type": "text", "text": assistant_text}],
        "model": model,
        "stop_reason": "end_turn",
        "stop_sequence": None,
        "usage": {
            "input_tokens": usage.get("prompt_tokens", 0),
            "output_tokens": usage.get("completion_tokens", 0),
        },
    }


async def atomic_chat_stream(
    model: str,
    messages: list[dict],
    system: str | None = None,
    max_tokens: int = 4096,
    temperature: float = 1.0,
) -> AsyncIterator[str]:
    chat_messages = list(messages)
    if system:
        chat_messages.insert(0, {"role": "system", "content": system})

    payload = {
        "model": model,
        "messages": chat_messages,
        "max_tokens": max_tokens,
        "temperature": temperature,
        "stream": True,
    }

    yield "event: message_start\n"
    yield f'data: {json.dumps({"type": "message_start", "message": {"id": "msg_atomic_chat_stream", "type": "message", "role": "assistant", "content": [], "model": model, "stop_reason": None, "usage": {"input_tokens": 0, "output_tokens": 0}}})}\n\n'
    yield "event: content_block_start\n"
    yield f'data: {json.dumps({"type": "content_block_start", "index": 0, "content_block": {"type": "text", "text": ""}})}\n\n'

    async with httpx.AsyncClient(timeout=120.0) as client:
        async with client.stream("POST", _api_url("/chat/completions"), json=payload) as resp:
            resp.raise_for_status()
            async for line in resp.aiter_lines():
                if not line or not line.startswith("data: "):
                    continue
                raw = line[len("data: "):]
                if raw.strip() == "[DONE]":
                    break
                try:
                    chunk = json.loads(raw)
                    delta = chunk.get("choices", [{}])[0].get("delta", {})
                    delta_text = delta.get("content", "")
                    if delta_text:
                        yield "event: content_block_delta\n"
                        yield f'data: {json.dumps({"type": "content_block_delta", "index": 0, "delta": {"type": "text_delta", "text": delta_text}})}\n\n'

                    finish_reason = chunk.get("choices", [{}])[0].get("finish_reason")
                    if finish_reason:
                        usage = chunk.get("usage", {})
                        yield "event: content_block_stop\n"
                        yield f'data: {json.dumps({"type": "content_block_stop", "index": 0})}\n\n'
                        yield "event: message_delta\n"
                        yield f'data: {json.dumps({"type": "message_delta", "delta": {"stop_reason": "end_turn", "stop_sequence": None}, "usage": {"output_tokens": usage.get("completion_tokens", 0)}})}\n\n'
                        yield "event: message_stop\n"
                        yield f'data: {json.dumps({"type": "message_stop"})}\n\n'
                        break
                except json.JSONDecodeError:
                    continue
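
For orientation, a minimal TypeScript sketch of what a consumer of these Anthropic-style SSE frames might look like (hypothetical helper; it assumes exactly the message_start / content_block_delta / message_stop framing emitted by `atomic_chat_stream` above):

// Collects text deltas from the SSE frames yielded above (sketch only).
async function collectText(frames: AsyncIterable<string>): Promise<string> {
  let text = ''
  for await (const frame of frames) {
    // Event-name lines ("event: ...") carry no payload; skip them.
    if (!frame.startsWith('data: ')) continue
    const event = JSON.parse(frame.slice('data: '.length))
    if (event.type === 'content_block_delta' && event.delta?.type === 'text_delta') {
      text += event.delta.text
    }
    if (event.type === 'message_stop') break
  }
  return text
}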
@@ -21,6 +21,7 @@
     "dev:gemini": "bun run scripts/provider-launch.ts gemini",
     "dev:ollama": "bun run scripts/provider-launch.ts ollama",
     "dev:ollama:fast": "bun run scripts/provider-launch.ts ollama --fast --bare",
+    "dev:atomic-chat": "bun run scripts/provider-launch.ts atomic-chat",
     "profile:init": "bun run scripts/provider-bootstrap.ts",
     "profile:recommend": "bun run scripts/provider-recommend.ts",
     "profile:auto": "bun run scripts/provider-recommend.ts --apply",
@@ -30,7 +31,7 @@
     "dev:fast": "bun run profile:fast && bun run dev:ollama:fast",
     "dev:code": "bun run profile:code && bun run dev:profile",
     "start": "node dist/cli.mjs",
-    "test:provider-recommendation": "node --test --experimental-strip-types src/utils/providerRecommendation.test.ts src/utils/providerProfile.test.ts",
+    "test:provider-recommendation": "bun test src/utils/providerRecommendation.test.ts src/utils/providerProfile.test.ts",
     "typecheck": "tsc --noEmit",
     "smoke": "bun run build && node dist/cli.mjs --version",
     "test:provider": "bun test src/services/api/*.test.ts src/utils/context.test.ts",
@@ -8,6 +8,7 @@ import {
   recommendOllamaModel,
 } from '../src/utils/providerRecommendation.ts'
 import {
+  buildAtomicChatProfileEnv,
   buildCodexProfileEnv,
   buildGeminiProfileEnv,
   buildOllamaProfileEnv,
@@ -19,8 +20,11 @@ import {
   type ProviderProfile,
 } from '../src/utils/providerProfile.ts'
 import {
+  getAtomicChatChatBaseUrl,
   getOllamaChatBaseUrl,
+  hasLocalAtomicChat,
   hasLocalOllama,
+  listAtomicChatModels,
   listOllamaModels,
 } from './provider-discovery.ts'
@@ -33,7 +37,7 @@ function parseArg(name: string): string | null {

 function parseProviderArg(): ProviderProfile | 'auto' {
   const p = parseArg('--provider')?.toLowerCase()
-  if (p === 'openai' || p === 'ollama' || p === 'codex' || p === 'gemini') return p
+  if (p === 'openai' || p === 'ollama' || p === 'codex' || p === 'gemini' || p === 'atomic-chat') return p
   return 'auto'
 }
@@ -101,6 +105,21 @@ async function main(): Promise<void> {
         getOllamaChatBaseUrl,
       },
     )
+  } else if (selected === 'atomic-chat') {
+    const model = argModel || (await listAtomicChatModels(argBaseUrl || undefined))[0]
+    if (!model) {
+      if (!(await hasLocalAtomicChat(argBaseUrl || undefined))) {
+        console.error('Atomic Chat is not running (could not connect to 127.0.0.1:1337).\n Download from https://atomic.chat/ and launch the application.')
+      } else {
+        console.error('Atomic Chat is running but no model is loaded. Open Atomic Chat and download or start a model first.')
+      }
+      process.exit(1)
+    }
+
+    env = buildAtomicChatProfileEnv(model, {
+      baseUrl: argBaseUrl,
+      getAtomicChatChatBaseUrl,
+    })
   } else if (selected === 'codex') {
     const builtEnv = buildCodexProfileEnv({
       model: argModel,
@@ -1,8 +1,13 @@
 export {
   benchmarkOllamaModel,
+  DEFAULT_ATOMIC_CHAT_BASE_URL,
   DEFAULT_OLLAMA_BASE_URL,
+  getAtomicChatApiBaseUrl,
+  getAtomicChatChatBaseUrl,
   getOllamaApiBaseUrl,
   getOllamaChatBaseUrl,
+  hasLocalAtomicChat,
   hasLocalOllama,
+  listAtomicChatModels,
   listOllamaModels,
 } from '../src/utils/providerDiscovery.ts'
@@ -15,8 +15,11 @@ import {
   type ProviderProfile,
 } from '../src/utils/providerProfile.ts'
 import {
+  getAtomicChatChatBaseUrl,
   getOllamaChatBaseUrl,
+  hasLocalAtomicChat,
   hasLocalOllama,
+  listAtomicChatModels,
   listOllamaModels,
 } from './provider-discovery.ts'
@@ -47,7 +50,7 @@ function parseLaunchOptions(argv: string[]): LaunchOptions {
       continue
     }

-    if ((lower === 'auto' || lower === 'openai' || lower === 'ollama' || lower === 'codex' || lower === 'gemini') && requestedProfile === 'auto') {
+    if ((lower === 'auto' || lower === 'openai' || lower === 'ollama' || lower === 'codex' || lower === 'gemini' || lower === 'atomic-chat') && requestedProfile === 'auto') {
       requestedProfile = lower as ProviderProfile | 'auto'
       continue
     }
@@ -85,6 +88,11 @@ async function resolveOllamaDefaultModel(
   return recommended?.name ?? null
 }

+async function resolveAtomicChatDefaultModel(): Promise<string | null> {
+  const models = await listAtomicChatModels()
+  return models[0] ?? null
+}
+
 function runCommand(command: string, env: NodeJS.ProcessEnv): Promise<number> {
   return runProcess(command, [], env)
 }
@@ -121,6 +129,10 @@ function printSummary(profile: ProviderProfile, env: NodeJS.ProcessEnv): void {
     console.log(`OPENAI_BASE_URL=${env.OPENAI_BASE_URL}`)
     console.log(`OPENAI_MODEL=${env.OPENAI_MODEL}`)
     console.log(`CODEX_API_KEY_SET=${Boolean(resolveCodexApiCredentials(env).apiKey)}`)
+  } else if (profile === 'atomic-chat') {
+    console.log(`OPENAI_BASE_URL=${env.OPENAI_BASE_URL}`)
+    console.log(`OPENAI_MODEL=${env.OPENAI_MODEL}`)
+    console.log('OPENAI_API_KEY_SET=false (local provider, no key required)')
   } else {
     console.log(`OPENAI_BASE_URL=${env.OPENAI_BASE_URL}`)
     console.log(`OPENAI_MODEL=${env.OPENAI_MODEL}`)
@@ -132,7 +144,7 @@ async function main(): Promise<void> {
   const options = parseLaunchOptions(process.argv.slice(2))
   const requestedProfile = options.requestedProfile
   if (!requestedProfile) {
-    console.error('Usage: bun run scripts/provider-launch.ts [openai|ollama|codex|gemini|auto] [--fast] [--goal <latency|balanced|coding>] [-- <cli args>]')
+    console.error('Usage: bun run scripts/provider-launch.ts [openai|ollama|codex|gemini|atomic-chat|auto] [--fast] [--goal <latency|balanced|coding>] [-- <cli args>]')
     process.exit(1)
   }
@@ -164,12 +176,30 @@ async function main(): Promise<void> {
     }
   }

+  let resolvedAtomicChatModel: string | null = null
+  if (
+    profile === 'atomic-chat' &&
+    (persisted?.profile !== 'atomic-chat' || !persisted?.env?.OPENAI_MODEL)
+  ) {
+    if (!(await hasLocalAtomicChat())) {
+      console.error('Atomic Chat is not running (could not connect to 127.0.0.1:1337).\n Download from https://atomic.chat/ and launch the application.')
+      process.exit(1)
+    }
+    resolvedAtomicChatModel = await resolveAtomicChatDefaultModel()
+    if (!resolvedAtomicChatModel) {
+      console.error('Atomic Chat is running but no model is loaded. Open Atomic Chat and download or start a model first.')
+      process.exit(1)
+    }
+  }
+
   const env = await buildLaunchEnv({
     profile,
     persisted,
     goal: options.goal,
     getOllamaChatBaseUrl,
     resolveOllamaDefaultModel: async () => resolvedOllamaModel || 'llama3.1:8b',
+    getAtomicChatChatBaseUrl,
+    resolveAtomicChatDefaultModel: async () => resolvedAtomicChatModel,
   })
   if (options.fast) {
     applyFastFlags(env)
@@ -93,11 +93,15 @@ function isLocalBaseUrl(baseUrl: string): boolean {
 }

 const GEMINI_DEFAULT_BASE_URL = 'https://generativelanguage.googleapis.com/v1beta/openai'
+const GITHUB_MODELS_DEFAULT_BASE = 'https://models.github.ai/inference'

 function currentBaseUrl(): string {
   if (isTruthy(process.env.CLAUDE_CODE_USE_GEMINI)) {
     return process.env.GEMINI_BASE_URL ?? GEMINI_DEFAULT_BASE_URL
   }
+  if (isTruthy(process.env.CLAUDE_CODE_USE_GITHUB)) {
+    return process.env.OPENAI_BASE_URL ?? GITHUB_MODELS_DEFAULT_BASE
+  }
   return process.env.OPENAI_BASE_URL ?? 'https://api.openai.com/v1'
 }
@@ -126,15 +130,47 @@ function checkGeminiEnv(): CheckResult[] {
   return results
 }

+function checkGithubEnv(): CheckResult[] {
+  const results: CheckResult[] = []
+  const baseUrl = process.env.OPENAI_BASE_URL ?? GITHUB_MODELS_DEFAULT_BASE
+  results.push(pass('Provider mode', 'GitHub Models provider enabled.'))
+
+  const token = process.env.GITHUB_TOKEN ?? process.env.GH_TOKEN
+  if (!token?.trim()) {
+    results.push(fail('GITHUB_TOKEN', 'Missing. Set GITHUB_TOKEN or GH_TOKEN.'))
+  } else {
+    results.push(pass('GITHUB_TOKEN', 'Configured.'))
+  }
+
+  if (!process.env.OPENAI_MODEL) {
+    results.push(
+      pass(
+        'OPENAI_MODEL',
+        'Not set. Default github:copilot → openai/gpt-4.1 at runtime.',
+      ),
+    )
+  } else {
+    results.push(pass('OPENAI_MODEL', process.env.OPENAI_MODEL))
+  }
+
+  results.push(pass('OPENAI_BASE_URL', baseUrl))
+  return results
+}
+
 function checkOpenAIEnv(): CheckResult[] {
   const results: CheckResult[] = []
   const useGemini = isTruthy(process.env.CLAUDE_CODE_USE_GEMINI)
+  const useGithub = isTruthy(process.env.CLAUDE_CODE_USE_GITHUB)
   const useOpenAI = isTruthy(process.env.CLAUDE_CODE_USE_OPENAI)

   if (useGemini) {
     return checkGeminiEnv()
   }

+  if (useGithub && !useOpenAI) {
+    return checkGithubEnv()
+  }
+
   if (!useOpenAI) {
     results.push(pass('Provider mode', 'Anthropic login flow enabled (CLAUDE_CODE_USE_OPENAI is off).'))
     return results
@@ -181,12 +217,21 @@ function checkOpenAIEnv(): CheckResult[] {
   }

   const key = process.env.OPENAI_API_KEY
+  const githubToken = process.env.GITHUB_TOKEN ?? process.env.GH_TOKEN
   if (key === 'SUA_CHAVE') {
     results.push(fail('OPENAI_API_KEY', 'Placeholder value detected: SUA_CHAVE.'))
-  } else if (!key && !isLocalBaseUrl(request.baseUrl)) {
+  } else if (
+    !key &&
+    !isLocalBaseUrl(request.baseUrl) &&
+    !(useGithub && githubToken?.trim())
+  ) {
     results.push(fail('OPENAI_API_KEY', 'Missing key for non-local provider URL.'))
+  } else if (!key && useGithub && githubToken?.trim()) {
+    results.push(
+      pass('OPENAI_API_KEY', 'Not set; GITHUB_TOKEN/GH_TOKEN will be used for GitHub Models.'),
+    )
   } else if (!key) {
-    results.push(pass('OPENAI_API_KEY', 'Not set (allowed for local providers like Ollama/LM Studio).'))
+    results.push(pass('OPENAI_API_KEY', 'Not set (allowed for local providers like Atomic Chat/Ollama/LM Studio).'))
   } else {
     results.push(pass('OPENAI_API_KEY', 'Configured.'))
   }
@@ -197,11 +242,19 @@ function checkOpenAIEnv(): CheckResult[] {
 async function checkBaseUrlReachability(): Promise<CheckResult> {
   const useGemini = isTruthy(process.env.CLAUDE_CODE_USE_GEMINI)
   const useOpenAI = isTruthy(process.env.CLAUDE_CODE_USE_OPENAI)
+  const useGithub = isTruthy(process.env.CLAUDE_CODE_USE_GITHUB)

-  if (!useGemini && !useOpenAI) {
+  if (!useGemini && !useOpenAI && !useGithub) {
     return pass('Provider reachability', 'Skipped (OpenAI-compatible mode disabled).')
   }

+  if (useGithub) {
+    return pass(
+      'Provider reachability',
+      'Skipped for GitHub Models (inference endpoint differs from OpenAI /models probe).',
+    )
+  }
+
   const geminiBaseUrl = 'https://generativelanguage.googleapis.com/v1beta/openai'
   const resolvedBaseUrl = useGemini
     ? (process.env.GEMINI_BASE_URL ?? geminiBaseUrl)
@@ -271,8 +324,21 @@ async function checkBaseUrlReachability(): Promise<CheckResult> {
   }
 }

+function isAtomicChatUrl(baseUrl: string): boolean {
+  try {
+    const parsed = new URL(baseUrl)
+    return parsed.port === '1337' && isLocalBaseUrl(baseUrl)
+  } catch {
+    return false
+  }
+}
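
As a quick illustration of the heuristic (example values; assumes `isLocalBaseUrl` accepts loopback hosts, as its name suggests):

// Port 1337 on a local host is treated as Atomic Chat; anything else is not.
isAtomicChatUrl('http://127.0.0.1:1337/v1')  // true: Atomic Chat's default endpoint
isAtomicChatUrl('http://localhost:11434/v1') // false: Ollama's default port
isAtomicChatUrl('https://api.openai.com/v1') // false: not a local URL
isAtomicChatUrl('not a url')                 // false: URL constructor throws, caught above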

 function checkOllamaProcessorMode(): CheckResult {
-  if (!isTruthy(process.env.CLAUDE_CODE_USE_OPENAI) || isTruthy(process.env.CLAUDE_CODE_USE_GEMINI)) {
+  if (
+    !isTruthy(process.env.CLAUDE_CODE_USE_OPENAI) ||
+    isTruthy(process.env.CLAUDE_CODE_USE_GEMINI) ||
+    isTruthy(process.env.CLAUDE_CODE_USE_GITHUB)
+  ) {
     return pass('Ollama processor mode', 'Skipped (OpenAI-compatible mode disabled).')
   }

@@ -281,6 +347,10 @@ function checkOllamaProcessorMode(): CheckResult {
     return pass('Ollama processor mode', 'Skipped (provider URL is not local).')
   }

+  if (isAtomicChatUrl(baseUrl)) {
+    return pass('Ollama processor mode', 'Skipped (Atomic Chat local provider detected, not Ollama).')
+  }
+
   const result = spawnSync('ollama', ['ps'], {
     cwd: process.cwd(),
     encoding: 'utf8',
@@ -319,6 +389,22 @@ function serializeSafeEnvSummary(): Record<string, string | boolean> {
       GEMINI_API_KEY_SET: Boolean(process.env.GEMINI_API_KEY ?? process.env.GOOGLE_API_KEY),
     }
   }
+  if (
+    isTruthy(process.env.CLAUDE_CODE_USE_GITHUB) &&
+    !isTruthy(process.env.CLAUDE_CODE_USE_OPENAI)
+  ) {
+    return {
+      CLAUDE_CODE_USE_GITHUB: true,
+      OPENAI_MODEL:
+        process.env.OPENAI_MODEL ??
+        '(unset, default: github:copilot → openai/gpt-4.1)',
+      OPENAI_BASE_URL:
+        process.env.OPENAI_BASE_URL ?? GITHUB_MODELS_DEFAULT_BASE,
+      GITHUB_TOKEN_SET: Boolean(
+        process.env.GITHUB_TOKEN ?? process.env.GH_TOKEN,
+      ),
+    }
+  }
   const request = resolveProviderRequest({
     model: process.env.OPENAI_MODEL,
     baseUrl: process.env.OPENAI_BASE_URL,
@@ -374,6 +460,13 @@ async function main(): Promise<void> {
   const options = parseOptions(process.argv.slice(2))
   const results: CheckResult[] = []

+  const { enableConfigs } = await import('../src/utils/config.js')
+  enableConfigs()
+  const { applySafeConfigEnvironmentVariables } = await import('../src/utils/managedEnv.js')
+  applySafeConfigEnvironmentVariables()
+  const { hydrateGithubModelsTokenFromSecureStorage } = await import('../src/utils/githubModelsCredentials.js')
+  hydrateGithubModelsTokenFromSecureStorage()
+
   results.push(checkNodeVersion())
   results.push(checkBunRuntime())
   results.push(checkBuildArtifacts())
@@ -57,8 +57,8 @@ class Provider:
     @property
     def is_configured(self) -> bool:
         """True if the provider has an API key set."""
-        if self.name == "ollama":
-            return True  # Ollama needs no API key
+        if self.name in ("ollama", "atomic-chat"):
+            return True  # Local providers need no API key
         return bool(self.api_key)

     @property
@@ -93,6 +93,7 @@ def build_default_providers() -> list[Provider]:
     big = os.getenv("BIG_MODEL", "gpt-4.1")
     small = os.getenv("SMALL_MODEL", "gpt-4.1-mini")
     ollama_url = os.getenv("OLLAMA_BASE_URL", "http://localhost:11434")
+    atomic_chat_url = os.getenv("ATOMIC_CHAT_BASE_URL", "http://127.0.0.1:1337")

     return [
         Provider(
@@ -119,6 +120,14 @@ def build_default_providers() -> list[Provider]:
             big_model=big if "gemini" not in big and "gpt" not in big else "llama3:8b",
             small_model=small if "gemini" not in small and "gpt" not in small else "llama3:8b",
         ),
+        Provider(
+            name="atomic-chat",
+            ping_url=f"{atomic_chat_url}/v1/models",
+            api_key_env="",
+            cost_per_1k_tokens=0.0,  # free — local (Apple Silicon)
+            big_model=big if "gemini" not in big and "gpt" not in big else "llama3:8b",
+            small_model=small if "gemini" not in small and "gpt" not in small else "llama3:8b",
+        ),
     ]
@@ -19,6 +19,7 @@ import cost from './commands/cost/index.js'
 import diff from './commands/diff/index.js'
 import ctx_viz from './commands/ctx_viz/index.js'
 import doctor from './commands/doctor/index.js'
+import onboardGithub from './commands/onboard-github/index.js'
 import memory from './commands/memory/index.js'
 import help from './commands/help/index.js'
 import ide from './commands/ide/index.js'
@@ -289,6 +290,7 @@ const COMMANDS = memoize((): Command[] => [
   memory,
   mobile,
   model,
+  onboardGithub,
   outputStyle,
   remoteEnv,
   plugin,
src/commands/onboard-github/index.ts (new file, 11 lines)
@@ -0,0 +1,11 @@
import type { Command } from '../../commands.js'

const onboardGithub: Command = {
  name: 'onboard-github',
  description:
    'Interactive setup for GitHub Models: device login or PAT, saved to secure storage',
  type: 'local-jsx',
  load: () => import('./onboard-github.js'),
}

export default onboardGithub
src/commands/onboard-github/onboard-github.tsx (new file, 237 lines)
@@ -0,0 +1,237 @@
import * as React from 'react'
import { useCallback, useState } from 'react'
import { Select } from '../../components/CustomSelect/select.js'
import { Spinner } from '../../components/Spinner.js'
import TextInput from '../../components/TextInput.js'
import { Box, Text } from '../../ink.js'
import {
  openVerificationUri,
  pollAccessToken,
  requestDeviceCode,
} from '../../services/github/deviceFlow.js'
import type { LocalJSXCommandCall } from '../../types/command.js'
import {
  hydrateGithubModelsTokenFromSecureStorage,
  saveGithubModelsToken,
} from '../../utils/githubModelsCredentials.js'
import { updateSettingsForSource } from '../../utils/settings/settings.js'

const DEFAULT_MODEL = 'github:copilot'

type Step =
  | 'menu'
  | 'device-busy'
  | 'pat'
  | 'error'

function mergeUserSettingsEnv(model: string): { ok: boolean; detail?: string } {
  const { error } = updateSettingsForSource('userSettings', {
    env: {
      CLAUDE_CODE_USE_GITHUB: '1',
      OPENAI_MODEL: model,
      CLAUDE_CODE_USE_OPENAI: undefined as any,
      CLAUDE_CODE_USE_GEMINI: undefined as any,
      CLAUDE_CODE_USE_BEDROCK: undefined as any,
      CLAUDE_CODE_USE_VERTEX: undefined as any,
      CLAUDE_CODE_USE_FOUNDRY: undefined as any,
    },
  })
  if (error) {
    return { ok: false, detail: error.message }
  }
  return { ok: true }
}

function OnboardGithub(props: {
  onDone: Parameters<LocalJSXCommandCall>[0]
  onChangeAPIKey: () => void
}): React.ReactNode {
  const { onDone, onChangeAPIKey } = props
  const [step, setStep] = useState<Step>('menu')
  const [errorMsg, setErrorMsg] = useState<string | null>(null)
  const [deviceHint, setDeviceHint] = useState<{
    user_code: string
    verification_uri: string
  } | null>(null)
  const [patDraft, setPatDraft] = useState('')
  const [cursorOffset, setCursorOffset] = useState(0)

  const finalize = useCallback(
    async (token: string, model: string = DEFAULT_MODEL) => {
      const saved = saveGithubModelsToken(token)
      if (!saved.success) {
        setErrorMsg(saved.warning ?? 'Could not save token to secure storage.')
        setStep('error')
        return
      }
      const merged = mergeUserSettingsEnv(model.trim() || DEFAULT_MODEL)
      if (!merged.ok) {
        setErrorMsg(
          `Token saved, but settings were not updated: ${merged.detail ?? 'unknown error'}. ` +
            `Add env CLAUDE_CODE_USE_GITHUB=1 and OPENAI_MODEL to ~/.claude/settings.json manually.`,
        )
        setStep('error')
        return
      }
      process.env.CLAUDE_CODE_USE_GITHUB = '1'
      process.env.OPENAI_MODEL = model.trim() || DEFAULT_MODEL
      hydrateGithubModelsTokenFromSecureStorage()
      onChangeAPIKey()
      onDone(
        'GitHub Models onboard complete. Token stored in secure storage; user settings updated. Restart if the model does not switch.',
        { display: 'user' },
      )
    },
    [onChangeAPIKey, onDone],
  )

  const runDeviceFlow = useCallback(async () => {
    setStep('device-busy')
    setErrorMsg(null)
    setDeviceHint(null)
    try {
      const device = await requestDeviceCode()
      setDeviceHint({
        user_code: device.user_code,
        verification_uri: device.verification_uri,
      })
      await openVerificationUri(device.verification_uri)
      const token = await pollAccessToken(device.device_code, {
        initialInterval: device.interval,
        timeoutSeconds: device.expires_in,
      })
      await finalize(token, DEFAULT_MODEL)
    } catch (e) {
      setErrorMsg(e instanceof Error ? e.message : String(e))
      setStep('error')
    }
  }, [finalize])

  if (step === 'error' && errorMsg) {
    const options = [
      {
        label: 'Back to menu',
        value: 'back' as const,
      },
      {
        label: 'Exit',
        value: 'exit' as const,
      },
    ]
    return (
      <Box flexDirection="column" gap={1}>
        <Text color="red">{errorMsg}</Text>
        <Select
          options={options}
          onChange={(v: string) => {
            if (v === 'back') {
              setStep('menu')
              setErrorMsg(null)
            } else {
              onDone('GitHub onboard cancelled', { display: 'system' })
            }
          }}
        />
      </Box>
    )
  }

  if (step === 'device-busy') {
    return (
      <Box flexDirection="column" gap={1}>
        <Text>GitHub device login</Text>
        {deviceHint ? (
          <>
            <Text>
              Enter code <Text bold>{deviceHint.user_code}</Text> at{' '}
              {deviceHint.verification_uri}
            </Text>
            <Text dimColor>
              A browser window may have opened. Waiting for authorization…
            </Text>
          </>
        ) : (
          <Text dimColor>Requesting device code from GitHub…</Text>
        )}
        <Spinner />
      </Box>
    )
  }

  if (step === 'pat') {
    return (
      <Box flexDirection="column" gap={1}>
        <Text>Paste a GitHub personal access token with access to GitHub Models.</Text>
        <Text dimColor>Input is masked. Enter to submit; Esc to go back.</Text>
        <TextInput
          value={patDraft}
          mask="*"
          onChange={setPatDraft}
          onSubmit={async (value: string) => {
            const t = value.trim()
            if (!t) {
              return
            }
            await finalize(t, DEFAULT_MODEL)
          }}
          onExit={() => {
            setStep('menu')
            setPatDraft('')
          }}
          columns={80}
          cursorOffset={cursorOffset}
          onChangeCursorOffset={setCursorOffset}
        />
      </Box>
    )
  }

  const menuOptions = [
    {
      label: 'Sign in with browser (device code)',
      value: 'device' as const,
    },
    {
      label: 'Paste personal access token',
      value: 'pat' as const,
    },
    {
      label: 'Cancel',
      value: 'cancel' as const,
    },
  ]

  return (
    <Box flexDirection="column" gap={1}>
      <Text bold>GitHub Models setup</Text>
      <Text dimColor>
        Stores your token in the OS credential store (macOS Keychain when available)
        and enables CLAUDE_CODE_USE_GITHUB in your user settings — no export
        GITHUB_TOKEN needed for future runs.
      </Text>
      <Select
        options={menuOptions}
        onChange={(v: string) => {
          if (v === 'cancel') {
            onDone('GitHub onboard cancelled', { display: 'system' })
            return
          }
          if (v === 'pat') {
            setStep('pat')
            return
          }
          void runDeviceFlow()
        }}
      />
    </Box>
  )
}

export const call: LocalJSXCommandCall = async (onDone, context) => {
  return (
    <OnboardGithub
      onDone={onDone}
      onChangeAPIKey={context.onChangeAPIKey}
    />
  )
}
@@ -1,50 +1,53 @@
-import { c as _c } from "react-compiler-runtime";
-import React from 'react';
-import { Box, Link, Text } from '../ink.js';
-import { Select } from './CustomSelect/index.js';
-import { Dialog } from './design-system/Dialog.js';
-
-type Props = {
-  onDone: () => void;
-};
-
-export function CostThresholdDialog(t0) {
-  const $ = _c(7);
-  const {
-    onDone
-  } = t0;
-  let t1;
-  if ($[0] === Symbol.for("react.memo_cache_sentinel")) {
-    t1 = <Box flexDirection="column"><Text>Learn more about how to monitor your spending:</Text><Link url="https://code.claude.com/docs/en/costs" /></Box>;
-    $[0] = t1;
-  } else {
-    t1 = $[0];
-  }
-  let t2;
-  if ($[1] === Symbol.for("react.memo_cache_sentinel")) {
-    t2 = [{
-      value: "ok",
-      label: "Got it, thanks!"
-    }];
-    $[1] = t2;
-  } else {
-    t2 = $[1];
-  }
-  let t3;
-  if ($[2] !== onDone) {
-    t3 = <Select options={t2} onChange={onDone} />;
-    $[2] = onDone;
-    $[3] = t3;
-  } else {
-    t3 = $[3];
-  }
-  let t4;
-  if ($[4] !== onDone || $[5] !== t3) {
-    t4 = <Dialog title="You've spent $5 on the Anthropic API this session." onCancel={onDone}>{t1}{t3}</Dialog>;
-    $[4] = onDone;
-    $[5] = t3;
-    $[6] = t4;
-  } else {
-    t4 = $[6];
-  }
-  return t4;
-}
+import React from 'react'
+import { Box, Link, Text } from '../ink.js'
+import { Select } from './CustomSelect/index.js'
+import { Dialog } from './design-system/Dialog.js'
+import { getAPIProvider } from '../utils/model/providers.js'
+
+type Props = {
+  onDone: () => void
+}
+
+function getProviderLabel(): string {
+  const provider = getAPIProvider()
+  switch (provider) {
+    case 'firstParty':
+      return 'Anthropic API'
+    case 'bedrock':
+      return 'AWS Bedrock'
+    case 'vertex':
+      return 'Google Vertex'
+    case 'foundry':
+      return 'Azure Foundry'
+    case 'openai':
+      return 'OpenAI-compatible API'
+    case 'gemini':
+      return 'Gemini API'
+    default:
+      return 'API'
+  }
+}
+
+export function CostThresholdDialog({ onDone }: Props): React.ReactNode {
+  const providerLabel = getProviderLabel()
+  return (
+    <Dialog
+      title={`You've spent $5 on the ${providerLabel} this session.`}
+      onCancel={onDone}
+    >
+      <Box flexDirection="column">
+        <Text>Learn more about how to monitor your spending:</Text>
+        <Link url="https://code.claude.com/docs/en/costs" />
+      </Box>
+      <Select
+        options={[
+          {
+            value: 'ok',
+            label: 'Got it, thanks!',
+          },
+        ]}
+        onChange={onDone}
+      />
+    </Dialog>
+  )
+}
@@ -80,6 +80,7 @@ const LOGO_CLAUDE = [

 function detectProvider(): { name: string; model: string; baseUrl: string; isLocal: boolean } {
   const useGemini = process.env.CLAUDE_CODE_USE_GEMINI === '1' || process.env.CLAUDE_CODE_USE_GEMINI === 'true'
+  const useGithub = process.env.CLAUDE_CODE_USE_GITHUB === '1' || process.env.CLAUDE_CODE_USE_GITHUB === 'true'
   const useOpenAI = process.env.CLAUDE_CODE_USE_OPENAI === '1' || process.env.CLAUDE_CODE_USE_OPENAI === 'true'

   if (useGemini) {
@@ -88,6 +89,13 @@ function detectProvider(): { name: string; model: string; baseUrl: string; isLoc
     return { name: 'Google Gemini', model, baseUrl, isLocal: false }
   }

+  if (useGithub) {
+    const model = process.env.OPENAI_MODEL || 'github:copilot'
+    const baseUrl =
+      process.env.OPENAI_BASE_URL || 'https://models.github.ai/inference'
+    return { name: 'GitHub Models', model, baseUrl, isLocal: false }
+  }
+
   if (useOpenAI) {
     const model = process.env.OPENAI_MODEL || 'gpt-4o'
     const baseUrl = process.env.OPENAI_BASE_URL || 'https://api.openai.com/v1'
@@ -441,3 +441,8 @@ export async function connectRemoteControl(
 ): Promise<RemoteControlHandle | null> {
   throw new Error('not implemented')
 }
+
+// Exit reason types, added to remove the error in the gracefulShutdown file
+export type ExitReason = {
+
+}
@@ -53,6 +53,9 @@ function isLocalProviderUrl(baseUrl: string | undefined): boolean {
 function getProviderValidationError(
   env: NodeJS.ProcessEnv = process.env,
 ): string | null {
+  const useOpenAI = isEnvTruthy(env.CLAUDE_CODE_USE_OPENAI)
+  const useGithub = isEnvTruthy(env.CLAUDE_CODE_USE_GITHUB)
+
   if (isEnvTruthy(env.CLAUDE_CODE_USE_GEMINI)) {
     if (!(env.GEMINI_API_KEY ?? env.GOOGLE_API_KEY)) {
       return 'GEMINI_API_KEY is required when CLAUDE_CODE_USE_GEMINI=1.'
@@ -60,7 +63,15 @@ function getProviderValidationError(
     return null
   }

-  if (!isEnvTruthy(env.CLAUDE_CODE_USE_OPENAI)) {
+  if (useGithub && !useOpenAI) {
+    const token = (env.GITHUB_TOKEN?.trim() || env.GH_TOKEN?.trim()) ?? ''
+    if (!token) {
+      return 'GITHUB_TOKEN or GH_TOKEN is required when CLAUDE_CODE_USE_GITHUB=1.'
+    }
+    return null
+  }
+
+  if (!useOpenAI) {
     return null
   }
@@ -91,6 +102,10 @@ function getProviderValidationError(
   }

   if (!env.OPENAI_API_KEY && !isLocalProviderUrl(request.baseUrl)) {
+    const hasGithubToken = !!(env.GITHUB_TOKEN?.trim() || env.GH_TOKEN?.trim())
+    if (useGithub && hasGithubToken) {
+      return null
+    }
     return 'OPENAI_API_KEY is required when CLAUDE_CODE_USE_OPENAI=1 and OPENAI_BASE_URL is not local.'
   }
@@ -121,6 +136,15 @@ async function main(): Promise<void> {
     return;
   }

+  {
+    const { enableConfigs } = await import('../utils/config.js')
+    enableConfigs()
+    const { applySafeConfigEnvironmentVariables } = await import('../utils/managedEnv.js')
+    applySafeConfigEnvironmentVariables()
+    const { hydrateGithubModelsTokenFromSecureStorage } = await import('../utils/githubModelsCredentials.js')
+    hydrateGithubModelsTokenFromSecureStorage()
+  }
+
   const startupEnv = await buildStartupEnvFromProfile({
     processEnv: process.env,
   })
@@ -2313,7 +2313,11 @@ async function run(): Promise<CommanderCommand> {
     errors
   } = getSettingsWithErrors();
   const nonMcpErrors = errors.filter(e => !e.mcpErrorMetadata);
-  if (nonMcpErrors.length > 0 && !isEnvTruthy(process.env.CLAUDE_CODE_USE_OPENAI)) {
+  if (
+    nonMcpErrors.length > 0 &&
+    !isEnvTruthy(process.env.CLAUDE_CODE_USE_OPENAI) &&
+    !isEnvTruthy(process.env.CLAUDE_CODE_USE_GITHUB)
+  ) {
     await launchInvalidSettingsDialog(root, {
       settingsErrors: nonMcpErrors,
       onExit: () => gracefulShutdownSync(1)
@@ -3,6 +3,7 @@ import {
   setMainLoopModelOverride,
 } from '../bootstrap/state.js'
 import { getGlobalConfig, saveGlobalConfig } from '../utils/config.js'
+import { getAPIProvider } from '../utils/model/providers.js'
 import {
   getSettingsForSource,
   updateSettingsForSource,
@@ -23,6 +24,10 @@ import {
  * tracked by a completion flag in global config.
  */
 export function migrateSonnet1mToSonnet45(): void {
+  if (getAPIProvider() !== 'firstParty') {
+    return
+  }
+
   const config = getGlobalConfig()
   if (config.sonnet1m45MigrationComplete) {
     return
@@ -137,7 +137,7 @@ import { generateSessionTitle } from '../utils/sessionTitle.js';
 import { BASH_INPUT_TAG, COMMAND_MESSAGE_TAG, COMMAND_NAME_TAG, LOCAL_COMMAND_STDOUT_TAG } from '../constants/xml.js';
 import { escapeXml } from '../utils/xml.js';
 import type { ThinkingConfig } from '../utils/thinking.js';
-import { gracefulShutdownSync } from '../utils/gracefulShutdown.js';
+import { gracefulShutdownSync, isShuttingDown } from '../utils/gracefulShutdown.js';
 import { handlePromptSubmit, type PromptInputHelpers } from '../utils/handlePromptSubmit.js';
 import { useQueueProcessor } from '../hooks/useQueueProcessor.js';
 import { useMailboxBridge } from '../hooks/useMailboxBridge.js';
@@ -4886,7 +4886,7 @@ export function REPL({

       {mrRender()}

-      {!toolJSX?.shouldHidePromptInput && !focusedInputDialog && !isExiting && !disabled && !cursor && <>
+      {!toolJSX?.shouldHidePromptInput && !focusedInputDialog && !isExiting && !disabled && !cursor && !isShuttingDown() && <>
         {autoRunIssueReason && <AutoRunIssueNotification onRun={handleAutoRunIssue} onCancel={handleCancelAutoRunIssue} reason={getAutoRunIssueReasonText(autoRunIssueReason)} />}
         {postCompactSurvey.state !== 'closed' ? <FeedbackSurvey state={postCompactSurvey.state} lastResponse={postCompactSurvey.lastResponse} handleSelect={postCompactSurvey.handleSelect} inputValue={inputValue} setInputValue={setInputValue} onRequestFeedback={handleSurveyRequestFeedback} /> : memorySurvey.state !== 'closed' ? <FeedbackSurvey state={memorySurvey.state} lastResponse={memorySurvey.lastResponse} handleSelect={memorySurvey.handleSelect} handleTranscriptSelect={memorySurvey.handleTranscriptSelect} inputValue={inputValue} setInputValue={setInputValue} onRequestFeedback={handleSurveyRequestFeedback} message="How well did Claude use its memory? (optional)" /> : <FeedbackSurvey state={feedbackSurvey.state} lastResponse={feedbackSurvey.lastResponse} handleSelect={feedbackSurvey.handleSelect} handleTranscriptSelect={feedbackSurvey.handleTranscriptSelect} inputValue={inputValue} setInputValue={setInputValue} onRequestFeedback={didAutoRunIssueRef.current ? undefined : handleSurveyRequestFeedback} />}
         {/* Frustration-triggered transcript sharing prompt */}
@@ -154,7 +154,10 @@ export async function getAnthropicClient({
       fetch: resolvedFetch,
     }),
   }
-  if (isEnvTruthy(process.env.CLAUDE_CODE_USE_OPENAI)) {
+  if (
+    isEnvTruthy(process.env.CLAUDE_CODE_USE_OPENAI) ||
+    isEnvTruthy(process.env.CLAUDE_CODE_USE_GITHUB)
+  ) {
     const { createOpenAIShimClient } = await import('./openaiShim.js')
     return createOpenAIShimClient({
       defaultHeaders,
@@ -14,8 +14,15 @@
  * OPENAI_BASE_URL=http://... — base URL (default: https://api.openai.com/v1)
  * OPENAI_MODEL=gpt-4o — default model override
  * CODEX_API_KEY / ~/.codex/auth.json — Codex auth for codexplan/codexspark
+ *
+ * GitHub Models (models.github.ai), OpenAI-compatible:
+ * CLAUDE_CODE_USE_GITHUB=1 — enable GitHub inference (no need for USE_OPENAI)
+ * GITHUB_TOKEN or GH_TOKEN — PAT with models access (mapped to Bearer auth)
+ * OPENAI_MODEL — optional; use github:copilot or openai/gpt-4.1 style IDs
  */

 import { isEnvTruthy } from '../../utils/envUtils.js'
+import { hydrateGithubModelsTokenFromSecureStorage } from '../../utils/githubModelsCredentials.js'
 import {
   codexStreamToAnthropic,
   collectCodexCompletedResponse,
@@ -31,6 +38,25 @@ import {
 } from './providerConfig.js'
 import { redactSecretValueForDisplay } from '../../utils/providerProfile.js'

+const GITHUB_MODELS_DEFAULT_BASE = 'https://models.github.ai/inference'
+const GITHUB_API_VERSION = '2022-11-28'
+const GITHUB_429_MAX_RETRIES = 3
+const GITHUB_429_BASE_DELAY_SEC = 1
+const GITHUB_429_MAX_DELAY_SEC = 32
+
+function isGithubModelsMode(): boolean {
+  return isEnvTruthy(process.env.CLAUDE_CODE_USE_GITHUB)
+}
+
+function formatRetryAfterHint(response: Response): string {
+  const ra = response.headers.get('retry-after')
+  return ra ? ` (Retry-After: ${ra})` : ''
+}
+
+function sleepMs(ms: number): Promise<void> {
+  return new Promise(resolve => setTimeout(resolve, ms))
+}
+
 // ---------------------------------------------------------------------------
 // Types — minimal subset of Anthropic SDK types we need to produce
 // ---------------------------------------------------------------------------
@@ -236,28 +262,66 @@ function normalizeSchemaForOpenAI(
   schema: Record<string, unknown>,
   strict = true,
 ): Record<string, unknown> {
-  if (schema.type !== 'object' || !schema.properties) return schema
-  const properties = schema.properties as Record<string, unknown>
-  const existingRequired = Array.isArray(schema.required) ? schema.required as string[] : []
-  // OpenAI strict mode requires every property to be listed in required[].
-  // Gemini rejects schemas where required[] contains keys absent from properties,
-  // so only promote keys that actually exist in properties.
-  if (strict) {
-    const allKeys = Object.keys(properties)
-    const required = Array.from(new Set([...existingRequired, ...allKeys]))
-    return { ...schema, required }
-  }
-  // For Gemini: keep only existing required keys that are present in properties
-  const required = existingRequired.filter(k => k in properties)
-  return { ...schema, required }
+  if (!schema || typeof schema !== 'object' || Array.isArray(schema)) {
+    return (schema ?? {}) as Record<string, unknown>
+  }
+
+  const record = { ...schema }
+
+  if (record.type === 'object' && record.properties) {
+    const properties = record.properties as Record<string, Record<string, unknown>>
+    const existingRequired = Array.isArray(record.required) ? record.required as string[] : []
+
+    // Recurse into each property
+    const normalizedProps: Record<string, unknown> = {}
+    for (const [key, value] of Object.entries(properties)) {
+      normalizedProps[key] = normalizeSchemaForOpenAI(
+        value as Record<string, unknown>,
+        strict,
+      )
+    }
+    record.properties = normalizedProps
+
+    if (strict) {
+      // OpenAI strict mode requires every property to be listed in required[]
+      const allKeys = Object.keys(normalizedProps)
+      record.required = Array.from(new Set([...existingRequired, ...allKeys]))
+      // OpenAI strict mode requires additionalProperties: false on all object
+      // schemas — override unconditionally to ensure nested objects comply.
+      record.additionalProperties = false
+    } else {
+      // For Gemini: keep only existing required keys that are present in properties
+      record.required = existingRequired.filter(k => k in normalizedProps)
+    }
+  }

+  // Recurse into array items
+  if ('items' in record) {
+    if (Array.isArray(record.items)) {
+      record.items = (record.items as unknown[]).map(
+        item => normalizeSchemaForOpenAI(item as Record<string, unknown>, strict),
+      )
+    } else {
+      record.items = normalizeSchemaForOpenAI(record.items as Record<string, unknown>, strict)
+    }
+  }
+
+  // Recurse into combinators
+  for (const key of ['anyOf', 'oneOf', 'allOf'] as const) {
+    if (key in record && Array.isArray(record[key])) {
+      record[key] = (record[key] as unknown[]).map(
+        item => normalizeSchemaForOpenAI(item as Record<string, unknown>, strict),
+      )
+    }
+  }
+
+  return record
 }
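
To make the strict-mode pass concrete, here is a small worked example (a sketch; the input schema is invented for illustration):

// A nested tool schema with one optional property at each level:
const input = {
  type: 'object',
  properties: {
    path: { type: 'string' },
    opts: { type: 'object', properties: { recursive: { type: 'boolean' } } },
  },
  required: ['path'],
}

normalizeSchemaForOpenAI(input, true)
// => every object level now lists all of its properties in required[]
//    and carries additionalProperties: false:
// {
//   type: 'object',
//   properties: {
//     path: { type: 'string' },
//     opts: {
//       type: 'object',
//       properties: { recursive: { type: 'boolean' } },
//       required: ['recursive'],
//       additionalProperties: false,
//     },
//   },
//   required: ['path', 'opts'],
//   additionalProperties: false,
// }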

 function convertTools(
   tools: Array<{ name: string; description?: string; input_schema?: Record<string, unknown> }>,
 ): OpenAITool[] {
-  const isGemini =
-    process.env.CLAUDE_CODE_USE_GEMINI === '1' ||
-    process.env.CLAUDE_CODE_USE_GEMINI === 'true'
+  const isGemini = isEnvTruthy(process.env.CLAUDE_CODE_USE_GEMINI)

   return tools
     .filter(t => t.name !== 'ToolSearchTool') // Not relevant for OpenAI
@@ -343,7 +407,7 @@ async function* openaiStreamToAnthropic(
 ): AsyncGenerator<AnthropicStreamEvent> {
   const messageId = makeMessageId()
   let contentBlockIndex = 0
-  const activeToolCalls = new Map<number, { id: string; name: string; index: number }>()
+  const activeToolCalls = new Map<number, { id: string; name: string; index: number; jsonBuffer: string }>()
   let hasEmittedContentStart = false
   let lastStopReason: 'tool_use' | 'max_tokens' | 'end_turn' | null = null
   let hasEmittedFinalUsage = false
@@ -375,6 +439,7 @@
   const decoder = new TextDecoder()
   let buffer = ''

+  try {
   while (true) {
     const { done, value } = await reader.read()
     if (done) break
@@ -437,6 +502,7 @@
           id: tc.id,
           name: tc.function.name,
           index: toolBlockIndex,
+          jsonBuffer: tc.function.arguments ?? '',
         })

         yield {
@@ -467,6 +533,9 @@
         // Continuation of existing tool call
         const active = activeToolCalls.get(tc.index)
         if (active) {
+          if (tc.function.arguments) {
+            active.jsonBuffer += tc.function.arguments
+          }
           yield {
             type: 'content_block_delta',
             index: active.index,
@@ -494,6 +563,36 @@
       }
       // Close active tool calls
       for (const [, tc] of activeToolCalls) {
+        let suffixToAdd = ''
+        if (tc.jsonBuffer) {
+          try {
+            JSON.parse(tc.jsonBuffer)
+          } catch {
+            const str = tc.jsonBuffer.trimEnd()
+            const combinations = [
+              '}', '"}', ']}', '"]}', '}}', '"}}', ']}}', '"]}}', '"]}]}', '}]}'
+            ]
+            for (const combo of combinations) {
+              try {
+                JSON.parse(str + combo)
+                suffixToAdd = combo
+                break
+              } catch {}
+            }
+          }
+        }
+
+        if (suffixToAdd) {
+          yield {
+            type: 'content_block_delta',
+            index: tc.index,
+            delta: {
+              type: 'input_json_delta',
+              partial_json: suffixToAdd,
+            },
+          }
+        }
+
         yield { type: 'content_block_stop', index: tc.index }
       }
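
A concrete trace of the suffix search above (illustrative buffer contents):

// A stream cut off mid-arguments can leave an unterminated JSON buffer:
const truncated = '{"command": "ls", "args": ["-la"'
// JSON.parse(truncated) throws, so each candidate suffix is tried in order:
//   '}'  -> '{"command": "ls", "args": ["-la"}'  still invalid
//   '"}' -> '{"command": "ls", "args": ["-la""}' still invalid
//   ']}' -> '{"command": "ls", "args": ["-la"]}' parses, so suffixToAdd = ']}'
// The winning suffix is emitted as one final input_json_delta so the
// accumulated tool input is valid JSON by the time content_block_stop fires.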
@@ -530,6 +629,9 @@
         }
       }
     }
+  } finally {
+    reader.releaseLock()
+  }

   yield { type: 'message_stop' }
 }
@@ -680,6 +782,12 @@ class OpenAIShimMessages {
|
||||
body.stream_options = { include_usage: true }
|
||||
}
|
||||
|
||||
const isGithub = isGithubModelsMode()
|
||||
if (isGithub && body.max_completion_tokens !== undefined) {
|
||||
body.max_tokens = body.max_completion_tokens
|
||||
delete body.max_completion_tokens
|
||||
}
|
||||
|
||||
if (params.temperature !== undefined) body.temperature = params.temperature
|
||||
if (params.top_p !== undefined) body.top_p = params.top_p
|
||||
|
||||
@@ -729,6 +837,11 @@ class OpenAIShimMessages {
|
||||
}
|
||||
}
|
||||
|
||||
if (isGithub) {
|
||||
headers.Accept = 'application/vnd.github.v3+json'
|
||||
headers['X-GitHub-Api-Version'] = GITHUB_API_VERSION
|
||||
}
|
||||
|
||||
// Build the chat completions URL
|
||||
// Azure Cognitive Services / Azure OpenAI require a deployment-specific path
|
||||
// and an api-version query parameter.
|
||||
@@ -751,20 +864,43 @@ class OpenAIShimMessages {
|
||||
chatCompletionsUrl = `${request.baseUrl}/chat/completions`
|
||||
}
|
||||
|
||||
const response = await fetch(chatCompletionsUrl, {
|
||||
method: 'POST',
|
||||
const fetchInit = {
|
||||
method: 'POST' as const,
|
||||
headers,
|
||||
body: JSON.stringify(body),
|
||||
signal: options?.signal,
|
||||
})
|
||||
|
||||
if (!response.ok) {
|
||||
const errorBody = await response.text().catch(() => 'unknown error')
|
||||
throw new Error(`OpenAI API error ${response.status}: ${errorBody}`)
|
||||
}
|
||||
|
||||
const maxAttempts = isGithub ? GITHUB_429_MAX_RETRIES : 1
|
||||
let response: Response | undefined
|
||||
for (let attempt = 0; attempt < maxAttempts; attempt++) {
|
||||
response = await fetch(chatCompletionsUrl, fetchInit)
|
||||
if (response.ok) {
|
||||
return response
|
||||
}
|
||||
if (
|
||||
isGithub &&
|
||||
response.status === 429 &&
|
||||
attempt < maxAttempts - 1
|
||||
) {
|
||||
await response.text().catch(() => {})
|
||||
const delaySec = Math.min(
|
||||
GITHUB_429_BASE_DELAY_SEC * 2 ** attempt,
|
||||
GITHUB_429_MAX_DELAY_SEC,
|
||||
)
|
||||
await sleepMs(delaySec * 1000)
|
||||
continue
|
||||
}
|
||||
const errorBody = await response.text().catch(() => 'unknown error')
|
||||
const rateHint =
|
||||
isGithub && response.status === 429 ? formatRetryAfterHint(response) : ''
|
||||
throw new Error(
|
||||
`OpenAI API error ${response.status}: ${errorBody}${rateHint}`,
|
||||
)
|
||||
}
|
||||
|
||||
throw new Error('OpenAI shim: request loop exited unexpectedly')
|
||||
}
|
||||
|
||||
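On GitHub Models a 429 now retries with exponential backoff before surfacing an error: the delay doubles per attempt and is clamped to a ceiling. A sketch of the schedule, assuming illustrative values for the `GITHUB_429_*` constants (their real values are defined elsewhere in the shim and may differ):

```ts
// Assumed values; the shim's actual GITHUB_429_* constants may differ.
const BASE_DELAY_SEC = 1
const MAX_DELAY_SEC = 30

// attempt 0 -> 1s, 1 -> 2s, 2 -> 4s, 3 -> 8s, ... capped at MAX_DELAY_SEC
function backoffDelaySec(attempt: number): number {
  return Math.min(BASE_DELAY_SEC * 2 ** attempt, MAX_DELAY_SEC)
}
```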
  private _convertNonStreamingResponse(
    data: {
@@ -773,7 +909,10 @@ class OpenAIShimMessages {
      choices?: Array<{
        message?: {
          role?: string
          content?: string | null
          content?:
            | string
            | null
            | Array<{ type?: string; text?: string }>
          tool_calls?: Array<{
            id: string
            function: { name: string; arguments: string }
@@ -792,8 +931,25 @@ class OpenAIShimMessages {
    const choice = data.choices?.[0]
    const content: Array<Record<string, unknown>> = []

    if (choice?.message?.content) {
      content.push({ type: 'text', text: choice.message.content })
    const rawContent = choice?.message?.content
    if (typeof rawContent === 'string' && rawContent) {
      content.push({ type: 'text', text: rawContent })
    } else if (Array.isArray(rawContent) && rawContent.length > 0) {
      const parts: string[] = []
      for (const part of rawContent) {
        if (
          part &&
          typeof part === 'object' &&
          part.type === 'text' &&
          typeof part.text === 'string'
        ) {
          parts.push(part.text)
        }
      }
      const joined = parts.join('\n')
      if (joined) {
        content.push({ type: 'text', text: joined })
      }
    }

    if (choice?.message?.tool_calls) {
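Some OpenAI-compatible servers return `message.content` as an array of typed parts rather than a plain string; the branch above flattens the text parts into one block. A standalone sketch of the same normalization (the helper name is illustrative):

```ts
type ContentPart = { type?: string; text?: string }

// Flatten string-or-parts content into a single text payload, mirroring the
// branch above. Non-text parts are dropped.
function flattenContent(raw: string | null | undefined | ContentPart[]): string {
  if (typeof raw === 'string') return raw
  if (!Array.isArray(raw)) return ''
  return raw
    .filter(p => p && typeof p === 'object' && p.type === 'text' && typeof p.text === 'string')
    .map(p => p.text as string)
    .join('\n')
}

// flattenContent([{ type: 'text', text: 'a' }, { type: 'text', text: 'b' }]) === 'a\nb'
```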
@@ -852,12 +1008,11 @@ export function createOpenAIShimClient(options: {
  maxRetries?: number
  timeout?: number
}): unknown {
  hydrateGithubModelsTokenFromSecureStorage()

  // When Gemini provider is active, map Gemini env vars to OpenAI-compatible ones
  // so the existing providerConfig.ts infrastructure picks them up correctly.
  if (
    process.env.CLAUDE_CODE_USE_GEMINI === '1' ||
    process.env.CLAUDE_CODE_USE_GEMINI === 'true'
  ) {
  if (isEnvTruthy(process.env.CLAUDE_CODE_USE_GEMINI)) {
    process.env.OPENAI_BASE_URL ??=
      process.env.GEMINI_BASE_URL ??
      'https://generativelanguage.googleapis.com/v1beta/openai'
@@ -866,6 +1021,10 @@ export function createOpenAIShimClient(options: {
    if (process.env.GEMINI_MODEL && !process.env.OPENAI_MODEL) {
      process.env.OPENAI_MODEL = process.env.GEMINI_MODEL
    }
  } else if (isEnvTruthy(process.env.CLAUDE_CODE_USE_GITHUB)) {
    process.env.OPENAI_BASE_URL ??= GITHUB_MODELS_DEFAULT_BASE
    process.env.OPENAI_API_KEY ??=
      process.env.GITHUB_TOKEN ?? process.env.GH_TOKEN ?? ''
  }

  const beta = new OpenAIShimBeta({

41 src/services/api/providerConfig.github.test.ts Normal file
@@ -0,0 +1,41 @@
import { afterEach, expect, test } from 'bun:test'

import {
  DEFAULT_GITHUB_MODELS_API_MODEL,
  normalizeGithubModelsApiModel,
  resolveProviderRequest,
} from './providerConfig.js'

const originalUseGithub = process.env.CLAUDE_CODE_USE_GITHUB

afterEach(() => {
  if (originalUseGithub === undefined) {
    delete process.env.CLAUDE_CODE_USE_GITHUB
  } else {
    process.env.CLAUDE_CODE_USE_GITHUB = originalUseGithub
  }
})

test.each([
  ['copilot', DEFAULT_GITHUB_MODELS_API_MODEL],
  ['github:copilot', DEFAULT_GITHUB_MODELS_API_MODEL],
  ['', DEFAULT_GITHUB_MODELS_API_MODEL],
  ['github:gpt-4o', 'gpt-4o'],
  ['gpt-4o', 'gpt-4o'],
  ['github:copilot?reasoning=high', DEFAULT_GITHUB_MODELS_API_MODEL],
] as const)('normalizeGithubModelsApiModel(%s) -> %s', (input, expected) => {
  expect(normalizeGithubModelsApiModel(input)).toBe(expected)
})

test('resolveProviderRequest applies GitHub normalization when CLAUDE_CODE_USE_GITHUB=1', () => {
  process.env.CLAUDE_CODE_USE_GITHUB = '1'
  const r = resolveProviderRequest({ model: 'github:gpt-4o' })
  expect(r.resolvedModel).toBe('gpt-4o')
  expect(r.transport).toBe('chat_completions')
})

test('resolveProviderRequest leaves model unchanged without GitHub flag', () => {
  delete process.env.CLAUDE_CODE_USE_GITHUB
  const r = resolveProviderRequest({ model: 'github:gpt-4o' })
  expect(r.resolvedModel).toBe('github:gpt-4o')
})
@@ -2,8 +2,12 @@ import { existsSync, readFileSync } from 'node:fs'
import { homedir } from 'node:os'
import { join } from 'node:path'

import { isEnvTruthy } from '../../utils/envUtils.js'

export const DEFAULT_OPENAI_BASE_URL = 'https://api.openai.com/v1'
export const DEFAULT_CODEX_BASE_URL = 'https://chatgpt.com/backend-api/codex'
/** Default GitHub Models API model when user selects copilot / github:copilot */
export const DEFAULT_GITHUB_MODELS_API_MODEL = 'openai/gpt-4.1'

const CODEX_ALIAS_MODELS: Record<
  string,
@@ -171,16 +175,31 @@ export function isCodexBaseUrl(baseUrl: string | undefined): boolean {
  }
}

/**
 * Normalize user model string for GitHub Models inference (models.github.ai).
 * Mirrors runtime devsper `github._normalize_model_id`.
 */
export function normalizeGithubModelsApiModel(requestedModel: string): string {
  const noQuery = requestedModel.split('?', 1)[0] ?? requestedModel
  const segment =
    noQuery.includes(':') ? noQuery.split(':', 2)[1]!.trim() : noQuery.trim()
  if (!segment || segment.toLowerCase() === 'copilot') {
    return DEFAULT_GITHUB_MODELS_API_MODEL
  }
  return segment
}

export function resolveProviderRequest(options?: {
  model?: string
  baseUrl?: string
  fallbackModel?: string
}): ResolvedProviderRequest {
  const isGithubMode = isEnvTruthy(process.env.CLAUDE_CODE_USE_GITHUB)
  const requestedModel =
    options?.model?.trim() ||
    process.env.OPENAI_MODEL?.trim() ||
    options?.fallbackModel?.trim() ||
    'gpt-4o'
    (isGithubMode ? 'github:copilot' : 'gpt-4o')
  const descriptor = parseModelDescriptor(requestedModel)
  const rawBaseUrl =
    options?.baseUrl ??
@@ -192,10 +211,16 @@ export function resolveProviderRequest(options?: {
      ? 'codex_responses'
      : 'chat_completions'

  const resolvedModel =
    transport === 'chat_completions' &&
    isEnvTruthy(process.env.CLAUDE_CODE_USE_GITHUB)
      ? normalizeGithubModelsApiModel(requestedModel)
      : descriptor.baseModel

  return {
    transport,
    requestedModel,
    resolvedModel: descriptor.baseModel,
    resolvedModel,
    baseUrl:
      (rawBaseUrl ??
        (transport === 'codex_responses'

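For quick reference, the normalization rules in play (these mirror the `test.each` table above):

```ts
import { normalizeGithubModelsApiModel } from './providerConfig.js'

// 'github:<model>' strips the prefix; any '?query' suffix is dropped first;
// bare/empty/'copilot' falls back to DEFAULT_GITHUB_MODELS_API_MODEL.
normalizeGithubModelsApiModel('github:gpt-4o')                 // 'gpt-4o'
normalizeGithubModelsApiModel('github:copilot?reasoning=high') // 'openai/gpt-4.1'
normalizeGithubModelsApiModel('copilot')                       // 'openai/gpt-4.1'
```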
94 src/services/github/deviceFlow.test.ts Normal file
@@ -0,0 +1,94 @@
import { afterEach, describe, expect, mock, test } from 'bun:test'

import {
  GitHubDeviceFlowError,
  pollAccessToken,
  requestDeviceCode,
} from './deviceFlow.js'

describe('requestDeviceCode', () => {
  const originalFetch = globalThis.fetch

  afterEach(() => {
    globalThis.fetch = originalFetch
  })

  test('parses successful device code response', async () => {
    globalThis.fetch = mock(() =>
      Promise.resolve(
        new Response(
          JSON.stringify({
            device_code: 'abc',
            user_code: 'ABCD-1234',
            verification_uri: 'https://github.com/login/device',
            expires_in: 600,
            interval: 5,
          }),
          { status: 200 },
        ),
      ),
    )

    const r = await requestDeviceCode({
      clientId: 'test-client',
      fetchImpl: globalThis.fetch,
    })
    expect(r.device_code).toBe('abc')
    expect(r.user_code).toBe('ABCD-1234')
    expect(r.verification_uri).toBe('https://github.com/login/device')
    expect(r.expires_in).toBe(600)
    expect(r.interval).toBe(5)
  })

  test('throws on HTTP error', async () => {
    globalThis.fetch = mock(() =>
      Promise.resolve(new Response('bad', { status: 500 })),
    )
    await expect(
      requestDeviceCode({ clientId: 'x', fetchImpl: globalThis.fetch }),
    ).rejects.toThrow(GitHubDeviceFlowError)
  })
})

describe('pollAccessToken', () => {
  const originalFetch = globalThis.fetch

  afterEach(() => {
    globalThis.fetch = originalFetch
  })

  test('returns token when GitHub responds with access_token immediately', async () => {
    let calls = 0
    globalThis.fetch = mock(() => {
      calls++
      return Promise.resolve(
        new Response(JSON.stringify({ access_token: 'tok-xyz' }), {
          status: 200,
        }),
      )
    })

    const token = await pollAccessToken('dev-code', {
      clientId: 'cid',
      fetchImpl: globalThis.fetch,
    })
    expect(token).toBe('tok-xyz')
    expect(calls).toBe(1)
  })

  test('throws on access_denied', async () => {
    globalThis.fetch = mock(() =>
      Promise.resolve(
        new Response(JSON.stringify({ error: 'access_denied' }), {
          status: 200,
        }),
      ),
    )
    await expect(
      pollAccessToken('dc', {
        clientId: 'c',
        fetchImpl: globalThis.fetch,
      }),
    ).rejects.toThrow(/denied/)
  })
})
174 src/services/github/deviceFlow.ts Normal file
@@ -0,0 +1,174 @@
/**
 * GitHub OAuth device flow for CLI login (https://docs.github.com/en/apps/oauth-apps/building-oauth-apps/authorizing-oauth-apps#device-flow).
 */

import { execFileNoThrow } from '../../utils/execFileNoThrow.js'

export const DEFAULT_GITHUB_DEVICE_FLOW_CLIENT_ID = 'Ov23liXjWSSui6QIahPl'

export const GITHUB_DEVICE_CODE_URL = 'https://github.com/login/device/code'
export const GITHUB_DEVICE_ACCESS_TOKEN_URL =
  'https://github.com/login/oauth/access_token'

/** Match runtime devsper github_oauth DEFAULT_SCOPE */
export const DEFAULT_GITHUB_DEVICE_SCOPE = 'read:user,models:read'

export class GitHubDeviceFlowError extends Error {
  constructor(message: string) {
    super(message)
    this.name = 'GitHubDeviceFlowError'
  }
}

export type DeviceCodeResult = {
  device_code: string
  user_code: string
  verification_uri: string
  expires_in: number
  interval: number
}

export function getGithubDeviceFlowClientId(): string {
  return (
    process.env.GITHUB_DEVICE_FLOW_CLIENT_ID?.trim() ||
    DEFAULT_GITHUB_DEVICE_FLOW_CLIENT_ID
  )
}

function sleep(ms: number): Promise<void> {
  return new Promise(resolve => setTimeout(resolve, ms))
}

export async function requestDeviceCode(options?: {
  clientId?: string
  scope?: string
  fetchImpl?: typeof fetch
}): Promise<DeviceCodeResult> {
  const clientId = options?.clientId ?? getGithubDeviceFlowClientId()
  if (!clientId) {
    throw new GitHubDeviceFlowError(
      'No OAuth client ID: set GITHUB_DEVICE_FLOW_CLIENT_ID or paste a PAT instead.',
    )
  }
  const fetchFn = options?.fetchImpl ?? fetch
  const res = await fetchFn(GITHUB_DEVICE_CODE_URL, {
    method: 'POST',
    headers: { Accept: 'application/json' },
    body: new URLSearchParams({
      client_id: clientId,
      scope: options?.scope ?? DEFAULT_GITHUB_DEVICE_SCOPE,
    }),
  })
  if (!res.ok) {
    const text = await res.text().catch(() => '')
    throw new GitHubDeviceFlowError(
      `Device code request failed: ${res.status} ${text}`,
    )
  }
  const data = (await res.json()) as Record<string, unknown>
  const device_code = data.device_code
  const user_code = data.user_code
  const verification_uri = data.verification_uri
  if (
    typeof device_code !== 'string' ||
    typeof user_code !== 'string' ||
    typeof verification_uri !== 'string'
  ) {
    throw new GitHubDeviceFlowError('Malformed device code response from GitHub')
  }
  return {
    device_code,
    user_code,
    verification_uri,
    expires_in: typeof data.expires_in === 'number' ? data.expires_in : 900,
    interval: typeof data.interval === 'number' ? data.interval : 5,
  }
}

export type PollOptions = {
  clientId?: string
  initialInterval?: number
  timeoutSeconds?: number
  fetchImpl?: typeof fetch
}

export async function pollAccessToken(
  deviceCode: string,
  options?: PollOptions,
): Promise<string> {
  const clientId = options?.clientId ?? getGithubDeviceFlowClientId()
  if (!clientId) {
    throw new GitHubDeviceFlowError('client_id required for polling')
  }
  let interval = Math.max(1, options?.initialInterval ?? 5)
  const timeoutSeconds = options?.timeoutSeconds ?? 900
  const fetchFn = options?.fetchImpl ?? fetch
  const start = Date.now()

  while ((Date.now() - start) / 1000 < timeoutSeconds) {
    const res = await fetchFn(GITHUB_DEVICE_ACCESS_TOKEN_URL, {
      method: 'POST',
      headers: { Accept: 'application/json' },
      body: new URLSearchParams({
        client_id: clientId,
        device_code: deviceCode,
        grant_type: 'urn:ietf:params:oauth:grant-type:device_code',
      }),
    })
    if (!res.ok) {
      const text = await res.text().catch(() => '')
      throw new GitHubDeviceFlowError(
        `Token request failed: ${res.status} ${text}`,
      )
    }
    const data = (await res.json()) as Record<string, unknown>
    const err = data.error as string | undefined
    if (err == null) {
      const token = data.access_token
      if (typeof token === 'string' && token) {
        return token
      }
      throw new GitHubDeviceFlowError('No access_token in response')
    }
    if (err === 'authorization_pending') {
      await sleep(interval * 1000)
      continue
    }
    if (err === 'slow_down') {
      interval =
        typeof data.interval === 'number' ? data.interval : interval + 5
      await sleep(interval * 1000)
      continue
    }
    if (err === 'expired_token') {
      throw new GitHubDeviceFlowError(
        'Device code expired. Start the login flow again.',
      )
    }
    if (err === 'access_denied') {
      throw new GitHubDeviceFlowError('Authorization was denied or cancelled.')
    }
    throw new GitHubDeviceFlowError(`GitHub OAuth error: ${err}`)
  }
  throw new GitHubDeviceFlowError('Timed out waiting for authorization.')
}

/**
 * Best-effort open browser / OS handler for the verification URL.
 */
export async function openVerificationUri(uri: string): Promise<void> {
  try {
    if (process.platform === 'darwin') {
      await execFileNoThrow('open', [uri], { useCwd: false, timeout: 5000 })
    } else if (process.platform === 'win32') {
      await execFileNoThrow('cmd', ['/c', 'start', '', uri], {
        useCwd: false,
        timeout: 5000,
      })
    } else {
      await execFileNoThrow('xdg-open', [uri], { useCwd: false, timeout: 5000 })
    }
  } catch {
    // User can open the URL manually
  }
}
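The three exports compose into a complete CLI login. A hypothetical helper showing how they fit together (not itself an export of deviceFlow.ts):

```ts
import {
  openVerificationUri,
  pollAccessToken,
  requestDeviceCode,
} from './deviceFlow.js'

// Sketch of the end-to-end flow: request a code, point the user at GitHub,
// then block until approval, denial, or expiry.
async function loginWithGitHub(): Promise<string> {
  const dc = await requestDeviceCode()
  console.log(`Open ${dc.verification_uri} and enter code ${dc.user_code}`)
  await openVerificationUri(dc.verification_uri) // best-effort; user can open it manually
  // pollAccessToken honors authorization_pending / slow_down and throws a
  // GitHubDeviceFlowError on expiry or denial.
  return pollAccessToken(dc.device_code, { initialInterval: dc.interval })
}
```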
@@ -117,7 +117,8 @@ export function isAnthropicAuthEnabled(): boolean {
    isEnvTruthy(process.env.CLAUDE_CODE_USE_VERTEX) ||
    isEnvTruthy(process.env.CLAUDE_CODE_USE_FOUNDRY) ||
    isEnvTruthy(process.env.CLAUDE_CODE_USE_OPENAI) ||
    isEnvTruthy(process.env.CLAUDE_CODE_USE_GEMINI)
    isEnvTruthy(process.env.CLAUDE_CODE_USE_GEMINI) ||
    isEnvTruthy(process.env.CLAUDE_CODE_USE_GITHUB)

  // Check if user has configured an external API key source
  // This allows externally-provided API keys to work (without requiring proxy configuration)
@@ -1731,14 +1732,15 @@ export function getSubscriptionName(): string {
  }
}

/** Check if using third-party services (Bedrock or Vertex or Foundry or OpenAI-compatible or Gemini) */
/** Check if using third-party services (Bedrock or Vertex or Foundry or OpenAI-compatible or Gemini or GitHub Models) */
export function isUsing3PServices(): boolean {
  return !!(
    isEnvTruthy(process.env.CLAUDE_CODE_USE_BEDROCK) ||
    isEnvTruthy(process.env.CLAUDE_CODE_USE_VERTEX) ||
    isEnvTruthy(process.env.CLAUDE_CODE_USE_FOUNDRY) ||
    isEnvTruthy(process.env.CLAUDE_CODE_USE_OPENAI) ||
    isEnvTruthy(process.env.CLAUDE_CODE_USE_GEMINI)
    isEnvTruthy(process.env.CLAUDE_CODE_USE_GEMINI) ||
    isEnvTruthy(process.env.CLAUDE_CODE_USE_GITHUB)
  )
}

@@ -9,6 +9,7 @@ import {
  logEvent,
} from 'src/services/analytics/index.js'
import { type ReleaseChannel, saveGlobalConfig } from './config.js'
import { getAPIProvider } from './model/providers.js'
import { logForDebugging } from './debug.js'
import { env } from './env.js'
import { getClaudeConfigHomeDir } from './envUtils.js'
@@ -72,6 +73,12 @@ export async function assertMinVersion(): Promise<void> {
    return
  }

  // Skip version check for third-party providers — the min version
  // kill-switch is Anthropic-specific and should not block 3P users
  if (getAPIProvider() !== 'firstParty') {
    return
  }

  try {
    const versionConfig = await getDynamicConfig_BLOCKS_ON_INIT<{
      minVersion: string

@@ -77,7 +77,9 @@ export function getContextWindowForModel(
    process.env.CLAUDE_CODE_USE_OPENAI === '1' ||
    process.env.CLAUDE_CODE_USE_OPENAI === 'true' ||
    process.env.CLAUDE_CODE_USE_GEMINI === '1' ||
    process.env.CLAUDE_CODE_USE_GEMINI === 'true'
    process.env.CLAUDE_CODE_USE_GEMINI === 'true' ||
    process.env.CLAUDE_CODE_USE_GITHUB === '1' ||
    process.env.CLAUDE_CODE_USE_GITHUB === 'true'
  ) {
    const openaiWindow = getOpenAIContextWindow(model)
    if (openaiWindow !== undefined) {
@@ -181,7 +183,9 @@ export function getModelMaxOutputTokens(model: string): {
    process.env.CLAUDE_CODE_USE_OPENAI === '1' ||
    process.env.CLAUDE_CODE_USE_OPENAI === 'true' ||
    process.env.CLAUDE_CODE_USE_GEMINI === '1' ||
    process.env.CLAUDE_CODE_USE_GEMINI === 'true'
    process.env.CLAUDE_CODE_USE_GEMINI === 'true' ||
    process.env.CLAUDE_CODE_USE_GITHUB === '1' ||
    process.env.CLAUDE_CODE_USE_GITHUB === 'true'
  ) {
    const openaiMax = getOpenAIMaxOutputTokens(model)
    if (openaiMax !== undefined) {

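These two hunks extend the literal `'1'`/`'true'` comparisons rather than calling `isEnvTruthy` as the rest of the commit does. If a follow-up wanted to fold them into the helper, the condition would collapse to something like the sketch below (not part of this diff; the import path is an assumption):

```ts
import { isEnvTruthy } from '../utils/envUtils.js' // path assumed

// Equivalent to the six literal comparisons above.
const usesOpenAICompatibleLimits =
  isEnvTruthy(process.env.CLAUDE_CODE_USE_OPENAI) ||
  isEnvTruthy(process.env.CLAUDE_CODE_USE_GEMINI) ||
  isEnvTruthy(process.env.CLAUDE_CODE_USE_GITHUB)
```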
66 src/utils/githubModelsCredentials.hydrate.test.ts Normal file
@@ -0,0 +1,66 @@
/**
 * Hydrate tests live in a separate file with no static import of
 * githubModelsCredentials so Bun's mock.module can replace secureStorage
 * before that module is first loaded.
 */
import { afterEach, describe, expect, mock, test } from 'bun:test'

describe('hydrateGithubModelsTokenFromSecureStorage', () => {
  const orig = {
    CLAUDE_CODE_USE_GITHUB: process.env.CLAUDE_CODE_USE_GITHUB,
    GITHUB_TOKEN: process.env.GITHUB_TOKEN,
    GH_TOKEN: process.env.GH_TOKEN,
    CLAUDE_CODE_SIMPLE: process.env.CLAUDE_CODE_SIMPLE,
  }

  afterEach(() => {
    mock.restore()
    for (const [k, v] of Object.entries(orig)) {
      if (v === undefined) {
        delete process.env[k as keyof typeof orig]
      } else {
        process.env[k as keyof typeof orig] = v
      }
    }
  })

  test('sets GITHUB_TOKEN from secure storage when USE_GITHUB and env token empty', async () => {
    process.env.CLAUDE_CODE_USE_GITHUB = '1'
    delete process.env.GITHUB_TOKEN
    delete process.env.GH_TOKEN
    delete process.env.CLAUDE_CODE_SIMPLE

    mock.module('./secureStorage/index.js', () => ({
      getSecureStorage: () => ({
        read: () => ({
          githubModels: { accessToken: 'stored-secret' },
        }),
      }),
    }))

    const { hydrateGithubModelsTokenFromSecureStorage } = await import(
      './githubModelsCredentials.js'
    )
    hydrateGithubModelsTokenFromSecureStorage()
    expect(process.env.GITHUB_TOKEN).toBe('stored-secret')
  })

  test('does not override existing GITHUB_TOKEN', async () => {
    process.env.CLAUDE_CODE_USE_GITHUB = '1'
    process.env.GITHUB_TOKEN = 'already'

    mock.module('./secureStorage/index.js', () => ({
      getSecureStorage: () => ({
        read: () => ({
          githubModels: { accessToken: 'stored-secret' },
        }),
      }),
    }))

    const { hydrateGithubModelsTokenFromSecureStorage } = await import(
      './githubModelsCredentials.js'
    )
    hydrateGithubModelsTokenFromSecureStorage()
    expect(process.env.GITHUB_TOKEN).toBe('already')
  })
})
47 src/utils/githubModelsCredentials.test.ts Normal file
@@ -0,0 +1,47 @@
import { describe, expect, test } from 'bun:test'

import {
  clearGithubModelsToken,
  readGithubModelsToken,
  saveGithubModelsToken,
} from './githubModelsCredentials.js'

describe('readGithubModelsToken', () => {
  test('returns undefined in bare mode', () => {
    const prev = process.env.CLAUDE_CODE_SIMPLE
    process.env.CLAUDE_CODE_SIMPLE = '1'
    expect(readGithubModelsToken()).toBeUndefined()
    if (prev === undefined) {
      delete process.env.CLAUDE_CODE_SIMPLE
    } else {
      process.env.CLAUDE_CODE_SIMPLE = prev
    }
  })
})

describe('saveGithubModelsToken / clearGithubModelsToken', () => {
  test('save returns failure in bare mode', () => {
    const prev = process.env.CLAUDE_CODE_SIMPLE
    process.env.CLAUDE_CODE_SIMPLE = '1'
    const r = saveGithubModelsToken('abc')
    expect(r.success).toBe(false)
    expect(r.warning).toContain('Bare mode')
    if (prev === undefined) {
      delete process.env.CLAUDE_CODE_SIMPLE
    } else {
      process.env.CLAUDE_CODE_SIMPLE = prev
    }
  })

  test('clear succeeds in bare mode', () => {
    const prev = process.env.CLAUDE_CODE_SIMPLE
    process.env.CLAUDE_CODE_SIMPLE = '1'
    expect(clearGithubModelsToken().success).toBe(true)
    if (prev === undefined) {
      delete process.env.CLAUDE_CODE_SIMPLE
    } else {
      process.env.CLAUDE_CODE_SIMPLE = prev
    }
  })
})
73 src/utils/githubModelsCredentials.ts Normal file
@@ -0,0 +1,73 @@
import { isBareMode, isEnvTruthy } from './envUtils.js'
import { getSecureStorage } from './secureStorage/index.js'

/** JSON key in the shared OpenClaude secure storage blob. */
export const GITHUB_MODELS_STORAGE_KEY = 'githubModels' as const

export type GithubModelsCredentialBlob = {
  accessToken: string
}

export function readGithubModelsToken(): string | undefined {
  if (isBareMode()) return undefined
  try {
    const data = getSecureStorage().read() as
      | ({ githubModels?: GithubModelsCredentialBlob } & Record<string, unknown>)
      | null
    const t = data?.githubModels?.accessToken?.trim()
    return t || undefined
  } catch {
    return undefined
  }
}

/**
 * If GitHub Models mode is on and no token is in the environment, copy the
 * stored token into process.env so the OpenAI shim and validation see it.
 */
export function hydrateGithubModelsTokenFromSecureStorage(): void {
  if (!isEnvTruthy(process.env.CLAUDE_CODE_USE_GITHUB)) {
    return
  }
  if (process.env.GITHUB_TOKEN?.trim() || process.env.GH_TOKEN?.trim()) {
    return
  }
  if (isBareMode()) {
    return
  }
  const t = readGithubModelsToken()
  if (t) {
    process.env.GITHUB_TOKEN = t
  }
}

export function saveGithubModelsToken(token: string): {
  success: boolean
  warning?: string
} {
  if (isBareMode()) {
    return { success: false, warning: 'Bare mode: secure storage is disabled.' }
  }
  const trimmed = token.trim()
  if (!trimmed) {
    return { success: false, warning: 'Token is empty.' }
  }
  const secureStorage = getSecureStorage()
  const prev = secureStorage.read() || {}
  const merged = {
    ...(prev as Record<string, unknown>),
    [GITHUB_MODELS_STORAGE_KEY]: { accessToken: trimmed },
  }
  return secureStorage.update(merged as typeof prev)
}

export function clearGithubModelsToken(): { success: boolean; warning?: string } {
  if (isBareMode()) {
    return { success: true }
  }
  const secureStorage = getSecureStorage()
  const prev = secureStorage.read() || {}
  const next = { ...(prev as Record<string, unknown>) }
  delete next[GITHUB_MODELS_STORAGE_KEY]
  return secureStorage.update(next as typeof prev)
}
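Save, hydrate, and read tie together across launches. A hypothetical flow (the token value below is fake; `hydrateGithubModelsTokenFromSecureStorage` only acts when `CLAUDE_CODE_USE_GITHUB` is truthy and no `GITHUB_TOKEN`/`GH_TOKEN` is already set):

```ts
import {
  hydrateGithubModelsTokenFromSecureStorage,
  readGithubModelsToken,
  saveGithubModelsToken,
} from './githubModelsCredentials.js'

// e.g. after a successful device-flow login:
const saved = saveGithubModelsToken('gho_fake_example_token')
if (!saved.success) console.warn(saved.warning)

// On a later launch with CLAUDE_CODE_USE_GITHUB=1 and no token in the
// environment, hydration copies the stored token into process.env:
hydrateGithubModelsTokenFromSecureStorage()
console.log(process.env.GITHUB_TOKEN === readGithubModelsToken()) // true if stored
```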
@@ -56,7 +56,7 @@ import { profileReport } from './startupProfiler.js'
 * 3. Failing to disable leaves the terminal in a broken state
 */
/* eslint-disable custom-rules/no-sync-fs -- must be sync to flush before process.exit */
function cleanupTerminalModes(): void {
function cleanupTerminalModes(skipUnmount: boolean = false): void {
  if (!process.stdout.isTTY) {
    return
  }
@@ -84,7 +84,7 @@ function cleanupTerminalModes(): void {
  // Calling unmount() now does the final render on the alt buffer,
  // unsubscribes from signal-exit, and writes 1049l exactly once.
  const inst = instances.get(process.stdout)
  if (inst?.isAltScreenActive) {
  if (!skipUnmount && inst?.isAltScreenActive) {
    try {
      inst.unmount()
    } catch {
@@ -92,6 +92,11 @@ function cleanupTerminalModes(): void {
      // so printResumeHint still hits the main buffer.
      writeSync(1, EXIT_ALT_SCREEN)
    }
  } else if (skipUnmount && inst?.isAltScreenActive) {
    // gracefulShutdown already awaited the async unmount, and AlternateScreen's
    // unmount writes EXIT_ALT_SCREEN itself. Emitting it again here would write
    // the escape sequence twice, so intentionally do nothing.
  }
  // Catches events that arrived during the unmount tree-walk.
  // detachForShutdown() below also drains.
@@ -411,12 +416,17 @@ export async function gracefulShutdown(
  )
  const sessionEndTimeoutMs = getSessionEndHookTimeoutMs()

  // Await one tick so React can flush pending updates from commands (e.g. hiding
  // the autocomplete menu on /exit) before we detach Ink. This lets log-update
  // erase floating UI elements from the terminal so they don't linger after exit.
  await new Promise(r => setTimeout(r, 20))

  // Failsafe: guarantee process exits even if cleanup hangs (e.g., MCP connections).
  // Runs cleanupTerminalModes first so a hung cleanup doesn't leave the terminal dirty.
  // Budget = max(5s, hook budget + 3.5s headroom for cleanup + analytics flush).
  failsafeTimer = setTimeout(
    code => {
      cleanupTerminalModes()
      cleanupTerminalModes(true)
      printResumeHint()
      forceExit(code)
    },
@@ -433,7 +443,7 @@
  // cleanup (e.g., SIGKILL during macOS reboot). Without this, the resume
  // hint would only appear after cleanup functions, hooks, and analytics
  // flush — which can take several seconds.
  cleanupTerminalModes()
  cleanupTerminalModes(true)
  printResumeHint()

  // Flush session data first — this is the most critical cleanup. If the

@@ -18,6 +18,7 @@ const PROVIDER_MANAGED_ENV_VARS = new Set([
  'CLAUDE_CODE_USE_BEDROCK',
  'CLAUDE_CODE_USE_VERTEX',
  'CLAUDE_CODE_USE_FOUNDRY',
  'CLAUDE_CODE_USE_GITHUB',
  // Endpoint config (base URLs, project/resource identifiers)
  'ANTHROPIC_BASE_URL',
  'ANTHROPIC_BEDROCK_BASE_URL',
@@ -147,6 +148,7 @@ export const SAFE_ENV_VARS = new Set([
  'CLAUDE_CODE_SUBAGENT_MODEL',
  'CLAUDE_CODE_USE_BEDROCK',
  'CLAUDE_CODE_USE_FOUNDRY',
  'CLAUDE_CODE_USE_GITHUB',
  'CLAUDE_CODE_USE_VERTEX',
  'DISABLE_AUTOUPDATER',
  'DISABLE_BUG_COMMAND',

@@ -7,6 +7,7 @@ import {

const originalEnv = {
  CLAUDE_CODE_USE_GEMINI: process.env.CLAUDE_CODE_USE_GEMINI,
  CLAUDE_CODE_USE_GITHUB: process.env.CLAUDE_CODE_USE_GITHUB,
  CLAUDE_CODE_USE_OPENAI: process.env.CLAUDE_CODE_USE_OPENAI,
  CLAUDE_CODE_USE_BEDROCK: process.env.CLAUDE_CODE_USE_BEDROCK,
  CLAUDE_CODE_USE_VERTEX: process.env.CLAUDE_CODE_USE_VERTEX,
@@ -15,6 +16,7 @@ const originalEnv = {

afterEach(() => {
  process.env.CLAUDE_CODE_USE_GEMINI = originalEnv.CLAUDE_CODE_USE_GEMINI
  process.env.CLAUDE_CODE_USE_GITHUB = originalEnv.CLAUDE_CODE_USE_GITHUB
  process.env.CLAUDE_CODE_USE_OPENAI = originalEnv.CLAUDE_CODE_USE_OPENAI
  process.env.CLAUDE_CODE_USE_BEDROCK = originalEnv.CLAUDE_CODE_USE_BEDROCK
  process.env.CLAUDE_CODE_USE_VERTEX = originalEnv.CLAUDE_CODE_USE_VERTEX
@@ -23,6 +25,7 @@ afterEach(() => {

function clearProviderEnv(): void {
  delete process.env.CLAUDE_CODE_USE_GEMINI
  delete process.env.CLAUDE_CODE_USE_GITHUB
  delete process.env.CLAUDE_CODE_USE_OPENAI
  delete process.env.CLAUDE_CODE_USE_BEDROCK
  delete process.env.CLAUDE_CODE_USE_VERTEX
@@ -38,6 +41,7 @@ test('first-party provider keeps Anthropic account setup flow enabled', () => {

test.each([
  ['CLAUDE_CODE_USE_OPENAI', 'openai'],
  ['CLAUDE_CODE_USE_GITHUB', 'github'],
  ['CLAUDE_CODE_USE_GEMINI', 'gemini'],
  ['CLAUDE_CODE_USE_BEDROCK', 'bedrock'],
  ['CLAUDE_CODE_USE_VERTEX', 'vertex'],
@@ -52,3 +56,11 @@ test.each([
    expect(usesAnthropicAccountFlow()).toBe(false)
  },
)

test('GEMINI takes precedence over GitHub when both are set', () => {
  clearProviderEnv()
  process.env.CLAUDE_CODE_USE_GEMINI = '1'
  process.env.CLAUDE_CODE_USE_GITHUB = '1'

  expect(getAPIProvider()).toBe('gemini')
})

@@ -1,11 +1,20 @@
import type { AnalyticsMetadata_I_VERIFIED_THIS_IS_NOT_CODE_OR_FILEPATHS } from '../../services/analytics/index.js'
import { isEnvTruthy } from '../envUtils.js'

export type APIProvider = 'firstParty' | 'bedrock' | 'vertex' | 'foundry' | 'openai' | 'gemini'
export type APIProvider =
  | 'firstParty'
  | 'bedrock'
  | 'vertex'
  | 'foundry'
  | 'openai'
  | 'gemini'
  | 'github'

export function getAPIProvider(): APIProvider {
  return isEnvTruthy(process.env.CLAUDE_CODE_USE_GEMINI)
    ? 'gemini'
    : isEnvTruthy(process.env.CLAUDE_CODE_USE_GITHUB)
      ? 'github'
      : isEnvTruthy(process.env.CLAUDE_CODE_USE_OPENAI)
        ? 'openai'
        : isEnvTruthy(process.env.CLAUDE_CODE_USE_BEDROCK)

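The ternary chain fixes the provider precedence when several `CLAUDE_CODE_USE_*` flags are set at once: Gemini wins over GitHub, which wins over OpenAI, and so on down the chain. Concretely (this is exactly what the precedence test above asserts):

```ts
// Both flags set; the earlier branch in the ternary chain wins.
process.env.CLAUDE_CODE_USE_GEMINI = '1'
process.env.CLAUDE_CODE_USE_GITHUB = '1'
getAPIProvider() // -> 'gemini'
```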
@@ -1,6 +1,7 @@
import type { OllamaModelDescriptor } from './providerRecommendation.ts'

export const DEFAULT_OLLAMA_BASE_URL = 'http://localhost:11434'
export const DEFAULT_ATOMIC_CHAT_BASE_URL = 'http://127.0.0.1:1337'

function withTimeoutSignal(timeoutMs: number): {
  signal: AbortSignal
@@ -35,6 +36,23 @@ export function getOllamaChatBaseUrl(baseUrl?: string): string {
  return `${getOllamaApiBaseUrl(baseUrl)}/v1`
}

export function getAtomicChatApiBaseUrl(baseUrl?: string): string {
  const parsed = new URL(
    baseUrl || process.env.ATOMIC_CHAT_BASE_URL || DEFAULT_ATOMIC_CHAT_BASE_URL,
  )
  const pathname = trimTrailingSlash(parsed.pathname)
  parsed.pathname = pathname.endsWith('/v1')
    ? pathname.slice(0, -3) || '/'
    : pathname || '/'
  parsed.search = ''
  parsed.hash = ''
  return trimTrailingSlash(parsed.toString())
}

export function getAtomicChatChatBaseUrl(baseUrl?: string): string {
  return `${getAtomicChatApiBaseUrl(baseUrl)}/v1`
}

export async function hasLocalOllama(baseUrl?: string): Promise<boolean> {
  const { signal, clear } = withTimeoutSignal(1200)
  try {
@@ -93,6 +111,48 @@ export async function listOllamaModels(
  }
}

export async function hasLocalAtomicChat(baseUrl?: string): Promise<boolean> {
  const { signal, clear } = withTimeoutSignal(1200)
  try {
    const response = await fetch(`${getAtomicChatChatBaseUrl(baseUrl)}/models`, {
      method: 'GET',
      signal,
    })
    return response.ok
  } catch {
    return false
  } finally {
    clear()
  }
}

export async function listAtomicChatModels(
  baseUrl?: string,
): Promise<string[]> {
  const { signal, clear } = withTimeoutSignal(5000)
  try {
    const response = await fetch(`${getAtomicChatChatBaseUrl(baseUrl)}/models`, {
      method: 'GET',
      signal,
    })
    if (!response.ok) {
      return []
    }

    const data = (await response.json()) as {
      data?: Array<{ id?: string }>
    }

    return (data.data ?? [])
      .filter(model => Boolean(model.id))
      .map(model => model.id!)
  } catch {
    return []
  } finally {
    clear()
  }
}

export async function benchmarkOllamaModel(
  modelName: string,
  baseUrl?: string,

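`getAtomicChatApiBaseUrl` strips a trailing `/v1` (plus any query or hash) so the `/v1` suffix is appended exactly once, whatever form the user supplied. Expected behavior, given the implementation above (the module path in the import is an assumption):

```ts
import {
  getAtomicChatApiBaseUrl,
  getAtomicChatChatBaseUrl,
} from './localProviders.js' // path assumed

getAtomicChatApiBaseUrl('http://127.0.0.1:1337/v1/') // 'http://127.0.0.1:1337'
getAtomicChatChatBaseUrl('http://127.0.0.1:1337')    // 'http://127.0.0.1:1337/v1'
getAtomicChatChatBaseUrl()                           // DEFAULT_ATOMIC_CHAT_BASE_URL + '/v1'
```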
@@ -6,6 +6,7 @@ import test from 'node:test'

import {
  buildStartupEnvFromProfile,
  buildAtomicChatProfileEnv,
  buildCodexProfileEnv,
  buildGeminiProfileEnv,
  buildLaunchEnv,
@@ -529,3 +530,72 @@ test('auto profile falls back to openai when no viable ollama model exists', ()
  assert.equal(selectAutoProfile(null), 'openai')
  assert.equal(selectAutoProfile('qwen2.5-coder:7b'), 'ollama')
})

// ── Atomic Chat profile tests ────────────────────────────────────────────────

test('atomic-chat profiles never persist openai api keys', () => {
  const env = buildAtomicChatProfileEnv('some-local-model', {
    getAtomicChatChatBaseUrl: () => 'http://127.0.0.1:1337/v1',
  })

  assert.deepEqual(env, {
    OPENAI_BASE_URL: 'http://127.0.0.1:1337/v1',
    OPENAI_MODEL: 'some-local-model',
  })
  assert.equal('OPENAI_API_KEY' in env, false)
})

test('atomic-chat profiles respect custom base url', () => {
  const env = buildAtomicChatProfileEnv('my-model', {
    baseUrl: 'http://192.168.1.100:1337',
    getAtomicChatChatBaseUrl: (baseUrl?: string) =>
      baseUrl ? `${baseUrl}/v1` : 'http://127.0.0.1:1337/v1',
  })

  assert.equal(env.OPENAI_BASE_URL, 'http://192.168.1.100:1337/v1')
  assert.equal(env.OPENAI_MODEL, 'my-model')
})

test('matching persisted atomic-chat env is reused for atomic-chat launch', async () => {
  const env = await buildLaunchEnv({
    profile: 'atomic-chat',
    persisted: profile('atomic-chat', {
      OPENAI_BASE_URL: 'http://127.0.0.1:1337/v1',
      OPENAI_MODEL: 'llama-3.1-8b',
    }),
    goal: 'balanced',
    processEnv: {},
    getAtomicChatChatBaseUrl: () => 'http://127.0.0.1:1337/v1',
    resolveAtomicChatDefaultModel: async () => 'other-model',
  })

  assert.equal(env.OPENAI_BASE_URL, 'http://127.0.0.1:1337/v1')
  assert.equal(env.OPENAI_MODEL, 'llama-3.1-8b')
  assert.equal(env.OPENAI_API_KEY, undefined)
  assert.equal(env.CODEX_API_KEY, undefined)
})

test('atomic-chat launch ignores mismatched persisted openai env', async () => {
  const env = await buildLaunchEnv({
    profile: 'atomic-chat',
    persisted: profile('openai', {
      OPENAI_BASE_URL: 'https://api.openai.com/v1',
      OPENAI_MODEL: 'gpt-4o',
      OPENAI_API_KEY: 'sk-persisted',
    }),
    goal: 'balanced',
    processEnv: {
      OPENAI_API_KEY: 'sk-live',
      CODEX_API_KEY: 'codex-live',
      CHATGPT_ACCOUNT_ID: 'acct_live',
    },
    getAtomicChatChatBaseUrl: () => 'http://127.0.0.1:1337/v1',
    resolveAtomicChatDefaultModel: async () => 'local-model',
  })

  assert.equal(env.OPENAI_BASE_URL, 'http://127.0.0.1:1337/v1')
  assert.equal(env.OPENAI_MODEL, 'local-model')
  assert.equal(env.OPENAI_API_KEY, undefined)
  assert.equal(env.CODEX_API_KEY, undefined)
  assert.equal(env.CHATGPT_ACCOUNT_ID, undefined)
})

@@ -44,7 +44,7 @@ const SECRET_ENV_KEYS = [
  'GOOGLE_API_KEY',
] as const

export type ProviderProfile = 'openai' | 'ollama' | 'codex' | 'gemini'
export type ProviderProfile = 'openai' | 'ollama' | 'codex' | 'gemini' | 'atomic-chat'

export type ProfileEnv = {
  OPENAI_BASE_URL?: string
@@ -89,7 +89,8 @@ export function isProviderProfile(value: unknown): value is ProviderProfile {
    value === 'openai' ||
    value === 'ollama' ||
    value === 'codex' ||
    value === 'gemini'
    value === 'gemini' ||
    value === 'atomic-chat'
  )
}

@@ -202,6 +203,19 @@ export function buildOllamaProfileEnv(
  }
}

export function buildAtomicChatProfileEnv(
  model: string,
  options: {
    baseUrl?: string | null
    getAtomicChatChatBaseUrl: (baseUrl?: string) => string
  },
): ProfileEnv {
  return {
    OPENAI_BASE_URL: options.getAtomicChatChatBaseUrl(options.baseUrl ?? undefined),
    OPENAI_MODEL: model,
  }
}

export function buildGeminiProfileEnv(options: {
  model?: string | null
  baseUrl?: string | null
@@ -385,6 +399,7 @@ export function hasExplicitProviderSelection(
): boolean {
  return (
    processEnv.CLAUDE_CODE_USE_OPENAI !== undefined ||
    processEnv.CLAUDE_CODE_USE_GITHUB !== undefined ||
    processEnv.CLAUDE_CODE_USE_GEMINI !== undefined ||
    processEnv.CLAUDE_CODE_USE_BEDROCK !== undefined ||
    processEnv.CLAUDE_CODE_USE_VERTEX !== undefined ||
@@ -405,6 +420,8 @@ export async function buildLaunchEnv(options: {
  processEnv?: NodeJS.ProcessEnv
  getOllamaChatBaseUrl?: (baseUrl?: string) => string
  resolveOllamaDefaultModel?: (goal: RecommendationGoal) => Promise<string>
  getAtomicChatChatBaseUrl?: (baseUrl?: string) => string
  resolveAtomicChatDefaultModel?: () => Promise<string | null>
}): Promise<NodeJS.ProcessEnv> {
  const processEnv = options.processEnv ?? process.env
  const persistedEnv =
@@ -456,6 +473,7 @@ export async function buildLaunchEnv(options: {
  }

  delete env.CLAUDE_CODE_USE_OPENAI
  delete env.CLAUDE_CODE_USE_GITHUB

  env.GEMINI_MODEL =
    shellGeminiModel ||
@@ -490,6 +508,7 @@ export async function buildLaunchEnv(options: {
  }

  delete env.CLAUDE_CODE_USE_GEMINI
  delete env.CLAUDE_CODE_USE_GITHUB
  delete env.GEMINI_API_KEY
  delete env.GEMINI_MODEL
  delete env.GEMINI_BASE_URL
@@ -514,6 +533,26 @@ export async function buildLaunchEnv(options: {
    return env
  }

  if (options.profile === 'atomic-chat') {
    const getAtomicChatBaseUrl =
      options.getAtomicChatChatBaseUrl ?? (() => 'http://127.0.0.1:1337/v1')
    const resolveModel =
      options.resolveAtomicChatDefaultModel ?? (async () => null as string | null)

    env.OPENAI_BASE_URL = persistedEnv.OPENAI_BASE_URL || getAtomicChatBaseUrl()
    env.OPENAI_MODEL =
      persistedEnv.OPENAI_MODEL ||
      (await resolveModel()) ||
      ''

    delete env.OPENAI_API_KEY
    delete env.CODEX_API_KEY
    delete env.CHATGPT_ACCOUNT_ID
    delete env.CODEX_ACCOUNT_ID

    return env
  }

  if (options.profile === 'codex') {
    env.OPENAI_BASE_URL =
      persistedOpenAIBaseUrl && isCodexBaseUrl(persistedOpenAIBaseUrl)

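For reference, what the atomic-chat profile hands the launcher (this mirrors the tests above; the import path is an assumption):

```ts
import { buildAtomicChatProfileEnv } from './providerProfiles.js' // path assumed

const env = buildAtomicChatProfileEnv('llama-3.1-8b', {
  getAtomicChatChatBaseUrl: () => 'http://127.0.0.1:1337/v1',
})
// env is { OPENAI_BASE_URL: 'http://127.0.0.1:1337/v1', OPENAI_MODEL: 'llama-3.1-8b' }
// and deliberately has no OPENAI_API_KEY entry; local servers need none.
```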
@@ -99,6 +99,18 @@ const TEAMMATE_ENV_VARS = [
  'CLAUDE_CODE_USE_BEDROCK',
  'CLAUDE_CODE_USE_VERTEX',
  'CLAUDE_CODE_USE_FOUNDRY',
  'CLAUDE_CODE_USE_GITHUB',
  'CLAUDE_CODE_USE_GEMINI',
  'CLAUDE_CODE_USE_OPENAI',
  'GITHUB_TOKEN',
  'GH_TOKEN',
  'OPENAI_API_KEY',
  'OPENAI_BASE_URL',
  'OPENAI_MODEL',
  'GEMINI_API_KEY',
  'GEMINI_BASE_URL',
  'GEMINI_MODEL',
  'GOOGLE_API_KEY',
  // Custom API endpoint
  'ANTHROPIC_BASE_URL',
  // Config directory override

130 test_atomic_chat_provider.py Normal file
@@ -0,0 +1,130 @@
"""
test_atomic_chat_provider.py
Run: pytest test_atomic_chat_provider.py -v
"""

import pytest
from unittest.mock import AsyncMock, MagicMock, patch
from atomic_chat_provider import (
    atomic_chat,
    list_atomic_chat_models,
    check_atomic_chat_running,
)


@pytest.mark.asyncio
async def test_atomic_chat_running_true():
    mock_response = MagicMock()
    mock_response.status_code = 200
    with patch("atomic_chat_provider.httpx.AsyncClient") as MockClient:
        MockClient.return_value.__aenter__.return_value.get = AsyncMock(return_value=mock_response)
        result = await check_atomic_chat_running()
        assert result is True


@pytest.mark.asyncio
async def test_atomic_chat_running_false_on_exception():
    with patch("atomic_chat_provider.httpx.AsyncClient") as MockClient:
        MockClient.return_value.__aenter__.return_value.get = AsyncMock(side_effect=Exception("refused"))
        result = await check_atomic_chat_running()
        assert result is False


@pytest.mark.asyncio
async def test_list_models_returns_ids():
    mock_response = MagicMock()
    mock_response.status_code = 200
    mock_response.json.return_value = {
        "data": [{"id": "llama-3.1-8b"}, {"id": "mistral-7b"}],
    }
    mock_response.raise_for_status = MagicMock()
    with patch("atomic_chat_provider.httpx.AsyncClient") as MockClient:
        MockClient.return_value.__aenter__.return_value.get = AsyncMock(return_value=mock_response)
        models = await list_atomic_chat_models()
        assert "llama-3.1-8b" in models
        assert "mistral-7b" in models


@pytest.mark.asyncio
async def test_list_models_empty_on_failure():
    with patch("atomic_chat_provider.httpx.AsyncClient") as MockClient:
        MockClient.return_value.__aenter__.return_value.get = AsyncMock(side_effect=Exception("down"))
        models = await list_atomic_chat_models()
        assert models == []


@pytest.mark.asyncio
async def test_atomic_chat_returns_anthropic_format():
    mock_response = MagicMock()
    mock_response.raise_for_status = MagicMock()
    mock_response.json.return_value = {
        "id": "chatcmpl-abc123",
        "choices": [{"message": {"content": "42 is the answer."}}],
        "usage": {"prompt_tokens": 10, "completion_tokens": 8},
    }
    with patch("atomic_chat_provider.httpx.AsyncClient") as MockClient:
        MockClient.return_value.__aenter__.return_value.post = AsyncMock(return_value=mock_response)
        result = await atomic_chat(
            model="llama-3.1-8b",
            messages=[{"role": "user", "content": "What is 6*7?"}],
        )
        assert result["type"] == "message"
        assert result["role"] == "assistant"
        assert "42" in result["content"][0]["text"]
        assert result["usage"]["input_tokens"] == 10
        assert result["usage"]["output_tokens"] == 8


@pytest.mark.asyncio
async def test_atomic_chat_prepends_system():
    captured = {}

    async def mock_post(url, json=None, **kwargs):
        captured.update(json or {})
        m = MagicMock()
        m.raise_for_status = MagicMock()
        m.json.return_value = {
            "id": "chatcmpl-xyz",
            "choices": [{"message": {"content": "ok"}}],
            "usage": {"prompt_tokens": 1, "completion_tokens": 1},
        }
        return m

    with patch("atomic_chat_provider.httpx.AsyncClient") as MockClient:
        MockClient.return_value.__aenter__.return_value.post = mock_post
        await atomic_chat(
            model="llama-3.1-8b",
            messages=[{"role": "user", "content": "Hi"}],
            system="Be helpful.",
        )
        assert captured["messages"][0]["role"] == "system"
        assert "helpful" in captured["messages"][0]["content"]


@pytest.mark.asyncio
async def test_atomic_chat_sends_correct_payload():
    captured = {}

    async def mock_post(url, json=None, **kwargs):
        captured.update(json or {})
        m = MagicMock()
        m.raise_for_status = MagicMock()
        m.json.return_value = {
            "id": "chatcmpl-xyz",
            "choices": [{"message": {"content": "ok"}}],
            "usage": {"prompt_tokens": 1, "completion_tokens": 1},
        }
        return m

    with patch("atomic_chat_provider.httpx.AsyncClient") as MockClient:
        MockClient.return_value.__aenter__.return_value.post = mock_post
        await atomic_chat(
            model="test-model",
            messages=[{"role": "user", "content": "Test"}],
            max_tokens=2048,
            temperature=0.5,
        )
        assert captured["model"] == "test-model"
        assert captured["max_tokens"] == 2048
        assert captured["temperature"] == 0.5
        assert captured["stream"] is False