Compare commits

..

5 Commits

Author SHA1 Message Date
OpenClaude Worker 3
af5bb8fed8 fix: gate startup checks strictly on first submission, remove grace period (issue #363)
As gnanam1990 pointed out, the 3s grace period still allows the failure
mode: if a user pauses for a few seconds before typing, startup checks
fire and recommendation dialogs steal focus. A grace period is still a
timing mitigation, not a reliable fix.

New approach: startup checks only run after the user has submitted their
first message (submitCount > 0). No grace period, no timeout. This
guarantees the prompt gets first interaction — no dialog can steal focus
before the user has actually used the CLI.

If the user never submits a message, startup checks never run. That's
acceptable because with no user interaction there's no need for plugin
installations or marketplace seeding.
2026-04-08 11:55:23 +05:30
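A minimal sketch of the gate described in the commit above, assuming hypothetical names (performStartupChecks, a submitCount state, a startedRef guard); the real REPL wiring may differ:
```typescript
import { useEffect, useRef } from 'react'

// Hypothetical stand-in for the real startup routine referenced in the commit.
declare function performStartupChecks(): Promise<void>

// Run startup checks only after the user's first submission: no timer, no grace period.
export function useStartupChecksAfterFirstSubmit(submitCount: number | undefined): void {
  const startedRef = useRef(false)
  useEffect(() => {
    if ((submitCount ?? 0) === 0) return // user has not submitted yet; never steal focus
    if (startedRef.current) return       // run at most once per session
    startedRef.current = true
    void performStartupChecks()
  }, [submitCount])
}
```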
OpenClaude Worker 3
ad76b1174a fix: move startup checks after submitCount declaration to avoid temporal dead zone
Code quality bot flagged that submitCount was used before its declaration.
Moved the entire startup checks block to after the submitCount useState
declaration. Also added nullish coalescing (submitCount ?? 0) per bot
suggestion.
2026-04-08 11:47:26 +05:30
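For illustration, a minimal sketch of the ordering problem the bot flagged (component and variable names are placeholders, not the real REPL code): the useEffect dependency array is evaluated at the call site, so referencing submitCount before its declaration throws a ReferenceError (temporal dead zone).
```typescript
import { useEffect, useState } from 'react'

export function Repl(): null {
  // useEffect(() => { /* startup checks */ }, [submitCount]) // ReferenceError if placed here
  const [submitCount] = useState<number | undefined>(0)
  useEffect(() => {
    void (submitCount ?? 0) // safe: the effect now sits after the declaration it reads
  }, [submitCount])
  return null
}
```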
OpenClaude Worker 3
c457d9db3c fix: gate startup checks on prompt readiness, not just a timeout (issue #363)
The previous approach used a fixed 1500ms timeout, but as gnanam1990
pointed out, if a user pauses for >1.5s before typing the timer can
still fire and recommendation dialogs can steal focus. This is a
timing mitigation, not a reliable fix.

New approach: gate startup checks on actual prompt readiness:
1. After first message submission (submitCount > 0) — always safe
2. After grace period (3s) elapsed AND user is idle — safe because
   no dialog will interrupt an idle user who hasn't started typing
3. While the user is actively typing — deferred until they stop

This ensures startup checks never steal focus from a prompt the user
is about to type into, regardless of how long they pause before typing.

Also removes the old STARTUP_CHECK_DELAY_MS constant in favor of
STARTUP_GRACE_PERIOD_MS with clearer semantics.
2026-04-08 11:39:21 +05:30
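A sketch of the readiness conditions listed above (later superseded by the stricter submitCount-only gate in the newest commit); the constant and flag names are assumptions:
```typescript
const STARTUP_GRACE_PERIOD_MS = 3000

// Returns true once it is safe to run startup checks without stealing prompt focus.
function isPromptReadyForStartupChecks(opts: {
  submitCount: number
  isUserTyping: boolean
  msSinceMount: number
}): boolean {
  if (opts.submitCount > 0) return true                // 1. always safe after first submission
  if (opts.isUserTyping) return false                  // 3. defer while the user is typing
  return opts.msSinceMount >= STARTUP_GRACE_PERIOD_MS  // 2. grace period elapsed and user idle
}
```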
OpenClaude Worker 3
d1f79088a1 fix: move startup checks effect after promptTypingSuppressionActive declaration
Fixes temporal dead zone warning flagged by code-quality bot.
promptTypingSuppressionActive is declared on line ~1340 but the
useEffect was on line ~800, causing a reference-before-declaration.
Also adds missing semicolons for style consistency.
2026-04-08 11:35:48 +05:30
OpenClaude Worker 3
106f85d0bf fix: defer startup plugin checks and suppress recommendation dialogs during startup window (issue #363)
Root cause: performStartupChecks() fires immediately on REPL mount,
triggering plugin loading which populates trackedFiles, which triggers
useLspPluginRecommendation to surface an LSP recommendation dialog.
Since promptTypingSuppressionActive is false before any user input,
getFocusedInputDialog() returns the dialog, unmounting PromptInput
entirely and making the CLI appear frozen.

Fix: Two-pronged approach:
1. Defer performStartupChecks by 1500ms and gate on
   promptTypingSuppressionActive so startup checks don't run while
   the user is typing or has early input buffered
2. Suppress lower-priority startup dialogs (LSP recommendation,
   plugin hint, desktop upsell) until startupChecksStartedRef is
   true, preventing them from stealing focus during the vulnerable
   startup window

This also explains why --bare mode and disabling plugins work:
--bare mode skips plugin loading entirely, and disabling the
autoresearch plugin eliminates the LSP match, so lspRecommendation
stays null and PromptInput renders normally.
2026-04-08 11:24:36 +05:30
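A rough sketch of the two prongs, with names taken from the commit message above (the actual dialog-priority logic in the REPL is more involved than this):
```typescript
// Prong 1: defer the startup checks themselves.
const STARTUP_CHECK_DELAY_MS = 1500

function scheduleStartupChecks(opts: {
  promptTypingSuppressionActive: boolean
  startupChecksStartedRef: { current: boolean }
  run: () => Promise<void>
}): void {
  setTimeout(() => {
    if (opts.promptTypingSuppressionActive) return // user is typing or has input buffered
    opts.startupChecksStartedRef.current = true
    void opts.run()
  }, STARTUP_CHECK_DELAY_MS)
}

// Prong 2: hold back lower-priority startup dialogs until startup checks have begun,
// so they cannot unmount PromptInput during the vulnerable startup window.
function getFocusedInputDialog<T>(opts: {
  startupChecksStarted: boolean
  lspRecommendation: T | null
  pluginHint: T | null
  desktopUpsell: T | null
}): T | null {
  if (!opts.startupChecksStarted) return null
  return opts.lspRecommendation ?? opts.pluginHint ?? opts.desktopUpsell ?? null
}
```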
153 changed files with 842 additions and 9674 deletions

View File

@@ -248,93 +248,3 @@ ANTHROPIC_API_KEY=sk-ant-your-key-here
# Enable debug logging # Enable debug logging
# CLAUDE_DEBUG=1 # CLAUDE_DEBUG=1
# =============================================================================
# WEB SEARCH (OPTIONAL)
# =============================================================================
# OpenClaude includes a web search tool. By default it uses DuckDuckGo (free)
# or the provider's native search (Anthropic firstParty / vertex).
#
# Set one API key below to enable a provider. That's it.
# ── Provider API keys — set ONE of these ────────────────────────────
# Tavily (AI-optimized search, recommended)
# TAVILY_API_KEY=tvly-your-key-here
# Exa (neural/semantic search)
# EXA_API_KEY=your-exa-key-here
# You.com (RAG-ready snippets)
# YOU_API_KEY=your-you-key-here
# Jina (s.jina.ai endpoint)
# JINA_API_KEY=your-jina-key-here
# Bing Web Search
# BING_API_KEY=your-bing-key-here
# Mojeek (privacy-focused)
# MOJEEK_API_KEY=your-mojeek-key-here
# Linkup
# LINKUP_API_KEY=your-linkup-key-here
# Firecrawl (premium, uses @mendable/firecrawl-js)
# FIRECRAWL_API_KEY=fc-your-key-here
# ── Provider selection mode ─────────────────────────────────────────
#
# WEB_SEARCH_PROVIDER controls fallback behavior:
#
# "auto" (default) — try all configured providers, fall through on failure
# "custom" — custom API only, throw on failure (NOT in auto chain)
# "firecrawl" — firecrawl only
# "tavily" — tavily only
# "exa" — exa only
# "you" — you.com only
# "jina" — jina only
# "bing" — bing only
# "mojeek" — mojeek only
# "linkup" — linkup only
# "ddg" — duckduckgo only
# "native" — anthropic native / codex only
#
# Auto mode priority: firecrawl → tavily → exa → you → jina → bing → mojeek →
# linkup → ddg
# Note: "custom" is NOT in the auto chain. To use the custom API provider,
# you must explicitly set WEB_SEARCH_PROVIDER=custom.
#
# WEB_SEARCH_PROVIDER=auto
# ── Built-in custom API presets ─────────────────────────────────────
#
# Use with WEB_KEY for the API key:
# WEB_PROVIDER=searxng|google|brave|serpapi
# WEB_KEY=your-api-key-here
# ── Custom API endpoint (advanced) ──────────────────────────────────
#
# WEB_SEARCH_API — base URL of your search endpoint
# WEB_QUERY_PARAM — query parameter name (default: "q")
# WEB_METHOD — GET or POST (default: GET)
# WEB_PARAMS — extra static query params as JSON: {"lang":"en","count":"10"}
# WEB_URL_TEMPLATE — URL template with {query} for path embedding
# WEB_BODY_TEMPLATE — custom POST body with {query} placeholder
# WEB_AUTH_HEADER — header name for API key (default: "Authorization")
# WEB_AUTH_SCHEME — prefix before key (default: "Bearer")
# WEB_HEADERS — extra headers as "Name: value; Name2: value2"
# WEB_JSON_PATH — dot-path to results array in response
# ── Custom API security guardrails ──────────────────────────────────
#
# The custom provider enforces security guardrails by default.
# Override these only if you understand the risks.
#
# WEB_CUSTOM_TIMEOUT_SEC=15 — request timeout in seconds (default 15)
# WEB_CUSTOM_MAX_BODY_KB=300 — max POST body size in KB (default 300)
# WEB_CUSTOM_ALLOW_ARBITRARY_HEADERS=false — set "true" to use non-standard headers
# WEB_CUSTOM_ALLOW_HTTP=false — set "true" to allow http:// URLs
# WEB_CUSTOM_ALLOW_PRIVATE=false — set "true" to target localhost/private IPs
# (needed for self-hosted SearXNG)
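A minimal TypeScript sketch of the fallback behavior these removed comments describe, using the provider names and key variables documented above; the function names are illustrative, not the shipped implementation:
```typescript
const AUTO_CHAIN = [
  'firecrawl', 'tavily', 'exa', 'you', 'jina', 'bing', 'mojeek', 'linkup', 'ddg',
] as const

function hasKey(provider: string, env: NodeJS.ProcessEnv): boolean {
  const keys: Record<string, string | undefined> = {
    firecrawl: env.FIRECRAWL_API_KEY, tavily: env.TAVILY_API_KEY, exa: env.EXA_API_KEY,
    you: env.YOU_API_KEY, jina: env.JINA_API_KEY, bing: env.BING_API_KEY,
    mojeek: env.MOJEEK_API_KEY, linkup: env.LINKUP_API_KEY,
  }
  return provider === 'ddg' || Boolean(keys[provider]) // DuckDuckGo needs no key
}

// "auto" tries every configured provider in priority order; any other value pins a
// single provider. "custom" is only used when requested explicitly, never in the chain.
export function resolveSearchProviders(env: NodeJS.ProcessEnv): string[] {
  const mode = (env.WEB_SEARCH_PROVIDER ?? 'auto').toLowerCase()
  if (mode !== 'auto') return [mode]
  return AUTO_CHAIN.filter(p => hasKey(p, env))
}
```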

View File

@@ -29,13 +29,6 @@ jobs:
with: with:
bun-version: 1.3.11 bun-version: 1.3.11
- name: Set up Python
uses: actions/setup-python@0a5c61591373683505ea898e09a3ea4f39ef2b9c # v5.0.0
with:
python-version: "3.12"
cache: "pip"
cache-dependency-path: python/requirements.txt
- name: Install dependencies - name: Install dependencies
run: bun install --frozen-lockfile run: bun install --frozen-lockfile
@@ -45,12 +38,6 @@ jobs:
- name: Full unit test suite - name: Full unit test suite
run: bun test --max-concurrency=1 run: bun test --max-concurrency=1
- name: Install Python test dependencies
run: python -m pip install -r python/requirements.txt
- name: Python unit tests
run: python -m pytest -q python/tests
- name: Suspicious PR intent scan - name: Suspicious PR intent scan
run: bun run security:pr-scan -- --base ${{ github.event.pull_request.base.sha || 'origin/main' }} run: bun run security:pr-scan -- --base ${{ github.event.pull_request.base.sha || 'origin/main' }}
- name: Provider tests - name: Provider tests

View File

@@ -137,9 +137,10 @@ export OPENAI_MODEL=llama-3.3-70b-versatile
### Mistral ### Mistral
```bash ```bash
export CLAUDE_CODE_USE_MISTRAL=1 export CLAUDE_CODE_USE_OPENAI=1
export MISTRAL_API_KEY=... export OPENAI_API_KEY=...
export MISTRAL_MODEL=mistral-large-latest export OPENAI_BASE_URL=https://api.mistral.ai/v1
export OPENAI_MODEL=mistral-large-latest
``` ```
### Azure OpenAI ### Azure OpenAI

View File

@@ -1,3 +0,0 @@
pytest==7.4.4
pytest-asyncio==0.23.3
httpx==0.25.2

View File

@@ -112,14 +112,6 @@ def build_default_providers() -> list[Provider]:
big_model=big if "gemini" in big else "gemini-2.5-pro", big_model=big if "gemini" in big else "gemini-2.5-pro",
small_model=small if "gemini" in small else "gemini-2.0-flash", small_model=small if "gemini" in small else "gemini-2.0-flash",
), ),
Provider(
name="mistral",
ping_url="",
api_key_env="MISTRAL_API_KEY",
cost_per_1k_tokens=0.0001,
big_model=big if "mistral" in big else "devstral-latest",
small_model=small if "small" in small else "ministral-3b-latest",
),
Provider( Provider(
name="ollama", name="ollama",
ping_url=f"{ollama_url}/api/tags", ping_url=f"{ollama_url}/api/tags",

View File

@@ -11,7 +11,6 @@ import {
buildAtomicChatProfileEnv, buildAtomicChatProfileEnv,
buildCodexProfileEnv, buildCodexProfileEnv,
buildGeminiProfileEnv, buildGeminiProfileEnv,
buildMistralProfileEnv,
buildOllamaProfileEnv, buildOllamaProfileEnv,
buildOpenAIProfileEnv, buildOpenAIProfileEnv,
createProfileFile, createProfileFile,
@@ -38,7 +37,7 @@ function parseArg(name: string): string | null {
function parseProviderArg(): ProviderProfile | 'auto' { function parseProviderArg(): ProviderProfile | 'auto' {
const p = parseArg('--provider')?.toLowerCase() const p = parseArg('--provider')?.toLowerCase()
if (p === 'openai' || p === 'ollama' || p === 'codex' || p === 'gemini' || p === 'mistral' || p === 'atomic-chat') return p if (p === 'openai' || p === 'ollama' || p === 'codex' || p === 'gemini' || p === 'atomic-chat') return p
return 'auto' return 'auto'
} }
@@ -91,21 +90,6 @@ async function main(): Promise<void> {
process.exit(1) process.exit(1)
} }
env = builtEnv
} else if (selected === 'mistral') {
const builtEnv = buildMistralProfileEnv({
model: argModel || null,
baseUrl: argBaseUrl || null,
apiKey: argApiKey || null,
processEnv: process.env,
})
if (!builtEnv) {
console.error('Mistral profile requires an API key. Use --api-key or set MISTRAL_API_KEY.')
console.error('Get a free key at: https://admin.mistral.ai/organization/api-keys')
process.exit(1)
}
env = builtEnv env = builtEnv
} else if (selected === 'ollama') { } else if (selected === 'ollama') {
resolvedOllamaModel ??= await resolveOllamaModel(argModel, argBaseUrl, goal) resolvedOllamaModel ??= await resolveOllamaModel(argModel, argBaseUrl, goal)
@@ -185,7 +169,7 @@ async function main(): Promise<void> {
console.log(`Saved profile: ${selected}`) console.log(`Saved profile: ${selected}`)
console.log(`Goal: ${goal}`) console.log(`Goal: ${goal}`)
console.log(`Model: ${profile.env.GEMINI_MODEL || profile.env.MISTRAL_MODEL || profile.env.OPENAI_MODEL || getGoalDefaultOpenAIModel(goal)}`) console.log(`Model: ${profile.env.GEMINI_MODEL || profile.env.OPENAI_MODEL || getGoalDefaultOpenAIModel(goal)}`)
console.log(`Path: ${outputPath}`) console.log(`Path: ${outputPath}`)
console.log('Next: bun run dev:profile') console.log('Next: bun run dev:profile')
} }

View File

@@ -50,7 +50,7 @@ function parseLaunchOptions(argv: string[]): LaunchOptions {
continue continue
} }
if ((lower === 'auto' || lower === 'openai' || lower === 'ollama' || lower === 'codex' || lower === 'gemini' || lower ==='mistral' || lower === 'atomic-chat') && requestedProfile === 'auto') { if ((lower === 'auto' || lower === 'openai' || lower === 'ollama' || lower === 'codex' || lower === 'gemini' || lower === 'atomic-chat') && requestedProfile === 'auto') {
requestedProfile = lower as ProviderProfile | 'auto' requestedProfile = lower as ProviderProfile | 'auto'
continue continue
} }
@@ -124,8 +124,6 @@ function printSummary(profile: ProviderProfile): void {
console.log(`Launching profile: ${profile}`) console.log(`Launching profile: ${profile}`)
if (profile === 'gemini') { if (profile === 'gemini') {
console.log('Using configured Gemini provider settings.') console.log('Using configured Gemini provider settings.')
} else if (profile === 'mistral') {
console.log('Using configured Mistral provider settings.')
} else if (profile === 'codex') { } else if (profile === 'codex') {
console.log('Using configured Codex/OpenAI-compatible provider settings.') console.log('Using configured Codex/OpenAI-compatible provider settings.')
} else if (profile === 'atomic-chat') { } else if (profile === 'atomic-chat') {
@@ -141,7 +139,7 @@ async function main(): Promise<void> {
const options = parseLaunchOptions(process.argv.slice(2)) const options = parseLaunchOptions(process.argv.slice(2))
const requestedProfile = options.requestedProfile const requestedProfile = options.requestedProfile
if (!requestedProfile) { if (!requestedProfile) {
console.error('Usage: bun run scripts/provider-launch.ts [openai|ollama|codex|gemini|mistral|atomic-chat|mistral|auto] [--fast] [--goal <latency|balanced|coding>] [-- <cli args>]') console.error('Usage: bun run scripts/provider-launch.ts [openai|ollama|codex|gemini|atomic-chat|auto] [--fast] [--goal <latency|balanced|coding>] [-- <cli args>]')
process.exit(1) process.exit(1)
} }
@@ -207,11 +205,6 @@ async function main(): Promise<void> {
process.exit(1) process.exit(1)
} }
if (profile === 'mistral' && !env.MISTRAL_API_KEY) {
console.error('MISTRAL_API_KEY is required for mistral profile. Run: bun run profile:init -- --provider mistral --api-key <key>')
process.exit(1)
}
if (profile === 'openai' && (!env.OPENAI_API_KEY || env.OPENAI_API_KEY === 'SUA_CHAVE')) { if (profile === 'openai' && (!env.OPENAI_API_KEY || env.OPENAI_API_KEY === 'SUA_CHAVE')) {
console.error('OPENAI_API_KEY is required for openai profile and cannot be SUA_CHAVE. Run: bun run profile:init -- --provider openai --api-key <key>') console.error('OPENAI_API_KEY is required for openai profile and cannot be SUA_CHAVE. Run: bun run profile:init -- --provider openai --api-key <key>')
process.exit(1) process.exit(1)

View File

@@ -118,18 +118,14 @@ function isLocalBaseUrl(baseUrl: string): boolean {
} }
const GEMINI_DEFAULT_BASE_URL = 'https://generativelanguage.googleapis.com/v1beta/openai' const GEMINI_DEFAULT_BASE_URL = 'https://generativelanguage.googleapis.com/v1beta/openai'
const MISTRAL_DEFAULT_BASE_URL = 'https://api.mistral.ai/v1' const GITHUB_MODELS_DEFAULT_BASE = 'https://models.github.ai/inference'
const GITHUB_COPILOT_BASE = 'https://api.githubcopilot.com'
function currentBaseUrl(): string { function currentBaseUrl(): string {
if (isTruthy(process.env.CLAUDE_CODE_USE_GEMINI)) { if (isTruthy(process.env.CLAUDE_CODE_USE_GEMINI)) {
return process.env.GEMINI_BASE_URL ?? GEMINI_DEFAULT_BASE_URL return process.env.GEMINI_BASE_URL ?? GEMINI_DEFAULT_BASE_URL
} }
if (isTruthy(process.env.CLAUDE_CODE_USE_MISTRAL)) {
return process.env.MISTRAL_BASE_URL ?? MISTRAL_DEFAULT_BASE_URL
}
if (isTruthy(process.env.CLAUDE_CODE_USE_GITHUB)) { if (isTruthy(process.env.CLAUDE_CODE_USE_GITHUB)) {
return process.env.OPENAI_BASE_URL ?? GITHUB_COPILOT_BASE return process.env.OPENAI_BASE_URL ?? GITHUB_MODELS_DEFAULT_BASE
} }
return process.env.OPENAI_BASE_URL ?? 'https://api.openai.com/v1' return process.env.OPENAI_BASE_URL ?? 'https://api.openai.com/v1'
} }
@@ -159,34 +155,9 @@ function checkGeminiEnv(): CheckResult[] {
return results return results
} }
function checkMistralEnv(): CheckResult[] {
const results: CheckResult[] = []
const model = process.env.MISTRAL_MODEL
const key = process.env.MISTRAL_API_KEY
const baseUrl = process.env.MISTRAL_BASE_URL ?? MISTRAL_DEFAULT_BASE_URL
results.push(pass('Provider mode', 'Mistral provider enabled.'))
if (!model) {
results.push(pass('MISTRAL_MODEL', 'Not set. Default will be used at runtime.'))
} else {
results.push(pass('MISTRAL_MODEL', model))
}
results.push(pass('MISTRAL_BASE_URL', baseUrl))
if (!key) {
results.push(fail('MISTRAL_API_KEY', 'Missing. Set MISTRAL_API_KEY.'))
} else {
results.push(pass('MISTRAL_API_KEY', 'Configured.'))
}
return results
}
function checkGithubEnv(): CheckResult[] { function checkGithubEnv(): CheckResult[] {
const results: CheckResult[] = [] const results: CheckResult[] = []
const baseUrl = process.env.OPENAI_BASE_URL ?? GITHUB_COPILOT_BASE const baseUrl = process.env.OPENAI_BASE_URL ?? GITHUB_MODELS_DEFAULT_BASE
results.push(pass('Provider mode', 'GitHub Models provider enabled.')) results.push(pass('Provider mode', 'GitHub Models provider enabled.'))
const token = process.env.GITHUB_TOKEN ?? process.env.GH_TOKEN const token = process.env.GITHUB_TOKEN ?? process.env.GH_TOKEN
@@ -215,17 +186,12 @@ function checkOpenAIEnv(): CheckResult[] {
const results: CheckResult[] = [] const results: CheckResult[] = []
const useGemini = isTruthy(process.env.CLAUDE_CODE_USE_GEMINI) const useGemini = isTruthy(process.env.CLAUDE_CODE_USE_GEMINI)
const useGithub = isTruthy(process.env.CLAUDE_CODE_USE_GITHUB) const useGithub = isTruthy(process.env.CLAUDE_CODE_USE_GITHUB)
const useMistral = isTruthy(process.env.CLAUDE_CODE_USE_MISTRAL)
const useOpenAI = isTruthy(process.env.CLAUDE_CODE_USE_OPENAI) const useOpenAI = isTruthy(process.env.CLAUDE_CODE_USE_OPENAI)
if (useGemini) { if (useGemini) {
return checkGeminiEnv() return checkGeminiEnv()
} }
if (useMistral) {
return checkMistralEnv()
}
if (useGithub && !useOpenAI) { if (useGithub && !useOpenAI) {
return checkGithubEnv() return checkGithubEnv()
} }
@@ -302,9 +268,8 @@ async function checkBaseUrlReachability(): Promise<CheckResult> {
const useGemini = isTruthy(process.env.CLAUDE_CODE_USE_GEMINI) const useGemini = isTruthy(process.env.CLAUDE_CODE_USE_GEMINI)
const useOpenAI = isTruthy(process.env.CLAUDE_CODE_USE_OPENAI) const useOpenAI = isTruthy(process.env.CLAUDE_CODE_USE_OPENAI)
const useGithub = isTruthy(process.env.CLAUDE_CODE_USE_GITHUB) const useGithub = isTruthy(process.env.CLAUDE_CODE_USE_GITHUB)
const useMistral = isTruthy(process.env.CLAUDE_CODE_USE_MISTRAL)
if (!useGemini && !useOpenAI && !useGithub && !useMistral) { if (!useGemini && !useOpenAI && !useGithub) {
return pass('Provider reachability', 'Skipped (OpenAI-compatible mode disabled).') return pass('Provider reachability', 'Skipped (OpenAI-compatible mode disabled).')
} }
@@ -361,8 +326,6 @@ async function checkBaseUrlReachability(): Promise<CheckResult> {
}) })
} else if (useGemini && (process.env.GEMINI_API_KEY ?? process.env.GOOGLE_API_KEY)) { } else if (useGemini && (process.env.GEMINI_API_KEY ?? process.env.GOOGLE_API_KEY)) {
headers.Authorization = `Bearer ${process.env.GEMINI_API_KEY ?? process.env.GOOGLE_API_KEY}` headers.Authorization = `Bearer ${process.env.GEMINI_API_KEY ?? process.env.GOOGLE_API_KEY}`
} else if (useMistral && process.env.MISTRAL_API_KEY) {
headers.Authorization = `Bearer ${process.env.MISTRAL_API_KEY}`
} else if (process.env.OPENAI_API_KEY) { } else if (process.env.OPENAI_API_KEY) {
headers.Authorization = `Bearer ${process.env.OPENAI_API_KEY}` headers.Authorization = `Bearer ${process.env.OPENAI_API_KEY}`
} }
@@ -410,8 +373,7 @@ function checkOllamaProcessorMode(): CheckResult {
if ( if (
!isTruthy(process.env.CLAUDE_CODE_USE_OPENAI) || !isTruthy(process.env.CLAUDE_CODE_USE_OPENAI) ||
isTruthy(process.env.CLAUDE_CODE_USE_GEMINI) || isTruthy(process.env.CLAUDE_CODE_USE_GEMINI) ||
isTruthy(process.env.CLAUDE_CODE_USE_GITHUB) || isTruthy(process.env.CLAUDE_CODE_USE_GITHUB)
isTruthy(process.env.CLAUDE_CODE_USE_MISTRAL)
) { ) {
return pass('Ollama processor mode', 'Skipped (OpenAI-compatible mode disabled).') return pass('Ollama processor mode', 'Skipped (OpenAI-compatible mode disabled).')
} }
@@ -463,14 +425,6 @@ function serializeSafeEnvSummary(): Record<string, string | boolean> {
GEMINI_API_KEY_SET: Boolean(process.env.GEMINI_API_KEY ?? process.env.GOOGLE_API_KEY), GEMINI_API_KEY_SET: Boolean(process.env.GEMINI_API_KEY ?? process.env.GOOGLE_API_KEY),
} }
} }
if (isTruthy(process.env.CLAUDE_CODE_USE_MISTRAL)) {
return {
CLAUDE_CODE_USE_MISTRAL: true,
MISTRAL_MODEL: process.env.MISTRAL_MODEL ?? '(unset, default: devstral-latest)',
MISTRAL_BASE_URL: process.env.MISTRAL_BASE_URL ?? 'https://api.mistral.ai/v1',
MISTRAL_API_KEY_SET: Boolean(process.env.MISTRAL_API_KEY),
}
}
if ( if (
isTruthy(process.env.CLAUDE_CODE_USE_GITHUB) && isTruthy(process.env.CLAUDE_CODE_USE_GITHUB) &&
!isTruthy(process.env.CLAUDE_CODE_USE_OPENAI) !isTruthy(process.env.CLAUDE_CODE_USE_OPENAI)
@@ -481,7 +435,7 @@ function serializeSafeEnvSummary(): Record<string, string | boolean> {
process.env.OPENAI_MODEL ?? process.env.OPENAI_MODEL ??
'(unset, default: github:copilot → openai/gpt-4.1)', '(unset, default: github:copilot → openai/gpt-4.1)',
OPENAI_BASE_URL: OPENAI_BASE_URL:
process.env.OPENAI_BASE_URL ?? GITHUB_COPILOT_BASE, process.env.OPENAI_BASE_URL ?? GITHUB_MODELS_DEFAULT_BASE,
GITHUB_TOKEN_SET: Boolean( GITHUB_TOKEN_SET: Boolean(
process.env.GITHUB_TOKEN ?? process.env.GH_TOKEN, process.env.GITHUB_TOKEN ?? process.env.GH_TOKEN,
), ),

View File

@@ -400,12 +400,12 @@ export async function update() {
if (useLocalUpdate) { if (useLocalUpdate) {
process.stderr.write('Try manually updating with:\n') process.stderr.write('Try manually updating with:\n')
process.stderr.write( process.stderr.write(
` cd ~/.openclaude/local && npm update ${MACRO.PACKAGE_URL}\n`, ` cd ~/.claude/local && npm update ${MACRO.PACKAGE_URL}\n`,
) )
} else { } else {
process.stderr.write('Try running with sudo or fix npm permissions\n') process.stderr.write('Try running with sudo or fix npm permissions\n')
process.stderr.write( process.stderr.write(
'Or consider using native installation with: openclaude install\n', 'Or consider using native installation with: claude install\n',
) )
} }
await gracefulShutdown(1) await gracefulShutdown(1)
@@ -415,11 +415,11 @@ export async function update() {
if (useLocalUpdate) { if (useLocalUpdate) {
process.stderr.write('Try manually updating with:\n') process.stderr.write('Try manually updating with:\n')
process.stderr.write( process.stderr.write(
` cd ~/.openclaude/local && npm update ${MACRO.PACKAGE_URL}\n`, ` cd ~/.claude/local && npm update ${MACRO.PACKAGE_URL}\n`,
) )
} else { } else {
process.stderr.write( process.stderr.write(
'Or consider using native installation with: openclaude install\n', 'Or consider using native installation with: claude install\n',
) )
} }
await gracefulShutdown(1) await gracefulShutdown(1)

View File

@@ -32,7 +32,6 @@ import logout from './commands/logout/index.js'
import installGitHubApp from './commands/install-github-app/index.js' import installGitHubApp from './commands/install-github-app/index.js'
import installSlackApp from './commands/install-slack-app/index.js' import installSlackApp from './commands/install-slack-app/index.js'
import breakCache from './commands/break-cache/index.js' import breakCache from './commands/break-cache/index.js'
import cacheProbe from './commands/cache-probe/index.js'
import mcp from './commands/mcp/index.js' import mcp from './commands/mcp/index.js'
import mobile from './commands/mobile/index.js' import mobile from './commands/mobile/index.js'
import onboarding from './commands/onboarding/index.js' import onboarding from './commands/onboarding/index.js'
@@ -137,7 +136,6 @@ import hooks from './commands/hooks/index.js'
import files from './commands/files/index.js' import files from './commands/files/index.js'
import branch from './commands/branch/index.js' import branch from './commands/branch/index.js'
import agents from './commands/agents/index.js' import agents from './commands/agents/index.js'
import autoFix from './commands/auto-fix.js'
import plugin from './commands/plugin/index.js' import plugin from './commands/plugin/index.js'
import reloadPlugins from './commands/reload-plugins/index.js' import reloadPlugins from './commands/reload-plugins/index.js'
import rewind from './commands/rewind/index.js' import rewind from './commands/rewind/index.js'
@@ -145,7 +143,6 @@ import heapDump from './commands/heapdump/index.js'
import mockLimits from './commands/mock-limits/index.js' import mockLimits from './commands/mock-limits/index.js'
import bridgeKick from './commands/bridge-kick.js' import bridgeKick from './commands/bridge-kick.js'
import version from './commands/version.js' import version from './commands/version.js'
import wiki from './commands/wiki/index.js'
import summary from './commands/summary/index.js' import summary from './commands/summary/index.js'
import { import {
resetLimits, resetLimits,
@@ -266,10 +263,8 @@ const COMMANDS = memoize((): Command[] => [
addDir, addDir,
advisor, advisor,
agents, agents,
autoFix,
branch, branch,
btw, btw,
cacheProbe,
chrome, chrome,
clear, clear,
color, color,
@@ -329,7 +324,6 @@ const COMMANDS = memoize((): Command[] => [
usage, usage,
usageReport, usageReport,
vim, vim,
wiki,
...(webCmd ? [webCmd] : []), ...(webCmd ? [webCmd] : []),
...(forkCmd ? [forkCmd] : []), ...(forkCmd ? [forkCmd] : []),
...(buddy ? [buddy] : []), ...(buddy ? [buddy] : []),

View File

@@ -1,25 +0,0 @@
import type { Command } from '../types/command.js'
const command: Command = {
name: 'auto-fix',
description: 'Configure auto-fix: run lint/test after AI edits',
isEnabled: () => true,
type: 'prompt',
progressMessage: 'Configuring auto-fix...',
contentLength: 0,
source: 'builtin',
async getPromptForCommand() {
return [
{
type: 'text',
text:
'The user wants to configure auto-fix settings. Auto-fix automatically runs lint and test commands after AI file edits, feeding errors back for self-repair.\n\n' +
'Current settings location: `.claude/settings.json` or `.claude/settings.local.json`\n\n' +
'Example configuration:\n```json\n{\n "autoFix": {\n "enabled": true,\n "lint": "eslint . --fix",\n "test": "bun test",\n "maxRetries": 3,\n "timeout": 30000\n }\n}\n```\n\n' +
'Ask the user what lint and test commands they use, then help them set up the configuration.',
},
]
},
}
export default command

View File

@@ -1,413 +0,0 @@
import { getSessionId } from '../../bootstrap/state.js'
import { resolveProviderRequest } from '../../services/api/providerConfig.js'
import type { LocalCommandCall } from '../../types/command.js'
import { logForDebugging } from '../../utils/debug.js'
import { isEnvTruthy } from '../../utils/envUtils.js'
import { hydrateGithubModelsTokenFromSecureStorage } from '../../utils/githubModelsCredentials.js'
import { getMainLoopModel } from '../../utils/model/model.js'
const COPILOT_HEADERS: Record<string, string> = {
'User-Agent': 'GitHubCopilotChat/0.26.7',
'Editor-Version': 'vscode/1.99.3',
'Editor-Plugin-Version': 'copilot-chat/0.26.7',
'Copilot-Integration-Id': 'vscode-chat',
}
// Large system prompt (~6000 chars, ~1500 tokens) to cross the 1024-token cache threshold
const SYSTEM_PROMPT = [
'You are a coding assistant. Answer concisely.',
'CONTEXT: User is working on a TypeScript project with Bun runtime.',
...Array.from(
{ length: 80 },
(_, i) =>
`Rule ${i + 1}: Follow best practices for TypeScript including strict typing, error handling, testing, and clean code. Prefer explicit types over any. Use const assertions. Await all async operations.`,
),
].join('\n\n')
const USER_MESSAGE = 'Say "hello" and nothing else.'
const DELAY_MS = 3000
/**
* Extract model family from a versioned model string.
* e.g. "gpt-5.4-0626" → "gpt-5.4", "codex-mini-latest" → "codex-mini"
*/
function getModelFamily(model: string | undefined): string {
if (!model) return 'unknown'
return model
.replace(/-\d{4,}$/, '')
.replace(/-latest$/, '')
.replace(/-preview$/, '')
}
function getField(obj: unknown, path: string): unknown {
return path
.split('.')
.reduce((o: any, k: string) => (o != null ? o[k] : undefined), obj)
}
interface ProbeResult {
label: string
status: number
elapsed: number
headers: Record<string, string>
usage: Record<string, unknown> | null
responseId: string | null
error: string | null
}
async function sendProbe(
url: string,
headers: Record<string, string>,
body: Record<string, unknown>,
label: string,
): Promise<ProbeResult> {
const start = Date.now()
let response: Response
try {
response = await fetch(url, {
method: 'POST',
headers,
body: JSON.stringify(body),
})
} catch (err: any) {
return {
label,
status: 0,
elapsed: Date.now() - start,
headers: {},
usage: null,
responseId: null,
error: err.message,
}
}
const elapsed = Date.now() - start
const respHeaders: Record<string, string> = {}
response.headers.forEach((value, key) => {
respHeaders[key] = value
})
if (!response.ok) {
const errorBody = await response.text().catch(() => '')
return {
label,
status: response.status,
elapsed,
headers: respHeaders,
usage: null,
responseId: null,
error: errorBody,
}
}
// Parse SSE stream for usage data
const text = await response.text()
let usage: Record<string, unknown> | null = null
let responseId: string | null = null
const isResponses = url.endsWith('/responses')
for (const chunk of text.split('\n\n')) {
const lines = chunk
.split('\n')
.map((l) => l.trim())
.filter(Boolean)
if (isResponses) {
const eventLine = lines.find((l) => l.startsWith('event: '))
const dataLines = lines.filter((l) => l.startsWith('data: '))
if (!eventLine || !dataLines.length) continue
const event = eventLine.slice(7).trim()
if (
event === 'response.completed' ||
event === 'response.incomplete'
) {
try {
const data = JSON.parse(
dataLines.map((l) => l.slice(6)).join('\n'),
)
usage = (data?.response?.usage as Record<string, unknown>) ?? null
responseId = (data?.response?.id as string) ?? null
} catch {}
}
} else {
for (const line of lines) {
if (!line.startsWith('data: ')) continue
const raw = line.slice(6).trim()
if (raw === '[DONE]') continue
try {
const data = JSON.parse(raw) as Record<string, unknown>
if (data.usage) {
usage = data.usage as Record<string, unknown>
responseId = (data.id as string) ?? null
}
} catch {}
}
}
}
return { label, status: response.status, elapsed, headers: respHeaders, usage, responseId, error: null }
}
function formatResult(r: ProbeResult): string {
const lines: string[] = [`--- ${r.label} ---`]
if (r.error) {
lines.push(` ERROR (HTTP ${r.status}): ${r.error.slice(0, 200)}`)
return lines.join('\n')
}
lines.push(`  HTTP ${r.status} in ${r.elapsed}ms`)
if (r.responseId) lines.push(` response.id: ${r.responseId}`)
if (r.usage) {
lines.push(' Usage:')
lines.push(` ${JSON.stringify(r.usage, null, 2).replace(/\n/g, '\n ')}`)
} else {
lines.push(' Usage: null')
}
// Interesting headers
for (const h of [
'openai-processing-ms',
'x-ratelimit-remaining',
'x-ratelimit-limit',
'x-ms-region',
'x-github-request-id',
'x-request-id',
]) {
if (r.headers[h]) lines.push(` ${h}: ${r.headers[h]}`)
}
return lines.join('\n')
}
export const call: LocalCommandCall = async (args) => {
const parts = (args ?? '').trim().split(/\s+/).filter(Boolean)
const noKey = parts.includes('--no-key')
const modelOverride = parts.find((p) => !p.startsWith('--')) || undefined
const modelStr = modelOverride ?? getMainLoopModel()
const request = resolveProviderRequest({ model: modelStr })
const isGithub = isEnvTruthy(process.env.CLAUDE_CODE_USE_GITHUB)
// Resolve API key the same way the OpenAI shim does
let apiKey = process.env.OPENAI_API_KEY ?? ''
if (!apiKey && isGithub) {
hydrateGithubModelsTokenFromSecureStorage()
apiKey =
process.env.OPENAI_API_KEY ??
process.env.GITHUB_TOKEN ??
process.env.GH_TOKEN ??
''
}
if (!apiKey) {
return {
type: 'text',
value:
'No API key found. Make sure you are in an active OpenAI-compatible or GitHub Copilot session.\n' +
'For GitHub Copilot: run /onboard-github first.\n' +
'For OpenAI-compatible: set OPENAI_API_KEY.',
}
}
const useResponses = request.transport === 'codex_responses'
const endpoint = useResponses ? '/responses' : '/chat/completions'
const url = `${request.baseUrl}${endpoint}`
const family = getModelFamily(request.resolvedModel)
const cacheKey = `${getSessionId()}:${family}`
const headers: Record<string, string> = {
'Content-Type': 'application/json',
Authorization: `Bearer ${apiKey}`,
originator: 'openclaude',
}
if (isGithub) {
Object.assign(headers, COPILOT_HEADERS)
}
let body: Record<string, unknown>
if (useResponses) {
body = {
model: request.resolvedModel,
instructions: SYSTEM_PROMPT,
input: [
{
type: 'message',
role: 'user',
content: [{ type: 'input_text', text: USER_MESSAGE }],
},
],
stream: true,
...(noKey ? {} : {
store: false,
prompt_cache_key: cacheKey,
prompt_cache_retention: '24h',
}),
}
} else {
body = {
model: request.resolvedModel,
messages: [
{ role: 'system', content: SYSTEM_PROMPT },
{ role: 'user', content: USER_MESSAGE },
],
stream: true,
stream_options: { include_usage: true },
max_tokens: 20,
...(noKey ? {} : {
store: false,
prompt_cache_key: cacheKey,
}),
}
}
// Log configuration
const config = [
`[cache-probe] Starting cache probe${noKey ? ' (--no-key: cache params OMITTED)' : ''}`,
` model: ${request.resolvedModel} (family: ${family})`,
` transport: ${request.transport}`,
` endpoint: ${url}`,
` prompt_cache_key: ${noKey ? 'NOT SENT' : cacheKey}`,
` store: ${noKey ? 'NOT SENT' : 'false'}`,
` system prompt: ~${Math.round(SYSTEM_PROMPT.length / 4)} tokens`,
` delay between calls: ${DELAY_MS}ms`,
].join('\n')
logForDebugging(config)
// Call 1 — Cold
const r1 = await sendProbe(url, headers, body, 'CALL 1 — Cold (no cache)')
logForDebugging(`[cache-probe]\n${formatResult(r1)}`)
if (r1.error) {
return {
type: 'text',
value: `Cache probe failed on first call: HTTP ${r1.status}\n${r1.error.slice(0, 300)}\n\nFull details in debug log.`,
}
}
// Wait
await new Promise((r) => setTimeout(r, DELAY_MS))
// Call 2 — Warm
const r2 = await sendProbe(url, headers, body, 'CALL 2 — Warm (cache expected)')
logForDebugging(`[cache-probe]\n${formatResult(r2)}`)
// --- Comparison ---
const fields = [
'input_tokens',
'output_tokens',
'total_tokens',
'prompt_tokens',
'completion_tokens',
'input_tokens_details.cached_tokens',
'prompt_tokens_details.cached_tokens',
'output_tokens_details.reasoning_tokens',
]
const comparison: string[] = ['[cache-probe] COMPARISON']
comparison.push(
` ${'Field'.padEnd(42)} ${'Call 1'.padStart(8)} ${'Call 2'.padStart(8)} ${'Delta'.padStart(8)}`,
)
comparison.push(` ${'-'.repeat(72)}`)
for (const f of fields) {
const v1 = getField(r1.usage, f)
const v2 = getField(r2.usage, f)
if (v1 === undefined && v2 === undefined) continue
const d =
typeof v1 === 'number' && typeof v2 === 'number' ? v2 - v1 : ''
comparison.push(
` ${f.padEnd(42)} ${String(v1 ?? '-').padStart(8)} ${String(v2 ?? '-').padStart(8)} ${String(d).padStart(8)}`,
)
}
comparison.push('')
comparison.push(
` Latency: ${r1.elapsed}ms → ${r2.elapsed}ms (${r2.elapsed - r1.elapsed > 0 ? '+' : ''}${r2.elapsed - r1.elapsed}ms)`,
)
// Header comparison
for (const h of ['openai-processing-ms', 'x-ms-region', 'x-ratelimit-remaining']) {
const v1 = r1.headers[h]
const v2 = r2.headers[h]
if (v1 || v2) {
comparison.push(`  ${h}: ${v1 ?? '-'} → ${v2 ?? '-'}`)
}
}
// Verdict
const cached2 =
(getField(r2.usage, 'input_tokens_details.cached_tokens') as number) ??
(getField(r2.usage, 'prompt_tokens_details.cached_tokens') as number) ??
0
const input1 =
((r1.usage?.input_tokens ?? r1.usage?.prompt_tokens) as number) ?? 0
const input2 =
((r2.usage?.input_tokens ?? r2.usage?.prompt_tokens) as number) ?? 0
let verdict: string
if (cached2 > 0) {
const rate = input2 > 0 ? Math.round((cached2 / input2) * 100) : '?'
verdict = `CACHE HIT: ${cached2} cached tokens (${rate}% of input)`
} else if (input1 === 0 && input2 === 0) {
verdict = 'INCONCLUSIVE: Server returns 0 input_tokens — cannot measure'
} else if (r2.elapsed < r1.elapsed * 0.6 && input1 > 100) {
verdict = `POSSIBLE SILENT CACHING: Call 2 was ${Math.round((1 - r2.elapsed / r1.elapsed) * 100)}% faster but no cached_tokens reported`
} else {
verdict = 'NO CACHE DETECTED'
}
comparison.push(`\n Verdict: ${verdict}`)
// --- Simulate what main's shim code does with this usage ---
// codexShim.ts makeUsage() — used for Responses API (GPT-5+/Codex)
function mainMakeUsage(u: any) {
return {
input_tokens: u?.input_tokens ?? 0,
output_tokens: u?.output_tokens ?? 0,
cache_creation_input_tokens: 0,
cache_read_input_tokens: 0, // ← main hardcodes this to 0
}
}
// openaiShim.ts convertChunkUsage() — used for Chat Completions
function mainConvertChunkUsage(u: any) {
return {
input_tokens: u?.prompt_tokens ?? 0,
output_tokens: u?.completion_tokens ?? 0,
cache_creation_input_tokens: 0,
cache_read_input_tokens: u?.prompt_tokens_details?.cached_tokens ?? 0,
}
}
const shimFn = useResponses ? mainMakeUsage : mainConvertChunkUsage
const shim1 = shimFn(r1.usage)
const shim2 = shimFn(r2.usage)
comparison.push('')
comparison.push(` --- What main's shim reports (${useResponses ? 'codexShim.makeUsage' : 'openaiShim.convertChunkUsage'}) ---`)
comparison.push(` Call 1: cache_read_input_tokens=${shim1.cache_read_input_tokens}`)
comparison.push(` Call 2: cache_read_input_tokens=${shim2.cache_read_input_tokens}`)
if (useResponses && cached2 > 0) {
comparison.push(` BUG: Server returned ${cached2} cached tokens but main's makeUsage() drops it → reports 0`)
} else if (!useResponses && shim2.cache_read_input_tokens > 0) {
comparison.push(` OK: Chat Completions path on main correctly reads cached_tokens`)
}
logForDebugging(comparison.join('\n'))
// User-facing summary
const mode = noKey ? ' (NO cache key sent)' : ''
const shimLabel = useResponses ? 'codexShim.makeUsage()' : 'openaiShim.convertChunkUsage()'
const summary = [
`Cache Probe — ${request.resolvedModel} via ${useResponses ? 'Responses API' : 'Chat Completions'}${mode}`,
'',
`Call 1: ${r1.elapsed}ms, input=${input1}, cached=${(getField(r1.usage, 'input_tokens_details.cached_tokens') as number) ?? (getField(r1.usage, 'prompt_tokens_details.cached_tokens') as number) ?? 0}`,
`Call 2: ${r2.elapsed}ms, input=${input2}, cached=${cached2}`,
'',
verdict,
'',
`What main's ${shimLabel} reports:`,
` Call 2 cache_read_input_tokens = ${shim2.cache_read_input_tokens}${useResponses && cached2 > 0 ? ' ← BUG: server sent ' + cached2 + ' but main drops it' : ''}`,
'',
'Full details written to debug log.',
].join('\n')
return { type: 'text', value: summary }
}

View File

@@ -1,17 +0,0 @@
import type { Command } from '../../commands.js'
import { isEnvTruthy } from '../../utils/envUtils.js'
const cacheProbe: Command = {
type: 'local',
name: 'cache-probe',
description:
'Send identical requests to test prompt caching (results in debug log)',
argumentHint: '[model] [--no-key]',
isEnabled: () =>
isEnvTruthy(process.env.CLAUDE_CODE_USE_OPENAI) ||
isEnvTruthy(process.env.CLAUDE_CODE_USE_GITHUB),
supportsNonInteractive: false,
load: () => import('./cache-probe.js'),
}
export default cacheProbe

View File

@@ -39,16 +39,16 @@ type InstallState = {
message: string; message: string;
warnings?: string[]; warnings?: string[];
}; };
export function getInstallationPath(): string { function getInstallationPath(): string {
const isWindows = env.platform === 'win32'; const isWindows = env.platform === 'win32';
const homeDir = homedir(); const homeDir = homedir();
if (isWindows) { if (isWindows) {
// Convert to Windows-style path // Convert to Windows-style path
const windowsPath = join(homeDir, '.local', 'bin', 'openclaude.exe'); const windowsPath = join(homeDir, '.local', 'bin', 'claude.exe');
// Replace forward slashes with backslashes for Windows display // Replace forward slashes with backslashes for Windows display
return windowsPath.replace(/\//g, '\\'); return windowsPath.replace(/\//g, '\\');
} }
return '~/.local/bin/openclaude'; return '~/.local/bin/claude';
} }
function SetupNotes(t0) { function SetupNotes(t0) {
const $ = _c(5); const $ = _c(5);

View File

@@ -1,44 +1,20 @@
import { afterEach, expect, mock, test } from 'bun:test' import { afterEach, expect, mock, test } from 'bun:test'
import { getAdditionalModelOptionsCacheScope } from '../../services/api/providerConfig.js'
import { getAPIProvider } from '../../utils/model/providers.js'
const originalEnv = { const originalEnv = {
CLAUDE_CODE_USE_OPENAI: process.env.CLAUDE_CODE_USE_OPENAI, CLAUDE_CODE_USE_OPENAI: process.env.CLAUDE_CODE_USE_OPENAI,
CLAUDE_CODE_USE_GEMINI: process.env.CLAUDE_CODE_USE_GEMINI,
CLAUDE_CODE_USE_GITHUB: process.env.CLAUDE_CODE_USE_GITHUB,
CLAUDE_CODE_USE_MISTRAL: process.env.CLAUDE_CODE_USE_MISTRAL,
CLAUDE_CODE_USE_BEDROCK: process.env.CLAUDE_CODE_USE_BEDROCK,
CLAUDE_CODE_USE_VERTEX: process.env.CLAUDE_CODE_USE_VERTEX,
CLAUDE_CODE_USE_FOUNDRY: process.env.CLAUDE_CODE_USE_FOUNDRY,
OPENAI_BASE_URL: process.env.OPENAI_BASE_URL, OPENAI_BASE_URL: process.env.OPENAI_BASE_URL,
OPENAI_API_BASE: process.env.OPENAI_API_BASE,
OPENAI_MODEL: process.env.OPENAI_MODEL, OPENAI_MODEL: process.env.OPENAI_MODEL,
} }
afterEach(() => { afterEach(() => {
mock.restore() mock.restore()
process.env.CLAUDE_CODE_USE_OPENAI = originalEnv.CLAUDE_CODE_USE_OPENAI process.env.CLAUDE_CODE_USE_OPENAI = originalEnv.CLAUDE_CODE_USE_OPENAI
process.env.CLAUDE_CODE_USE_GEMINI = originalEnv.CLAUDE_CODE_USE_GEMINI
process.env.CLAUDE_CODE_USE_GITHUB = originalEnv.CLAUDE_CODE_USE_GITHUB
process.env.CLAUDE_CODE_USE_MISTRAL = originalEnv.CLAUDE_CODE_USE_MISTRAL
process.env.CLAUDE_CODE_USE_BEDROCK = originalEnv.CLAUDE_CODE_USE_BEDROCK
process.env.CLAUDE_CODE_USE_VERTEX = originalEnv.CLAUDE_CODE_USE_VERTEX
process.env.CLAUDE_CODE_USE_FOUNDRY = originalEnv.CLAUDE_CODE_USE_FOUNDRY
process.env.OPENAI_BASE_URL = originalEnv.OPENAI_BASE_URL process.env.OPENAI_BASE_URL = originalEnv.OPENAI_BASE_URL
process.env.OPENAI_API_BASE = originalEnv.OPENAI_API_BASE
process.env.OPENAI_MODEL = originalEnv.OPENAI_MODEL process.env.OPENAI_MODEL = originalEnv.OPENAI_MODEL
}) })
test('opens the model picker without awaiting local model discovery refresh', async () => { test('opens the model picker without awaiting local model discovery refresh', async () => {
process.env.CLAUDE_CODE_USE_OPENAI = '1' process.env.CLAUDE_CODE_USE_OPENAI = '1'
delete process.env.CLAUDE_CODE_USE_GEMINI
delete process.env.CLAUDE_CODE_USE_GITHUB
delete process.env.CLAUDE_CODE_USE_MISTRAL
delete process.env.CLAUDE_CODE_USE_BEDROCK
delete process.env.CLAUDE_CODE_USE_VERTEX
delete process.env.CLAUDE_CODE_USE_FOUNDRY
delete process.env.OPENAI_API_BASE
process.env.OPENAI_BASE_URL = 'http://127.0.0.1:8080/v1' process.env.OPENAI_BASE_URL = 'http://127.0.0.1:8080/v1'
process.env.OPENAI_MODEL = 'qwen2.5-coder-7b-instruct' process.env.OPENAI_MODEL = 'qwen2.5-coder-7b-instruct'
@@ -54,9 +30,7 @@ test('opens the model picker without awaiting local model discovery refresh', as
discoverOpenAICompatibleModelOptions, discoverOpenAICompatibleModelOptions,
})) }))
expect(getAdditionalModelOptionsCacheScope()).toBe('openai:http://127.0.0.1:8080/v1') const { call } = await import(`./model.js?ts=${Date.now()}-${Math.random()}`)
const { call } = await import('./model.js')
const result = await Promise.race([ const result = await Promise.race([
call(() => {}, {} as never, ''), call(() => {}, {} as never, ''),
new Promise(resolve => setTimeout(() => resolve('timeout'), 50)), new Promise(resolve => setTimeout(() => resolve('timeout'), 50)),

View File

@@ -284,7 +284,7 @@ function haveSameModelOptions(left: ModelOption[], right: ModelOption[]): boolea
}); });
} }
async function refreshOpenAIModelOptionsCache(): Promise<void> { async function refreshOpenAIModelOptionsCache(): Promise<void> {
if (!getAdditionalModelOptionsCacheScope()?.startsWith('openai:')) { if (getAPIProvider() !== 'openai') {
return; return;
} }
try { try {

View File

@@ -4,7 +4,7 @@ const onboardGithub: Command = {
name: 'onboard-github', name: 'onboard-github',
aliases: ['onboarding-github', 'onboardgithub', 'onboardinggithub'], aliases: ['onboarding-github', 'onboardgithub', 'onboardinggithub'],
description: description:
'Interactive setup for GitHub Copilot: OAuth device login stored in secure storage', 'Interactive setup for GitHub Models: device login or PAT, saved to secure storage',
type: 'local-jsx', type: 'local-jsx',
load: () => import('./onboard-github.js'), load: () => import('./onboard-github.js'),
} }

View File

@@ -2,9 +2,9 @@ import * as React from 'react'
import { useCallback, useState } from 'react' import { useCallback, useState } from 'react'
import { Select } from '../../components/CustomSelect/select.js' import { Select } from '../../components/CustomSelect/select.js'
import { Spinner } from '../../components/Spinner.js' import { Spinner } from '../../components/Spinner.js'
import TextInput from '../../components/TextInput.js'
import { Box, Text } from '../../ink.js' import { Box, Text } from '../../ink.js'
import { import {
exchangeForCopilotToken,
openVerificationUri, openVerificationUri,
pollAccessToken, pollAccessToken,
requestDeviceCode, requestDeviceCode,
@@ -15,7 +15,7 @@ import {
readGithubModelsToken, readGithubModelsToken,
saveGithubModelsToken, saveGithubModelsToken,
} from '../../utils/githubModelsCredentials.js' } from '../../utils/githubModelsCredentials.js'
import { getSettingsForSource, updateSettingsForSource } from '../../utils/settings/settings.js' import { updateSettingsForSource } from '../../utils/settings/settings.js'
const DEFAULT_MODEL = 'github:copilot' const DEFAULT_MODEL = 'github:copilot'
const FORCE_RELOGIN_ARGS = new Set([ const FORCE_RELOGIN_ARGS = new Set([
@@ -27,25 +27,11 @@ const FORCE_RELOGIN_ARGS = new Set([
'--reauth', '--reauth',
]) ])
type Step = 'menu' | 'device-busy' | 'error' type Step =
| 'menu'
const PROVIDER_SPECIFIC_KEYS = new Set([ | 'device-busy'
'CLAUDE_CODE_USE_OPENAI', | 'pat'
'CLAUDE_CODE_USE_GEMINI', | 'error'
'CLAUDE_CODE_USE_BEDROCK',
'CLAUDE_CODE_USE_VERTEX',
'CLAUDE_CODE_USE_FOUNDRY',
'OPENAI_BASE_URL',
'OPENAI_API_BASE',
'OPENAI_API_KEY',
'OPENAI_MODEL',
'GEMINI_API_KEY',
'GOOGLE_API_KEY',
'GEMINI_BASE_URL',
'GEMINI_MODEL',
'GEMINI_ACCESS_TOKEN',
'GEMINI_AUTH_MODE',
])
export function shouldForceGithubRelogin(args?: string): boolean { export function shouldForceGithubRelogin(args?: string): boolean {
const normalized = (args ?? '').trim().toLowerCase() const normalized = (args ?? '').trim().toLowerCase()
@@ -55,29 +41,15 @@ export function shouldForceGithubRelogin(args?: string): boolean {
return normalized.split(/\s+/).some(arg => FORCE_RELOGIN_ARGS.has(arg)) return normalized.split(/\s+/).some(arg => FORCE_RELOGIN_ARGS.has(arg))
} }
const GITHUB_PAT_PREFIXES = ['ghp_', 'gho_','ghs_', 'ghr_', 'github_pat_']
function isGithubPat(token: string): boolean {
return GITHUB_PAT_PREFIXES.some(prefix => token.startsWith(prefix))
}
export function hasExistingGithubModelsLoginToken( export function hasExistingGithubModelsLoginToken(
env: NodeJS.ProcessEnv = process.env, env: NodeJS.ProcessEnv = process.env,
storedToken?: string, storedToken?: string,
): boolean { ): boolean {
const envToken = env.GITHUB_TOKEN?.trim() || env.GH_TOKEN?.trim() const envToken = env.GITHUB_TOKEN?.trim() || env.GH_TOKEN?.trim()
if (envToken) { if (envToken) {
// PATs are no longer supported - require OAuth re-auth
if (isGithubPat(envToken)) {
return false
}
return true return true
} }
const persisted = (storedToken ?? readGithubModelsToken())?.trim() const persisted = (storedToken ?? readGithubModelsToken())?.trim()
// PATs are no longer supported - require OAuth re-auth
if (persisted && isGithubPat(persisted)) {
return false
}
return Boolean(persisted) return Boolean(persisted)
} }
@@ -125,21 +97,8 @@ export function applyGithubOnboardingProcessEnv(
} }
function mergeUserSettingsEnv(model: string): { ok: boolean; detail?: string } { function mergeUserSettingsEnv(model: string): { ok: boolean; detail?: string } {
const currentSettings = getSettingsForSource('userSettings')
const currentEnv = currentSettings?.env ?? {}
const newEnv: Record<string, string> = {}
for (const [key, value] of Object.entries(currentEnv)) {
if (!PROVIDER_SPECIFIC_KEYS.has(key)) {
newEnv[key] = value
}
}
newEnv.CLAUDE_CODE_USE_GITHUB = '1'
newEnv.OPENAI_MODEL = model
const { error } = updateSettingsForSource('userSettings', { const { error } = updateSettingsForSource('userSettings', {
env: newEnv, env: buildGithubOnboardingSettingsEnv(model) as any,
}) })
if (error) { if (error) {
return { ok: false, detail: error.message } return { ok: false, detail: error.message }
@@ -184,14 +143,12 @@ function OnboardGithub(props: {
user_code: string user_code: string
verification_uri: string verification_uri: string
} | null>(null) } | null>(null)
const [patDraft, setPatDraft] = useState('')
const [cursorOffset, setCursorOffset] = useState(0)
const finalize = useCallback( const finalize = useCallback(
async ( async (token: string, model: string = DEFAULT_MODEL) => {
token: string, const saved = saveGithubModelsToken(token)
model: string = DEFAULT_MODEL,
oauthToken?: string,
) => {
const saved = saveGithubModelsToken(token, oauthToken)
if (!saved.success) { if (!saved.success) {
setErrorMsg(saved.warning ?? 'Could not save token to secure storage.') setErrorMsg(saved.warning ?? 'Could not save token to secure storage.')
setStep('error') setStep('error')
@@ -208,18 +165,8 @@ function OnboardGithub(props: {
setStep('error') setStep('error')
return return
} }
// Clear stale provider-specific env vars from the current session
// so resolveProviderRequest() doesn't pick up a previous provider's
// base URL or key after onboarding completes.
for (const key of PROVIDER_SPECIFIC_KEYS) {
delete process.env[key]
}
process.env.CLAUDE_CODE_USE_GITHUB = '1'
process.env.OPENAI_MODEL = model.trim() || DEFAULT_MODEL
hydrateGithubModelsTokenFromSecureStorage()
onChangeAPIKey()
onDone( onDone(
'GitHub Copilot onboard complete. Copilot token and OAuth token stored in secure storage (Windows/Linux: ~/.claude/.credentials.json, macOS: Keychain fallback to ~/.claude/.credentials.json); user settings updated. Restart if the model does not switch.', 'GitHub Models onboard complete. Token stored in secure storage; user settings updated. Restart if the model does not switch.',
{ display: 'user' }, { display: 'user' },
) )
}, },
@@ -237,12 +184,11 @@ function OnboardGithub(props: {
verification_uri: device.verification_uri, verification_uri: device.verification_uri,
}) })
await openVerificationUri(device.verification_uri) await openVerificationUri(device.verification_uri)
const oauthToken = await pollAccessToken(device.device_code, { const token = await pollAccessToken(device.device_code, {
initialInterval: device.interval, initialInterval: device.interval,
timeoutSeconds: device.expires_in, timeoutSeconds: device.expires_in,
}) })
const copilotToken = await exchangeForCopilotToken(oauthToken) await finalize(token, DEFAULT_MODEL)
await finalize(copilotToken.token, DEFAULT_MODEL, oauthToken)
} catch (e) { } catch (e) {
setErrorMsg(e instanceof Error ? e.message : String(e)) setErrorMsg(e instanceof Error ? e.message : String(e))
setStep('error') setStep('error')
@@ -281,7 +227,7 @@ function OnboardGithub(props: {
if (step === 'device-busy') { if (step === 'device-busy') {
return ( return (
<Box flexDirection="column" gap={1}> <Box flexDirection="column" gap={1}>
<Text>GitHub Copilot sign-in</Text> <Text>GitHub device login</Text>
{deviceHint ? ( {deviceHint ? (
<> <>
<Text> <Text>
@@ -300,11 +246,43 @@ function OnboardGithub(props: {
) )
} }
if (step === 'pat') {
return (
<Box flexDirection="column" gap={1}>
<Text>Paste a GitHub personal access token with access to GitHub Models.</Text>
<Text dimColor>Input is masked. Enter to submit; Esc to go back.</Text>
<TextInput
value={patDraft}
mask="*"
onChange={setPatDraft}
onSubmit={async (value: string) => {
const t = value.trim()
if (!t) {
return
}
await finalize(t, DEFAULT_MODEL)
}}
onExit={() => {
setStep('menu')
setPatDraft('')
}}
columns={80}
cursorOffset={cursorOffset}
onChangeCursorOffset={setCursorOffset}
/>
</Box>
)
}
const menuOptions = [ const menuOptions = [
{ {
label: 'Sign in with browser', label: 'Sign in with browser (device code)',
value: 'device' as const, value: 'device' as const,
}, },
{
label: 'Paste personal access token',
value: 'pat' as const,
},
{ {
label: 'Cancel', label: 'Cancel',
value: 'cancel' as const, value: 'cancel' as const,
@@ -313,7 +291,7 @@ function OnboardGithub(props: {
return ( return (
<Box flexDirection="column" gap={1}> <Box flexDirection="column" gap={1}>
<Text bold>GitHub Copilot setup</Text> <Text bold>GitHub Models setup</Text>
<Text dimColor> <Text dimColor>
Stores your token in the OS credential store (macOS Keychain when available) Stores your token in the OS credential store (macOS Keychain when available)
and enables CLAUDE_CODE_USE_GITHUB in your user settings - no export and enables CLAUDE_CODE_USE_GITHUB in your user settings - no export
@@ -326,6 +304,10 @@ function OnboardGithub(props: {
onDone('GitHub onboard cancelled', { display: 'system' }) onDone('GitHub onboard cancelled', { display: 'system' })
return return
} }
if (v === 'pat') {
setStep('pat')
return
}
void runDeviceFlow() void runDeviceFlow()
}} }}
/> />

View File

@@ -22,14 +22,11 @@ import {
import { import {
buildCodexProfileEnv, buildCodexProfileEnv,
buildGeminiProfileEnv, buildGeminiProfileEnv,
buildMistralProfileEnv,
buildOllamaProfileEnv, buildOllamaProfileEnv,
buildOpenAIProfileEnv, buildOpenAIProfileEnv,
createProfileFile, createProfileFile,
DEFAULT_GEMINI_BASE_URL, DEFAULT_GEMINI_BASE_URL,
DEFAULT_GEMINI_MODEL, DEFAULT_GEMINI_MODEL,
DEFAULT_MISTRAL_BASE_URL,
DEFAULT_MISTRAL_MODEL,
deleteProfileFile, deleteProfileFile,
loadProfileFile, loadProfileFile,
maskSecretForDisplay, maskSecretForDisplay,
@@ -77,14 +74,6 @@ type Step =
baseUrl: string | null baseUrl: string | null
defaultModel: string defaultModel: string
} }
| { name: 'mistral-key'; defaultModel: string }
| { name: 'mistral-base'; apiKey: string; defaultModel: string }
| {
name: 'mistral-model'
apiKey: string
baseUrl: string | null
defaultModel: string
}
| { name: 'gemini-auth-method' } | { name: 'gemini-auth-method' }
| { name: 'gemini-key' } | { name: 'gemini-key' }
| { name: 'gemini-access-token' } | { name: 'gemini-access-token' }
@@ -127,8 +116,6 @@ type ProviderWizardDefaults = {
openAIModel: string openAIModel: string
openAIBaseUrl: string openAIBaseUrl: string
geminiModel: string geminiModel: string
mistralModel: string
mistralBaseUrl: string
} }
function isEnvTruthy(value: string | undefined): boolean { function isEnvTruthy(value: string | undefined): boolean {
@@ -160,19 +147,11 @@ export function getProviderWizardDefaults(
const safeGeminiModel = const safeGeminiModel =
sanitizeProviderConfigValue(processEnv.GEMINI_MODEL, processEnv) || sanitizeProviderConfigValue(processEnv.GEMINI_MODEL, processEnv) ||
DEFAULT_GEMINI_MODEL DEFAULT_GEMINI_MODEL
const safeMistralModel =
sanitizeProviderConfigValue(processEnv.MISTRAL_MODEL, processEnv) ||
DEFAULT_MISTRAL_MODEL
const safeMistralBaseUrl =
sanitizeProviderConfigValue(processEnv.MISTRAL_BASE_URL, processEnv) ||
DEFAULT_MISTRAL_BASE_URL
return { return {
openAIModel: safeOpenAIModel, openAIModel: safeOpenAIModel,
openAIBaseUrl: safeOpenAIBaseUrl, openAIBaseUrl: safeOpenAIBaseUrl,
geminiModel: safeGeminiModel, geminiModel: safeGeminiModel,
mistralModel: safeMistralModel,
mistralBaseUrl: safeMistralBaseUrl,
} }
} }
@@ -199,21 +178,6 @@ export function buildCurrentProviderSummary(options?: {
} }
} }
if (isEnvTruthy(processEnv.CLAUDE_CODE_USE_MISTRAL)) {
return {
providerLabel: 'Mistral',
modelLabel: getSafeDisplayValue(
processEnv.MISTRAL_MODEL ?? DEFAULT_MISTRAL_MODEL,
processEnv
),
endpointLabel: getSafeDisplayValue(
processEnv.MISTRAL_BASE_URL ?? DEFAULT_MISTRAL_BASE_URL,
processEnv
),
savedProfileLabel,
}
}
if (isEnvTruthy(processEnv.CLAUDE_CODE_USE_GITHUB)) { if (isEnvTruthy(processEnv.CLAUDE_CODE_USE_GITHUB)) {
return { return {
providerLabel: 'GitHub Models', providerLabel: 'GitHub Models',
@@ -295,24 +259,6 @@ function buildSavedProfileSummary(
? 'configured' ? 'configured'
: undefined, : undefined,
} }
case 'mistral':
return {
providerLabel: 'Mistral',
modelLabel: getSafeDisplayValue(
env.MISTRAL_MODEL ?? DEFAULT_MISTRAL_MODEL,
process.env,
env,
),
endpointLabel: getSafeDisplayValue(
env.MISTRAL_BASE_URL ?? DEFAULT_MISTRAL_BASE_URL,
process.env,
env,
),
credentialLabel:
maskSecretForDisplay(env.MISTRAL_API_KEY) !== undefined
? 'configured'
: undefined,
}
case 'codex': case 'codex':
return { return {
providerLabel: 'Codex', providerLabel: 'Codex',
@@ -527,11 +473,6 @@ function ProviderChooser({
value: 'gemini', value: 'gemini',
description: 'Use Google Gemini with API key, access token, or local ADC', description: 'Use Google Gemini with API key, access token, or local ADC',
}, },
{
label: 'Mistral',
value: 'mistral',
description: 'Use Mistral with API key'
},
{ {
label: 'Codex', label: 'Codex',
value: 'codex', value: 'codex',
@@ -1030,11 +971,6 @@ export function ProviderWizard({
}) })
} else if (value === 'gemini') { } else if (value === 'gemini') {
setStep({ name: 'gemini-auth-method' }) setStep({ name: 'gemini-auth-method' })
} else if (value === 'mistral') {
setStep({
name: 'mistral-key',
defaultModel: defaults.mistralModel,
})
} else if (value === 'clear') { } else if (value === 'clear') {
const filePath = deleteProfileFile() const filePath = deleteProfileFile()
onDone(`Removed saved provider profile at ${filePath}. Restart OpenClaude to go back to normal startup.`, { onDone(`Removed saved provider profile at ${filePath}. Restart OpenClaude to go back to normal startup.`, {
@@ -1174,101 +1110,6 @@ export function ProviderWizard({
/> />
) )
case 'mistral-key':
return (
<TextEntryDialog
resetStateKey={step.name}
title="Mistral setup"
subtitle="Step 1 of 3"
description={
process.env.MISTRAL_API_KEY
? 'Enter an API key, or leave this blank to reuse the current MISTRAL_API_KEY from this session.'
: 'Enter the API key for your Mistral provider.'
}
initialValue=""
placeholder="..."
mask="*"
allowEmpty={Boolean(process.env.MISTRAL_API_KEY)}
validate={value => {
const candidate = value.trim() || process.env.MISTRAL_API_KEY || ''
return sanitizeApiKey(candidate)
? null
: 'Enter a real API key. Placeholder values like SUA_CHAVE are not valid.'
}}
onSubmit={value => {
const apiKey = value.trim() || process.env.MISTRAL_API_KEY || ''
setStep({
name: 'mistral-base',
apiKey,
defaultModel: step.defaultModel,
})
}}
onCancel={() => setStep({ name: 'choose' })}
/>
)
case 'mistral-base':
return (
<TextEntryDialog
resetStateKey={step.name}
title="Mistral setup"
subtitle="Step 2 of 3"
description={`Optionally enter a base URL. Leave blank for ${DEFAULT_MISTRAL_BASE_URL}.`}
initialValue={
defaults.mistralBaseUrl === DEFAULT_MISTRAL_BASE_URL
? ''
: defaults.mistralBaseUrl
}
placeholder={DEFAULT_MISTRAL_BASE_URL}
allowEmpty
onSubmit={value => {
setStep({
name: 'mistral-model',
apiKey: step.apiKey,
baseUrl: value.trim() || null,
defaultModel: step.defaultModel,
})
}}
onCancel={() =>
setStep({
name: 'mistral-key',
defaultModel: step.defaultModel,
})
}
/>
)
case 'mistral-model':
return (
<TextEntryDialog
resetStateKey={step.name}
title="Mistral setup"
subtitle="Step 3 of 3"
description={`Enter a model name. Leave blank for ${step.defaultModel}.`}
initialValue={defaults.mistralModel ?? step.defaultModel}
placeholder={step.defaultModel}
allowEmpty
onSubmit={value => {
const env = buildMistralProfileEnv({
model: value.trim() || step.defaultModel,
baseUrl: step.baseUrl,
apiKey: step.apiKey,
processEnv: process.env,
})
if (env) {
finishProfileSave(onDone, 'mistral', env)
}
}}
onCancel={() =>
setStep({
name: 'mistral-base',
apiKey: step.apiKey,
defaultModel: step.defaultModel,
})
}
/>
)
case 'gemini-auth-method': { case 'gemini-auth-method': {
const hasShellGeminiKey = Boolean( const hasShellGeminiKey = Boolean(
process.env.GEMINI_API_KEY || process.env.GOOGLE_API_KEY, process.env.GEMINI_API_KEY || process.env.GOOGLE_API_KEY,
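Editor's note: the hunks above drop the Mistral branch from the wizard's Step discriminated union and from the three TextEntryDialog screens that fed it. For readers following along, here is a minimal sketch of how such a step union drives a key → base URL → model flow; the names and the 'example-model' default are illustrative, not the project's actual identifiers.

// Minimal sketch of a discriminated-union wizard state (illustrative only).
type WizardStep =
  | { name: 'choose' }
  | { name: 'key'; defaultModel: string }
  | { name: 'base'; apiKey: string; defaultModel: string }
  | { name: 'model'; apiKey: string; baseUrl: string | null; defaultModel: string }

// Each screen advances by constructing the next step object, carrying forward
// everything gathered so far; removing a provider means deleting its branch.
function nextStep(step: WizardStep, value: string): WizardStep {
  switch (step.name) {
    case 'choose':
      return { name: 'key', defaultModel: 'example-model' } // hypothetical default
    case 'key':
      return { name: 'base', apiKey: value.trim(), defaultModel: step.defaultModel }
    case 'base':
      return {
        name: 'model',
        apiKey: step.apiKey,
        baseUrl: value.trim() || null,
        defaultModel: step.defaultModel,
      }
    case 'model':
      // Terminal step: a real wizard would persist the profile here.
      return step
  }
}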

View File

@@ -65,7 +65,7 @@ export async function call(onDone: (result?: string) => void, _context: unknown,
// Get the local settings path and make it relative to cwd // Get the local settings path and make it relative to cwd
const localSettingsPath = getSettingsFilePathForSource('localSettings'); const localSettingsPath = getSettingsFilePathForSource('localSettings');
const relativePath = localSettingsPath ? relative(getCwdState(), localSettingsPath) : '.openclaude/settings.local.json'; const relativePath = localSettingsPath ? relative(getCwdState(), localSettingsPath) : '.claude/settings.local.json';
const message = color('success', themeName)(`Added "${cleanPattern}" to excluded commands in ${relativePath}`); const message = color('success', themeName)(`Added "${cleanPattern}" to excluded commands in ${relativePath}`);
onDone(message); onDone(message);
return null; return null;

View File

@@ -1,12 +0,0 @@
import type { Command } from '../../commands.js'
const wiki = {
type: 'local-jsx',
name: 'wiki',
description: 'Initialize and inspect the OpenClaude project wiki',
argumentHint: '[init|status]',
immediate: true,
load: () => import('./wiki.js'),
} satisfies Command
export default wiki

View File

@@ -1,123 +0,0 @@
import React from 'react'
import { COMMON_HELP_ARGS, COMMON_INFO_ARGS } from '../../constants/xml.js'
import { ingestLocalWikiSource } from '../../services/wiki/ingest.js'
import { initializeWiki } from '../../services/wiki/init.js'
import { getWikiStatus } from '../../services/wiki/status.js'
import type {
LocalJSXCommandCall,
LocalJSXCommandOnDone,
} from '../../types/command.js'
import { getCwd } from '../../utils/cwd.js'
function renderHelp(): string {
return `Usage: /wiki [init|status|ingest <path>]
Manage the OpenClaude project wiki stored in .openclaude/wiki.
Commands:
/wiki init Initialize the wiki structure in the current project
/wiki status Show wiki status and page/source counts
/wiki ingest Ingest a local file into wiki sources
Examples:
/wiki init
/wiki status
/wiki ingest README.md`
}
function formatInitResult(result: Awaited<ReturnType<typeof initializeWiki>>): string {
const lines = [`Initialized OpenClaude wiki at ${result.root}`]
if (result.alreadyExisted) {
lines.push('', 'Wiki already existed. No new files were created.')
return lines.join('\n')
}
if (result.createdFiles.length > 0) {
lines.push('', 'Created files:')
for (const file of result.createdFiles) {
lines.push(`- ${file}`)
}
}
return lines.join('\n')
}
function formatStatus(status: Awaited<ReturnType<typeof getWikiStatus>>): string {
if (!status.initialized) {
return `OpenClaude wiki is not initialized in this project.\n\nRun /wiki init to create ${status.root}.`
}
return [
'OpenClaude wiki status',
'',
`Root: ${status.root}`,
`Pages: ${status.pageCount}`,
`Sources: ${status.sourceCount}`,
`Schema: ${status.hasSchema ? 'present' : 'missing'}`,
`Index: ${status.hasIndex ? 'present' : 'missing'}`,
`Log: ${status.hasLog ? 'present' : 'missing'}`,
`Last updated: ${status.lastUpdatedAt ?? 'unknown'}`,
].join('\n')
}
function formatIngestResult(
result: Awaited<ReturnType<typeof ingestLocalWikiSource>>,
): string {
return [
`Ingested ${result.sourceFile} into the OpenClaude wiki.`,
'',
`Title: ${result.title}`,
`Source note: ${result.sourceNote}`,
`Summary: ${result.summary}`,
].join('\n')
}
async function runWikiCommand(
onDone: LocalJSXCommandOnDone,
args: string,
): Promise<void> {
const cwd = getCwd()
const normalized = args.trim().toLowerCase()
if (COMMON_HELP_ARGS.includes(normalized) || COMMON_INFO_ARGS.includes(normalized)) {
onDone(renderHelp(), { display: 'system' })
return
}
if (!normalized || normalized === 'status') {
onDone(formatStatus(await getWikiStatus(cwd)), { display: 'system' })
return
}
if (normalized === 'init') {
onDone(formatInitResult(await initializeWiki(cwd)), { display: 'system' })
return
}
if (normalized.startsWith('ingest')) {
const pathArg = args.trim().slice('ingest'.length).trim()
if (!pathArg) {
onDone('Usage: /wiki ingest <local-file-path>', { display: 'system' })
return
}
onDone(formatIngestResult(await ingestLocalWikiSource(cwd, pathArg)), {
display: 'system',
})
return
}
onDone(`Unknown wiki subcommand: ${args.trim()}\n\n${renderHelp()}`, {
display: 'system',
})
}
export const call: LocalJSXCommandCall = async (
onDone,
_context,
args,
): Promise<React.ReactNode> => {
await runWikiCommand(onDone, args ?? '')
return null
}

View File

@@ -188,9 +188,9 @@ export function AutoUpdater({
Update installed · Restart to apply Update installed · Restart to apply
</Text>} </Text>}
{(autoUpdaterResult?.status === 'install_failed' || autoUpdaterResult?.status === 'no_permissions') && <Text color="error" wrap="truncate"> {(autoUpdaterResult?.status === 'install_failed' || autoUpdaterResult?.status === 'no_permissions') && <Text color="error" wrap="truncate">
Auto-update failed &middot; Try <Text bold>openclaude doctor</Text> or{' '} Auto-update failed &middot; Try <Text bold>claude doctor</Text> or{' '}
<Text bold> <Text bold>
{hasLocalInstall ? `cd ~/.openclaude/local && npm update ${MACRO.PACKAGE_URL}` : `npm i -g ${MACRO.PACKAGE_URL}`} {hasLocalInstall ? `cd ~/.claude/local && npm update ${MACRO.PACKAGE_URL}` : `npm i -g ${MACRO.PACKAGE_URL}`}
</Text> </Text>
</Text>} </Text>}
</Box>; </Box>;

View File

@@ -31,11 +31,9 @@ export function BaseTextInput(t0) {
} = t0; } = t0;
const { const {
onInput, onInput,
value,
renderedValue, renderedValue,
cursorLine, cursorLine,
cursorColumn, cursorColumn
offset,
} = inputState; } = inputState;
const t1 = Boolean(props.focus && props.showCursor && terminalFocus); const t1 = Boolean(props.focus && props.showCursor && terminalFocus);
let t2; let t2;
@@ -80,7 +78,7 @@ export function BaseTextInput(t0) {
renderedPlaceholder renderedPlaceholder
} = renderPlaceholder({ } = renderPlaceholder({
placeholder: props.placeholder, placeholder: props.placeholder,
value, value: props.value,
showCursor: props.showCursor, showCursor: props.showCursor,
focus: props.focus, focus: props.focus,
terminalFocus, terminalFocus,
@@ -90,9 +88,9 @@ export function BaseTextInput(t0) {
useInput(wrappedOnInput, { useInput(wrappedOnInput, {
isActive: props.focus isActive: props.focus
}); });
const commandWithoutArgs = value && value.trim().indexOf(" ") === -1 || value && value.endsWith(" "); const commandWithoutArgs = props.value && props.value.trim().indexOf(" ") === -1 || props.value && props.value.endsWith(" ");
const showArgumentHint = Boolean(props.argumentHint && value && commandWithoutArgs && value.startsWith("/")); const showArgumentHint = Boolean(props.argumentHint && props.value && commandWithoutArgs && props.value.startsWith("/"));
const cursorFiltered = props.showCursor && props.highlights ? props.highlights.filter(h => h.dimColor || offset < h.start || offset >= h.end) : props.highlights; const cursorFiltered = props.showCursor && props.highlights ? props.highlights.filter(h => h.dimColor || props.cursorOffset < h.start || props.cursorOffset >= h.end) : props.highlights;
const { const {
viewportCharOffset, viewportCharOffset,
viewportCharEnd viewportCharEnd
@@ -104,13 +102,13 @@ export function BaseTextInput(t0) {
})) : cursorFiltered; })) : cursorFiltered;
const hasHighlights = filteredHighlights && filteredHighlights.length > 0; const hasHighlights = filteredHighlights && filteredHighlights.length > 0;
if (hasHighlights) { if (hasHighlights) {
return <Box ref={cursorRef}><HighlightedInput text={renderedValue} highlights={filteredHighlights} />{showArgumentHint && <Text dimColor={true}>{value.endsWith(" ") ? "" : " "}{props.argumentHint}</Text>}{children}</Box>; return <Box ref={cursorRef}><HighlightedInput text={renderedValue} highlights={filteredHighlights} />{showArgumentHint && <Text dimColor={true}>{props.value?.endsWith(" ") ? "" : " "}{props.argumentHint}</Text>}{children}</Box>;
} }
const T0 = Box; const T0 = Box;
const T1 = Text; const T1 = Text;
const t4 = "truncate-end"; const t4 = "truncate-end";
const t5 = showPlaceholder && props.placeholderElement ? props.placeholderElement : showPlaceholder && renderedPlaceholder ? <Ansi>{renderedPlaceholder}</Ansi> : <Ansi>{renderedValue}</Ansi>; const t5 = showPlaceholder && props.placeholderElement ? props.placeholderElement : showPlaceholder && renderedPlaceholder ? <Ansi>{renderedPlaceholder}</Ansi> : <Ansi>{renderedValue}</Ansi>;
const t6 = showArgumentHint && <Text dimColor={true}>{value.endsWith(" ") ? "" : " "}{props.argumentHint}</Text>; const t6 = showArgumentHint && <Text dimColor={true}>{props.value?.endsWith(" ") ? "" : " "}{props.argumentHint}</Text>;
let t7; let t7;
if ($[4] !== T1 || $[5] !== children || $[6] !== props || $[7] !== t5 || $[8] !== t6) { if ($[4] !== T1 || $[5] !== children || $[6] !== props || $[7] !== t5 || $[8] !== t6) {
t7 = <T1 wrap={t4} dimColor={props.dimColor}>{t5}{t6}{children}</T1>; t7 = <T1 wrap={t4} dimColor={props.dimColor}>{t5}{t6}{children}</T1>;
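Editor's note: one side of the hunks above reads the input value and cursor position out of the destructured inputState, the other reads props.value and props.cursorOffset; the argument-hint condition is the same in both and compact enough to misread, so here is a standalone restatement. This is a sketch of the condition, not the component's exported API.

// Show the dim argument hint only while a slash command is being typed and no
// argument has been started yet (restates the condition in the hunk above).
function shouldShowArgumentHint(value: string, argumentHint?: string): boolean {
  if (!argumentHint || !value || !value.startsWith('/')) return false
  const commandWithoutArgs =
    value.trim().indexOf(' ') === -1 || value.endsWith(' ')
  return commandWithoutArgs
}

// '/wiki'      -> true   (command typed, no args yet)
// '/wiki '     -> true   (trailing space, hint still useful)
// '/wiki init' -> false  (argument already present)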

View File

@@ -103,7 +103,7 @@ test('login picker shows the third-party platform option', async () => {
expect(output).toContain('3rd-party platform') expect(output).toContain('3rd-party platform')
}) })
test('third-party provider branch opens the first-run provider manager', async () => { test('third-party provider branch opens the provider wizard', async () => {
const output = await renderFrame( const output = await renderFrame(
<ConsoleOAuthFlow <ConsoleOAuthFlow
initialStatus={{ state: 'platform_setup' }} initialStatus={{ state: 'platform_setup' }}
@@ -111,9 +111,7 @@ test('third-party provider branch opens the first-run provider manager', async (
/>, />,
) )
expect(output).toContain('Set up provider') expect(output).toContain('Set up a provider profile')
expect(output).toContain('Anthropic') expect(output).toContain('OpenAI-compatible')
expect(output).toContain('OpenAI')
expect(output).toContain('Ollama') expect(output).toContain('Ollama')
expect(output).toContain('LM Studio')
}) })

View File

@@ -12,7 +12,7 @@ import { OAuthService } from '../services/oauth/index.js';
import { getOauthAccountInfo, validateForceLoginOrg } from '../utils/auth.js'; import { getOauthAccountInfo, validateForceLoginOrg } from '../utils/auth.js';
import { logError } from '../utils/log.js'; import { logError } from '../utils/log.js';
import { getSettings_DEPRECATED } from '../utils/settings/settings.js'; import { getSettings_DEPRECATED } from '../utils/settings/settings.js';
import { ProviderManager } from './ProviderManager.js'; import { ProviderWizard } from '../commands/provider/provider.js';
import { Select } from './CustomSelect/select.js'; import { Select } from './CustomSelect/select.js';
import { KeyboardShortcutHint } from './design-system/KeyboardShortcutHint.js'; import { KeyboardShortcutHint } from './design-system/KeyboardShortcutHint.js';
import { Spinner } from './Spinner.js'; import { Spinner } from './Spinner.js';
@@ -450,17 +450,16 @@ function OAuthStatusMessage({
case 'platform_setup': case 'platform_setup':
return ( return (
<ProviderManager <ProviderWizard
mode="first-run"
onDone={result => { onDone={result => {
if (!result || result.action !== 'saved' || !result.message) { if (!result) {
setOAuthStatus({ state: 'idle' }) setOAuthStatus({ state: 'idle' })
return return
} }
setOAuthStatus({ setOAuthStatus({
state: 'platform_setup_complete', state: 'platform_setup_complete',
message: result.message, message: result,
}) })
}} }}
/> />
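Editor's note: the platform_setup branch above swaps ProviderManager for ProviderWizard, and the onDone payload narrows from a structured result to a plain message string. A sketch of the two callback contracts, inferred from the hunk (type and function names are illustrative):

// Before: the first-run manager reported a structured result.
type ManagerResult = { action: 'saved' | 'cancelled'; message?: string }
// After: the wizard reports just the completion message, or nothing on cancel.
type WizardResult = string | undefined

function handleManagerDone(result?: ManagerResult): string | null {
  if (!result || result.action !== 'saved' || !result.message) return null
  return result.message
}

function handleWizardDone(result?: WizardResult): string | null {
  return result ?? null
}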

View File

@@ -285,7 +285,7 @@ export function Select(t0) {
onChange, onChange,
onCancel, onCancel,
onFocus, onFocus,
defaultFocusValue, focusValue: defaultFocusValue
}; };
$[7] = defaultFocusValue; $[7] = defaultFocusValue;
$[8] = defaultValue; $[8] = defaultValue;

View File

@@ -1,4 +1,5 @@
import { useCallback, useState } from 'react' import { useCallback, useState } from 'react'
import { isDeepStrictEqual } from 'util'
import { useRegisterOverlay } from '../../context/overlayContext.js' import { useRegisterOverlay } from '../../context/overlayContext.js'
import type { InputEvent } from '../../ink/events/input-event.js' import type { InputEvent } from '../../ink/events/input-event.js'
// eslint-disable-next-line custom-rules/prefer-use-keybindings -- raw space/arrow multiselect input // eslint-disable-next-line custom-rules/prefer-use-keybindings -- raw space/arrow multiselect input
@@ -8,7 +9,6 @@ import {
normalizeFullWidthSpace, normalizeFullWidthSpace,
} from '../../utils/stringUtils.js' } from '../../utils/stringUtils.js'
import type { OptionWithDescription } from './select.js' import type { OptionWithDescription } from './select.js'
import { optionsNavigateEqual } from './use-select-navigation.js'
import { useSelectNavigation } from './use-select-navigation.js' import { useSelectNavigation } from './use-select-navigation.js'
export type UseMultiSelectStateProps<T> = { export type UseMultiSelectStateProps<T> = {
@@ -174,7 +174,7 @@ export function useMultiSelectState<T>({
// and the deleted ui/useMultiSelectState.ts — without this, MCPServerDesktopImportDialog // and the deleted ui/useMultiSelectState.ts — without this, MCPServerDesktopImportDialog
// keeps colliding servers checked after getAllMcpConfigs() resolves. // keeps colliding servers checked after getAllMcpConfigs() resolves.
const [lastOptions, setLastOptions] = useState(options) const [lastOptions, setLastOptions] = useState(options)
if (options !== lastOptions && !optionsNavigateEqual(options, lastOptions)) { if (options !== lastOptions && !isDeepStrictEqual(options, lastOptions)) {
setSelectedValues(defaultValue) setSelectedValues(defaultValue)
setLastOptions(options) setLastOptions(options)
} }
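Editor's note: the hunk above swaps the custom comparator for isDeepStrictEqual inside the render-time reset that the comment describes. A minimal sketch of that pattern (hook name and signature are illustrative): when the incoming options are a new reference and structurally different, the selection is cleared during render rather than in an effect, so a stale selection never paints for a frame.

import { useState } from 'react'
import { isDeepStrictEqual } from 'util'

function useResetOnOptionsChange<T>(
  options: T[],
  defaultSelected: T[],
): [T[], (next: T[]) => void] {
  const [selected, setSelected] = useState(defaultSelected)
  const [lastOptions, setLastOptions] = useState(options)
  // Guarded setState during render: only fires when options actually changed.
  if (options !== lastOptions && !isDeepStrictEqual(options, lastOptions)) {
    setSelected(defaultSelected)
    setLastOptions(options)
  }
  return [selected, setSelected]
}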

View File

@@ -6,34 +6,10 @@ import {
useRef, useRef,
useState, useState,
} from 'react' } from 'react'
import { isDeepStrictEqual } from 'util'
import OptionMap from './option-map.js' import OptionMap from './option-map.js'
import type { OptionWithDescription } from './select.js' import type { OptionWithDescription } from './select.js'
/**
* Compare two option arrays for structural equality on properties that
* affect navigation behavior. ReactNode `label` and function `onChange`
* are intentionally excluded — they are identity-unstable (new reference
* each render) but don't change navigation semantics.
*/
export function optionsNavigateEqual<T>(
a: OptionWithDescription<T>[],
b: OptionWithDescription<T>[],
): boolean {
if (a.length !== b.length) return false
for (let i = 0; i < a.length; i++) {
const ao = a[i]!
const bo = b[i]!
if (
ao.value !== bo.value ||
ao.disabled !== bo.disabled ||
ao.type !== bo.type
) {
return false
}
}
return true
}
type State<T> = { type State<T> = {
/** /**
* Map where key is option's value and value is option's index. * Map where key is option's value and value is option's index.
@@ -548,7 +524,7 @@ export function useSelectNavigation<T>({
const [lastOptions, setLastOptions] = useState(options) const [lastOptions, setLastOptions] = useState(options)
if (options !== lastOptions && !optionsNavigateEqual(options, lastOptions)) { if (options !== lastOptions && !isDeepStrictEqual(options, lastOptions)) {
dispatch({ dispatch({
type: 'reset', type: 'reset',
state: createDefaultState({ state: createDefaultState({

View File

@@ -35,11 +35,6 @@ export type UseSelectStateProps<T> = {
*/ */
onFocus?: (value: T) => void onFocus?: (value: T) => void
/**
* Initial value to focus when the component mounts.
*/
defaultFocusValue?: T
/** /**
* Value to focus * Value to focus
*/ */
@@ -136,7 +131,6 @@ export function useSelectState<T>({
onChange, onChange,
onCancel, onCancel,
onFocus, onFocus,
defaultFocusValue,
focusValue, focusValue,
}: UseSelectStateProps<T>): SelectState<T> { }: UseSelectStateProps<T>): SelectState<T> {
const [value, setValue] = useState<T | undefined>(defaultValue) const [value, setValue] = useState<T | undefined>(defaultValue)
@@ -144,7 +138,7 @@ export function useSelectState<T>({
const navigation = useSelectNavigation<T>({ const navigation = useSelectNavigation<T>({
visibleOptionCount, visibleOptionCount,
options, options,
initialFocusValue: defaultFocusValue, initialFocusValue: undefined,
onFocus, onFocus,
focusValue, focusValue,
}) })

View File

@@ -112,7 +112,7 @@ export function HelpV2(t0) {
} }
tabs.push(t6); tabs.push(t6);
if (false && antOnlyCommands.length > 0) { if (false && antOnlyCommands.length > 0) {
let t7; let t7;
if ($[26] !== antOnlyCommands || $[27] !== close || $[28] !== columns || $[29] !== maxHeight) { if ($[26] !== antOnlyCommands || $[27] !== close || $[28] !== columns || $[29] !== maxHeight) {
t7 = <Tab key="internal-only" title="[internal-only]"><Commands commands={antOnlyCommands} maxHeight={maxHeight} columns={columns} title="Browse internal-only commands:" onCancel={close} /></Tab>; t7 = <Tab key="internal-only" title="[internal-only]"><Commands commands={antOnlyCommands} maxHeight={maxHeight} columns={columns} title="Browse internal-only commands:" onCancel={close} /></Tab>;
$[26] = antOnlyCommands; $[26] = antOnlyCommands;

View File

@@ -252,24 +252,14 @@ function PromptInput({
show: false show: false
}); });
const [cursorOffset, setCursorOffset] = useState<number>(input.length); const [cursorOffset, setCursorOffset] = useState<number>(input.length);
// Track the last input value set via internal handlers so external updates // Track the last input value set via internal handlers so we can detect
// (for example speech-to-text injection) can still move the cursor to end // external input changes (e.g. speech-to-text injection) and move cursor to end.
// without clobbering a pending internal keystroke during render.
const lastInternalInputRef = React.useRef(input); const lastInternalInputRef = React.useRef(input);
const lastPropInputRef = React.useRef(input); if (input !== lastInternalInputRef.current) {
React.useLayoutEffect(() => { // Input changed externally (not through any internal handler) — move cursor to end
if (input === lastPropInputRef.current) { setCursorOffset(input.length);
return;
}
lastPropInputRef.current = input;
if (input === lastInternalInputRef.current) {
return;
}
lastInternalInputRef.current = input; lastInternalInputRef.current = input;
setCursorOffset(prev => prev === input.length ? prev : input.length); }
}, [input]);
// Wrap onInputChange to track internal changes before they trigger re-render // Wrap onInputChange to track internal changes before they trigger re-render
const trackAndSetInput = React.useCallback((value: string) => { const trackAndSetInput = React.useCallback((value: string) => {
lastInternalInputRef.current = value; lastInternalInputRef.current = value;
@@ -2211,7 +2201,7 @@ function PromptInput({
multiline: true, multiline: true,
onSubmit, onSubmit,
onChange, onChange,
value: isSearchingHistory && historyMatch ? getValueFromInput(typeof historyMatch === 'string' ? historyMatch : historyMatch.display) : input, value: historyMatch ? getValueFromInput(typeof historyMatch === 'string' ? historyMatch : historyMatch.display) : input,
// History navigation is handled via TextInput props (onHistoryUp/onHistoryDown), // History navigation is handled via TextInput props (onHistoryUp/onHistoryDown),
// NOT via useKeybindings. This allows useTextInput's upOrHistoryUp/downOrHistoryDown // NOT via useKeybindings. This allows useTextInput's upOrHistoryUp/downOrHistoryDown
// to try cursor movement first and only fall through to history navigation when the // to try cursor movement first and only fall through to history navigation when the
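Editor's note: the PromptInput hunk above shows two variants of the same idea, syncing the cursor to the end when the value changes from outside (for example speech-to-text injection) while leaving internal keystrokes alone. A sketch of the simpler, render-time variant; the hook name is illustrative:

import * as React from 'react'

function useCursorFollowsExternalInput(
  input: string,
): [number, (offset: number) => void, (value: string) => void] {
  const [cursorOffset, setCursorOffset] = React.useState(input.length)
  const lastInternalInputRef = React.useRef(input)
  // If the prop differs from the last value our own handlers produced, the
  // change came from outside, so snap the cursor to the end of the new value.
  if (input !== lastInternalInputRef.current) {
    setCursorOffset(input.length)
    lastInternalInputRef.current = input
  }
  // Internal edits record themselves first, so the check above ignores them.
  const trackInternal = (value: string): void => {
    lastInternalInputRef.current = value
  }
  return [cursorOffset, setCursorOffset, trackInternal]
}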

View File

@@ -6,7 +6,6 @@ import stripAnsi from 'strip-ansi'
import { createRoot } from '../ink.js' import { createRoot } from '../ink.js'
import { AppStateProvider } from '../state/AppState.js' import { AppStateProvider } from '../state/AppState.js'
import { KeybindingSetup } from '../keybindings/KeybindingProviderSetup.js'
const SYNC_START = '\x1B[?2026h' const SYNC_START = '\x1B[?2026h'
const SYNC_END = '\x1B[?2026l' const SYNC_END = '\x1B[?2026l'
@@ -107,30 +106,19 @@ function createDeferred<T>(): {
return { promise, resolve } return { promise, resolve }
} }
function mockProviderProfilesModule(options?: { function mockProviderProfilesModule(): void {
addProviderProfile?: (...args: unknown[]) => unknown
}): void {
mock.module('../utils/providerProfiles.js', () => ({ mock.module('../utils/providerProfiles.js', () => ({
addProviderProfile: options?.addProviderProfile ?? (() => null), addProviderProfile: () => null,
applyActiveProviderProfileFromConfig: () => {}, applyActiveProviderProfileFromConfig: () => {},
deleteProviderProfile: () => ({ removed: false, activeProfileId: null }), deleteProviderProfile: () => ({ removed: false, activeProfileId: null }),
getActiveProviderProfile: () => null, getActiveProviderProfile: () => null,
getProviderPresetDefaults: (preset: string) => getProviderPresetDefaults: () => ({
preset === 'ollama' provider: 'openai',
? { name: 'Mock provider',
provider: 'openai', baseUrl: 'http://localhost:11434/v1',
name: 'Ollama', model: 'mock-model',
baseUrl: 'http://localhost:11434/v1', apiKey: '',
model: 'llama3.1:8b', }),
apiKey: '',
}
: {
provider: 'openai',
name: 'Mock provider',
baseUrl: 'http://localhost:11434/v1',
model: 'mock-model',
apiKey: '',
},
getProviderProfiles: () => [], getProviderProfiles: () => [],
setActiveProviderProfile: () => null, setActiveProviderProfile: () => null,
updateProviderProfile: () => null, updateProviderProfile: () => null,
@@ -140,27 +128,8 @@ function mockProviderProfilesModule(options?: {
function mockProviderManagerDependencies( function mockProviderManagerDependencies(
syncRead: () => string | undefined, syncRead: () => string | undefined,
asyncRead: () => Promise<string | undefined>, asyncRead: () => Promise<string | undefined>,
options?: {
addProviderProfile?: (...args: unknown[]) => unknown
hasLocalOllama?: () => Promise<boolean>
listOllamaModels?: () => Promise<
Array<{
name: string
sizeBytes?: number | null
family?: string | null
families?: string[]
parameterSize?: string | null
quantizationLevel?: string | null
}>
>
},
): void { ): void {
mockProviderProfilesModule({ addProviderProfile: options?.addProviderProfile }) mockProviderProfilesModule()
mock.module('../utils/providerDiscovery.js', () => ({
hasLocalOllama: options?.hasLocalOllama ?? (async () => false),
listOllamaModels: options?.listOllamaModels ?? (async () => []),
}))
mock.module('../utils/githubModelsCredentials.js', () => ({ mock.module('../utils/githubModelsCredentials.js', () => ({
clearGithubModelsToken: () => ({ success: true }), clearGithubModelsToken: () => ({ success: true }),
@@ -193,14 +162,9 @@ async function waitForFrameOutput(
async function mountProviderManager( async function mountProviderManager(
ProviderManager: React.ComponentType<{ ProviderManager: React.ComponentType<{
mode: 'first-run' | 'manage' mode: 'first-run' | 'manage'
onDone: (result?: unknown) => void onDone: () => void
}>, }>,
options?: {
mode?: 'first-run' | 'manage'
onDone?: (result?: unknown) => void
},
): Promise<{ ): Promise<{
stdin: PassThrough
getOutput: () => string getOutput: () => string
dispose: () => Promise<void> dispose: () => Promise<void>
}> { }> {
@@ -213,17 +177,14 @@ async function mountProviderManager(
root.render( root.render(
<AppStateProvider> <AppStateProvider>
<KeybindingSetup> <ProviderManager
<ProviderManager mode="manage"
mode={options?.mode ?? 'manage'} onDone={() => {}}
onDone={options?.onDone ?? (() => {})} />
/>
</KeybindingSetup>
</AppStateProvider>, </AppStateProvider>,
) )
return { return {
stdin,
getOutput, getOutput,
dispose: async () => { dispose: async () => {
root.unmount() root.unmount()
@@ -237,17 +198,14 @@ async function mountProviderManager(
async function renderProviderManagerFrame( async function renderProviderManagerFrame(
ProviderManager: React.ComponentType<{ ProviderManager: React.ComponentType<{
mode: 'first-run' | 'manage' mode: 'first-run' | 'manage'
onDone: (result?: unknown) => void onDone: () => void
}>, }>,
options?: { options?: {
waitForOutput?: (output: string) => boolean waitForOutput?: (output: string) => boolean
timeoutMs?: number timeoutMs?: number
mode?: 'first-run' | 'manage'
}, },
): Promise<string> { ): Promise<string> {
const mounted = await mountProviderManager(ProviderManager, { const mounted = await mountProviderManager(ProviderManager)
mode: options?.mode,
})
const output = await waitForFrameOutput( const output = await waitForFrameOutput(
mounted.getOutput, mounted.getOutput,
frame => { frame => {
@@ -305,96 +263,6 @@ test('ProviderManager resolves GitHub virtual provider from async storage withou
expect(asyncRead).toHaveBeenCalled() expect(asyncRead).toHaveBeenCalled()
}) })
test('ProviderManager first-run Ollama preset auto-detects installed models', async () => {
delete process.env.CLAUDE_CODE_USE_GITHUB
delete process.env.GITHUB_TOKEN
delete process.env.GH_TOKEN
const onDone = mock(() => {})
const addProviderProfile = mock((payload: {
provider: string
name: string
baseUrl: string
model: string
apiKey?: string
}) => ({
id: 'provider_ollama',
provider: payload.provider,
name: payload.name,
baseUrl: payload.baseUrl,
model: payload.model,
apiKey: payload.apiKey,
}))
mockProviderManagerDependencies(
() => undefined,
async () => undefined,
{
addProviderProfile,
hasLocalOllama: async () => true,
listOllamaModels: async () => [
{
name: 'gemma4:31b-cloud',
family: 'gemma',
parameterSize: '31b',
},
{
name: 'kimi-k2.5:cloud',
family: 'kimi',
parameterSize: '2.5b',
},
],
},
)
const nonce = `${Date.now()}-${Math.random()}`
const { ProviderManager } = await import(`./ProviderManager.js?ts=${nonce}`)
const mounted = await mountProviderManager(ProviderManager, {
mode: 'first-run',
onDone,
})
await waitForFrameOutput(
mounted.getOutput,
frame => frame.includes('Set up provider') && frame.includes('Ollama'),
)
mounted.stdin.write('j')
await Bun.sleep(50)
mounted.stdin.write('\r')
const modelFrame = await waitForFrameOutput(
mounted.getOutput,
frame =>
frame.includes('Choose an Ollama model') &&
frame.includes('gemma4:31b-cloud') &&
frame.includes('kimi-k2.5:cloud'),
)
expect(modelFrame).toContain('Choose an Ollama model')
expect(modelFrame).toContain('gemma4:31b-cloud')
await Bun.sleep(25)
mounted.stdin.write('\r')
await waitForCondition(() => onDone.mock.calls.length > 0)
expect(addProviderProfile).toHaveBeenCalled()
expect(addProviderProfile.mock.calls[0]?.[0]).toMatchObject({
name: 'Ollama',
baseUrl: 'http://localhost:11434/v1',
model: 'gemma4:31b-cloud',
})
expect(onDone).toHaveBeenCalledWith(
expect.objectContaining({
action: 'saved',
message: 'Provider configured: Ollama',
}),
)
await mounted.dispose()
})
test('ProviderManager avoids first-frame false negative while stored-token lookup is pending', async () => { test('ProviderManager avoids first-frame false negative while stored-token lookup is pending', async () => {
delete process.env.CLAUDE_CODE_USE_GITHUB delete process.env.CLAUDE_CODE_USE_GITHUB
delete process.env.GITHUB_TOKEN delete process.env.GITHUB_TOKEN

View File

@@ -3,7 +3,6 @@ import * as React from 'react'
import { Box, Text } from '../ink.js' import { Box, Text } from '../ink.js'
import { useKeybinding } from '../keybindings/useKeybinding.js' import { useKeybinding } from '../keybindings/useKeybinding.js'
import type { ProviderProfile } from '../utils/config.js' import type { ProviderProfile } from '../utils/config.js'
import { hasLocalOllama, listOllamaModels } from '../utils/providerDiscovery.js'
import { import {
addProviderProfile, addProviderProfile,
applyActiveProviderProfileFromConfig, applyActiveProviderProfileFromConfig,
@@ -16,10 +15,6 @@ import {
type ProviderProfileInput, type ProviderProfileInput,
updateProviderProfile, updateProviderProfile,
} from '../utils/providerProfiles.js' } from '../utils/providerProfiles.js'
import {
rankOllamaModels,
recommendOllamaModel,
} from '../utils/providerRecommendation.js'
import { import {
clearGithubModelsToken, clearGithubModelsToken,
GITHUB_MODELS_HYDRATED_ENV_MARKER, GITHUB_MODELS_HYDRATED_ENV_MARKER,
@@ -29,7 +24,7 @@ import {
} from '../utils/githubModelsCredentials.js' } from '../utils/githubModelsCredentials.js'
import { isEnvTruthy } from '../utils/envUtils.js' import { isEnvTruthy } from '../utils/envUtils.js'
import { updateSettingsForSource } from '../utils/settings/settings.js' import { updateSettingsForSource } from '../utils/settings/settings.js'
import { type OptionWithDescription, Select } from './CustomSelect/index.js' import { Select } from './CustomSelect/index.js'
import { Pane } from './design-system/Pane.js' import { Pane } from './design-system/Pane.js'
import TextInput from './TextInput.js' import TextInput from './TextInput.js'
@@ -47,7 +42,6 @@ type Props = {
type Screen = type Screen =
| 'menu' | 'menu'
| 'select-preset' | 'select-preset'
| 'select-ollama-model'
| 'form' | 'form'
| 'select-active' | 'select-active'
| 'select-edit' | 'select-edit'
@@ -57,16 +51,6 @@ type DraftField = 'name' | 'baseUrl' | 'model' | 'apiKey'
type ProviderDraft = Record<DraftField, string> type ProviderDraft = Record<DraftField, string>
type OllamaSelectionState =
| { state: 'idle' }
| { state: 'loading' }
| {
state: 'ready'
options: OptionWithDescription<string>[]
defaultValue?: string
}
| { state: 'unavailable'; message: string }
const FORM_STEPS: Array<{ const FORM_STEPS: Array<{
key: DraftField key: DraftField
label: string label: string
@@ -226,9 +210,6 @@ export function ProviderManager({ mode, onDone }: Props): React.ReactNode {
const [cursorOffset, setCursorOffset] = React.useState(0) const [cursorOffset, setCursorOffset] = React.useState(0)
const [statusMessage, setStatusMessage] = React.useState<string | undefined>() const [statusMessage, setStatusMessage] = React.useState<string | undefined>()
const [errorMessage, setErrorMessage] = React.useState<string | undefined>() const [errorMessage, setErrorMessage] = React.useState<string | undefined>()
const [ollamaSelection, setOllamaSelection] = React.useState<OllamaSelectionState>({
state: 'idle',
})
const currentStep = FORM_STEPS[formStepIndex] ?? FORM_STEPS[0] const currentStep = FORM_STEPS[formStepIndex] ?? FORM_STEPS[0]
const currentStepKey = currentStep.key const currentStepKey = currentStep.key
@@ -383,59 +364,6 @@ export function ProviderManager({ mode, onDone }: Props): React.ReactNode {
return null return null
} }
React.useEffect(() => {
if (screen !== 'select-ollama-model') {
return
}
let cancelled = false
setOllamaSelection({ state: 'loading' })
void (async () => {
const available = await hasLocalOllama(draft.baseUrl)
if (!available) {
if (!cancelled) {
setOllamaSelection({
state: 'unavailable',
message:
'Could not reach Ollama. Start Ollama first, or enter the endpoint manually.',
})
}
return
}
const models = await listOllamaModels(draft.baseUrl)
if (models.length === 0) {
if (!cancelled) {
setOllamaSelection({
state: 'unavailable',
message:
'Ollama is running, but no installed models were found. Pull a chat model such as qwen2.5-coder:7b or llama3.1:8b first, or enter details manually.',
})
}
return
}
const ranked = rankOllamaModels(models, 'balanced')
const recommended = recommendOllamaModel(models, 'balanced')
if (!cancelled) {
setOllamaSelection({
state: 'ready',
defaultValue: recommended?.name ?? ranked[0]?.name,
options: ranked.map(model => ({
label: model.name,
value: model.name,
description: model.summary,
})),
})
}
})()
return () => {
cancelled = true
}
}, [draft.baseUrl, screen])
function startCreateFromPreset(preset: ProviderPreset): void { function startCreateFromPreset(preset: ProviderPreset): void {
const defaults = getProviderPresetDefaults(preset) const defaults = getProviderPresetDefaults(preset)
const nextDraft = { const nextDraft = {
@@ -450,13 +378,6 @@ export function ProviderManager({ mode, onDone }: Props): React.ReactNode {
setFormStepIndex(0) setFormStepIndex(0)
setCursorOffset(nextDraft.name.length) setCursorOffset(nextDraft.name.length)
setErrorMessage(undefined) setErrorMessage(undefined)
if (preset === 'ollama') {
setOllamaSelection({ state: 'loading' })
setScreen('select-ollama-model')
return
}
setScreen('form') setScreen('form')
} }
@@ -476,13 +397,13 @@ export function ProviderManager({ mode, onDone }: Props): React.ReactNode {
setScreen('form') setScreen('form')
} }
function persistDraft(nextDraft: ProviderDraft = draft): void { function persistDraft(): void {
const payload: ProviderProfileInput = { const payload: ProviderProfileInput = {
provider: draftProvider, provider: draftProvider,
name: nextDraft.name, name: draft.name,
baseUrl: nextDraft.baseUrl, baseUrl: draft.baseUrl,
model: nextDraft.model, model: draft.model,
apiKey: nextDraft.apiKey, apiKey: draft.apiKey,
} }
const saved = editingProfileId const saved = editingProfileId
@@ -525,83 +446,6 @@ export function ProviderManager({ mode, onDone }: Props): React.ReactNode {
setScreen('menu') setScreen('menu')
} }
function renderOllamaSelection(): React.ReactNode {
if (ollamaSelection.state === 'loading' || ollamaSelection.state === 'idle') {
return (
<Box flexDirection="column" gap={1}>
<Text color="remember" bold>
Checking Ollama
</Text>
<Text dimColor>Looking for installed Ollama models...</Text>
</Box>
)
}
if (ollamaSelection.state === 'unavailable') {
return (
<Box flexDirection="column" gap={1}>
<Text color="remember" bold>
Ollama setup
</Text>
<Text dimColor>{ollamaSelection.message}</Text>
<Select
options={[
{
value: 'manual',
label: 'Enter manually',
description: 'Fill in the base URL and model yourself',
},
{
value: 'back',
label: 'Back',
description: 'Choose another provider preset',
},
]}
onChange={value => {
if (value === 'manual') {
setFormStepIndex(0)
setCursorOffset(draft.name.length)
setScreen('form')
return
}
setScreen('select-preset')
}}
onCancel={() => setScreen('select-preset')}
visibleOptionCount={2}
/>
</Box>
)
}
return (
<Box flexDirection="column" gap={1}>
<Text color="remember" bold>
Choose an Ollama model
</Text>
<Text dimColor>
Pick one of the installed Ollama models to save into a local provider
profile.
</Text>
<Select
options={ollamaSelection.options}
defaultValue={ollamaSelection.defaultValue}
defaultFocusValue={ollamaSelection.defaultValue}
inlineDescriptions
visibleOptionCount={Math.min(8, ollamaSelection.options.length)}
onChange={value => {
const nextDraft = {
...draft,
model: value,
}
setDraft(nextDraft)
persistDraft(nextDraft)
}}
onCancel={() => setScreen('select-preset')}
/>
</Box>
)
}
function handleFormSubmit(value: string): void { function handleFormSubmit(value: string): void {
const trimmed = value.trim() const trimmed = value.trim()
@@ -626,7 +470,7 @@ export function ProviderManager({ mode, onDone }: Props): React.ReactNode {
return return
} }
persistDraft(nextDraft) persistDraft()
} }
function handleBackFromForm(): void { function handleBackFromForm(): void {
@@ -975,16 +819,13 @@ export function ProviderManager({ mode, onDone }: Props): React.ReactNode {
let content: React.ReactNode let content: React.ReactNode
switch (screen) { switch (screen) {
case 'select-preset': case 'select-preset':
content = renderPresetSelection() content = renderPresetSelection()
break break
case 'select-ollama-model': case 'form':
content = renderOllamaSelection() content = renderForm()
break break
case 'form':
content = renderForm()
break
case 'select-active': case 'select-active':
content = renderProfileSelection( content = renderProfileSelection(
'Set active provider', 'Set active provider',
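Editor's note: the removed select-ollama-model screen above used the classic cancellable-probe effect, a `cancelled` flag flipped in the cleanup so a slow hasLocalOllama/listOllamaModels response cannot clobber state after the user navigates away. A generic sketch of that pattern (types and probe() are stand-ins, not project APIs):

import { useEffect, useState } from 'react'

type ProbeState =
  | { state: 'loading' }
  | { state: 'ready'; models: string[] }
  | { state: 'unavailable'; message: string }

function useAsyncProbe(
  baseUrl: string,
  probe: (url: string) => Promise<string[]>,
): ProbeState {
  const [result, setResult] = useState<ProbeState>({ state: 'loading' })
  useEffect(() => {
    let cancelled = false
    setResult({ state: 'loading' })
    void (async () => {
      try {
        const models = await probe(baseUrl)
        if (!cancelled) {
          setResult(
            models.length > 0
              ? { state: 'ready', models }
              : { state: 'unavailable', message: 'No models found.' },
          )
        }
      } catch {
        if (!cancelled) {
          setResult({ state: 'unavailable', message: 'Could not reach the endpoint.' })
        }
      }
    })()
    // If baseUrl changes or the screen unmounts, ignore the stale response.
    return () => {
      cancelled = true
    }
  }, [baseUrl, probe])
  return result
}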

View File

@@ -7,8 +7,6 @@
import { isLocalProviderUrl } from '../services/api/providerConfig.js' import { isLocalProviderUrl } from '../services/api/providerConfig.js'
import { getLocalOpenAICompatibleProviderLabel } from '../utils/providerDiscovery.js' import { getLocalOpenAICompatibleProviderLabel } from '../utils/providerDiscovery.js'
import { getSettings_DEPRECATED } from '../utils/settings/settings.js'
import { parseUserSpecifiedModel } from '../utils/model/model.js'
declare const MACRO: { VERSION: string; DISPLAY_VERSION?: string } declare const MACRO: { VERSION: string; DISPLAY_VERSION?: string }
@@ -87,7 +85,6 @@ function detectProvider(): { name: string; model: string; baseUrl: string; isLoc
const useGemini = process.env.CLAUDE_CODE_USE_GEMINI === '1' || process.env.CLAUDE_CODE_USE_GEMINI === 'true' const useGemini = process.env.CLAUDE_CODE_USE_GEMINI === '1' || process.env.CLAUDE_CODE_USE_GEMINI === 'true'
const useGithub = process.env.CLAUDE_CODE_USE_GITHUB === '1' || process.env.CLAUDE_CODE_USE_GITHUB === 'true' const useGithub = process.env.CLAUDE_CODE_USE_GITHUB === '1' || process.env.CLAUDE_CODE_USE_GITHUB === 'true'
const useOpenAI = process.env.CLAUDE_CODE_USE_OPENAI === '1' || process.env.CLAUDE_CODE_USE_OPENAI === 'true' const useOpenAI = process.env.CLAUDE_CODE_USE_OPENAI === '1' || process.env.CLAUDE_CODE_USE_OPENAI === 'true'
const useMistral = process.env.CLAUDE_CODE_USE_MISTRAL === '1' || process.env.CLAUDE_CODE_USE_MISTRAL === 'true'
if (useGemini) { if (useGemini) {
const model = process.env.GEMINI_MODEL || 'gemini-2.0-flash' const model = process.env.GEMINI_MODEL || 'gemini-2.0-flash'
@@ -95,17 +92,11 @@ function detectProvider(): { name: string; model: string; baseUrl: string; isLoc
return { name: 'Google Gemini', model, baseUrl, isLocal: false } return { name: 'Google Gemini', model, baseUrl, isLocal: false }
} }
if (useMistral) {
const model = process.env.MISTRAL_MODEL || 'devstral-latest'
const baseUrl = process.env.MISTRAL_BASE_URL || 'https://api.mistral.ai/v1'
return { name: 'Mistral', model, baseUrl, isLocal: false }
}
if (useGithub) { if (useGithub) {
const model = process.env.OPENAI_MODEL || 'github:copilot' const model = process.env.OPENAI_MODEL || 'github:copilot'
const baseUrl = const baseUrl =
process.env.OPENAI_BASE_URL || 'https://api.githubcopilot.com' process.env.OPENAI_BASE_URL || 'https://models.github.ai/inference'
return { name: 'GitHub Copilot', model, baseUrl, isLocal: false } return { name: 'GitHub Models', model, baseUrl, isLocal: false }
} }
if (useOpenAI) { if (useOpenAI) {
@@ -148,11 +139,9 @@ function detectProvider(): { name: string; model: string; baseUrl: string; isLoc
return { name, model: displayModel, baseUrl, isLocal } return { name, model: displayModel, baseUrl, isLocal }
} }
// Default: Anthropic - check settings.model first, then env vars // Default: Anthropic
const settings = getSettings_DEPRECATED() || {} const model = process.env.ANTHROPIC_MODEL || process.env.CLAUDE_MODEL || 'claude-sonnet-4-6'
const modelSetting = settings.model || process.env.ANTHROPIC_MODEL || process.env.CLAUDE_MODEL || 'claude-sonnet-4-6' return { name: 'Anthropic', model, baseUrl: 'https://api.anthropic.com', isLocal: false }
const resolvedModel = parseUserSpecifiedModel(modelSetting)
return { name: 'Anthropic', model: resolvedModel, baseUrl: 'https://api.anthropic.com', isLocal: false }
} }
// ─── Box drawing ────────────────────────────────────────────────────────────── // ─── Box drawing ──────────────────────────────────────────────────────────────
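Editor's note: detectProvider above checks each CLAUDE_CODE_USE_* flag in order, so the first enabled override wins and anything unset falls through to the Anthropic default. A tiny sketch of the flag check it repeats (helper name is illustrative):

// A provider override counts as enabled only for the exact strings '1' or 'true'.
function isFlagEnabled(value: string | undefined): boolean {
  return value === '1' || value === 'true'
}

// e.g. CLAUDE_CODE_USE_GEMINI=1 selects the Gemini branch; '0', 'false', or an
// unset variable falls through to the next provider check.
const useGemini = isFlagEnabled(process.env.CLAUDE_CODE_USE_GEMINI)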

View File

@@ -1,231 +0,0 @@
import { PassThrough } from 'node:stream'
import { expect, test } from 'bun:test'
import React from 'react'
import stripAnsi from 'strip-ansi'
import { createRoot } from '../ink.js'
import { AppStateProvider } from '../state/AppState.js'
import TextInput from './TextInput.js'
import VimTextInput from './VimTextInput.js'
const SYNC_START = '\x1B[?2026h'
const SYNC_END = '\x1B[?2026l'
function extractLastFrame(output: string): string {
let lastFrame: string | null = null
let cursor = 0
while (cursor < output.length) {
const start = output.indexOf(SYNC_START, cursor)
if (start === -1) {
break
}
const contentStart = start + SYNC_START.length
const end = output.indexOf(SYNC_END, contentStart)
if (end === -1) {
break
}
const frame = output.slice(contentStart, end)
if (frame.trim().length > 0) {
lastFrame = frame
}
cursor = end + SYNC_END.length
}
return lastFrame ?? output
}
function createTestStreams(): {
stdout: PassThrough
stdin: PassThrough & {
isTTY: boolean
setRawMode: (mode: boolean) => void
ref: () => void
unref: () => void
}
getOutput: () => string
} {
let output = ''
const stdout = new PassThrough()
const stdin = new PassThrough() as PassThrough & {
isTTY: boolean
setRawMode: (mode: boolean) => void
ref: () => void
unref: () => void
}
stdin.isTTY = true
stdin.setRawMode = () => {}
stdin.ref = () => {}
stdin.unref = () => {}
;(stdout as unknown as { columns: number }).columns = 120
stdout.on('data', chunk => {
output += chunk.toString()
})
return {
stdout,
stdin,
getOutput: () => output,
}
}
function DelayedControlledTextInput(): React.ReactNode {
const [value, setValue] = React.useState('')
const [cursorOffset, setCursorOffset] = React.useState(0)
const valueTimerRef = React.useRef<ReturnType<typeof setTimeout> | null>(null)
const offsetTimerRef = React.useRef<ReturnType<typeof setTimeout> | null>(null)
React.useEffect(() => {
return () => {
if (valueTimerRef.current) {
clearTimeout(valueTimerRef.current)
}
if (offsetTimerRef.current) {
clearTimeout(offsetTimerRef.current)
}
}
}, [])
return (
<AppStateProvider>
<TextInput
value={value}
onChange={nextValue => {
if (valueTimerRef.current) {
clearTimeout(valueTimerRef.current)
}
valueTimerRef.current = setTimeout(() => {
setValue(nextValue)
}, 200)
}}
onSubmit={() => {}}
placeholder="Type here..."
columns={60}
cursorOffset={cursorOffset}
onChangeCursorOffset={nextOffset => {
if (offsetTimerRef.current) {
clearTimeout(offsetTimerRef.current)
}
offsetTimerRef.current = setTimeout(() => {
setCursorOffset(nextOffset)
}, 200)
}}
focus
showCursor
multiline
/>
</AppStateProvider>
)
}
function DelayedControlledVimTextInput(): React.ReactNode {
const [value, setValue] = React.useState('')
const [cursorOffset, setCursorOffset] = React.useState(0)
const valueTimerRef = React.useRef<ReturnType<typeof setTimeout> | null>(null)
const offsetTimerRef = React.useRef<ReturnType<typeof setTimeout> | null>(null)
React.useEffect(() => {
return () => {
if (valueTimerRef.current) {
clearTimeout(valueTimerRef.current)
}
if (offsetTimerRef.current) {
clearTimeout(offsetTimerRef.current)
}
}
}, [])
return (
<AppStateProvider>
<VimTextInput
value={value}
onChange={nextValue => {
if (valueTimerRef.current) {
clearTimeout(valueTimerRef.current)
}
valueTimerRef.current = setTimeout(() => {
setValue(nextValue)
}, 200)
}}
onSubmit={() => {}}
placeholder="Type here..."
columns={60}
cursorOffset={cursorOffset}
onChangeCursorOffset={nextOffset => {
if (offsetTimerRef.current) {
clearTimeout(offsetTimerRef.current)
}
offsetTimerRef.current = setTimeout(() => {
setCursorOffset(nextOffset)
}, 200)
}}
initialMode="INSERT"
focus
showCursor
multiline
/>
</AppStateProvider>
)
}
test('TextInput renders typed characters before delayed parent value commits', async () => {
const { stdout, stdin, getOutput } = createTestStreams()
const root = await createRoot({
stdout: stdout as unknown as NodeJS.WriteStream,
stdin: stdin as unknown as NodeJS.ReadStream,
patchConsole: false,
})
root.render(<DelayedControlledTextInput />)
await Bun.sleep(50)
stdin.write('a')
await Bun.sleep(25)
stdin.write('b')
await Bun.sleep(25)
const output = stripAnsi(extractLastFrame(getOutput()))
root.unmount()
stdin.end()
stdout.end()
await Bun.sleep(25)
expect(output).toContain('ab')
expect(output).not.toContain('Type here...')
})
test('VimTextInput preserves rapid typed characters before delayed parent value commits', async () => {
const { stdout, stdin, getOutput } = createTestStreams()
const root = await createRoot({
stdout: stdout as unknown as NodeJS.WriteStream,
stdin: stdin as unknown as NodeJS.ReadStream,
patchConsole: false,
})
root.render(<DelayedControlledVimTextInput />)
await Bun.sleep(50)
stdin.write('a')
await Bun.sleep(25)
stdin.write('s')
await Bun.sleep(25)
stdin.write('d')
await Bun.sleep(25)
stdin.write('f')
await Bun.sleep(25)
const output = stripAnsi(extractLastFrame(getOutput()))
root.unmount()
stdin.end()
stdout.end()
await Bun.sleep(25)
expect(output).toContain('asdf')
expect(output).not.toContain('Type here...')
})

View File

@@ -1,161 +1,113 @@
import { PassThrough } from 'node:stream' import { describe, expect, it, mock } from 'bun:test'
import { afterEach, expect, mock, test } from 'bun:test' // We can't fully render ThemePicker due to complex dependencies
import React from 'react' // But we can test the theme options generation logic
import stripAnsi from 'strip-ansi' describe('ThemePicker', () => {
describe('theme options', () => {
it('generates correct theme options without AUTO_THEME feature flag', () => {
// Since we can't easily mock bun:bundle, test the options structure
// The real test would require integration testing
const expectedOptions = [
{ label: "Dark mode", value: "dark" },
{ label: "Light mode", value: "light" },
{ label: "Dark mode (colorblind-friendly)", value: "dark-daltonized" },
{ label: "Light mode (colorblind-friendly)", value: "light-daltonized" },
{ label: "Dark mode (ANSI colors only)", value: "dark-ansi" },
{ label: "Light mode (ANSI colors only)", value: "light-ansi" },
]
expect(expectedOptions.length).toBe(6)
})
import { createRoot, Text, useTheme } from '../ink.js' it('includes auto theme when AUTO_THEME feature is enabled', () => {
import { KeybindingSetup } from '../keybindings/KeybindingProviderSetup.js' // Test the structure when auto is present
import { AppStateProvider } from '../state/AppState.js' const optionsWithAuto = [
import { ThemeProvider } from './design-system/ThemeProvider.js' { label: "Auto (match terminal)", value: "auto" },
{ label: "Dark mode", value: "dark" },
mock.module('./StructuredDiff.js', () => ({ ]
StructuredDiff: function StructuredDiffPreview(): React.ReactNode { expect(optionsWithAuto[0].value).toBe('auto')
const [theme] = useTheme() })
return <Text>{`Preview theme: ${theme}`}</Text>
},
}))
mock.module('./StructuredDiff/colorDiff.js', () => ({
getColorModuleUnavailableReason: () => 'env',
getSyntaxTheme: () => null,
}))
const SYNC_START = '\x1B[?2026h'
const SYNC_END = '\x1B[?2026l'
function extractLastFrame(output: string): string {
let lastFrame: string | null = null
let cursor = 0
while (cursor < output.length) {
const start = output.indexOf(SYNC_START, cursor)
if (start === -1) {
break
}
const contentStart = start + SYNC_START.length
const end = output.indexOf(SYNC_END, contentStart)
if (end === -1) {
break
}
const frame = output.slice(contentStart, end)
if (frame.trim().length > 0) {
lastFrame = frame
}
cursor = end + SYNC_END.length
}
return lastFrame ?? output
}
function createTestStreams(): {
stdout: PassThrough
stdin: PassThrough & {
isTTY: boolean
setRawMode: (mode: boolean) => void
ref: () => void
unref: () => void
}
getOutput: () => string
} {
let output = ''
const stdout = new PassThrough()
const stdin = new PassThrough() as PassThrough & {
isTTY: boolean
setRawMode: (mode: boolean) => void
ref: () => void
unref: () => void
}
stdin.isTTY = true
stdin.setRawMode = () => {}
stdin.ref = () => {}
stdin.unref = () => {}
;(stdout as unknown as { columns: number }).columns = 120
stdout.on('data', chunk => {
output += chunk.toString()
}) })
return { describe('handleRowFocus callback', () => {
stdout, it('setPreviewTheme is called with theme setting', () => {
stdin, const setPreviewTheme = mock()
getOutput: () => output, const handleRowFocus = (setting: string) => setPreviewTheme(setting)
}
}
async function waitForCondition( handleRowFocus('dark')
predicate: () => boolean, expect(setPreviewTheme).toHaveBeenCalledWith('dark')
timeoutMs = 2000, })
): Promise<void> {
const startedAt = Date.now()
while (Date.now() - startedAt < timeoutMs) {
if (predicate()) {
return
}
await Bun.sleep(10)
}
throw new Error('Timed out waiting for ThemePicker test condition')
}
async function waitForFrame(
getOutput: () => string,
predicate: (frame: string) => boolean,
): Promise<string> {
let frame = ''
await waitForCondition(() => {
frame = stripAnsi(extractLastFrame(getOutput()))
return predicate(frame)
}) })
return frame describe('handleSelect callback', () => {
} it('calls savePreview and onThemeSelect', () => {
const savePreview = mock()
const onThemeSelect = mock()
const handleSelect = (setting: string) => {
savePreview()
onThemeSelect(setting)
}
afterEach(() => { handleSelect('light')
mock.restore() expect(savePreview).toHaveBeenCalled()
}) expect(onThemeSelect).toHaveBeenCalledWith('light')
})
test('updates the preview when keyboard focus moves to another theme', async () => { })
const { ThemePicker } = await import('./ThemePicker.js')
const { stdout, stdin, getOutput } = createTestStreams() describe('handleCancel callback', () => {
const root = await createRoot({ it('calls cancelPreview and gracefulShutdown when not skipExitHandling', () => {
stdout: stdout as unknown as NodeJS.WriteStream, const cancelPreview = mock()
stdin: stdin as unknown as NodeJS.ReadStream, const gracefulShutdown = mock()
patchConsole: false, const handleCancel = (skipExitHandling: boolean, onCancelProp?: () => void) => {
}) cancelPreview()
if (skipExitHandling) {
root.render( onCancelProp?.()
<AppStateProvider> } else {
<KeybindingSetup> gracefulShutdown(0)
<ThemeProvider initialState="dark"> }
<ThemePicker onThemeSelect={() => {}} /> }
</ThemeProvider>
</KeybindingSetup> handleCancel(false)
</AppStateProvider>, expect(cancelPreview).toHaveBeenCalled()
) expect(gracefulShutdown).toHaveBeenCalledWith(0)
})
try {
const initialFrame = await waitForFrame( it('calls onCancelProp when skipExitHandling is true', () => {
getOutput, const cancelPreview = mock()
frame => frame.includes('Preview theme: dark'), const onCancelProp = mock()
) const handleCancel = (skipExitHandling: boolean, onCancelProp?: () => void) => {
expect(initialFrame).toContain('Preview theme: dark') cancelPreview()
if (skipExitHandling) {
stdin.write('j') onCancelProp?.()
}
const updatedFrame = await waitForFrame( }
getOutput,
frame => frame.includes('Preview theme: light'), handleCancel(true, onCancelProp)
) expect(cancelPreview).toHaveBeenCalled()
expect(updatedFrame).toContain('Preview theme: light') expect(onCancelProp).toHaveBeenCalled()
} finally { })
root.unmount() })
stdin.end()
stdout.end() describe('syntax hint logic', () => {
await Bun.sleep(0) it('shows disabled hint when syntax highlighting is disabled', () => {
} const syntaxHighlightingDisabled = true
const syntaxToggleShortcut = 'Ctrl+T'
const hint = syntaxHighlightingDisabled
? `Syntax highlighting disabled (${syntaxToggleShortcut} to enable)`
: `Syntax highlighting enabled (${syntaxToggleShortcut} to disable)`
expect(hint).toContain('disabled')
})
it('shows enabled hint when syntax highlighting is active', () => {
const syntaxHighlightingDisabled = false
const syntaxToggleShortcut = 'Ctrl+T'
const hint = !syntaxHighlightingDisabled
? `Syntax highlighting enabled (${syntaxToggleShortcut} to disable)`
: `Syntax highlighting disabled (${syntaxToggleShortcut} to enable)`
expect(hint).toContain('enabled')
})
})
}) })

View File

@@ -1,3 +1,4 @@
import { c as _c } from "react-compiler-runtime";
import { feature } from 'bun:bundle'; import { feature } from 'bun:bundle';
import React, { createContext, useContext, useEffect, useMemo, useState } from 'react'; import React, { createContext, useContext, useEffect, useMemo, useState } from 'react';
import useStdin from '../../ink/hooks/use-stdin.js'; import useStdin from '../../ink/hooks/use-stdin.js';
@@ -119,8 +120,21 @@ export function ThemeProvider({
* accepts any ThemeSetting (including 'auto'). * accepts any ThemeSetting (including 'auto').
*/ */
export function useTheme() { export function useTheme() {
const { currentTheme, setThemeSetting } = useContext(ThemeContext); const $ = _c(3);
return [currentTheme, setThemeSetting] as const; const {
currentTheme,
setThemeSetting
} = useContext(ThemeContext);
let t0;
if ($[0] !== currentTheme || $[1] !== setThemeSetting) {
t0 = [currentTheme, setThemeSetting];
$[0] = currentTheme;
$[1] = setThemeSetting;
$[2] = t0;
} else {
t0 = $[2];
}
return t0;
} }
/** /**
@@ -131,10 +145,25 @@ export function useThemeSetting() {
return useContext(ThemeContext).themeSetting; return useContext(ThemeContext).themeSetting;
} }
export function usePreviewTheme() { export function usePreviewTheme() {
const { setPreviewTheme, savePreview, cancelPreview } = useContext(ThemeContext); const $ = _c(4);
return { const {
setPreviewTheme, setPreviewTheme,
savePreview, savePreview,
cancelPreview, cancelPreview
}; } = useContext(ThemeContext);
let t0;
if ($[0] !== cancelPreview || $[1] !== savePreview || $[2] !== setPreviewTheme) {
t0 = {
setPreviewTheme,
savePreview,
cancelPreview
};
$[0] = cancelPreview;
$[1] = savePreview;
$[2] = setPreviewTheme;
$[3] = t0;
} else {
t0 = $[3];
}
return t0;
} }
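Editor's note: the `$ = _c(n)` slots added above are React Compiler output; they cache the returned tuple/object so consumers get a stable reference until one of its parts changes. A hand-written equivalent for useTheme, assuming ThemeContext is importable from the provider module (the import path is hypothetical):

import { useContext, useMemo } from 'react'
import { ThemeContext } from './ThemeProvider.js' // hypothetical import path

export function useThemeSketch() {
  const { currentTheme, setThemeSetting } = useContext(ThemeContext)
  // Same effect as the compiled _c cache: stable reference across renders
  // unless currentTheme or setThemeSetting actually changes.
  return useMemo(
    () => [currentTheme, setThemeSetting] as const,
    [currentTheme, setThemeSetting],
  )
}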

View File

@@ -32,7 +32,7 @@ export function optionForPermissionSaveDestination(saveDestination: EditableSett
case 'userSettings': case 'userSettings':
return { return {
label: 'User settings', label: 'User settings',
description: `Saved in ~/.openclaude/settings.json`, description: `Saved at ~/.claude/settings.json`,
value: saveDestination value: saveDestination
}; };
} }

View File

@@ -33,14 +33,14 @@ export const IMAGE_TARGET_RAW_SIZE = (API_IMAGE_MAX_BASE64_SIZE * 3) / 4 // 3.75
* *
* Note: The API internally resizes images larger than 1568px (source: * Note: The API internally resizes images larger than 1568px (source:
* encoding/full_encoding.py), but this is handled server-side and doesn't * encoding/full_encoding.py), but this is handled server-side and doesn't
* cause errors. These client-side limits (1568px) are slightly larger to * cause errors. These client-side limits (2000px) are slightly larger to
* preserve quality when beneficial. * preserve quality when beneficial.
* *
* The API_IMAGE_MAX_BASE64_SIZE (5MB) is the actual hard limit that causes * The API_IMAGE_MAX_BASE64_SIZE (5MB) is the actual hard limit that causes
* API errors if exceeded. * API errors if exceeded.
*/ */
export const IMAGE_MAX_WIDTH = 1568 export const IMAGE_MAX_WIDTH = 2000
export const IMAGE_MAX_HEIGHT = 1568 export const IMAGE_MAX_HEIGHT = 2000
// ============================================================================= // =============================================================================
// PDF LIMITS // PDF LIMITS
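
Quick check on the 3.75 figure in the hunk header above (IMAGE_TARGET_RAW_SIZE = (API_IMAGE_MAX_BASE64_SIZE * 3) / 4): base64 encodes every 3 raw bytes as 4 characters, so a 5 MB base64 cap leaves roughly 3.75 MB of raw image data. The sketch below assumes the 5 MB constant is expressed in binary megabytes; the real definition lives elsewhere in this file.

// Assumption: API_IMAGE_MAX_BASE64_SIZE is 5 MiB in bytes.
const API_IMAGE_MAX_BASE64_SIZE = 5 * 1024 * 1024 // 5,242,880 bytes
const IMAGE_TARGET_RAW_SIZE = (API_IMAGE_MAX_BASE64_SIZE * 3) / 4 // 3,932,160 bytes ≈ 3.75 MB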

View File

@@ -2,11 +2,8 @@ import { afterEach, expect, test } from 'bun:test'
import { getSystemPrompt, DEFAULT_AGENT_PROMPT } from './prompts.js' import { getSystemPrompt, DEFAULT_AGENT_PROMPT } from './prompts.js'
import { CLI_SYSPROMPT_PREFIXES, getCLISyspromptPrefix } from './system.js' import { CLI_SYSPROMPT_PREFIXES, getCLISyspromptPrefix } from './system.js'
import { CLAUDE_CODE_GUIDE_AGENT } from '../tools/AgentTool/built-in/claudeCodeGuideAgent.js'
import { GENERAL_PURPOSE_AGENT } from '../tools/AgentTool/built-in/generalPurposeAgent.js' import { GENERAL_PURPOSE_AGENT } from '../tools/AgentTool/built-in/generalPurposeAgent.js'
import { EXPLORE_AGENT } from '../tools/AgentTool/built-in/exploreAgent.js' import { EXPLORE_AGENT } from '../tools/AgentTool/built-in/exploreAgent.js'
import { PLAN_AGENT } from '../tools/AgentTool/built-in/planAgent.js'
import { STATUSLINE_SETUP_AGENT } from '../tools/AgentTool/built-in/statuslineSetup.js'
const originalSimpleEnv = process.env.CLAUDE_CODE_SIMPLE const originalSimpleEnv = process.env.CLAUDE_CODE_SIMPLE
@@ -16,12 +13,10 @@ afterEach(() => {
test('CLI identity prefixes describe OpenClaude instead of Claude Code', () => { test('CLI identity prefixes describe OpenClaude instead of Claude Code', () => {
expect(getCLISyspromptPrefix()).toContain('OpenClaude') expect(getCLISyspromptPrefix()).toContain('OpenClaude')
expect(getCLISyspromptPrefix()).not.toContain('Claude Code')
expect(getCLISyspromptPrefix()).not.toContain("Anthropic's official CLI for Claude") expect(getCLISyspromptPrefix()).not.toContain("Anthropic's official CLI for Claude")
for (const prefix of CLI_SYSPROMPT_PREFIXES) { for (const prefix of CLI_SYSPROMPT_PREFIXES) {
expect(prefix).toContain('OpenClaude') expect(prefix).toContain('OpenClaude')
expect(prefix).not.toContain('Claude Code')
expect(prefix).not.toContain("Anthropic's official CLI for Claude") expect(prefix).not.toContain("Anthropic's official CLI for Claude")
} }
}) })
@@ -32,53 +27,22 @@ test('simple mode identity describes OpenClaude instead of Claude Code', async (
const prompt = await getSystemPrompt([], 'gpt-4o') const prompt = await getSystemPrompt([], 'gpt-4o')
expect(prompt[0]).toContain('OpenClaude') expect(prompt[0]).toContain('OpenClaude')
expect(prompt[0]).not.toContain('Claude Code')
expect(prompt[0]).not.toContain("Anthropic's official CLI for Claude") expect(prompt[0]).not.toContain("Anthropic's official CLI for Claude")
}) })
test('built-in agent prompts describe OpenClaude instead of Claude Code', () => { test('built-in agent prompts describe OpenClaude instead of Claude Code', () => {
expect(DEFAULT_AGENT_PROMPT).toContain('OpenClaude') expect(DEFAULT_AGENT_PROMPT).toContain('OpenClaude')
expect(DEFAULT_AGENT_PROMPT).not.toContain('Claude Code')
expect(DEFAULT_AGENT_PROMPT).not.toContain("Anthropic's official CLI for Claude") expect(DEFAULT_AGENT_PROMPT).not.toContain("Anthropic's official CLI for Claude")
const generalPrompt = GENERAL_PURPOSE_AGENT.getSystemPrompt({ const generalPrompt = GENERAL_PURPOSE_AGENT.getSystemPrompt({
toolUseContext: { options: {} as never }, toolUseContext: { options: {} as never },
}) })
expect(generalPrompt).toContain('OpenClaude') expect(generalPrompt).toContain('OpenClaude')
expect(generalPrompt).not.toContain('Claude Code')
expect(generalPrompt).not.toContain("Anthropic's official CLI for Claude") expect(generalPrompt).not.toContain("Anthropic's official CLI for Claude")
const explorePrompt = EXPLORE_AGENT.getSystemPrompt({ const explorePrompt = EXPLORE_AGENT.getSystemPrompt({
toolUseContext: { options: {} as never }, toolUseContext: { options: {} as never },
}) })
expect(explorePrompt).toContain('OpenClaude') expect(explorePrompt).toContain('OpenClaude')
expect(explorePrompt).not.toContain('Claude Code')
expect(explorePrompt).not.toContain("Anthropic's official CLI for Claude") expect(explorePrompt).not.toContain("Anthropic's official CLI for Claude")
const planPrompt = PLAN_AGENT.getSystemPrompt({
toolUseContext: { options: {} as never },
})
expect(planPrompt).toContain('OpenClaude')
expect(planPrompt).not.toContain('Claude Code')
const statuslinePrompt = STATUSLINE_SETUP_AGENT.getSystemPrompt({
toolUseContext: { options: {} as never },
})
expect(statuslinePrompt).toContain('OpenClaude')
expect(statuslinePrompt).not.toContain('Claude Code')
const guidePrompt = CLAUDE_CODE_GUIDE_AGENT.getSystemPrompt({
toolUseContext: {
options: {
commands: [],
agentDefinitions: { activeAgents: [] },
mcpClients: [],
} as never,
},
})
expect(guidePrompt).toContain('OpenClaude')
expect(guidePrompt).toContain('You are the OpenClaude guide agent.')
expect(guidePrompt).toContain('**OpenClaude** (the CLI tool)')
expect(guidePrompt).not.toContain('You are the Claude guide agent.')
expect(guidePrompt).not.toContain('**Claude Code** (the CLI tool)')
}) })

View File

@@ -214,7 +214,7 @@ function getSimpleDoingTasksSection(): string {
] ]
const userHelpSubitems = [ const userHelpSubitems = [
`/help: Get help with using OpenClaude`, `/help: Get help with using Claude Code`,
`To give feedback, users should ${MACRO.ISSUES_EXPLAINER}`, `To give feedback, users should ${MACRO.ISSUES_EXPLAINER}`,
] ]
@@ -242,7 +242,7 @@ function getSimpleDoingTasksSection(): string {
: []), : []),
...(process.env.USER_TYPE === 'ant' ...(process.env.USER_TYPE === 'ant'
? [ ? [
`If the user reports a bug, slowness, or unexpected behavior with OpenClaude itself (as opposed to asking you to fix their own code), recommend the appropriate slash command: /issue for model-related problems (odd outputs, wrong tool choices, hallucinations, refusals), or /share to upload the full session transcript for product bugs, crashes, slowness, or general issues. Only recommend these when the user is describing a problem with OpenClaude.`, `If the user reports a bug, slowness, or unexpected behavior with Claude Code itself (as opposed to asking you to fix their own code), recommend the appropriate slash command: /issue for model-related problems (odd outputs, wrong tool choices, hallucinations, refusals), or /share to upload the full session transcript for product bugs, crashes, slowness, or general issues. Only recommend these when the user is describing a problem with Claude Code.`,
] ]
: []), : []),
`If the user asks for help or wants to give feedback inform them of the following:`, `If the user asks for help or wants to give feedback inform them of the following:`,
@@ -449,7 +449,7 @@ export async function getSystemPrompt(
): Promise<string[]> { ): Promise<string[]> {
if (isEnvTruthy(process.env.CLAUDE_CODE_SIMPLE)) { if (isEnvTruthy(process.env.CLAUDE_CODE_SIMPLE)) {
return [ return [
`You are OpenClaude, an open-source coding agent and CLI.\n\nCWD: ${getCwd()}\nDate: ${getSessionStartDate()}`, `You are OpenClaude, an open-source fork of Claude Code.\n\nCWD: ${getCwd()}\nDate: ${getSessionStartDate()}`,
] ]
} }
@@ -696,10 +696,10 @@ export async function computeSimpleEnvInfo(
: `The most recent Claude model family is Claude 4.5/4.6. Model IDs — Opus 4.6: '${CLAUDE_4_5_OR_4_6_MODEL_IDS.opus}', Sonnet 4.6: '${CLAUDE_4_5_OR_4_6_MODEL_IDS.sonnet}', Haiku 4.5: '${CLAUDE_4_5_OR_4_6_MODEL_IDS.haiku}'. When building AI applications, default to the latest and most capable Claude models.`, : `The most recent Claude model family is Claude 4.5/4.6. Model IDs — Opus 4.6: '${CLAUDE_4_5_OR_4_6_MODEL_IDS.opus}', Sonnet 4.6: '${CLAUDE_4_5_OR_4_6_MODEL_IDS.sonnet}', Haiku 4.5: '${CLAUDE_4_5_OR_4_6_MODEL_IDS.haiku}'. When building AI applications, default to the latest and most capable Claude models.`,
process.env.USER_TYPE === 'ant' && isUndercover() process.env.USER_TYPE === 'ant' && isUndercover()
? null ? null
: `OpenClaude is available as a CLI in the terminal and can be used across local development environments and IDE workflows.`, : `Claude Code is available as a CLI in the terminal, desktop app (Mac/Windows), web app (claude.ai/code), and IDE extensions (VS Code, JetBrains).`,
process.env.USER_TYPE === 'ant' && isUndercover() process.env.USER_TYPE === 'ant' && isUndercover()
? null ? null
: `Fast mode for OpenClaude uses the same ${FRONTIER_MODEL_NAME} model with faster output. It does NOT switch to a different model. It can be toggled with /fast.`, : `Fast mode for Claude Code uses the same ${FRONTIER_MODEL_NAME} model with faster output. It does NOT switch to a different model. It can be toggled with /fast.`,
].filter(item => item !== null) ].filter(item => item !== null)
return [ return [
@@ -755,7 +755,7 @@ export function getUnameSR(): string {
return `${osType()} ${osRelease()}` return `${osType()} ${osRelease()}`
} }
export const DEFAULT_AGENT_PROMPT = `You are an agent for OpenClaude, an open-source coding agent and CLI. Given the user's message, you should use the tools available to complete the task. Complete the task fully—don't gold-plate, but don't leave it half-done. When you complete the task, respond with a concise report covering what was done and any key findings — the caller will relay this to the user, so it only needs the essentials.` export const DEFAULT_AGENT_PROMPT = `You are an agent for OpenClaude, an open-source fork of Claude Code. Given the user's message, you should use the tools available to complete the task. Complete the task fully—don't gold-plate, but don't leave it half-done. When you complete the task, respond with a concise report covering what was done and any key findings — the caller will relay this to the user, so it only needs the essentials.`
export async function enhanceSystemPromptWithEnvDetails( export async function enhanceSystemPromptWithEnvDetails(
existingSystemPrompt: string[], existingSystemPrompt: string[],

View File

@@ -8,11 +8,11 @@ import { getAPIProvider } from '../utils/model/providers.js'
import { getWorkload } from '../utils/workloadContext.js' import { getWorkload } from '../utils/workloadContext.js'
const DEFAULT_PREFIX = const DEFAULT_PREFIX =
`You are OpenClaude, an open-source coding agent and CLI.` `You are OpenClaude, an open-source fork of Claude Code.`
const AGENT_SDK_CLAUDE_CODE_PRESET_PREFIX = const AGENT_SDK_CLAUDE_CODE_PRESET_PREFIX =
`You are OpenClaude, an open-source coding agent and CLI running within the Claude Agent SDK.` `You are OpenClaude, an open-source fork of Claude Code, running within the Claude Agent SDK.`
const AGENT_SDK_PREFIX = const AGENT_SDK_PREFIX =
`You are OpenClaude, built on the Claude Agent SDK.` `You are a Claude agent running in OpenClaude, built on the Claude Agent SDK.`
const CLI_SYSPROMPT_PREFIX_VALUES = [ const CLI_SYSPROMPT_PREFIX_VALUES = [
DEFAULT_PREFIX, DEFAULT_PREFIX,

View File

@@ -181,7 +181,7 @@ function formatCost(cost: number, maxDecimalPlaces: number = 4): string {
function formatModelUsage(): string { function formatModelUsage(): string {
const modelUsageMap = getModelUsage() const modelUsageMap = getModelUsage()
if (Object.keys(modelUsageMap).length === 0) { if (Object.keys(modelUsageMap).length === 0) {
return 'Usage: 0 input, 0 output' return 'Usage: 0 input, 0 output, 0 cache read, 0 cache write'
} }
// Accumulate usage by short name // Accumulate usage by short name
@@ -211,19 +211,15 @@ function formatModelUsage(): string {
let result = 'Usage by model:' let result = 'Usage by model:'
for (const [shortName, usage] of Object.entries(usageByShortName)) { for (const [shortName, usage] of Object.entries(usageByShortName)) {
let usageString = const usageString =
` ${formatNumber(usage.inputTokens)} input, ` + ` ${formatNumber(usage.inputTokens)} input, ` +
`${formatNumber(usage.outputTokens)} output` `${formatNumber(usage.outputTokens)} output, ` +
if (usage.cacheReadInputTokens > 0) { `${formatNumber(usage.cacheReadInputTokens)} cache read, ` +
usageString += `, ${formatNumber(usage.cacheReadInputTokens)} cache read` `${formatNumber(usage.cacheCreationInputTokens)} cache write` +
} (usage.webSearchRequests > 0
if (usage.cacheCreationInputTokens > 0) { ? `, ${formatNumber(usage.webSearchRequests)} web search`
usageString += `, ${formatNumber(usage.cacheCreationInputTokens)} cache write` : '') +
} ` (${formatCost(usage.costUSD)})`
if (usage.webSearchRequests > 0) {
usageString += `, ${formatNumber(usage.webSearchRequests)} web search`
}
usageString += ` (${formatCost(usage.costUSD)})`
result += `\n` + `${shortName}:`.padStart(21) + usageString result += `\n` + `${shortName}:`.padStart(21) + usageString
} }
return result return result

View File

@@ -96,16 +96,15 @@ async function main(): Promise<void> {
} }
} }
// Enable configs first so we can read settings
{ {
const { enableConfigs } = await import('../utils/config.js') const { enableConfigs } = await import('../utils/config.js')
enableConfigs() enableConfigs()
}
// Apply settings.env from user settings (includes GitHub provider settings from /onboard-github)
{
const { applySafeConfigEnvironmentVariables } = await import('../utils/managedEnv.js') const { applySafeConfigEnvironmentVariables } = await import('../utils/managedEnv.js')
applySafeConfigEnvironmentVariables() applySafeConfigEnvironmentVariables()
const { hydrateGeminiAccessTokenFromSecureStorage } = await import('../utils/geminiCredentials.js')
hydrateGeminiAccessTokenFromSecureStorage()
const { hydrateGithubModelsTokenFromSecureStorage } = await import('../utils/githubModelsCredentials.js')
hydrateGithubModelsTokenFromSecureStorage()
} }
const startupEnv = await buildStartupEnvFromProfile({ const startupEnv = await buildStartupEnvFromProfile({
@@ -122,16 +121,6 @@ async function main(): Promise<void> {
} }
} }
// Hydrate GitHub credentials after profile is applied so CLAUDE_CODE_USE_GITHUB from profile is available
{
const {
hydrateGithubModelsTokenFromSecureStorage,
refreshGithubModelsTokenIfNeeded,
} = await import('../utils/githubModelsCredentials.js')
await refreshGithubModelsTokenIfNeeded()
hydrateGithubModelsTokenFromSecureStorage()
}
await validateProviderEnvOrExit() await validateProviderEnvOrExit()
// Print the gradient startup screen before the Ink UI loads // Print the gradient startup screen before the Ink UI loads

View File

@@ -1,4 +1,4 @@
import { useCallback, useEffect, useSyncExternalStore } from 'react' import { useCallback, useEffect } from 'react'
import type { Command } from '../commands.js' import type { Command } from '../commands.js'
import { useNotifications } from '../context/notifications.js' import { useNotifications } from '../context/notifications.js'
import { import {
@@ -7,11 +7,6 @@ import {
} from '../services/analytics/index.js' } from '../services/analytics/index.js'
import { reinitializeLspServerManager } from '../services/lsp/manager.js' import { reinitializeLspServerManager } from '../services/lsp/manager.js'
import { useAppState, useSetAppState } from '../state/AppState.js' import { useAppState, useSetAppState } from '../state/AppState.js'
import {
getPluginCommandsState,
setPluginCommandsState,
subscribePluginCommands,
} from '../state/pluginCommandsStore.js'
import type { AgentDefinition } from '../tools/AgentTool/loadAgentsDir.js' import type { AgentDefinition } from '../tools/AgentTool/loadAgentsDir.js'
import { count } from '../utils/array.js' import { count } from '../utils/array.js'
import { logForDebugging } from '../utils/debug.js' import { logForDebugging } from '../utils/debug.js'
@@ -44,11 +39,6 @@ export function useManagePlugins({
}: { }: {
enabled?: boolean enabled?: boolean
} = {}) { } = {}) {
const pluginCommands = useSyncExternalStore(
subscribePluginCommands,
getPluginCommandsState,
getPluginCommandsState,
)
const setAppState = useSetAppState() const setAppState = useSetAppState()
const needsRefresh = useAppState(s => s.plugins.needsRefresh) const needsRefresh = useAppState(s => s.plugins.needsRefresh)
const { addNotification } = useNotifications() const { addNotification } = useNotifications()
@@ -84,7 +74,6 @@ export function useManagePlugins({
try { try {
commands = await getPluginCommands() commands = await getPluginCommands()
setPluginCommandsState(commands)
} catch (error) { } catch (error) {
const errorMessage = const errorMessage =
error instanceof Error ? error.message : String(error) error instanceof Error ? error.message : String(error)
@@ -93,7 +82,6 @@ export function useManagePlugins({
source: 'plugin-commands', source: 'plugin-commands',
error: `Failed to load plugin commands: ${errorMessage}`, error: `Failed to load plugin commands: ${errorMessage}`,
}) })
setPluginCommandsState([])
} }
try { try {
@@ -185,7 +173,7 @@ export function useManagePlugins({
...prevState.plugins, ...prevState.plugins,
enabled, enabled,
disabled, disabled,
commands: [], commands,
errors: mergedErrors, errors: mergedErrors,
}, },
} }
@@ -238,7 +226,6 @@ export function useManagePlugins({
logError(errorObj) logError(errorObj)
logForDebugging(`Error loading plugins: ${error}`) logForDebugging(`Error loading plugins: ${error}`)
// Set empty state on error, but preserve LSP errors and add the new error // Set empty state on error, but preserve LSP errors and add the new error
setPluginCommandsState([])
setAppState(prevState => { setAppState(prevState => {
// Keep existing LSP/non-plugin-loading errors // Keep existing LSP/non-plugin-loading errors
const existingLspErrors = prevState.plugins.errors.filter( const existingLspErrors = prevState.plugins.errors.filter(
@@ -297,11 +284,6 @@ export function useManagePlugins({
}) })
}, [initialPluginLoad, enabled]) }, [initialPluginLoad, enabled])
useEffect(() => {
if (enabled) return
setPluginCommandsState([])
}, [enabled])
// Plugin state changed on disk (background reconcile, /plugin menu, // Plugin state changed on disk (background reconcile, /plugin menu,
// external settings edit). Show a notification; user runs /reload-plugins // external settings edit). Show a notification; user runs /reload-plugins
// to apply. The previous auto-refresh here had a stale-cache bug (only // to apply. The previous auto-refresh here had a stale-cache bug (only
@@ -319,6 +301,4 @@ export function useManagePlugins({
// Do NOT auto-refresh. Do NOT reset needsRefresh — /reload-plugins // Do NOT auto-refresh. Do NOT reset needsRefresh — /reload-plugins
// consumes it via refreshActivePlugins(). // consumes it via refreshActivePlugins().
}, [enabled, needsRefresh, addNotification]) }, [enabled, needsRefresh, addNotification])
return enabled ? pluginCommands : []
} }
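
Net effect of the useManagePlugins change above: the useSyncExternalStore-backed pluginCommandsStore is removed, plugin commands are kept only in app state under plugins.commands, and the hook no longer returns a value. The sketch below shows a hypothetical consumer under that assumption; usePluginCommands is an illustrative name, not a hook from the codebase (the REPL diff near the end of this compare reads the plugins slice the same way).

import { useAppState } from '../state/AppState.js'

// Hypothetical consumer: read plugin commands from app state instead of the
// removed pluginCommandsStore. Assumes plugins.commands holds the command
// list populated by useManagePlugins above.
function usePluginCommands() {
  return useAppState(s => s.plugins.commands)
}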

View File

@@ -1,4 +1,3 @@
import { useLayoutEffect, useRef, useState } from 'react'
import { isInputModeCharacter } from 'src/components/PromptInput/inputModes.js' import { isInputModeCharacter } from 'src/components/PromptInput/inputModes.js'
import { useNotifications } from 'src/context/notifications.js' import { useNotifications } from 'src/context/notifications.js'
import stripAnsi from 'strip-ansi' import stripAnsi from 'strip-ansi'
@@ -101,74 +100,9 @@ export function useTextInput({
prewarmModifiers() prewarmModifiers()
} }
// Keep a local text/cursor mirror so consecutive keystrokes can advance const offset = externalOffset
// immediately even if the controlled parent value hasn't committed yet. const setOffset = onOffsetChange
const [renderState, setRenderState] = useState(() => ({ const cursor = Cursor.fromText(originalValue, columns, offset)
value: originalValue,
offset: externalOffset,
}))
const liveValueRef = useRef(originalValue)
const liveOffsetRef = useRef(externalOffset)
const lastSeenPropsRef = useRef({
value: originalValue,
offset: externalOffset,
})
const updateRenderedInput = (nextValue: string, nextOffset: number): void => {
liveValueRef.current = nextValue
liveOffsetRef.current = nextOffset
setRenderState(prev =>
prev.value === nextValue && prev.offset === nextOffset
? prev
: { value: nextValue, offset: nextOffset },
)
}
useLayoutEffect(() => {
if (
lastSeenPropsRef.current.value === originalValue &&
lastSeenPropsRef.current.offset === externalOffset
) {
return
}
lastSeenPropsRef.current = {
value: originalValue,
offset: externalOffset,
}
updateRenderedInput(originalValue, externalOffset)
}, [originalValue, externalOffset])
const value = renderState.value
const offset = renderState.offset
const getLiveValue = (): string => liveValueRef.current
const getLiveCursor = (): Cursor =>
Cursor.fromText(liveValueRef.current, columns, liveOffsetRef.current)
const setValue = (nextValue: string, nextOffset = liveOffsetRef.current): void => {
const previousValue = liveValueRef.current
const previousOffset = liveOffsetRef.current
if (previousValue === nextValue && previousOffset === nextOffset) {
return
}
updateRenderedInput(nextValue, nextOffset)
if (previousValue !== nextValue) {
onChange(nextValue)
}
if (previousOffset !== nextOffset) {
onOffsetChange(nextOffset)
}
}
const setOffset = (nextOffset: number): void => {
if (nextOffset === liveOffsetRef.current) {
return
}
updateRenderedInput(liveValueRef.current, nextOffset)
onOffsetChange(nextOffset)
}
const cursor = Cursor.fromText(value, columns, offset)
const { addNotification, removeNotification } = useNotifications() const { addNotification, removeNotification } = useNotifications()
const handleCtrlC = useDoublePress( const handleCtrlC = useDoublePress(
@@ -177,11 +111,9 @@ export function useTextInput({
}, },
() => onExit?.(), () => onExit?.(),
() => { () => {
const currentValue = getLiveValue() if (originalValue) {
if (currentValue) {
updateRenderedInput('', 0)
onChange('') onChange('')
onOffsetChange(0) setOffset(0)
onHistoryReset?.() onHistoryReset?.()
} }
}, },
@@ -193,8 +125,7 @@ export function useTextInput({
// not dialog dismissal, and needs the double-press safety mechanism. // not dialog dismissal, and needs the double-press safety mechanism.
const handleEscape = useDoublePress( const handleEscape = useDoublePress(
(show: boolean) => { (show: boolean) => {
const currentValue = getLiveValue() if (!originalValue || !show) {
if (!currentValue || !show) {
return return
} }
addNotification({ addNotification({
@@ -205,19 +136,17 @@ export function useTextInput({
}) })
}, },
() => { () => {
const currentValue = getLiveValue()
// Remove the "Esc again to clear" notification immediately // Remove the "Esc again to clear" notification immediately
removeNotification('escape-again-to-clear') removeNotification('escape-again-to-clear')
onClearInput?.() onClearInput?.()
if (currentValue) { if (originalValue) {
// Track double-escape usage for feature discovery // Track double-escape usage for feature discovery
// Save to history before clearing // Save to history before clearing
if (currentValue.trim() !== '') { if (originalValue.trim() !== '') {
addToHistory(currentValue) addToHistory(originalValue)
} }
updateRenderedInput('', 0)
onChange('') onChange('')
onOffsetChange(0) setOffset(0)
onHistoryReset?.() onHistoryReset?.()
} }
}, },
@@ -225,13 +154,13 @@ export function useTextInput({
const handleEmptyCtrlD = useDoublePress( const handleEmptyCtrlD = useDoublePress(
show => { show => {
if (getLiveValue() !== '') { if (originalValue !== '') {
return return
} }
onExitMessage?.(show, 'Ctrl-D') onExitMessage?.(show, 'Ctrl-D')
}, },
() => { () => {
if (getLiveValue() !== '') { if (originalValue !== '') {
return return
} }
onExit?.() onExit?.()
@@ -239,7 +168,6 @@ export function useTextInput({
) )
function handleCtrlD(): MaybeCursor { function handleCtrlD(): MaybeCursor {
const cursor = getLiveCursor()
if (cursor.text === '') { if (cursor.text === '') {
// When input is empty, handle double-press // When input is empty, handle double-press
handleEmptyCtrlD() handleEmptyCtrlD()
@@ -250,28 +178,24 @@ export function useTextInput({
} }
function killToLineEnd(): Cursor { function killToLineEnd(): Cursor {
const cursor = getLiveCursor()
const { cursor: newCursor, killed } = cursor.deleteToLineEnd() const { cursor: newCursor, killed } = cursor.deleteToLineEnd()
pushToKillRing(killed, 'append') pushToKillRing(killed, 'append')
return newCursor return newCursor
} }
function killToLineStart(): Cursor { function killToLineStart(): Cursor {
const cursor = getLiveCursor()
const { cursor: newCursor, killed } = cursor.deleteToLineStart() const { cursor: newCursor, killed } = cursor.deleteToLineStart()
pushToKillRing(killed, 'prepend') pushToKillRing(killed, 'prepend')
return newCursor return newCursor
} }
function killWordBefore(): Cursor { function killWordBefore(): Cursor {
const cursor = getLiveCursor()
const { cursor: newCursor, killed } = cursor.deleteWordBefore() const { cursor: newCursor, killed } = cursor.deleteWordBefore()
pushToKillRing(killed, 'prepend') pushToKillRing(killed, 'prepend')
return newCursor return newCursor
} }
function yank(): Cursor { function yank(): Cursor {
const cursor = getLiveCursor()
const text = getLastKill() const text = getLastKill()
if (text.length > 0) { if (text.length > 0) {
const startOffset = cursor.offset const startOffset = cursor.offset
@@ -283,7 +207,6 @@ export function useTextInput({
} }
function handleYankPop(): Cursor { function handleYankPop(): Cursor {
const cursor = getLiveCursor()
const popResult = yankPop() const popResult = yankPop()
if (!popResult) { if (!popResult) {
return cursor return cursor
@@ -299,16 +222,13 @@ export function useTextInput({
} }
const handleCtrl = mapInput([ const handleCtrl = mapInput([
['a', () => getLiveCursor().startOfLine()], ['a', () => cursor.startOfLine()],
['b', () => getLiveCursor().left()], ['b', () => cursor.left()],
['c', handleCtrlC], ['c', handleCtrlC],
['d', handleCtrlD], ['d', handleCtrlD],
['e', () => getLiveCursor().endOfLine()], ['e', () => cursor.endOfLine()],
['f', () => getLiveCursor().right()], ['f', () => cursor.right()],
['h', () => { ['h', () => cursor.deleteTokenBefore() ?? cursor.backspace()],
const cursor = getLiveCursor()
return cursor.deleteTokenBefore() ?? cursor.backspace()
}],
['k', killToLineEnd], ['k', killToLineEnd],
['n', () => downOrHistoryDown()], ['n', () => downOrHistoryDown()],
['p', () => upOrHistoryUp()], ['p', () => upOrHistoryUp()],
@@ -318,15 +238,13 @@ export function useTextInput({
]) ])
const handleMeta = mapInput([ const handleMeta = mapInput([
['b', () => getLiveCursor().prevWord()], ['b', () => cursor.prevWord()],
['f', () => getLiveCursor().nextWord()], ['f', () => cursor.nextWord()],
['d', () => getLiveCursor().deleteWordAfter()], ['d', () => cursor.deleteWordAfter()],
['y', handleYankPop], ['y', handleYankPop],
]) ])
function handleEnter(key: Key) { function handleEnter(key: Key) {
const cursor = getLiveCursor()
const currentValue = getLiveValue()
if ( if (
multiline && multiline &&
cursor.offset > 0 && cursor.offset > 0 &&
@@ -345,11 +263,10 @@ export function useTextInput({
if (env.terminal === 'Apple_Terminal' && isModifierPressed('shift')) { if (env.terminal === 'Apple_Terminal' && isModifierPressed('shift')) {
return cursor.insert('\n') return cursor.insert('\n')
} }
onSubmit?.(currentValue) onSubmit?.(originalValue)
} }
function upOrHistoryUp() { function upOrHistoryUp() {
const cursor = getLiveCursor()
if (disableCursorMovementForUpDownKeys) { if (disableCursorMovementForUpDownKeys) {
onHistoryUp?.() onHistoryUp?.()
return cursor return cursor
@@ -374,7 +291,6 @@ export function useTextInput({
return cursor return cursor
} }
function downOrHistoryDown() { function downOrHistoryDown() {
const cursor = getLiveCursor()
if (disableCursorMovementForUpDownKeys) { if (disableCursorMovementForUpDownKeys) {
onHistoryDown?.() onHistoryDown?.()
return cursor return cursor
@@ -399,7 +315,7 @@ export function useTextInput({
return cursor return cursor
} }
function mapKey(key: Key, cursor: Cursor): InputMapper { function mapKey(key: Key): InputMapper {
switch (true) { switch (true) {
case key.escape: case key.escape:
return () => { return () => {
@@ -513,7 +429,6 @@ export function useTextInput({
} }
function onInput(input: string, key: Key): void { function onInput(input: string, key: Key): void {
const currentCursor = getLiveCursor()
// Note: Image paste shortcut (chat:imagePaste) is handled via useKeybindings in PromptInput // Note: Image paste shortcut (chat:imagePaste) is handled via useKeybindings in PromptInput
// Apply filter if provided // Apply filter if provided
@@ -531,15 +446,18 @@ export function useTextInput({
// Apply all DEL characters as backspace operations synchronously // Apply all DEL characters as backspace operations synchronously
// Try to delete tokens first, fall back to character backspace // Try to delete tokens first, fall back to character backspace
let nextCursor = currentCursor let currentCursor = cursor
for (let i = 0; i < delCount; i++) { for (let i = 0; i < delCount; i++) {
nextCursor = currentCursor =
nextCursor.deleteTokenBefore() ?? nextCursor.backspace() currentCursor.deleteTokenBefore() ?? currentCursor.backspace()
} }
// Update state once with the final result // Update state once with the final result
if (!currentCursor.equals(nextCursor)) { if (!cursor.equals(currentCursor)) {
setValue(nextCursor.text, nextCursor.offset) if (cursor.text !== currentCursor.text) {
onChange(currentCursor.text)
}
setOffset(currentCursor.offset)
} }
resetKillAccumulation() resetKillAccumulation()
resetYankState() resetYankState()
@@ -556,10 +474,13 @@ export function useTextInput({
resetYankState() resetYankState()
} }
const nextCursor = mapKey(key, currentCursor)(filteredInput) const nextCursor = mapKey(key)(filteredInput)
if (nextCursor) { if (nextCursor) {
if (!currentCursor.equals(nextCursor)) { if (!cursor.equals(nextCursor)) {
setValue(nextCursor.text, nextCursor.offset) if (cursor.text !== nextCursor.text) {
onChange(nextCursor.text)
}
setOffset(nextCursor.offset)
} }
// SSH-coalesced Enter: on slow links, "o" + Enter can arrive as one // SSH-coalesced Enter: on slow links, "o" + Enter can arrive as one
// chunk "o\r". parseKeypress only matches s === '\r', so it hit the // chunk "o\r". parseKeypress only matches s === '\r', so it hit the
@@ -591,7 +512,6 @@ export function useTextInput({
return { return {
onInput, onInput,
value,
renderedValue: cursor.render( renderedValue: cursor.render(
cursorChar, cursorChar,
mask, mask,
@@ -600,7 +520,6 @@ export function useTextInput({
maxVisibleLines, maxVisibleLines,
), ),
offset, offset,
setValue,
setOffset, setOffset,
cursorLine: cursorPos.line - cursor.getViewportStartLine(maxVisibleLines), cursorLine: cursorPos.line - cursor.getViewportStartLine(maxVisibleLines),
cursorColumn: cursorPos.column, cursorColumn: cursorPos.column,
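
The through-line of the useTextInput changes above is that the hook becomes fully controlled again: the cursor is rebuilt from the value/offset props on every render, and edits are reported upward through onChange/onOffsetChange instead of being mirrored in local state. The standalone sketch below restates that update rule; reportEdit and CursorLike are illustrative names, not part of the codebase, and the real code additionally manages kill-ring and history state.

// Minimal sketch of the controlled-update rule used above, not the hook itself.
type CursorLike = { text: string; offset: number }

function reportEdit(
  prev: CursorLike,
  next: CursorLike,
  onChange: (text: string) => void,
  onOffsetChange: (offset: number) => void,
): void {
  if (prev.text === next.text && prev.offset === next.offset) return
  if (prev.text !== next.text) onChange(next.text)
  onOffsetChange(next.offset)
}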

View File

@@ -70,14 +70,14 @@ export function useVimInput(props: UseVimInputProps): VimInputState {
// Vim behavior: move cursor left by 1 when exiting insert mode // Vim behavior: move cursor left by 1 when exiting insert mode
// (unless at beginning of line or at offset 0) // (unless at beginning of line or at offset 0)
const offset = textInput.offset const offset = textInput.offset
if (offset > 0 && textInput.value[offset - 1] !== '\n') { if (offset > 0 && props.value[offset - 1] !== '\n') {
textInput.setOffset(offset - 1) textInput.setOffset(offset - 1)
} }
vimStateRef.current = { mode: 'NORMAL', command: { type: 'idle' } } vimStateRef.current = { mode: 'NORMAL', command: { type: 'idle' } }
setMode('NORMAL') setMode('NORMAL')
onModeChange?.('NORMAL') onModeChange?.('NORMAL')
}, [onModeChange, textInput]) }, [onModeChange, textInput, props.value])
function createOperatorContext( function createOperatorContext(
cursor: Cursor, cursor: Cursor,
@@ -85,8 +85,8 @@ export function useVimInput(props: UseVimInputProps): VimInputState {
): OperatorContext { ): OperatorContext {
return { return {
cursor, cursor,
text: textInput.value, text: props.value,
setText: (newText: string) => textInput.setValue(newText), setText: (newText: string) => props.onChange(newText),
setOffset: (offset: number) => textInput.setOffset(offset), setOffset: (offset: number) => textInput.setOffset(offset),
enterInsert: (offset: number) => switchToInsertMode(offset), enterInsert: (offset: number) => switchToInsertMode(offset),
getRegister: () => persistentRef.current.register, getRegister: () => persistentRef.current.register,
@@ -110,18 +110,15 @@ export function useVimInput(props: UseVimInputProps): VimInputState {
const change = persistentRef.current.lastChange const change = persistentRef.current.lastChange
if (!change) return if (!change) return
const cursor = Cursor.fromText( const cursor = Cursor.fromText(props.value, props.columns, textInput.offset)
textInput.value,
props.columns,
textInput.offset,
)
const ctx = createOperatorContext(cursor, true) const ctx = createOperatorContext(cursor, true)
switch (change.type) { switch (change.type) {
case 'insert': case 'insert':
if (change.text) { if (change.text) {
const newCursor = cursor.insert(change.text) const newCursor = cursor.insert(change.text)
textInput.setValue(newCursor.text, newCursor.offset) props.onChange(newCursor.text)
textInput.setOffset(newCursor.offset)
} }
break break
@@ -182,11 +179,7 @@ export function useVimInput(props: UseVimInputProps): VimInputState {
// lookups expect single chars and a prepended space would break them. // lookups expect single chars and a prepended space would break them.
const filtered = inputFilter ? inputFilter(rawInput, key) : rawInput const filtered = inputFilter ? inputFilter(rawInput, key) : rawInput
const input = state.mode === 'INSERT' ? filtered : rawInput const input = state.mode === 'INSERT' ? filtered : rawInput
const cursor = Cursor.fromText( const cursor = Cursor.fromText(props.value, props.columns, textInput.offset)
textInput.value,
props.columns,
textInput.offset,
)
if (key.ctrl) { if (key.ctrl) {
textInput.onInput(input, key) textInput.onInput(input, key)

View File

@@ -115,10 +115,7 @@ export default class App extends PureComponent<Props, State> {
keyParseState = INITIAL_STATE; keyParseState = INITIAL_STATE;
// Timer for flushing incomplete escape sequences // Timer for flushing incomplete escape sequences
incompleteEscapeTimer: NodeJS.Timeout | null = null; incompleteEscapeTimer: NodeJS.Timeout | null = null;
// Default to readable-mode stdin (legacy Ink behavior). The data-mode path stdinMode: 'readable' | 'data' = process.env.OPENCLAUDE_USE_READABLE_STDIN === '1' ? 'readable' : 'data';
// is kept as an explicit opt-in because some terminals can enter a state
// where startup input appears frozen when data mode is the default.
stdinMode: 'readable' | 'data' = process.env.OPENCLAUDE_USE_DATA_STDIN === '1' || process.env.OPENCLAUDE_USE_READABLE_STDIN === '0' ? 'data' : 'readable';
// Timeout durations for incomplete sequences (ms) // Timeout durations for incomplete sequences (ms)
readonly NORMAL_TIMEOUT = 50; // Short timeout for regular esc sequences readonly NORMAL_TIMEOUT = 50; // Short timeout for regular esc sequences
readonly PASTE_TIMEOUT = 500; // Longer timeout for paste operations readonly PASTE_TIMEOUT = 500; // Longer timeout for paste operations
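
On the stdinMode: 'readable' | 'data' field above: the two values correspond to Node's paused vs. flowing stream modes. The sketch below only illustrates that general difference with the standard stream API; it is not this app's actual input plumbing.

import { stdin } from 'node:process'

// Paused ("readable") mode: pull chunks explicitly with read().
stdin.on('readable', () => {
  let chunk: string | Buffer | null
  while ((chunk = stdin.read()) !== null) {
    // handle chunk
  }
})

// Flowing ("data") mode: chunks are pushed as they arrive; attaching a
// 'data' listener is what switches the stream into flowing mode.
// stdin.on('data', chunk => { /* handle chunk */ })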

View File

@@ -33,7 +33,7 @@ import createRenderer, { type Renderer } from './renderer.js';
import { CellWidth, CharPool, cellAt, createScreen, HyperlinkPool, isEmptyCellAt, migrateScreenPools, StylePool } from './screen.js'; import { CellWidth, CharPool, cellAt, createScreen, HyperlinkPool, isEmptyCellAt, migrateScreenPools, StylePool } from './screen.js';
import { applySearchHighlight } from './searchHighlight.js'; import { applySearchHighlight } from './searchHighlight.js';
import { applySelectionOverlay, captureScrolledRows, clearSelection, createSelectionState, extendSelection, type FocusMove, findPlainTextUrlAt, getSelectedText, hasSelection, moveFocus, type SelectionState, selectLineAt, selectWordAt, shiftAnchor, shiftSelection, shiftSelectionForFollow, startSelection, updateSelection } from './selection.js'; import { applySelectionOverlay, captureScrolledRows, clearSelection, createSelectionState, extendSelection, type FocusMove, findPlainTextUrlAt, getSelectedText, hasSelection, moveFocus, type SelectionState, selectLineAt, selectWordAt, shiftAnchor, shiftSelection, shiftSelectionForFollow, startSelection, updateSelection } from './selection.js';
import { shouldSkipMainScreenSyncMarkers, shouldUseMainScreenRewrite, SYNC_OUTPUT_SUPPORTED, supportsExtendedKeys, type Terminal, writeDiffToTerminal } from './terminal.js'; import { SYNC_OUTPUT_SUPPORTED, supportsExtendedKeys, type Terminal, writeDiffToTerminal } from './terminal.js';
import { CURSOR_HOME, cursorMove, cursorPosition, DISABLE_KITTY_KEYBOARD, DISABLE_MODIFY_OTHER_KEYS, ENABLE_KITTY_KEYBOARD, ENABLE_MODIFY_OTHER_KEYS, ERASE_SCREEN } from './termio/csi.js'; import { CURSOR_HOME, cursorMove, cursorPosition, DISABLE_KITTY_KEYBOARD, DISABLE_MODIFY_OTHER_KEYS, ENABLE_KITTY_KEYBOARD, ENABLE_MODIFY_OTHER_KEYS, ERASE_SCREEN } from './termio/csi.js';
import { DBP, DFE, DISABLE_MOUSE_TRACKING, ENABLE_MOUSE_TRACKING, ENTER_ALT_SCREEN, EXIT_ALT_SCREEN, SHOW_CURSOR } from './termio/dec.js'; import { DBP, DFE, DISABLE_MOUSE_TRACKING, ENABLE_MOUSE_TRACKING, ENTER_ALT_SCREEN, EXIT_ALT_SCREEN, SHOW_CURSOR } from './termio/dec.js';
import { CLEAR_ITERM2_PROGRESS, CLEAR_TAB_STATUS, setClipboard, supportsTabStatus, wrapForMultiplexer } from './termio/osc.js'; import { CLEAR_ITERM2_PROGRESS, CLEAR_TAB_STATUS, setClipboard, supportsTabStatus, wrapForMultiplexer } from './termio/osc.js';
@@ -609,13 +609,12 @@ export default class Ink {
}; };
} }
const tDiff = performance.now(); const tDiff = performance.now();
const rewriteMainScreen = !this.altScreenActive && shouldUseMainScreenRewrite();
const diff = this.log.render(prevFrame, frame, this.altScreenActive, const diff = this.log.render(prevFrame, frame, this.altScreenActive,
// DECSTBM needs BSU/ESU atomicity — without it the outer terminal // DECSTBM needs BSU/ESU atomicity — without it the outer terminal
// renders the scrolled-but-not-yet-repainted intermediate state. // renders the scrolled-but-not-yet-repainted intermediate state.
// tmux is the main case (re-emits DECSTBM with its own timing and // tmux is the main case (re-emits DECSTBM with its own timing and
// doesn't implement DEC 2026, so SYNC_OUTPUT_SUPPORTED is false). // doesn't implement DEC 2026, so SYNC_OUTPUT_SUPPORTED is false).
SYNC_OUTPUT_SUPPORTED, rewriteMainScreen); SYNC_OUTPUT_SUPPORTED);
const diffMs = performance.now() - tDiff; const diffMs = performance.now() - tDiff;
// Swap buffers // Swap buffers
this.backFrame = this.frontFrame; this.backFrame = this.frontFrame;
@@ -760,8 +759,7 @@ export default class Ink {
} }
} }
const tWrite = performance.now(); const tWrite = performance.now();
const skipSyncMarkers = this.altScreenActive ? !SYNC_OUTPUT_SUPPORTED : rewriteMainScreen || shouldSkipMainScreenSyncMarkers(); writeDiffToTerminal(this.terminal, optimized, this.altScreenActive && !SYNC_OUTPUT_SUPPORTED);
writeDiffToTerminal(this.terminal, optimized, skipSyncMarkers);
const writeMs = performance.now() - tWrite; const writeMs = performance.now() - tWrite;
// Update blit safety for the NEXT frame. The frame just rendered // Update blit safety for the NEXT frame. The frame just rendered

View File

@@ -1,125 +0,0 @@
import { expect, test } from 'bun:test'
import type { Frame } from './frame.ts'
import { LogUpdate } from './log-update.ts'
import {
CellWidth,
CharPool,
createScreen,
HyperlinkPool,
setCellAt,
StylePool,
} from './screen.ts'
function collectStdout(diff: ReturnType<LogUpdate['render']>): string {
return diff
.filter((patch): patch is Extract<(typeof diff)[number], { type: 'stdout' }> => patch.type === 'stdout')
.map(patch => patch.content)
.join('')
}
function createHarness() {
const stylePool = new StylePool()
const charPool = new CharPool()
const hyperlinkPool = new HyperlinkPool()
return {
stylePool,
charPool,
hyperlinkPool,
log: new LogUpdate({ isTTY: true, stylePool }),
}
}
function frameFromLines(
stylePool: StylePool,
charPool: CharPool,
hyperlinkPool: HyperlinkPool,
lines: string[],
cursor = { x: 0, y: lines.length, visible: true },
): Frame {
const width = lines.reduce((max, line) => Math.max(max, line.length), 0)
const screen = createScreen(width, lines.length, stylePool, charPool, hyperlinkPool)
for (const [y, line] of lines.entries()) {
for (const [x, char] of [...line].entries()) {
setCellAt(screen, x, y, {
char,
styleId: stylePool.none,
width: CellWidth.Narrow,
})
}
}
return {
screen,
viewport: {
width: Math.max(width, 1),
height: 10,
},
cursor,
}
}
test('ghostty main-screen rewrite paints prompt content without full terminal reset when width is stable', () => {
const { stylePool, charPool, hyperlinkPool, log } = createHarness()
const prev = frameFromLines(stylePool, charPool, hyperlinkPool, [' '])
const next = frameFromLines(stylePool, charPool, hyperlinkPool, ['prompt'])
const diff = log.render(prev, next, false, true, true)
const stdout = collectStdout(diff)
expect(diff.some(patch => patch.type === 'clearTerminal')).toBe(false)
expect(diff.some(patch => patch.type === 'clear' && patch.count === 1)).toBe(
true,
)
expect(stdout).toContain('prompt')
})
test('ghostty main-screen rewrite clears only the changed prompt tail before repainting', () => {
const { stylePool, charPool, hyperlinkPool, log } = createHarness()
const prev = frameFromLines(
stylePool,
charPool,
hyperlinkPool,
['status', '> abc'],
)
const next = frameFromLines(
stylePool,
charPool,
hyperlinkPool,
['status', '> abcd'],
)
const diff = log.render(prev, next, false, true, true)
const stdout = collectStdout(diff)
expect(diff.some(patch => patch.type === 'clearTerminal')).toBe(false)
expect(diff.some(patch => patch.type === 'clear' && patch.count === 1)).toBe(
true,
)
expect(stdout).toContain('abcd')
})
test('ghostty main-screen rewrite falls back to incremental diff for larger changes', () => {
const { stylePool, charPool, hyperlinkPool, log } = createHarness()
const prev = frameFromLines(
stylePool,
charPool,
hyperlinkPool,
['row 0', 'row 1', 'row 2', 'row 3', 'row 4', '> abc'],
)
const next = frameFromLines(
stylePool,
charPool,
hyperlinkPool,
['row 0 updated', 'row 1', 'row 2', 'row 3', 'row 4', '> abcd'],
)
const diff = log.render(prev, next, false, true, true)
const stdout = collectStdout(diff)
expect(diff.some(patch => patch.type === 'clear')).toBe(false)
expect(stdout).toContain('updated')
expect(stdout).toContain('abcd')
})

View File

@@ -125,7 +125,6 @@ export class LogUpdate {
next: Frame, next: Frame,
altScreen = false, altScreen = false,
decstbmSafe = true, decstbmSafe = true,
rewriteMainScreen = false,
): Diff { ): Diff {
if (!this.options.isTTY) { if (!this.options.isTTY) {
return this.renderFullFrame(next) return this.renderFullFrame(next)
@@ -147,13 +146,6 @@ export class LogUpdate {
return fullResetSequence_CAUSES_FLICKER(next, 'resize', stylePool) return fullResetSequence_CAUSES_FLICKER(next, 'resize', stylePool)
} }
if (!altScreen && rewriteMainScreen) {
const rewriteStartY = findMainScreenRewriteStart(prev.screen, next.screen)
if (rewriteStartY !== null) {
return rewriteMainScreenFrame(prev, next, stylePool, rewriteStartY)
}
}
// DECSTBM scroll optimization: when a ScrollBox's scrollTop changed, // DECSTBM scroll optimization: when a ScrollBox's scrollTop changed,
// shift content with a hardware scroll (CSI top;bot r + CSI n S/T) // shift content with a hardware scroll (CSI top;bot r + CSI n S/T)
// instead of rewriting the whole scroll region. The shiftRows on // instead of rewriting the whole scroll region. The shiftRows on
@@ -428,8 +420,34 @@ export class LogUpdate {
// Main screen: if cursor needs to be past the last line of content // Main screen: if cursor needs to be past the last line of content
// (typical: cursor.y = screen.height), emit \n to create that line // (typical: cursor.y = screen.height), emit \n to create that line
// since cursor movement can't create new lines. // since cursor movement can't create new lines.
if (!altScreen) { if (altScreen) {
restoreMainScreenCursor(screen, next) // no-op; next frame's CSI H anchors cursor
} else if (next.cursor.y >= next.screen.height) {
// Move to column 0 of current line, then emit newlines to reach target row
screen.txn(prev => {
const rowsToCreate = next.cursor.y - prev.y
if (rowsToCreate > 0) {
// Use CR to resolve pending wrap (if any) without advancing
// to the next line, then LF to create each new row.
const patches: Diff = new Array<Diff[number]>(1 + rowsToCreate)
patches[0] = CARRIAGE_RETURN
for (let i = 0; i < rowsToCreate; i++) {
patches[1 + i] = NEWLINE
}
return [patches, { dx: -prev.x, dy: rowsToCreate }]
}
// At or past target row - need to move cursor to correct position
const dy = next.cursor.y - prev.y
if (dy !== 0 || prev.x !== next.cursor.x) {
// Use CR to clear pending wrap (if any), then cursor move
const patches: Diff = [CARRIAGE_RETURN]
patches.push({ type: 'cursorMove', x: next.cursor.x, y: dy })
return [patches, { dx: next.cursor.x - prev.x, dy }]
}
return [[], { dx: 0, dy: 0 }]
})
} else {
moveCursorTo(screen, next.cursor.x, next.cursor.y)
} }
const elapsed = performance.now() - startTime const elapsed = performance.now() - startTime
@@ -449,77 +467,6 @@ export class LogUpdate {
} }
} }
function rewriteMainScreenFrame(
prev: Frame,
next: Frame,
stylePool: StylePool,
startY: number,
): Diff {
const diff: Diff = []
const clearCount = prev.screen.height - startY
if (clearCount > 0) {
const clearStartY = prev.screen.height - 1
const clearCursor = new VirtualScreen(prev.cursor, next.viewport.width)
moveCursorTo(clearCursor, 0, clearStartY)
diff.push(...clearCursor.diff)
diff.push({ type: 'clear', count: clearCount })
}
const screen = new VirtualScreen(
clearCount > 0 ? { x: 0, y: startY } : prev.cursor,
next.viewport.width,
)
renderFrameSlice(screen, next, startY, next.screen.height, stylePool)
restoreMainScreenCursor(screen, next)
return [...diff, ...screen.diff]
}
const MAX_MAIN_SCREEN_REWRITE_ROWS = 6
function findMainScreenRewriteStart(prev: Screen, next: Screen): number | null {
const commonHeight = Math.min(prev.height, next.height)
let firstChangedY = commonHeight
for (let y = 0; y < commonHeight; y += 1) {
if (!rowsEqual(prev, next, y)) {
firstChangedY = y
break
}
}
const rewriteRows = Math.max(prev.height, next.height) - firstChangedY
if (rewriteRows <= 0) {
return null
}
return rewriteRows <= MAX_MAIN_SCREEN_REWRITE_ROWS ? firstChangedY : null
}
function rowsEqual(prev: Screen, next: Screen, y: number): boolean {
if (prev.width !== next.width) {
return false
}
if (prev.softWrap[y] !== next.softWrap[y]) {
return false
}
const rowStart = y * prev.width
const rowEnd = rowStart + prev.width
for (let index = rowStart; index < rowEnd; index += 1) {
if (
prev.cells64[index] !== next.cells64[index] ||
prev.noSelect[index] !== next.noSelect[index]
) {
return false
}
}
return true
}
function transitionHyperlink( function transitionHyperlink(
diff: Diff, diff: Diff,
current: Hyperlink, current: Hyperlink,
@@ -675,37 +622,6 @@ function renderFrameSlice(
return screen return screen
} }
function restoreMainScreenCursor(screen: VirtualScreen, next: Frame): void {
if (next.cursor.y >= next.screen.height) {
// Move to column 0 of current line, then emit newlines to reach target row
screen.txn(prev => {
const rowsToCreate = next.cursor.y - prev.y
if (rowsToCreate > 0) {
// Use CR to resolve pending wrap (if any) without advancing
// to the next line, then LF to create each new row.
const patches: Diff = new Array<Diff[number]>(1 + rowsToCreate)
patches[0] = CARRIAGE_RETURN
for (let i = 0; i < rowsToCreate; i++) {
patches[1 + i] = NEWLINE
}
return [patches, { dx: -prev.x, dy: rowsToCreate }]
}
// At or past target row - need to move cursor to correct position
const dy = next.cursor.y - prev.y
if (dy !== 0 || prev.x !== next.cursor.x) {
// Use CR to clear pending wrap (if any), then cursor move
const patches: Diff = [CARRIAGE_RETURN]
patches.push({ type: 'cursorMove', x: next.cursor.x, y: dy })
return [patches, { dx: next.cursor.x - prev.x, dy }]
}
return [[], { dx: 0, dy: 0 }]
})
return
}
moveCursorTo(screen, next.cursor.x, next.cursor.y)
}
type Delta = { dx: number; dy: number } type Delta = { dx: number; dy: number }
/** /**
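
Context for the cursor-restore logic above (both the inlined main-screen branch and the removed restoreMainScreenCursor helper): on the main screen, cursor movement alone cannot create rows below the last line of content, so the code emits a carriage return to land on column 0 (resolving any pending auto-wrap) followed by one newline per row to create. Assuming the CARRIAGE_RETURN and NEWLINE patches ultimately write '\r' and '\n', the emitted sequence amounts to the illustrative helper below; rowsAndReturn is not a function from the codebase.

// Illustrative only: the raw sequence the row-creation patches amount to.
function rowsAndReturn(rowsToCreate: number): string {
  return '\r' + '\n'.repeat(Math.max(0, rowsToCreate))
}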

View File

@@ -1,369 +0,0 @@
import { PassThrough } from 'node:stream'
import { expect, test } from 'bun:test'
import React from 'react'
import type { DOMElement, ElementNames } from './dom.ts'
import instances from './instances.ts'
import { LayoutEdge } from './layout/node.ts'
import type { ParsedKey } from './parse-keypress.ts'
import { createRoot } from './root.ts'
type TestStdin = PassThrough & {
isTTY: boolean
setRawMode: (mode: boolean) => void
ref: () => void
unref: () => void
}
const RAW_TEXT_STYLE = {
flexDirection: 'row',
flexGrow: 0,
flexShrink: 1,
textWrap: 'wrap',
} as const
function createTestStreams(): {
stdout: PassThrough
stdin: TestStdin
} {
const stdout = new PassThrough()
const stdin = new PassThrough() as TestStdin
stdin.isTTY = true
stdin.setRawMode = () => {}
stdin.ref = () => {}
stdin.unref = () => {}
;(stdout as unknown as { columns: number }).columns = 120
;(stdout as unknown as { rows: number }).rows = 24
;(stdout as unknown as { isTTY: boolean }).isTTY = true
return { stdout, stdin }
}
async function waitForCondition(
predicate: () => boolean,
errorMessage: string,
timeoutMs = 2000,
): Promise<void> {
const startedAt = Date.now()
while (Date.now() - startedAt < timeoutMs) {
if (predicate()) {
return
}
await Bun.sleep(10)
}
throw new Error(errorMessage)
}
function getRootNode(stdout: PassThrough): DOMElement {
const instance = getInkInstance(stdout)
if (!instance.rootNode) {
throw new Error('Ink instance root node not found')
}
return instance.rootNode
}
function getInkInstance(stdout: PassThrough): {
rootNode?: DOMElement
dispatchKeyboardEvent: (parsedKey: ParsedKey) => void
} {
const instance = instances.get(
stdout as unknown as NodeJS.WriteStream,
) as
| {
rootNode?: DOMElement
dispatchKeyboardEvent: (parsedKey: ParsedKey) => void
}
| undefined
if (!instance) {
throw new Error('Ink instance not found')
}
return instance
}
function findElement(
node: DOMElement,
nodeName: ElementNames,
): DOMElement | undefined {
if (node.nodeName === nodeName) {
return node
}
for (const child of node.childNodes) {
if (child.nodeName === '#text') {
continue
}
const found = findElement(child, nodeName)
if (found) {
return found
}
}
return undefined
}
function requireElement(stdout: PassThrough, nodeName: ElementNames): DOMElement {
const found = findElement(getRootNode(stdout), nodeName)
if (!found) {
throw new Error(`Expected to find ${nodeName} in Ink root tree`)
}
return found
}
async function createHarness(): Promise<{
stdout: PassThrough
stdin: TestStdin
root: Awaited<ReturnType<typeof createRoot>>
dispose: () => Promise<void>
}> {
const { stdout, stdin } = createTestStreams()
const root = await createRoot({
stdout: stdout as unknown as NodeJS.WriteStream,
stdin: stdin as unknown as NodeJS.ReadStream,
patchConsole: false,
})
return {
stdout,
stdin,
root,
dispose: async () => {
root.unmount()
stdin.end()
stdout.end()
await Bun.sleep(25)
},
}
}
test('raw ink-box updates keyboard handlers and attributes in place across rerenders', async () => {
const calls: string[] = []
const firstHandler = () => calls.push('first')
const secondHandler = () => calls.push('second')
const harness = await createHarness()
try {
harness.root.render(
React.createElement(
'ink-box',
{
autoFocus: true,
onKeyDown: firstHandler,
tabIndex: 0,
},
'first render',
),
)
await Bun.sleep(25)
const firstBox = requireElement(harness.stdout, 'ink-box')
expect(firstBox.attributes.tabIndex).toBe(0)
expect(firstBox._eventHandlers?.onKeyDown).toBe(firstHandler)
harness.root.render(
React.createElement(
'ink-box',
{
autoFocus: true,
onKeyDown: secondHandler,
tabIndex: 1,
},
'second render',
),
)
await Bun.sleep(25)
const secondBox = requireElement(harness.stdout, 'ink-box')
expect(secondBox).toBe(firstBox)
expect(secondBox.attributes.tabIndex).toBe(1)
expect(secondBox._eventHandlers?.onKeyDown).toBe(secondHandler)
getInkInstance(harness.stdout).dispatchKeyboardEvent({
kind: 'key',
name: 'a',
fn: false,
ctrl: false,
meta: false,
shift: false,
option: false,
super: false,
sequence: 'a',
raw: 'a',
isPasted: false,
})
await waitForCondition(
() => calls.length === 1,
'Timed out waiting for rerendered onKeyDown handler to fire',
)
expect(calls).toEqual(['second'])
} finally {
await harness.dispose()
}
})
test('raw ink-text updates textStyles in place across rerenders', async () => {
const harness = await createHarness()
try {
harness.root.render(
React.createElement(
'ink-text',
{
style: RAW_TEXT_STYLE,
textStyles: { color: 'ansi:red' },
},
'host text',
),
)
await Bun.sleep(25)
const firstText = requireElement(harness.stdout, 'ink-text')
expect(firstText.textStyles).toEqual({ color: 'ansi:red' })
harness.root.render(
React.createElement(
'ink-text',
{
style: RAW_TEXT_STYLE,
textStyles: { color: 'ansi:blue' },
},
'host text',
),
)
await Bun.sleep(25)
const secondText = requireElement(harness.stdout, 'ink-text')
expect(secondText).toBe(firstText)
expect(secondText.textStyles).toEqual({ color: 'ansi:blue' })
} finally {
await harness.dispose()
}
})
test('raw ink-box removes event handler when set to undefined', async () => {
const calls: string[] = []
const handler = () => calls.push('fired')
const harness = await createHarness()
try {
harness.root.render(
React.createElement(
'ink-box',
{
autoFocus: true,
onKeyDown: handler,
tabIndex: 0,
},
'with handler',
),
)
await Bun.sleep(25)
const box = requireElement(harness.stdout, 'ink-box')
expect(box._eventHandlers?.onKeyDown).toBe(handler)
// Remove the handler
harness.root.render(
React.createElement(
'ink-box',
{
autoFocus: true,
tabIndex: 0,
},
'without handler',
),
)
await Bun.sleep(25)
const sameBox = requireElement(harness.stdout, 'ink-box')
expect(sameBox).toBe(box)
expect(sameBox._eventHandlers?.onKeyDown).toBeUndefined()
// Dispatch a key event and verify the removed handler is NOT called
getInkInstance(harness.stdout).dispatchKeyboardEvent({
kind: 'key',
name: 'a',
fn: false,
ctrl: false,
meta: false,
shift: false,
option: false,
super: false,
sequence: 'a',
raw: 'a',
isPasted: false,
})
await Bun.sleep(50)
expect(calls).toEqual([])
} finally {
await harness.dispose()
}
})
test('raw ink-box updates layout style in place across rerenders', async () => {
const harness = await createHarness()
try {
harness.root.render(
React.createElement(
'ink-box',
{
style: { flexDirection: 'row', paddingLeft: 1 },
},
'styled box',
),
)
await Bun.sleep(25)
const box = requireElement(harness.stdout, 'ink-box')
expect(box.style.flexDirection).toBe('row')
expect(box.style.paddingLeft).toBe(1)
harness.root.render(
React.createElement(
'ink-box',
{
style: { flexDirection: 'column', paddingLeft: 2 },
},
'styled box',
),
)
await Bun.sleep(25)
const sameBox = requireElement(harness.stdout, 'ink-box')
expect(sameBox).toBe(box)
expect(sameBox.style.flexDirection).toBe('column')
expect(sameBox.style.paddingLeft).toBe(2)
// Verify the update reached the layout engine, not just the style object
const yogaNode = sameBox.yogaNode!
expect(yogaNode).toBeDefined()
yogaNode.calculateLayout(120)
expect(yogaNode.getComputedPadding(LayoutEdge.Left)).toBe(2)
} finally {
await harness.dispose()
}
})

View File

@@ -449,25 +449,17 @@ const reconciler = createReconciler<
}, },
commitUpdate( commitUpdate(
node: DOMElement, node: DOMElement,
updatePayload: UpdatePayload | null,
_type: ElementNames, _type: ElementNames,
oldProps: Props, _oldProps: Props,
newProps: Props, _newProps: Props,
): void { ): void {
// React 19 mutation mode calls commitUpdate as if (!updatePayload) {
// (instance, type, oldProps, newProps, fiber) and does not pass the
// prepareUpdate() payload here. This renderer used to treat the second
// argument as updatePayload, which left mounted ink-* nodes with stale
// attributes, event handlers, and textStyles until something forced a
// remount. Recompute the prop/style diff here so host nodes update
// correctly in place on rerender.
const props = diff(oldProps, newProps)
const style = diff(oldProps['style'] as Styles, newProps['style'] as Styles)
const nextStyle = newProps['style'] as Styles | undefined
if (!props && !style) {
return return
} }
const { props, style, nextStyle } = updatePayload
if (props) { if (props) {
for (const [key, value] of Object.entries(props)) { for (const [key, value] of Object.entries(props)) {
if (key === 'style') { if (key === 'style') {

View File

@@ -135,13 +135,6 @@ export function setXtversionName(name: string): void {
if (xtversionName === undefined) xtversionName = name if (xtversionName === undefined) xtversionName = name
} }
export function isGhosttyTerminal(): boolean {
if (process.env.NODE_ENV === 'test') return false
if (process.env.TERM_PROGRAM === 'ghostty') return true
if (process.env.TERM === 'xterm-ghostty') return true
return xtversionName?.toLowerCase().startsWith('ghostty') ?? false
}
/** True if running in an xterm.js-based terminal (VS Code, Cursor, Windsurf /** True if running in an xterm.js-based terminal (VS Code, Cursor, Windsurf
* integrated terminals). Combines TERM_PROGRAM env check (fast, sync, but * integrated terminals). Combines TERM_PROGRAM env check (fast, sync, but
* not forwarded over SSH) with the XTVERSION probe result (async, survives * not forwarded over SSH) with the XTVERSION probe result (async, survives
@@ -152,20 +145,6 @@ export function isXtermJs(): boolean {
return xtversionName?.startsWith('xterm.js') ?? false return xtversionName?.startsWith('xterm.js') ?? false
} }
/** Ghostty currently repaints main-screen prompt updates more reliably
* without DEC 2026 synchronized output. Prefer explicit terminal identity
* (TERM_PROGRAM/TERM or XTVERSION) in real sessions, but keep tests
* deterministic by disabling the env-based detection under NODE_ENV=test. */
export function shouldSkipMainScreenSyncMarkers(): boolean {
return isGhosttyTerminal()
}
/** Ghostty's main-screen prompt updates are currently more reliable when we
* bypass the incremental diff path and rewrite the visible prompt block. */
export function shouldUseMainScreenRewrite(): boolean {
return isGhosttyTerminal()
}
// Terminals known to correctly implement the Kitty keyboard protocol // Terminals known to correctly implement the Kitty keyboard protocol
// (CSI >1u) and/or xterm modifyOtherKeys (CSI >4;2m) for ctrl+shift+<letter> // (CSI >1u) and/or xterm modifyOtherKeys (CSI >4;2m) for ctrl+shift+<letter>
// disambiguation. We previously enabled unconditionally (#23350), assuming // disambiguation. We previously enabled unconditionally (#23350), assuming
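
The Ghostty helpers dropped in this hunk gate two rendering behaviours: skipping DEC 2026 synchronized-output markers and preferring a full prompt-block rewrite. A hedged sketch of how a renderer might consume the sync-marker gate; writeFrame is illustrative and not from this repo, but the escape sequences are the standard DEC 2026 begin/end markers.

// Illustrative only: wrap a main-screen repaint in synchronized-output markers
// unless the detected terminal (e.g. Ghostty, per the helpers above) repaints
// more reliably without them.
function writeFrame(out: NodeJS.WriteStream, frame: string): void {
  const useSync = !shouldSkipMainScreenSyncMarkers()
  if (useSync) out.write('\x1b[?2026h') // begin synchronized update
  out.write(frame)
  if (useSync) out.write('\x1b[?2026l') // end synchronized update
}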

View File

@@ -13,7 +13,6 @@ const execFileNoThrowMock = mock(
mock.module('../../utils/execFileNoThrow.js', () => ({ mock.module('../../utils/execFileNoThrow.js', () => ({
execFileNoThrow: execFileNoThrowMock, execFileNoThrow: execFileNoThrowMock,
execFileNoThrowWithCwd: execFileNoThrowMock,
})) }))
mock.module('../../utils/tempfile.js', () => ({ mock.module('../../utils/tempfile.js', () => ({

View File

@@ -617,6 +617,7 @@ export function REPL({
const toolPermissionContext = useAppState(s => s.toolPermissionContext); const toolPermissionContext = useAppState(s => s.toolPermissionContext);
const verbose = useAppState(s => s.verbose); const verbose = useAppState(s => s.verbose);
const mcp = useAppState(s => s.mcp); const mcp = useAppState(s => s.mcp);
const plugins = useAppState(s => s.plugins);
const agentDefinitions = useAppState(s => s.agentDefinitions); const agentDefinitions = useAppState(s => s.agentDefinitions);
const fileHistory = useAppState(s => s.fileHistory); const fileHistory = useAppState(s => s.fileHistory);
const initialMessage = useAppState(s => s.initialMessage); const initialMessage = useAppState(s => s.initialMessage);
@@ -779,7 +780,7 @@ export function REPL({
}, [localTools, initialTools]); }, [localTools, initialTools]);
// Initialize plugin management // Initialize plugin management
const pluginCommands = useManagePlugins({ useManagePlugins({
enabled: !isRemoteSession enabled: !isRemoteSession
}); });
const tasksV2 = useTasksV2WithCollapseEffect(); const tasksV2 = useTasksV2WithCollapseEffect();
@@ -825,16 +826,10 @@ export function REPL({
}, [mainThreadAgentDefinition, mergedTools]); }, [mainThreadAgentDefinition, mergedTools]);
// Merge commands from local state, plugins, and MCP // Merge commands from local state, plugins, and MCP
const commandsWithPlugins = useMergedCommands(localCommands, pluginCommands as Command[]); const commandsWithPlugins = useMergedCommands(localCommands, plugins.commands as Command[]);
const mergedCommands = useMergedCommands(commandsWithPlugins, mcp.commands as Command[]); const mergedCommands = useMergedCommands(commandsWithPlugins, mcp.commands as Command[]);
// Keep plugin commands out of render-time command props. Feeding the full
// execution set into PromptInput/Messages reintroduced the startup repaint
// freeze, while transcript rendering still round-trips plugin skills via the
// SkillTool's `skill` payload without needing plugin command objects here.
const renderMergedCommands = useMergedCommands(localCommands, mcp.commands as Command[]);
// Filter out all commands if disableSlashCommands is true // Filter out all commands if disableSlashCommands is true
const commands = useMemo(() => disableSlashCommands ? [] : mergedCommands, [disableSlashCommands, mergedCommands]); const commands = useMemo(() => disableSlashCommands ? [] : mergedCommands, [disableSlashCommands, mergedCommands]);
const renderCommands = useMemo(() => disableSlashCommands ? [] : renderMergedCommands, [disableSlashCommands, renderMergedCommands]);
useIdeLogging(isRemoteSession ? EMPTY_MCP_CLIENTS : mcp.clients); useIdeLogging(isRemoteSession ? EMPTY_MCP_CLIENTS : mcp.clients);
useIdeSelection(isRemoteSession ? EMPTY_MCP_CLIENTS : mcp.clients, setIDESelection); useIdeSelection(isRemoteSession ? EMPTY_MCP_CLIENTS : mcp.clients, setIDESelection);
const [streamMode, setStreamMode] = useState<SpinnerMode>('responding'); const [streamMode, setStreamMode] = useState<SpinnerMode>('responding');
@@ -4432,7 +4427,7 @@ export function REPL({
// and transcript-mode are mutually exclusive (this early return), so // and transcript-mode are mutually exclusive (this early return), so
// only one ScrollBox is ever mounted at a time. // only one ScrollBox is ever mounted at a time.
const transcriptScrollRef = isFullscreenEnvEnabled() && !disableVirtualScroll && !dumpMode ? scrollRef : undefined; const transcriptScrollRef = isFullscreenEnvEnabled() && !disableVirtualScroll && !dumpMode ? scrollRef : undefined;
const transcriptMessagesElement = <Messages messages={transcriptMessages} tools={tools} commands={renderCommands} verbose={true} toolJSX={null} toolUseConfirmQueue={[]} inProgressToolUseIDs={inProgressToolUseIDs} isMessageSelectorVisible={false} conversationId={conversationId} screen={screen} agentDefinitions={agentDefinitions} streamingToolUses={transcriptStreamingToolUses} showAllInTranscript={showAllInTranscript} onOpenRateLimitOptions={handleOpenRateLimitOptions} isLoading={isLoading} hidePastThinking={true} streamingThinking={streamingThinking} scrollRef={transcriptScrollRef} jumpRef={jumpRef} onSearchMatchesChange={onSearchMatchesChange} scanElement={scanElement} setPositions={setPositions} disableRenderCap={dumpMode} />; const transcriptMessagesElement = <Messages messages={transcriptMessages} tools={tools} commands={commands} verbose={true} toolJSX={null} toolUseConfirmQueue={[]} inProgressToolUseIDs={inProgressToolUseIDs} isMessageSelectorVisible={false} conversationId={conversationId} screen={screen} agentDefinitions={agentDefinitions} streamingToolUses={transcriptStreamingToolUses} showAllInTranscript={showAllInTranscript} onOpenRateLimitOptions={handleOpenRateLimitOptions} isLoading={isLoading} hidePastThinking={true} streamingThinking={streamingThinking} scrollRef={transcriptScrollRef} jumpRef={jumpRef} onSearchMatchesChange={onSearchMatchesChange} scanElement={scanElement} setPositions={setPositions} disableRenderCap={dumpMode} />;
const transcriptToolJSX = toolJSX && <Box flexDirection="column" width="100%"> const transcriptToolJSX = toolJSX && <Box flexDirection="column" width="100%">
{toolJSX.jsx} {toolJSX.jsx}
</Box>; </Box>;
@@ -4600,7 +4595,7 @@ export function REPL({
jumpToNew(scrollRef.current); jumpToNew(scrollRef.current);
}} scrollable={<> }} scrollable={<>
<TeammateViewHeader /> <TeammateViewHeader />
<Messages messages={displayedMessages} tools={tools} commands={renderCommands} verbose={verbose} toolJSX={toolJSX} toolUseConfirmQueue={toolUseConfirmQueue} inProgressToolUseIDs={viewedTeammateTask ? viewedTeammateTask.inProgressToolUseIDs ?? new Set() : inProgressToolUseIDs} isMessageSelectorVisible={isMessageSelectorVisible} conversationId={conversationId} screen={screen} streamingToolUses={streamingToolUses} showAllInTranscript={showAllInTranscript} agentDefinitions={agentDefinitions} onOpenRateLimitOptions={handleOpenRateLimitOptions} isLoading={isLoading} streamingText={isLoading && !viewedAgentTask ? visibleStreamingText : null} isBriefOnly={viewedAgentTask ? false : isBriefOnly} unseenDivider={viewedAgentTask ? undefined : unseenDivider} scrollRef={isFullscreenEnvEnabled() ? scrollRef : undefined} trackStickyPrompt={isFullscreenEnvEnabled() ? true : undefined} cursor={cursor} setCursor={setCursor} cursorNavRef={cursorNavRef} /> <Messages messages={displayedMessages} tools={tools} commands={commands} verbose={verbose} toolJSX={toolJSX} toolUseConfirmQueue={toolUseConfirmQueue} inProgressToolUseIDs={viewedTeammateTask ? viewedTeammateTask.inProgressToolUseIDs ?? new Set() : inProgressToolUseIDs} isMessageSelectorVisible={isMessageSelectorVisible} conversationId={conversationId} screen={screen} streamingToolUses={streamingToolUses} showAllInTranscript={showAllInTranscript} agentDefinitions={agentDefinitions} onOpenRateLimitOptions={handleOpenRateLimitOptions} isLoading={isLoading} streamingText={isLoading && !viewedAgentTask ? visibleStreamingText : null} isBriefOnly={viewedAgentTask ? false : isBriefOnly} unseenDivider={viewedAgentTask ? undefined : unseenDivider} scrollRef={isFullscreenEnvEnabled() ? scrollRef : undefined} trackStickyPrompt={isFullscreenEnvEnabled() ? true : undefined} cursor={cursor} setCursor={setCursor} cursorNavRef={cursorNavRef} />
<AwsAuthStatusBox /> <AwsAuthStatusBox />
{/* Hide the processing placeholder while a modal is showing — {/* Hide the processing placeholder while a modal is showing —
it would sit at the last visible transcript row right above it would sit at the last visible transcript row right above
@@ -4933,7 +4928,7 @@ export function REPL({
{"external" === 'ant' && skillImprovementSurvey.suggestion && <SkillImprovementSurvey isOpen={skillImprovementSurvey.isOpen} skillName={skillImprovementSurvey.suggestion.skillName} updates={skillImprovementSurvey.suggestion.updates} handleSelect={skillImprovementSurvey.handleSelect} inputValue={inputValue} setInputValue={setInputValue} />} {"external" === 'ant' && skillImprovementSurvey.suggestion && <SkillImprovementSurvey isOpen={skillImprovementSurvey.isOpen} skillName={skillImprovementSurvey.suggestion.skillName} updates={skillImprovementSurvey.suggestion.updates} handleSelect={skillImprovementSurvey.handleSelect} inputValue={inputValue} setInputValue={setInputValue} />}
{showIssueFlagBanner && <IssueFlagBanner />} {showIssueFlagBanner && <IssueFlagBanner />}
{ } { }
<PromptInput debug={debug} ideSelection={ideSelection} hasSuppressedDialogs={!!hasSuppressedDialogs} isLocalJSXCommandActive={isShowingLocalJSXCommand} getToolUseContext={getToolUseContext} toolPermissionContext={toolPermissionContext} setToolPermissionContext={setToolPermissionContext} apiKeyStatus={apiKeyStatus} commands={renderCommands} agents={agentDefinitions.activeAgents} isLoading={isLoading} onExit={handleExit} verbose={verbose} messages={messages} onAutoUpdaterResult={setAutoUpdaterResult} autoUpdaterResult={autoUpdaterResult} input={inputValue} onInputChange={setInputValue} mode={inputMode} onModeChange={setInputMode} stashedPrompt={stashedPrompt} setStashedPrompt={setStashedPrompt} submitCount={submitCount} onShowMessageSelector={handleShowMessageSelector} onMessageActionsEnter={ <PromptInput debug={debug} ideSelection={ideSelection} hasSuppressedDialogs={!!hasSuppressedDialogs} isLocalJSXCommandActive={isShowingLocalJSXCommand} getToolUseContext={getToolUseContext} toolPermissionContext={toolPermissionContext} setToolPermissionContext={setToolPermissionContext} apiKeyStatus={apiKeyStatus} commands={commands} agents={agentDefinitions.activeAgents} isLoading={isLoading} onExit={handleExit} verbose={verbose} messages={messages} onAutoUpdaterResult={setAutoUpdaterResult} autoUpdaterResult={autoUpdaterResult} input={inputValue} onInputChange={setInputValue} mode={inputMode} onModeChange={setInputMode} stashedPrompt={stashedPrompt} setStashedPrompt={setStashedPrompt} submitCount={submitCount} onShowMessageSelector={handleShowMessageSelector} onMessageActionsEnter={
// Works during isLoading — edit cancels first; uuid selection survives appends. // Works during isLoading — edit cancels first; uuid selection survives appends.
feature('MESSAGE_ACTIONS') && isFullscreenEnvEnabled() && !disableMessageActions ? enterMessageActions : undefined} mcpClients={mcpClients} pastedContents={pastedContents} setPastedContents={setPastedContents} vimMode={vimMode} setVimMode={setVimMode} showBashesDialog={showBashesDialog} setShowBashesDialog={setShowBashesDialog} onSubmit={onSubmit} onAgentSubmit={onAgentSubmit} isSearchingHistory={isSearchingHistory} setIsSearchingHistory={setIsSearchingHistory} helpOpen={isHelpOpen} setHelpOpen={setIsHelpOpen} insertTextRef={feature('VOICE_MODE') ? insertTextRef : undefined} voiceInterimRange={voice.interimRange} /> feature('MESSAGE_ACTIONS') && isFullscreenEnvEnabled() && !disableMessageActions ? enterMessageActions : undefined} mcpClients={mcpClients} pastedContents={pastedContents} setPastedContents={setPastedContents} vimMode={vimMode} setVimMode={setVimMode} showBashesDialog={showBashesDialog} setShowBashesDialog={setShowBashesDialog} onSubmit={onSubmit} onAgentSubmit={onAgentSubmit} isSearchingHistory={isSearchingHistory} setIsSearchingHistory={setIsSearchingHistory} helpOpen={isHelpOpen} setHelpOpen={setIsHelpOpen} insertTextRef={feature('VOICE_MODE') ? insertTextRef : undefined} voiceInterimRange={voice.interimRange} />
<SessionBackgroundHint onBackgroundSession={handleBackgroundSession} isLoading={isLoading} /> <SessionBackgroundHint onBackgroundSession={handleBackgroundSession} isLoading={isLoading} />
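
The REPL hunk contrasts two wirings for plugin commands: one side feeds a single merged `commands` set to every consumer, while the other keeps a separate, smaller `renderCommands` set for render-time props, per its inline comment, to avoid reintroducing the startup repaint freeze. A condensed sketch of that split, using only lines that appear in the hunk:

// Condensed from the hunk above: execution consumers get the full merged set,
// render-time props (Messages, PromptInput) get a set without plugin commands.
const commandsWithPlugins = useMergedCommands(localCommands, pluginCommands as Command[]);
const mergedCommands = useMergedCommands(commandsWithPlugins, mcp.commands as Command[]);
const renderMergedCommands = useMergedCommands(localCommands, mcp.commands as Command[]);
const commands = useMemo(() => disableSlashCommands ? [] : mergedCommands, [disableSlashCommands, mergedCommands]);
const renderCommands = useMemo(() => disableSlashCommands ? [] : renderMergedCommands, [disableSlashCommands, renderMergedCommands]);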

View File

@@ -14,27 +14,16 @@ type ShimClient = {
const originalFetch = globalThis.fetch const originalFetch = globalThis.fetch
const originalMacro = (globalThis as Record<string, unknown>).MACRO const originalMacro = (globalThis as Record<string, unknown>).MACRO
const originalEnv = { const originalEnv = {
CLAUDE_CODE_USE_OPENAI: process.env.CLAUDE_CODE_USE_OPENAI,
CLAUDE_CODE_USE_GEMINI: process.env.CLAUDE_CODE_USE_GEMINI, CLAUDE_CODE_USE_GEMINI: process.env.CLAUDE_CODE_USE_GEMINI,
GEMINI_API_KEY: process.env.GEMINI_API_KEY, GEMINI_API_KEY: process.env.GEMINI_API_KEY,
GEMINI_MODEL: process.env.GEMINI_MODEL, GEMINI_MODEL: process.env.GEMINI_MODEL,
GEMINI_BASE_URL: process.env.GEMINI_BASE_URL, GEMINI_BASE_URL: process.env.GEMINI_BASE_URL,
GEMINI_AUTH_MODE: process.env.GEMINI_AUTH_MODE,
GOOGLE_API_KEY: process.env.GOOGLE_API_KEY, GOOGLE_API_KEY: process.env.GOOGLE_API_KEY,
OPENAI_API_KEY: process.env.OPENAI_API_KEY, OPENAI_API_KEY: process.env.OPENAI_API_KEY,
OPENAI_BASE_URL: process.env.OPENAI_BASE_URL, OPENAI_BASE_URL: process.env.OPENAI_BASE_URL,
OPENAI_MODEL: process.env.OPENAI_MODEL, OPENAI_MODEL: process.env.OPENAI_MODEL,
ANTHROPIC_API_KEY: process.env.ANTHROPIC_API_KEY, ANTHROPIC_API_KEY: process.env.ANTHROPIC_API_KEY,
ANTHROPIC_AUTH_TOKEN: process.env.ANTHROPIC_AUTH_TOKEN, ANTHROPIC_AUTH_TOKEN: process.env.ANTHROPIC_AUTH_TOKEN,
ANTHROPIC_CUSTOM_HEADERS: process.env.ANTHROPIC_CUSTOM_HEADERS,
}
function restoreEnv(key: string, value: string | undefined): void {
if (value === undefined) {
delete process.env[key]
} else {
process.env[key] = value
}
} }
beforeEach(() => { beforeEach(() => {
@@ -43,33 +32,27 @@ beforeEach(() => {
process.env.GEMINI_API_KEY = 'gemini-test-key' process.env.GEMINI_API_KEY = 'gemini-test-key'
process.env.GEMINI_MODEL = 'gemini-2.0-flash' process.env.GEMINI_MODEL = 'gemini-2.0-flash'
process.env.GEMINI_BASE_URL = 'https://gemini.example/v1beta/openai' process.env.GEMINI_BASE_URL = 'https://gemini.example/v1beta/openai'
process.env.GEMINI_AUTH_MODE = 'api-key'
delete process.env.CLAUDE_CODE_USE_OPENAI
delete process.env.GOOGLE_API_KEY delete process.env.GOOGLE_API_KEY
delete process.env.OPENAI_API_KEY delete process.env.OPENAI_API_KEY
delete process.env.OPENAI_BASE_URL delete process.env.OPENAI_BASE_URL
delete process.env.OPENAI_MODEL delete process.env.OPENAI_MODEL
delete process.env.ANTHROPIC_API_KEY delete process.env.ANTHROPIC_API_KEY
delete process.env.ANTHROPIC_AUTH_TOKEN delete process.env.ANTHROPIC_AUTH_TOKEN
delete process.env.ANTHROPIC_CUSTOM_HEADERS
}) })
afterEach(() => { afterEach(() => {
;(globalThis as Record<string, unknown>).MACRO = originalMacro ;(globalThis as Record<string, unknown>).MACRO = originalMacro
restoreEnv('CLAUDE_CODE_USE_OPENAI', originalEnv.CLAUDE_CODE_USE_OPENAI) process.env.CLAUDE_CODE_USE_GEMINI = originalEnv.CLAUDE_CODE_USE_GEMINI
restoreEnv('CLAUDE_CODE_USE_GEMINI', originalEnv.CLAUDE_CODE_USE_GEMINI) process.env.GEMINI_API_KEY = originalEnv.GEMINI_API_KEY
restoreEnv('GEMINI_API_KEY', originalEnv.GEMINI_API_KEY) process.env.GEMINI_MODEL = originalEnv.GEMINI_MODEL
restoreEnv('GEMINI_MODEL', originalEnv.GEMINI_MODEL) process.env.GEMINI_BASE_URL = originalEnv.GEMINI_BASE_URL
restoreEnv('GEMINI_BASE_URL', originalEnv.GEMINI_BASE_URL) process.env.GOOGLE_API_KEY = originalEnv.GOOGLE_API_KEY
restoreEnv('GEMINI_AUTH_MODE', originalEnv.GEMINI_AUTH_MODE) process.env.OPENAI_API_KEY = originalEnv.OPENAI_API_KEY
restoreEnv('GOOGLE_API_KEY', originalEnv.GOOGLE_API_KEY) process.env.OPENAI_BASE_URL = originalEnv.OPENAI_BASE_URL
restoreEnv('OPENAI_API_KEY', originalEnv.OPENAI_API_KEY) process.env.OPENAI_MODEL = originalEnv.OPENAI_MODEL
restoreEnv('OPENAI_BASE_URL', originalEnv.OPENAI_BASE_URL) process.env.ANTHROPIC_API_KEY = originalEnv.ANTHROPIC_API_KEY
restoreEnv('OPENAI_MODEL', originalEnv.OPENAI_MODEL) process.env.ANTHROPIC_AUTH_TOKEN = originalEnv.ANTHROPIC_AUTH_TOKEN
restoreEnv('ANTHROPIC_API_KEY', originalEnv.ANTHROPIC_API_KEY)
restoreEnv('ANTHROPIC_AUTH_TOKEN', originalEnv.ANTHROPIC_AUTH_TOKEN)
restoreEnv('ANTHROPIC_CUSTOM_HEADERS', originalEnv.ANTHROPIC_CUSTOM_HEADERS)
globalThis.fetch = originalFetch globalThis.fetch = originalFetch
}) })
@@ -136,135 +119,3 @@ test('routes Gemini provider requests through the OpenAI-compatible shim', async
model: 'gemini-2.0-flash', model: 'gemini-2.0-flash',
}) })
}) })
test('strips Anthropic-specific custom headers before sending OpenAI-compatible shim requests', async () => {
let capturedHeaders: Headers | undefined
process.env.CLAUDE_CODE_USE_OPENAI = '1'
process.env.OPENAI_API_KEY = 'openai-test-key'
process.env.OPENAI_BASE_URL = 'http://example.test/v1'
process.env.OPENAI_MODEL = 'gpt-4o'
process.env.ANTHROPIC_CUSTOM_HEADERS = [
'anthropic-version: 2023-06-01',
'anthropic-beta: prompt-caching-2024-07-31',
'x-anthropic-additional-protection: true',
'x-claude-remote-session-id: remote-123',
'x-app: cli',
'x-safe-header: keep-me',
].join('\n')
globalThis.fetch = (async (_input, init) => {
capturedHeaders = new Headers(init?.headers)
return new Response(
JSON.stringify({
id: 'chatcmpl-openai',
model: 'gpt-4o',
choices: [
{
message: {
role: 'assistant',
content: 'ok',
},
finish_reason: 'stop',
},
],
usage: {
prompt_tokens: 8,
completion_tokens: 3,
total_tokens: 11,
},
}),
{
headers: {
'Content-Type': 'application/json',
},
},
)
}) as FetchType
const client = (await getAnthropicClient({
maxRetries: 0,
model: 'gpt-4o',
})) as unknown as ShimClient
await client.beta.messages.create({
model: 'gpt-4o',
system: 'test system',
messages: [{ role: 'user', content: 'hello' }],
max_tokens: 64,
stream: false,
})
expect(capturedHeaders?.get('anthropic-version')).toBeNull()
expect(capturedHeaders?.get('anthropic-beta')).toBeNull()
expect(capturedHeaders?.get('x-anthropic-additional-protection')).toBeNull()
expect(capturedHeaders?.get('x-claude-remote-session-id')).toBeNull()
expect(capturedHeaders?.get('x-app')).toBeNull()
expect(capturedHeaders?.get('x-safe-header')).toBe('keep-me')
expect(capturedHeaders?.get('authorization')).toBe('Bearer openai-test-key')
})
test('strips Anthropic-specific custom headers on providerOverride shim requests too', async () => {
let capturedHeaders: Headers | undefined
process.env.ANTHROPIC_CUSTOM_HEADERS = [
'anthropic-version: 2023-06-01',
'anthropic-beta: prompt-caching-2024-07-31',
'x-claude-remote-session-id: remote-123',
'x-safe-header: keep-me',
].join('\n')
globalThis.fetch = (async (_input, init) => {
capturedHeaders = new Headers(init?.headers)
return new Response(
JSON.stringify({
id: 'chatcmpl-provider-override',
model: 'gpt-4o',
choices: [
{
message: {
role: 'assistant',
content: 'ok',
},
finish_reason: 'stop',
},
],
usage: {
prompt_tokens: 8,
completion_tokens: 3,
total_tokens: 11,
},
}),
{
headers: {
'Content-Type': 'application/json',
},
},
)
}) as FetchType
const client = (await getAnthropicClient({
maxRetries: 0,
providerOverride: {
model: 'gpt-4o',
baseURL: 'http://example.test/v1',
apiKey: 'provider-test-key',
},
})) as unknown as ShimClient
await client.beta.messages.create({
model: 'unused',
system: 'test system',
messages: [{ role: 'user', content: 'hello' }],
max_tokens: 64,
stream: false,
})
expect(capturedHeaders?.get('anthropic-version')).toBeNull()
expect(capturedHeaders?.get('anthropic-beta')).toBeNull()
expect(capturedHeaders?.get('x-claude-remote-session-id')).toBeNull()
expect(capturedHeaders?.get('x-safe-header')).toBe('keep-me')
expect(capturedHeaders?.get('authorization')).toBe('Bearer provider-test-key')
})
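
The header-stripping tests above construct ANTHROPIC_CUSTOM_HEADERS as newline-separated `Name: value` pairs. The parsing code itself is not part of this compare; below is a hedged sketch of a parser consistent with that format (the function name is illustrative).

// Illustrative parser for the newline-separated "Name: value" format the tests
// above build up; not necessarily the repo's actual implementation.
function parseCustomHeaders(raw: string | undefined): Record<string, string> {
  const headers: Record<string, string> = {}
  for (const line of (raw ?? '').split('\n')) {
    const idx = line.indexOf(':')
    if (idx === -1) continue
    const name = line.slice(0, idx).trim()
    if (name) headers[name] = line.slice(idx + 1).trim()
  }
  return headers
}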

View File

@@ -177,8 +177,7 @@ export async function getAnthropicClient({
if ( if (
isEnvTruthy(process.env.CLAUDE_CODE_USE_OPENAI) || isEnvTruthy(process.env.CLAUDE_CODE_USE_OPENAI) ||
isEnvTruthy(process.env.CLAUDE_CODE_USE_GITHUB) || isEnvTruthy(process.env.CLAUDE_CODE_USE_GITHUB) ||
isEnvTruthy(process.env.CLAUDE_CODE_USE_GEMINI) || isEnvTruthy(process.env.CLAUDE_CODE_USE_GEMINI)
isEnvTruthy(process.env.CLAUDE_CODE_USE_MISTRAL)
) { ) {
const { createOpenAIShimClient } = await import('./openaiShim.js') const { createOpenAIShimClient } = await import('./openaiShim.js')
return createOpenAIShimClient({ return createOpenAIShimClient({
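
This hunk adjusts which CLAUDE_CODE_USE_* flags route getAnthropicClient() through the OpenAI-compatible shim. A usage sketch consistent with the Gemini shim test earlier in this compare; isEnvTruthy is assumed to accept values such as '1'.

// Illustrative: with the Gemini flag truthy, the factory is expected (per the
// gate above) to return the OpenAI-compatible shim, not a native client.
process.env.CLAUDE_CODE_USE_GEMINI = '1'
process.env.GEMINI_API_KEY = 'gemini-test-key'
const client = await getAnthropicClient({ maxRetries: 0, model: 'gemini-2.0-flash' })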

View File

@@ -17,23 +17,16 @@ const tempDirs: string[] = []
const originalEnv = { const originalEnv = {
OPENAI_BASE_URL: process.env.OPENAI_BASE_URL, OPENAI_BASE_URL: process.env.OPENAI_BASE_URL,
OPENAI_API_BASE: process.env.OPENAI_API_BASE, OPENAI_API_BASE: process.env.OPENAI_API_BASE,
CLAUDE_CODE_USE_GITHUB: process.env.CLAUDE_CODE_USE_GITHUB,
} }
afterEach(() => { afterEach(() => {
if (originalEnv.OPENAI_BASE_URL === undefined) delete process.env.OPENAI_BASE_URL
else process.env.OPENAI_BASE_URL = originalEnv.OPENAI_BASE_URL
if (originalEnv.OPENAI_API_BASE === undefined) delete process.env.OPENAI_API_BASE
else process.env.OPENAI_API_BASE = originalEnv.OPENAI_API_BASE
if (originalEnv.CLAUDE_CODE_USE_GITHUB === undefined) delete process.env.CLAUDE_CODE_USE_GITHUB
else process.env.CLAUDE_CODE_USE_GITHUB = originalEnv.CLAUDE_CODE_USE_GITHUB
while (tempDirs.length > 0) { while (tempDirs.length > 0) {
const dir = tempDirs.pop() const dir = tempDirs.pop()
if (dir) rmSync(dir, { recursive: true, force: true }) if (dir) rmSync(dir, { recursive: true, force: true })
} }
process.env.OPENAI_BASE_URL = originalEnv.OPENAI_BASE_URL
process.env.OPENAI_API_BASE = originalEnv.OPENAI_API_BASE
}) })
function createTempAuthJson(payload: Record<string, unknown>): string { function createTempAuthJson(payload: Record<string, unknown>): string {
@@ -78,7 +71,6 @@ describe('Codex provider config', () => {
test('resolves codexplan alias to Codex transport with reasoning', () => { test('resolves codexplan alias to Codex transport with reasoning', () => {
delete process.env.OPENAI_BASE_URL delete process.env.OPENAI_BASE_URL
delete process.env.OPENAI_API_BASE delete process.env.OPENAI_API_BASE
delete process.env.CLAUDE_CODE_USE_GITHUB
const resolved = resolveProviderRequest({ model: 'codexplan' }) const resolved = resolveProviderRequest({ model: 'codexplan' })
expect(resolved.transport).toBe('codex_responses') expect(resolved.transport).toBe('codex_responses')
@@ -465,37 +457,6 @@ describe('Codex request translation', () => {
]) ])
}) })
test('strips leaked reasoning preamble from completed Codex text responses', () => {
const message = convertCodexResponseToAnthropicMessage(
{
id: 'resp_1',
model: 'gpt-5.4',
output: [
{
type: 'message',
role: 'assistant',
content: [
{
type: 'output_text',
text:
'The user just said "hey" - a simple greeting. I should respond briefly and friendly.\n\nHey! How can I help you today?',
},
],
},
],
usage: { input_tokens: 12, output_tokens: 4 },
},
'gpt-5.4',
)
expect(message.content).toEqual([
{
type: 'text',
text: 'Hey! How can I help you today?',
},
])
})
test('translates Codex SSE text stream into Anthropic events', async () => { test('translates Codex SSE text stream into Anthropic events', async () => {
const responseText = [ const responseText = [
'event: response.output_item.added', 'event: response.output_item.added',
@@ -526,44 +487,4 @@ describe('Codex request translation', () => {
'message_stop', 'message_stop',
]) ])
}) })
test('strips leaked reasoning preamble from Codex SSE text stream', async () => {
const responseText = [
'event: response.output_item.added',
'data: {"type":"response.output_item.added","item":{"id":"msg_1","type":"message","status":"in_progress","content":[],"role":"assistant"},"output_index":0,"sequence_number":0}',
'',
'event: response.content_part.added',
'data: {"type":"response.content_part.added","content_index":0,"item_id":"msg_1","output_index":0,"part":{"type":"output_text","text":""},"sequence_number":1}',
'',
'event: response.output_text.delta',
'data: {"type":"response.output_text.delta","content_index":0,"delta":"The user just said \\"hey\\" - a simple greeting. I should respond briefly and friendly.\\n\\nHey! How can I help you today?","item_id":"msg_1","output_index":0,"sequence_number":2}',
'',
'event: response.output_item.done',
'data: {"type":"response.output_item.done","item":{"id":"msg_1","type":"message","status":"completed","content":[{"type":"output_text","text":"The user just said \\"hey\\" - a simple greeting. I should respond briefly and friendly.\\n\\nHey! How can I help you today?"}],"role":"assistant"},"output_index":0,"sequence_number":3}',
'',
'event: response.completed',
'data: {"type":"response.completed","response":{"id":"resp_1","status":"completed","model":"gpt-5.4","output":[{"type":"message","role":"assistant","content":[{"type":"output_text","text":"The user just said \\"hey\\" - a simple greeting. I should respond briefly and friendly.\\n\\nHey! How can I help you today?"}]}],"usage":{"input_tokens":2,"output_tokens":1}},"sequence_number":4}',
'',
].join('\n')
const stream = new ReadableStream({
start(controller) {
controller.enqueue(new TextEncoder().encode(responseText))
controller.close()
},
})
const textDeltas: string[] = []
for await (const event of codexStreamToAnthropic(
new Response(stream),
'gpt-5.4',
)) {
const delta = (event as { delta?: { type?: string; text?: string } }).delta
if (delta?.type === 'text_delta' && typeof delta.text === 'string') {
textDeltas.push(delta.text)
}
}
expect(textDeltas).toEqual(['Hey! How can I help you today?'])
})
}) })
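
The Codex tests above exercise stripLeakedReasoningPreamble from reasoningLeakSanitizer.js, whose source is not shown in this compare. Below is a hedged sketch of a heuristic consistent with those test expectations; the real module may differ substantially.

// Illustrative heuristic only: if the reply opens with meta commentary about
// the user ("The user just said ... I should respond ...") followed by a blank
// line, keep only the text after that blank line.
function stripLeakedReasoningPreamble(text: string): string {
  const split = text.indexOf('\n\n')
  if (split === -1) return text
  const preamble = text.slice(0, split).trim()
  const looksLeaked = /^the user\b/i.test(preamble) && /\bI should\b/i.test(preamble)
  return looksLeaked ? text.slice(split + 2).trimStart() : text
}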

View File

@@ -4,11 +4,6 @@ import type {
ResolvedProviderRequest, ResolvedProviderRequest,
} from './providerConfig.js' } from './providerConfig.js'
import { sanitizeSchemaForOpenAICompat } from './openaiSchemaSanitizer.js' import { sanitizeSchemaForOpenAICompat } from './openaiSchemaSanitizer.js'
import {
looksLikeLeakedReasoningPrefix,
shouldBufferPotentialReasoningPrefix,
stripLeakedReasoningPreamble,
} from './reasoningLeakSanitizer.js'
export interface AnthropicUsage { export interface AnthropicUsage {
input_tokens: number input_tokens: number
@@ -80,17 +75,12 @@ type CodexSseEvent = {
function makeUsage(usage?: { function makeUsage(usage?: {
input_tokens?: number input_tokens?: number
output_tokens?: number output_tokens?: number
input_tokens_details?: { cached_tokens?: number }
prompt_tokens_details?: { cached_tokens?: number }
}): AnthropicUsage { }): AnthropicUsage {
return { return {
input_tokens: usage?.input_tokens ?? 0, input_tokens: usage?.input_tokens ?? 0,
output_tokens: usage?.output_tokens ?? 0, output_tokens: usage?.output_tokens ?? 0,
cache_creation_input_tokens: 0, cache_creation_input_tokens: 0,
cache_read_input_tokens: cache_read_input_tokens: 0,
usage?.input_tokens_details?.cached_tokens ??
usage?.prompt_tokens_details?.cached_tokens ??
0,
} }
} }
@@ -688,34 +678,17 @@ export async function* codexStreamToAnthropic(
{ index: number; toolUseId: string } { index: number; toolUseId: string }
>() >()
let activeTextBlockIndex: number | null = null let activeTextBlockIndex: number | null = null
let activeTextBuffer = ''
let textBufferMode: 'none' | 'pending' | 'strip' = 'none'
let nextContentBlockIndex = 0 let nextContentBlockIndex = 0
let sawToolUse = false let sawToolUse = false
let finalResponse: Record<string, any> | undefined let finalResponse: Record<string, any> | undefined
const closeActiveTextBlock = async function* () { const closeActiveTextBlock = async function* () {
if (activeTextBlockIndex === null) return if (activeTextBlockIndex === null) return
if (textBufferMode !== 'none') {
const sanitized = stripLeakedReasoningPreamble(activeTextBuffer)
if (sanitized) {
yield {
type: 'content_block_delta',
index: activeTextBlockIndex,
delta: {
type: 'text_delta',
text: sanitized,
},
}
}
}
yield { yield {
type: 'content_block_stop', type: 'content_block_stop',
index: activeTextBlockIndex, index: activeTextBlockIndex,
} }
activeTextBlockIndex = null activeTextBlockIndex = null
activeTextBuffer = ''
textBufferMode = 'none'
} }
const startTextBlockIfNeeded = async function* () { const startTextBlockIfNeeded = async function* () {
@@ -791,36 +764,7 @@ export async function* codexStreamToAnthropic(
if (event.event === 'response.output_text.delta') { if (event.event === 'response.output_text.delta') {
yield* startTextBlockIfNeeded() yield* startTextBlockIfNeeded()
activeTextBuffer += payload.delta ?? ''
if (activeTextBlockIndex !== null) { if (activeTextBlockIndex !== null) {
if (
textBufferMode === 'strip' ||
looksLikeLeakedReasoningPrefix(activeTextBuffer)
) {
textBufferMode = 'strip'
continue
}
if (textBufferMode === 'pending') {
if (shouldBufferPotentialReasoningPrefix(activeTextBuffer)) {
continue
}
yield {
type: 'content_block_delta',
index: activeTextBlockIndex,
delta: {
type: 'text_delta',
text: activeTextBuffer,
},
}
textBufferMode = 'none'
continue
}
if (shouldBufferPotentialReasoningPrefix(activeTextBuffer)) {
textBufferMode = 'pending'
continue
}
yield { yield {
type: 'content_block_delta', type: 'content_block_delta',
index: activeTextBlockIndex, index: activeTextBlockIndex,
@@ -895,16 +839,8 @@ export async function* codexStreamToAnthropic(
stop_sequence: null, stop_sequence: null,
}, },
usage: { usage: {
// Subtract cached tokens: OpenAI includes them in input_tokens, input_tokens: finalResponse?.usage?.input_tokens ?? 0,
// but Anthropic convention treats input_tokens as non-cached only.
input_tokens: (finalResponse?.usage?.input_tokens ?? 0) -
(finalResponse?.usage?.input_tokens_details?.cached_tokens ??
finalResponse?.usage?.prompt_tokens_details?.cached_tokens ?? 0),
output_tokens: finalResponse?.usage?.output_tokens ?? 0, output_tokens: finalResponse?.usage?.output_tokens ?? 0,
cache_read_input_tokens:
finalResponse?.usage?.input_tokens_details?.cached_tokens ??
finalResponse?.usage?.prompt_tokens_details?.cached_tokens ??
0,
}, },
} }
yield { type: 'message_stop' } yield { type: 'message_stop' }
@@ -923,7 +859,7 @@ export function convertCodexResponseToAnthropicMessage(
if (part?.type === 'output_text') { if (part?.type === 'output_text') {
content.push({ content.push({
type: 'text', type: 'text',
text: stripLeakedReasoningPreamble(part.text ?? ''), text: part.text ?? '',
}) })
} }
} }
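
A recurring theme in this file's usage hunks (and in convertChunkUsage later in the compare) is cached-token accounting: one side's comments note that OpenAI reports cached tokens inside input_tokens/prompt_tokens, while the Anthropic-shaped usage splits them out into cache_read_input_tokens. A small worked example of that arithmetic, with invented numbers:

// Illustrative arithmetic only; mirrors the commented accounting in the hunks.
const openaiUsage = { input_tokens: 1200, output_tokens: 40, input_tokens_details: { cached_tokens: 900 } }
const cached = openaiUsage.input_tokens_details?.cached_tokens ?? 0
const anthropicUsage = {
  input_tokens: openaiUsage.input_tokens - cached, // 300 non-cached prompt tokens
  output_tokens: openaiUsage.output_tokens,        // 40
  cache_creation_input_tokens: 0,
  cache_read_input_tokens: cached,                 // 900 served from cache
}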

View File

@@ -7,10 +7,6 @@ const originalEnv = {
OPENAI_BASE_URL: process.env.OPENAI_BASE_URL, OPENAI_BASE_URL: process.env.OPENAI_BASE_URL,
OPENAI_API_KEY: process.env.OPENAI_API_KEY, OPENAI_API_KEY: process.env.OPENAI_API_KEY,
OPENAI_MODEL: process.env.OPENAI_MODEL, OPENAI_MODEL: process.env.OPENAI_MODEL,
CLAUDE_CODE_USE_GITHUB: process.env.CLAUDE_CODE_USE_GITHUB,
GITHUB_TOKEN: process.env.GITHUB_TOKEN,
GH_TOKEN: process.env.GH_TOKEN,
CLAUDE_CODE_USE_OPENAI: process.env.CLAUDE_CODE_USE_OPENAI,
CLAUDE_CODE_USE_GEMINI: process.env.CLAUDE_CODE_USE_GEMINI, CLAUDE_CODE_USE_GEMINI: process.env.CLAUDE_CODE_USE_GEMINI,
GEMINI_API_KEY: process.env.GEMINI_API_KEY, GEMINI_API_KEY: process.env.GEMINI_API_KEY,
GOOGLE_API_KEY: process.env.GOOGLE_API_KEY, GOOGLE_API_KEY: process.env.GOOGLE_API_KEY,
@@ -19,7 +15,6 @@ const originalEnv = {
GEMINI_BASE_URL: process.env.GEMINI_BASE_URL, GEMINI_BASE_URL: process.env.GEMINI_BASE_URL,
GEMINI_MODEL: process.env.GEMINI_MODEL, GEMINI_MODEL: process.env.GEMINI_MODEL,
GOOGLE_CLOUD_PROJECT: process.env.GOOGLE_CLOUD_PROJECT, GOOGLE_CLOUD_PROJECT: process.env.GOOGLE_CLOUD_PROJECT,
ANTHROPIC_CUSTOM_HEADERS: process.env.ANTHROPIC_CUSTOM_HEADERS,
} }
const originalFetch = globalThis.fetch const originalFetch = globalThis.fetch
@@ -75,10 +70,6 @@ beforeEach(() => {
process.env.OPENAI_BASE_URL = 'http://example.test/v1' process.env.OPENAI_BASE_URL = 'http://example.test/v1'
process.env.OPENAI_API_KEY = 'test-key' process.env.OPENAI_API_KEY = 'test-key'
delete process.env.OPENAI_MODEL delete process.env.OPENAI_MODEL
delete process.env.CLAUDE_CODE_USE_GITHUB
delete process.env.GITHUB_TOKEN
delete process.env.GH_TOKEN
delete process.env.CLAUDE_CODE_USE_OPENAI
delete process.env.CLAUDE_CODE_USE_GEMINI delete process.env.CLAUDE_CODE_USE_GEMINI
delete process.env.GEMINI_API_KEY delete process.env.GEMINI_API_KEY
delete process.env.GOOGLE_API_KEY delete process.env.GOOGLE_API_KEY
@@ -87,17 +78,12 @@ beforeEach(() => {
delete process.env.GEMINI_BASE_URL delete process.env.GEMINI_BASE_URL
delete process.env.GEMINI_MODEL delete process.env.GEMINI_MODEL
delete process.env.GOOGLE_CLOUD_PROJECT delete process.env.GOOGLE_CLOUD_PROJECT
delete process.env.ANTHROPIC_CUSTOM_HEADERS
}) })
afterEach(() => { afterEach(() => {
restoreEnv('OPENAI_BASE_URL', originalEnv.OPENAI_BASE_URL) restoreEnv('OPENAI_BASE_URL', originalEnv.OPENAI_BASE_URL)
restoreEnv('OPENAI_API_KEY', originalEnv.OPENAI_API_KEY) restoreEnv('OPENAI_API_KEY', originalEnv.OPENAI_API_KEY)
restoreEnv('OPENAI_MODEL', originalEnv.OPENAI_MODEL) restoreEnv('OPENAI_MODEL', originalEnv.OPENAI_MODEL)
restoreEnv('CLAUDE_CODE_USE_GITHUB', originalEnv.CLAUDE_CODE_USE_GITHUB)
restoreEnv('GITHUB_TOKEN', originalEnv.GITHUB_TOKEN)
restoreEnv('GH_TOKEN', originalEnv.GH_TOKEN)
restoreEnv('CLAUDE_CODE_USE_OPENAI', originalEnv.CLAUDE_CODE_USE_OPENAI)
restoreEnv('CLAUDE_CODE_USE_GEMINI', originalEnv.CLAUDE_CODE_USE_GEMINI) restoreEnv('CLAUDE_CODE_USE_GEMINI', originalEnv.CLAUDE_CODE_USE_GEMINI)
restoreEnv('GEMINI_API_KEY', originalEnv.GEMINI_API_KEY) restoreEnv('GEMINI_API_KEY', originalEnv.GEMINI_API_KEY)
restoreEnv('GOOGLE_API_KEY', originalEnv.GOOGLE_API_KEY) restoreEnv('GOOGLE_API_KEY', originalEnv.GOOGLE_API_KEY)
@@ -106,227 +92,9 @@ afterEach(() => {
restoreEnv('GEMINI_BASE_URL', originalEnv.GEMINI_BASE_URL) restoreEnv('GEMINI_BASE_URL', originalEnv.GEMINI_BASE_URL)
restoreEnv('GEMINI_MODEL', originalEnv.GEMINI_MODEL) restoreEnv('GEMINI_MODEL', originalEnv.GEMINI_MODEL)
restoreEnv('GOOGLE_CLOUD_PROJECT', originalEnv.GOOGLE_CLOUD_PROJECT) restoreEnv('GOOGLE_CLOUD_PROJECT', originalEnv.GOOGLE_CLOUD_PROJECT)
restoreEnv('ANTHROPIC_CUSTOM_HEADERS', originalEnv.ANTHROPIC_CUSTOM_HEADERS)
globalThis.fetch = originalFetch globalThis.fetch = originalFetch
}) })
test('strips canonical Anthropic headers from direct shim defaultHeaders', async () => {
let capturedHeaders: Headers | undefined
globalThis.fetch = (async (_input, init) => {
capturedHeaders = new Headers(init?.headers)
return new Response(
JSON.stringify({
id: 'chatcmpl-1',
model: 'gpt-4o',
choices: [
{
message: {
role: 'assistant',
content: 'ok',
},
finish_reason: 'stop',
},
],
usage: {
prompt_tokens: 8,
completion_tokens: 3,
total_tokens: 11,
},
}),
{
headers: {
'Content-Type': 'application/json',
},
},
)
}) as FetchType
const client = createOpenAIShimClient({
defaultHeaders: {
'anthropic-version': '2023-06-01',
'anthropic-beta': 'prompt-caching-2024-07-31',
'x-anthropic-additional-protection': 'true',
'x-claude-remote-session-id': 'remote-123',
'x-app': 'cli',
'x-client-app': 'sdk',
'x-safe-header': 'keep-me',
},
}) as OpenAIShimClient
await client.beta.messages.create({
model: 'gpt-4o',
system: 'test system',
messages: [{ role: 'user', content: 'hello' }],
max_tokens: 64,
stream: false,
})
expect(capturedHeaders?.get('anthropic-version')).toBeNull()
expect(capturedHeaders?.get('anthropic-beta')).toBeNull()
expect(capturedHeaders?.get('x-anthropic-additional-protection')).toBeNull()
expect(capturedHeaders?.get('x-claude-remote-session-id')).toBeNull()
expect(capturedHeaders?.get('x-app')).toBeNull()
expect(capturedHeaders?.get('x-client-app')).toBeNull()
expect(capturedHeaders?.get('x-safe-header')).toBe('keep-me')
})
test('strips canonical Anthropic headers from per-request shim headers too', async () => {
let capturedHeaders: Headers | undefined
globalThis.fetch = (async (_input, init) => {
capturedHeaders = new Headers(init?.headers)
return new Response(
JSON.stringify({
id: 'chatcmpl-1',
model: 'gpt-4o',
choices: [
{
message: {
role: 'assistant',
content: 'ok',
},
finish_reason: 'stop',
},
],
usage: {
prompt_tokens: 8,
completion_tokens: 3,
total_tokens: 11,
},
}),
{
headers: {
'Content-Type': 'application/json',
},
},
)
}) as FetchType
const client = createOpenAIShimClient({}) as OpenAIShimClient
await client.beta.messages.create(
{
model: 'gpt-4o',
system: 'test system',
messages: [{ role: 'user', content: 'hello' }],
max_tokens: 64,
stream: false,
},
{
headers: {
'anthropic-version': '2023-06-01',
'anthropic-beta': 'prompt-caching-2024-07-31',
'x-safe-header': 'keep-me',
},
},
)
expect(capturedHeaders?.get('anthropic-version')).toBeNull()
expect(capturedHeaders?.get('anthropic-beta')).toBeNull()
expect(capturedHeaders?.get('x-safe-header')).toBe('keep-me')
})
test('strips Anthropic-specific headers on GitHub Codex transport requests', async () => {
let capturedHeaders: Headers | undefined
process.env.CLAUDE_CODE_USE_GITHUB = '1'
process.env.OPENAI_API_KEY = 'github-test-key'
delete process.env.OPENAI_BASE_URL
delete process.env.OPENAI_MODEL
globalThis.fetch = (async (_input, init) => {
capturedHeaders = new Headers(init?.headers)
return new Response('', {
status: 200,
headers: {
'Content-Type': 'text/event-stream',
},
})
}) as FetchType
const client = createOpenAIShimClient({}) as OpenAIShimClient
await client.beta.messages.create(
{
model: 'github:gpt-5-codex',
system: 'test system',
messages: [{ role: 'user', content: 'hello' }],
max_tokens: 64,
stream: true,
},
{
headers: {
'anthropic-version': '2023-06-01',
'anthropic-beta': 'prompt-caching-2024-07-31',
'x-anthropic-additional-protection': 'true',
'x-safe-header': 'keep-me',
},
},
)
expect(capturedHeaders?.get('anthropic-version')).toBeNull()
expect(capturedHeaders?.get('anthropic-beta')).toBeNull()
expect(capturedHeaders?.get('x-anthropic-additional-protection')).toBeNull()
expect(capturedHeaders?.get('x-safe-header')).toBe('keep-me')
expect(capturedHeaders?.get('authorization')).toBe('Bearer github-test-key')
expect(capturedHeaders?.get('editor-plugin-version')).toBe('copilot-chat/0.26.7')
})
test('strips Anthropic-specific headers on GitHub Codex transport with providerOverride API key', async () => {
let capturedHeaders: Headers | undefined
process.env.CLAUDE_CODE_USE_GITHUB = '1'
process.env.OPENAI_API_KEY = 'env-should-not-win'
delete process.env.OPENAI_BASE_URL
delete process.env.OPENAI_MODEL
globalThis.fetch = (async (_input, init) => {
capturedHeaders = new Headers(init?.headers)
return new Response('', {
status: 200,
headers: {
'Content-Type': 'text/event-stream',
},
})
}) as FetchType
const client = createOpenAIShimClient({
providerOverride: {
model: 'github:gpt-5-codex',
baseURL: 'https://api.githubcopilot.com',
apiKey: 'provider-override-key',
},
}) as OpenAIShimClient
await client.beta.messages.create(
{
model: 'ignored',
system: 'test system',
messages: [{ role: 'user', content: 'hello' }],
max_tokens: 64,
stream: true,
},
{
headers: {
'anthropic-version': '2023-06-01',
'x-claude-remote-session-id': 'remote-123',
'x-safe-header': 'keep-me',
},
},
)
expect(capturedHeaders?.get('anthropic-version')).toBeNull()
expect(capturedHeaders?.get('x-claude-remote-session-id')).toBeNull()
expect(capturedHeaders?.get('x-safe-header')).toBe('keep-me')
expect(capturedHeaders?.get('authorization')).toBe('Bearer provider-override-key')
expect(capturedHeaders?.get('editor-plugin-version')).toBe('copilot-chat/0.26.7')
})
test('preserves usage from final OpenAI stream chunk with empty choices', async () => { test('preserves usage from final OpenAI stream chunk with empty choices', async () => {
globalThis.fetch = (async (_input, init) => { globalThis.fetch = (async (_input, init) => {
const url = typeof _input === 'string' ? _input : _input.url const url = typeof _input === 'string' ? _input : _input.url
@@ -2038,70 +1806,12 @@ test('sanitizes malformed MCP tool schemas before sending them to OpenAI', async
| undefined | undefined
expect(parameters?.additionalProperties).toBe(false) expect(parameters?.additionalProperties).toBe(false)
// No required[] in the original schema → none added (optional properties must not be forced required) expect(parameters?.required).toEqual(['priority'])
expect(parameters?.required).toEqual([])
expect(properties?.priority?.type).toBe('integer') expect(properties?.priority?.type).toBe('integer')
expect(properties?.priority?.enum).toEqual([0, 1, 2, 3]) expect(properties?.priority?.enum).toEqual([0, 1, 2, 3])
expect(properties?.priority).not.toHaveProperty('default') expect(properties?.priority).not.toHaveProperty('default')
}) })
test('optional tool properties are not added to required[] — fixes Groq/Azure 400 tool_use_failed', async () => {
// Regression test for: all optional properties being sent as required in strict mode,
// causing providers like Groq to reject valid tool calls where the model omits optional args.
let requestBody: Record<string, unknown> | undefined
globalThis.fetch = (async (_input, init) => {
requestBody = JSON.parse(String(init?.body))
return new Response(
JSON.stringify({
id: 'chatcmpl-4',
model: 'gpt-4o',
choices: [{ message: { role: 'assistant', content: 'ok' }, finish_reason: 'stop' }],
usage: { prompt_tokens: 5, completion_tokens: 2, total_tokens: 7 },
}),
{ headers: { 'Content-Type': 'application/json' } },
)
}) as FetchType
const client = createOpenAIShimClient({}) as OpenAIShimClient
await client.beta.messages.create({
model: 'gpt-4o',
messages: [{ role: 'user', content: 'read a file' }],
tools: [
{
name: 'Read',
description: 'Read a file',
input_schema: {
type: 'object',
properties: {
file_path: { type: 'string', description: 'Absolute path to file' },
offset: { type: 'number', description: 'Line to start from' },
limit: { type: 'number', description: 'Max lines to read' },
pages: { type: 'string', description: 'Page range for PDFs' },
},
required: ['file_path'],
},
},
],
max_tokens: 16,
stream: false,
})
const parameters = (
requestBody?.tools as Array<{ function?: { parameters?: Record<string, unknown> } }>
)?.[0]?.function?.parameters
expect(parameters?.required).toEqual(['file_path'])
const required = parameters?.required as string[] | undefined
expect(required).not.toContain('offset')
expect(required).not.toContain('limit')
expect(required).not.toContain('pages')
expect(parameters?.additionalProperties).toBe(false)
})
// --------------------------------------------------------------------------- // ---------------------------------------------------------------------------
// Issue #202 — consecutive role coalescing (Devstral, Mistral strict templates) // Issue #202 — consecutive role coalescing (Devstral, Mistral strict templates)
// --------------------------------------------------------------------------- // ---------------------------------------------------------------------------
@@ -2139,7 +1849,7 @@ test('coalesces consecutive user messages to avoid alternation errors (issue #20
stream: false, stream: false,
}) })
expect(sentMessages?.length).toBe(2) expect(sentMessages?.length).toBe(2) // system + 1 merged user
expect(sentMessages?.[0]?.role).toBe('system') expect(sentMessages?.[0]?.role).toBe('system')
expect(sentMessages?.[1]?.role).toBe('user') expect(sentMessages?.[1]?.role).toBe('user')
const userContent = sentMessages?.[1]?.content as string const userContent = sentMessages?.[1]?.content as string
@@ -2173,12 +1883,13 @@ test('coalesces consecutive assistant messages preserving tool_calls (issue #202
stream: false, stream: false,
}) })
// system + user + merged assistant + tool
const assistantMsgs = sentMessages?.filter(m => m.role === 'assistant') const assistantMsgs = sentMessages?.filter(m => m.role === 'assistant')
expect(assistantMsgs?.length).toBe(1) expect(assistantMsgs?.length).toBe(1) // two assistant turns merged into one
expect(assistantMsgs?.[0]?.tool_calls?.length).toBeGreaterThan(0) expect(assistantMsgs?.[0]?.tool_calls?.length).toBeGreaterThan(0)
}) })
test('non-streaming: reasoning_content emitted as thinking block only when content is null', async () => { test('non-streaming: reasoning_content emitted as thinking block, used as text when content is null', async () => {
globalThis.fetch = (async (_input, _init) => { globalThis.fetch = (async (_input, _init) => {
return new Response( return new Response(
JSON.stringify({ JSON.stringify({
@@ -2220,6 +1931,7 @@ test('non-streaming: reasoning_content emitted as thinking block only when conte
expect(result.content).toEqual([ expect(result.content).toEqual([
{ type: 'thinking', thinking: 'Let me think about this step by step.' }, { type: 'thinking', thinking: 'Let me think about this step by step.' },
{ type: 'text', text: 'Let me think about this step by step.' },
]) ])
}) })
@@ -2263,8 +1975,11 @@ test('non-streaming: empty string content does not fall through to reasoning_con
stream: false, stream: false,
})) as { content: Array<Record<string, unknown>> } })) as { content: Array<Record<string, unknown>> }
// reasoning_content should be a thinking block, and also used as text
// since content is empty string (treated as absent)
expect(result.content).toEqual([ expect(result.content).toEqual([
{ type: 'thinking', thinking: 'Chain of thought here.' }, { type: 'thinking', thinking: 'Chain of thought here.' },
{ type: 'text', text: 'Chain of thought here.' },
]) ])
}) })
@@ -2314,46 +2029,6 @@ test('non-streaming: real content takes precedence over reasoning_content', asyn
]) ])
}) })
test('non-streaming: strips leaked reasoning preamble from assistant content', async () => {
globalThis.fetch = (async () => {
return new Response(
JSON.stringify({
id: 'chatcmpl-1',
model: 'gpt-5-mini',
choices: [
{
message: {
role: 'assistant',
content:
'The user just said "hey" - a simple greeting. I should respond briefly and friendly.\n\nHey! How can I help you today?',
},
finish_reason: 'stop',
},
],
usage: {
prompt_tokens: 10,
completion_tokens: 20,
total_tokens: 30,
},
}),
{ headers: { 'Content-Type': 'application/json' } },
)
}) as FetchType
const client = createOpenAIShimClient({}) as OpenAIShimClient
const result = (await client.beta.messages.create({
model: 'gpt-5-mini',
system: 'test system',
messages: [{ role: 'user', content: 'hey' }],
max_tokens: 64,
stream: false,
})) as { content: Array<Record<string, unknown>> }
expect(result.content).toEqual([
{ type: 'text', text: 'Hey! How can I help you today?' },
])
})
test('streaming: thinking block closed before tool call', async () => { test('streaming: thinking block closed before tool call', async () => {
globalThis.fetch = (async (_input, _init) => { globalThis.fetch = (async (_input, _init) => {
const chunks = makeStreamChunks([ const chunks = makeStreamChunks([
@@ -2429,6 +2104,7 @@ test('streaming: thinking block closed before tool call', async () => {
const types = events.map(e => e.type) const types = events.map(e => e.type)
// Verify thinking block is started, then closed, then tool call starts
const thinkingStartIdx = types.indexOf('content_block_start') const thinkingStartIdx = types.indexOf('content_block_start')
const firstStopIdx = types.indexOf('content_block_stop') const firstStopIdx = types.indexOf('content_block_stop')
const toolStartIdx = types.indexOf( const toolStartIdx = types.indexOf(
@@ -2440,139 +2116,9 @@ test('streaming: thinking block closed before tool call', async () => {
expect(firstStopIdx).toBeGreaterThan(thinkingStartIdx) expect(firstStopIdx).toBeGreaterThan(thinkingStartIdx)
expect(toolStartIdx).toBeGreaterThan(firstStopIdx) expect(toolStartIdx).toBeGreaterThan(firstStopIdx)
// Verify thinking block start content
const thinkingStart = events[thinkingStartIdx] as { const thinkingStart = events[thinkingStartIdx] as {
content_block?: Record<string, unknown> content_block?: Record<string, unknown>
} }
expect(thinkingStart?.content_block?.type).toBe('thinking') expect(thinkingStart?.content_block?.type).toBe('thinking')
}) })
test('streaming: strips leaked reasoning preamble from assistant content deltas', async () => {
globalThis.fetch = (async () => {
const chunks = makeStreamChunks([
{
id: 'chatcmpl-1',
object: 'chat.completion.chunk',
model: 'gpt-5-mini',
choices: [
{
index: 0,
delta: {
role: 'assistant',
content:
'The user just said "hey" - a simple greeting. I should respond briefly and friendly.\n\nHey! How can I help you today?',
},
finish_reason: null,
},
],
},
{
id: 'chatcmpl-1',
object: 'chat.completion.chunk',
model: 'gpt-5-mini',
choices: [
{
index: 0,
delta: {},
finish_reason: 'stop',
},
],
},
])
return makeSseResponse(chunks)
}) as FetchType
const client = createOpenAIShimClient({}) as OpenAIShimClient
const result = await client.beta.messages
.create({
model: 'gpt-5-mini',
system: 'test system',
messages: [{ role: 'user', content: 'hey' }],
max_tokens: 64,
stream: true,
})
.withResponse()
const textDeltas: string[] = []
for await (const event of result.data) {
const delta = (event as { delta?: { type?: string; text?: string } }).delta
if (delta?.type === 'text_delta' && typeof delta.text === 'string') {
textDeltas.push(delta.text)
}
}
expect(textDeltas).toEqual(['Hey! How can I help you today?'])
})
test('streaming: strips leaked reasoning preamble when split across multiple content chunks', async () => {
globalThis.fetch = (async () => {
const chunks = makeStreamChunks([
{
id: 'chatcmpl-1',
object: 'chat.completion.chunk',
model: 'gpt-5-mini',
choices: [
{
index: 0,
delta: {
role: 'assistant',
content: 'The user said "hey" - this is a simple greeting. ',
},
finish_reason: null,
},
],
},
{
id: 'chatcmpl-1',
object: 'chat.completion.chunk',
model: 'gpt-5-mini',
choices: [
{
index: 0,
delta: {
content:
'I should respond in a friendly, concise way.\n\nHey! How can I help you today?',
},
finish_reason: null,
},
],
},
{
id: 'chatcmpl-1',
object: 'chat.completion.chunk',
model: 'gpt-5-mini',
choices: [
{
index: 0,
delta: {},
finish_reason: 'stop',
},
],
},
])
return makeSseResponse(chunks)
}) as FetchType
const client = createOpenAIShimClient({}) as OpenAIShimClient
const result = await client.beta.messages
.create({
model: 'gpt-5-mini',
system: 'test system',
messages: [{ role: 'user', content: 'hey' }],
max_tokens: 64,
stream: true,
})
.withResponse()
const textDeltas: string[] = []
for await (const event of result.data) {
const delta = (event as { delta?: { type?: string; text?: string } }).delta
if (delta?.type === 'text_delta' && typeof delta.text === 'string') {
textDeltas.push(delta.text)
}
}
expect(textDeltas).toEqual(['Hey! How can I help you today?'])
})
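
The optional-property tests above and the normalizeSchemaForOpenAI hunk below contrast two strict-mode policies for required[]. A worked example using the Read-tool schema from the regression test:

// Values taken from the Read tool schema in the regression test above.
const properties = { file_path: {}, offset: {}, limit: {}, pages: {} }
const existingRequired = ['file_path']

// Policy A: force every property into required[] (what providers such as Groq
// and Azure reject when the model legitimately omits optional arguments).
const forced = Array.from(new Set([...existingRequired, ...Object.keys(properties)]))
// -> ['file_path', 'offset', 'limit', 'pages']

// Policy B: keep only the keys the schema itself marked required (what the
// regression test expects).
const kept = existingRequired.filter(k => k in properties)
// -> ['file_path']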

View File

@@ -15,9 +15,9 @@
* OPENAI_MODEL=gpt-4o — default model override * OPENAI_MODEL=gpt-4o — default model override
* CODEX_API_KEY / ~/.codex/auth.json — Codex auth for codexplan/codexspark * CODEX_API_KEY / ~/.codex/auth.json — Codex auth for codexplan/codexspark
* *
* GitHub Copilot API (api.githubcopilot.com), OpenAI-compatible: * GitHub Models (models.github.ai), OpenAI-compatible:
* CLAUDE_CODE_USE_GITHUB=1 — enable GitHub inference (no need for USE_OPENAI) * CLAUDE_CODE_USE_GITHUB=1 — enable GitHub inference (no need for USE_OPENAI)
* GITHUB_TOKEN or GH_TOKEN — Copilot API token (mapped to Bearer auth) * GITHUB_TOKEN or GH_TOKEN — PAT with models access (mapped to Bearer auth)
* OPENAI_MODEL — optional; use github:copilot or openai/gpt-4.1 style IDs * OPENAI_MODEL — optional; use github:copilot or openai/gpt-4.1 style IDs
*/ */
@@ -26,17 +26,10 @@ import { isEnvTruthy } from '../../utils/envUtils.js'
import { resolveGeminiCredential } from '../../utils/geminiAuth.js' import { resolveGeminiCredential } from '../../utils/geminiAuth.js'
import { hydrateGeminiAccessTokenFromSecureStorage } from '../../utils/geminiCredentials.js' import { hydrateGeminiAccessTokenFromSecureStorage } from '../../utils/geminiCredentials.js'
import { hydrateGithubModelsTokenFromSecureStorage } from '../../utils/githubModelsCredentials.js' import { hydrateGithubModelsTokenFromSecureStorage } from '../../utils/githubModelsCredentials.js'
import {
looksLikeLeakedReasoningPrefix,
shouldBufferPotentialReasoningPrefix,
stripLeakedReasoningPreamble,
} from './reasoningLeakSanitizer.js'
import { import {
codexStreamToAnthropic, codexStreamToAnthropic,
collectCodexCompletedResponse, collectCodexCompletedResponse,
convertAnthropicMessagesToResponsesInput,
convertCodexResponseToAnthropicMessage, convertCodexResponseToAnthropicMessage,
convertToolsToResponsesTools,
performCodexRequest, performCodexRequest,
type AnthropicStreamEvent, type AnthropicStreamEvent,
type AnthropicUsage, type AnthropicUsage,
@@ -46,7 +39,6 @@ import {
isLocalProviderUrl, isLocalProviderUrl,
resolveCodexApiCredentials, resolveCodexApiCredentials,
resolveProviderRequest, resolveProviderRequest,
getGithubEndpointType,
} from './providerConfig.js' } from './providerConfig.js'
import { sanitizeSchemaForOpenAICompat } from '../../utils/schemaSanitizer.js' import { sanitizeSchemaForOpenAICompat } from '../../utils/schemaSanitizer.js'
import { redactSecretValueForDisplay } from '../../utils/providerProfile.js' import { redactSecretValueForDisplay } from '../../utils/providerProfile.js'
@@ -61,56 +53,19 @@ type SecretValueSource = Partial<{
GEMINI_API_KEY: string GEMINI_API_KEY: string
GOOGLE_API_KEY: string GOOGLE_API_KEY: string
GEMINI_ACCESS_TOKEN: string GEMINI_ACCESS_TOKEN: string
MISTRAL_API_KEY: string
}> }>
const GITHUB_COPILOT_BASE = 'https://api.githubcopilot.com' const GITHUB_MODELS_DEFAULT_BASE = 'https://models.github.ai/inference'
const GITHUB_API_VERSION = '2022-11-28'
const GITHUB_429_MAX_RETRIES = 3 const GITHUB_429_MAX_RETRIES = 3
const GITHUB_429_BASE_DELAY_SEC = 1 const GITHUB_429_BASE_DELAY_SEC = 1
const GITHUB_429_MAX_DELAY_SEC = 32 const GITHUB_429_MAX_DELAY_SEC = 32
const GEMINI_API_HOST = 'generativelanguage.googleapis.com' const GEMINI_API_HOST = 'generativelanguage.googleapis.com'
const COPILOT_HEADERS: Record<string, string> = {
'User-Agent': 'GitHubCopilotChat/0.26.7',
'Editor-Version': 'vscode/1.99.3',
'Editor-Plugin-Version': 'copilot-chat/0.26.7',
'Copilot-Integration-Id': 'vscode-chat',
}
function isGithubModelsMode(): boolean { function isGithubModelsMode(): boolean {
return isEnvTruthy(process.env.CLAUDE_CODE_USE_GITHUB) return isEnvTruthy(process.env.CLAUDE_CODE_USE_GITHUB)
} }
function isMistralMode(): boolean {
return isEnvTruthy(process.env.CLAUDE_CODE_USE_MISTRAL)
}
function filterAnthropicHeaders(
headers: Record<string, string> | undefined,
): Record<string, string> {
if (!headers) return {}
const filtered: Record<string, string> = {}
for (const [key, value] of Object.entries(headers)) {
const lower = key.toLowerCase()
if (
lower.startsWith('x-anthropic') ||
lower.startsWith('anthropic-') ||
lower.startsWith('x-claude') ||
lower === 'x-app' ||
lower === 'x-client-app' ||
lower === 'authorization' ||
lower === 'x-api-key' ||
lower === 'api-key'
) {
continue
}
filtered[key] = value
}
return filtered
}
function hasGeminiApiHost(baseUrl: string | undefined): boolean { function hasGeminiApiHost(baseUrl: string | undefined): boolean {
if (!baseUrl) return false if (!baseUrl) return false
@@ -457,13 +412,11 @@ function normalizeSchemaForOpenAI(
record.properties = normalizedProps
if (strict) {
// Keep only the properties that were originally marked required in the schema.
// Adding every property to required[] (the previous behaviour) caused strict
// OpenAI-compatible providers (Groq, Azure, etc.) to reject tool calls because
// the model correctly omits optional arguments — but the provider treats them
// as missing required fields and returns a 400 / tool_use_failed error.
record.required = existingRequired.filter(k => k in normalizedProps)
// additionalProperties: false is still required by strict-mode providers.
// OpenAI strict mode requires every property to be listed in required[]
const allKeys = Object.keys(normalizedProps)
record.required = Array.from(new Set([...existingRequired, ...allKeys]))
// OpenAI strict mode requires additionalProperties: false on all object
// schemas — override unconditionally to ensure nested objects comply.
record.additionalProperties = false
} else {
// For Gemini: keep only existing required keys that are present in properties
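The two sides of this hunk take opposite strategies for strict mode. A minimal sketch of the practical difference, using a hypothetical tool schema that is not taken from this repository:

```ts
// Hypothetical input schema, for illustration only.
const schema = {
  type: 'object',
  properties: { path: { type: 'string' }, limit: { type: 'number' } },
  required: ['path'],
}

// One side: every property becomes required, so a strict provider expects the
// model to emit "limit" even though the tool treats it as optional.
const allRequired = Array.from(
  new Set([...schema.required, ...Object.keys(schema.properties)]),
) // ['path', 'limit']

// Other side: only originally-required keys survive, so providers that validate
// strictly (Groq, Azure) accept tool calls that omit "limit".
const originalRequired = schema.required.filter(k => k in schema.properties) // ['path']
```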
@@ -569,14 +522,11 @@ function convertChunkUsage(
): Partial<AnthropicUsage> | undefined {
if (!usage) return undefined
const cached = usage.prompt_tokens_details?.cached_tokens ?? 0
return {
// Subtract cached tokens: OpenAI includes them in prompt_tokens,
// but Anthropic convention treats input_tokens as non-cached only.
input_tokens: (usage.prompt_tokens ?? 0) - cached,
input_tokens: usage.prompt_tokens ?? 0,
output_tokens: usage.completion_tokens ?? 0,
cache_creation_input_tokens: 0,
cache_read_input_tokens: cached,
cache_read_input_tokens: usage.prompt_tokens_details?.cached_tokens ?? 0,
}
}
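The only difference between the two versions of convertChunkUsage is where cached prompt tokens are counted. A worked example with hypothetical numbers (1000 prompt tokens, 600 of them served from cache):

```ts
// Hypothetical OpenAI usage payload, for illustration only.
const usage = {
  prompt_tokens: 1000,
  completion_tokens: 50,
  prompt_tokens_details: { cached_tokens: 600 },
}

// Subtracting variant: input_tokens excludes the cached portion.
const subtracted = { input_tokens: 1000 - 600, cache_read_input_tokens: 600 } // 400 + 600

// Non-subtracting variant: the 600 cached tokens are reported twice,
// once inside input_tokens and once in cache_read_input_tokens.
const raw = { input_tokens: 1000, cache_read_input_tokens: 600 }
```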
@@ -627,8 +577,6 @@ async function* openaiStreamToAnthropic(
let hasEmittedContentStart = false
let hasEmittedThinkingStart = false
let hasClosedThinking = false
let activeTextBuffer = ''
let textBufferMode: 'none' | 'pending' | 'strip' = 'none'
let lastStopReason: 'tool_use' | 'max_tokens' | 'end_turn' | null = null
let hasEmittedFinalUsage = false
let hasProcessedFinishReason = false
@@ -659,30 +607,6 @@ async function* openaiStreamToAnthropic(
const decoder = new TextDecoder()
let buffer = ''
const closeActiveContentBlock = async function* () {
if (!hasEmittedContentStart) return
if (textBufferMode !== 'none') {
const sanitized = stripLeakedReasoningPreamble(activeTextBuffer)
if (sanitized) {
yield {
type: 'content_block_delta',
index: contentBlockIndex,
delta: { type: 'text_delta', text: sanitized },
}
}
}
yield {
type: 'content_block_stop',
index: contentBlockIndex,
}
contentBlockIndex++
hasEmittedContentStart = false
activeTextBuffer = ''
textBufferMode = 'none'
}
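For reference, the Anthropic-style stream events that this generator emits around a single text block look roughly like the following. This is a simplified sketch of the event shapes used above, not an exhaustive event list:

```ts
// Sketch of the event sequence for one text content block at index 0.
const events = [
  { type: 'content_block_start', index: 0, content_block: { type: 'text', text: '' } },
  { type: 'content_block_delta', index: 0, delta: { type: 'text_delta', text: 'Hello' } },
  { type: 'content_block_stop', index: 0 },
]
```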
try {
while (true) {
const { done, value } = await reader.read()
@@ -737,7 +661,6 @@ async function* openaiStreamToAnthropic(
contentBlockIndex++
hasClosedThinking = true
}
activeTextBuffer += delta.content
if (!hasEmittedContentStart) {
yield {
type: 'content_block_start',
@@ -746,35 +669,6 @@ async function* openaiStreamToAnthropic(
}
hasEmittedContentStart = true
}
if (
textBufferMode === 'strip' ||
looksLikeLeakedReasoningPrefix(activeTextBuffer)
) {
textBufferMode = 'strip'
continue
}
if (textBufferMode === 'pending') {
if (shouldBufferPotentialReasoningPrefix(activeTextBuffer)) {
continue
}
yield {
type: 'content_block_delta',
index: contentBlockIndex,
delta: {
type: 'text_delta',
text: activeTextBuffer,
},
}
textBufferMode = 'none'
continue
}
if (shouldBufferPotentialReasoningPrefix(activeTextBuffer)) {
textBufferMode = 'pending'
continue
}
yield { yield {
type: 'content_block_delta', type: 'content_block_delta',
index: contentBlockIndex, index: contentBlockIndex,
@@ -793,7 +687,12 @@ async function* openaiStreamToAnthropic(
hasClosedThinking = true hasClosedThinking = true
} }
if (hasEmittedContentStart) { if (hasEmittedContentStart) {
yield* closeActiveContentBlock() yield {
type: 'content_block_stop',
index: contentBlockIndex,
}
contentBlockIndex++
hasEmittedContentStart = false
} }
const toolBlockIndex = contentBlockIndex const toolBlockIndex = contentBlockIndex
@@ -876,7 +775,10 @@ async function* openaiStreamToAnthropic(
} }
// Close any open content blocks // Close any open content blocks
if (hasEmittedContentStart) { if (hasEmittedContentStart) {
yield* closeActiveContentBlock() yield {
type: 'content_block_stop',
index: contentBlockIndex,
}
} }
// Close active tool calls // Close active tool calls
for (const [, tc] of activeToolCalls) { for (const [, tc] of activeToolCalls) {
@@ -1023,7 +925,7 @@ class OpenAIShimMessages {
private providerOverride?: { model: string; baseURL: string; apiKey: string }
constructor(defaultHeaders: Record<string, string>, reasoningEffort?: 'low' | 'medium' | 'high' | 'xhigh', providerOverride?: { model: string; baseURL: string; apiKey: string }) {
this.defaultHeaders = filterAnthropicHeaders(defaultHeaders)
this.defaultHeaders = defaultHeaders
this.reasoningEffort = reasoningEffort
this.providerOverride = providerOverride
}
@@ -1042,9 +944,8 @@ class OpenAIShimMessages {
httpResponse = response
if (params.stream) {
const isResponsesStream = response.url?.includes('/responses')
return new OpenAIShimStream(
(request.transport === 'codex_responses' || isResponsesStream)
request.transport === 'codex_responses'
? codexStreamToAnthropic(response, request.resolvedModel)
: openaiStreamToAnthropic(response, request.resolvedModel),
)
@@ -1058,38 +959,8 @@
)
}
const isResponsesNonStream = response.url?.includes('/responses')
if (isResponsesNonStream || (request.transport === 'chat_completions' && isGithubModelsMode())) {
const data = await response.json()
return self._convertNonStreamingResponse(data, request.resolvedModel)
const contentType = response.headers.get('content-type') ?? ''
if (contentType.includes('application/json')) {
const parsed = await response.json() as Record<string, unknown>
if (
parsed &&
typeof parsed === 'object' &&
('output' in parsed || 'incomplete_details' in parsed)
) {
return convertCodexResponseToAnthropicMessage(
parsed,
request.resolvedModel,
)
}
return self._convertNonStreamingResponse(parsed, request.resolvedModel)
}
}
const contentType = response.headers.get('content-type') ?? ''
if (contentType.includes('application/json')) {
const data = await response.json()
return self._convertNonStreamingResponse(data, request.resolvedModel)
}
const textBody = await response.text().catch(() => '')
throw APIError.generate(
response.status,
undefined,
`OpenAI API error ${response.status}: unexpected response: ${textBody.slice(0, 500)}`,
response.headers as unknown as Headers,
)
})() })()
; (promise as unknown as Record<string, unknown>).withResponse = ; (promise as unknown as Record<string, unknown>).withResponse =
@@ -1111,36 +982,7 @@ class OpenAIShimMessages {
params: ShimCreateParams, params: ShimCreateParams,
options?: { signal?: AbortSignal; headers?: Record<string, string> }, options?: { signal?: AbortSignal; headers?: Record<string, string> },
): Promise<Response> { ): Promise<Response> {
const githubEndpointType = getGithubEndpointType(request.baseUrl) if (request.transport === 'codex_responses') {
const isGithubMode = isGithubModelsMode()
const isGithubWithCodexTransport = isGithubMode && request.transport === 'codex_responses'
const isGithubCopilotEndpoint = isGithubMode && githubEndpointType === 'copilot'
if (isGithubWithCodexTransport) {
const apiKey = this.providerOverride?.apiKey ?? process.env.OPENAI_API_KEY ?? ''
if (!apiKey) {
throw new Error(
'GitHub Copilot auth is required. Run /onboard-github to sign in.',
)
}
return performCodexRequest({
request,
credentials: {
apiKey,
source: 'env',
},
params,
defaultHeaders: {
...this.defaultHeaders,
...filterAnthropicHeaders(options?.headers),
...COPILOT_HEADERS,
},
signal: options?.signal,
})
}
if (request.transport === 'codex_responses' && !isGithubMode) {
const credentials = resolveCodexApiCredentials() const credentials = resolveCodexApiCredentials()
if (!credentials.apiKey) { if (!credentials.apiKey) {
const authHint = credentials.authPath const authHint = credentials.authPath
@@ -1165,7 +1007,7 @@ class OpenAIShimMessages {
params, params,
defaultHeaders: { defaultHeaders: {
...this.defaultHeaders, ...this.defaultHeaders,
...filterAnthropicHeaders(options?.headers), ...(options?.headers ?? {}),
}, },
signal: options?.signal, signal: options?.signal,
}) })
@@ -1192,7 +1034,6 @@ class OpenAIShimMessages {
model: request.resolvedModel,
messages: openaiMessages,
stream: params.stream ?? false,
store: false,
}
// Convert max_tokens to max_completion_tokens for OpenAI API compatibility.
// Azure OpenAI requires max_completion_tokens and does not accept max_tokens.
@@ -1215,22 +1056,11 @@
}
const isGithub = isGithubModelsMode()
const isMistral = isMistralMode()
const githubEndpointType = getGithubEndpointType(request.baseUrl)
const isGithubCopilot = isGithub && githubEndpointType === 'copilot'
const isGithubModels = isGithub && (githubEndpointType === 'models' || githubEndpointType === 'custom')
if ((isGithub || isMistral) && body.max_completion_tokens !== undefined) {
if (isGithub && body.max_completion_tokens !== undefined) {
body.max_tokens = body.max_completion_tokens
delete body.max_completion_tokens
}
// mistral also doesn't recognize body.store
if (isMistral) {
delete body.store
}
if (params.temperature !== undefined) body.temperature = params.temperature
if (params.top_p !== undefined) body.top_p = params.top_p
@@ -1265,11 +1095,12 @@ class OpenAIShimMessages {
const headers: Record<string, string> = {
'Content-Type': 'application/json',
...this.defaultHeaders,
...filterAnthropicHeaders(options?.headers),
...(options?.headers ?? {}),
}
const isGemini = isEnvTruthy(process.env.CLAUDE_CODE_USE_GEMINI)
const isGemini = isGeminiMode()
const apiKey = this.providerOverride?.apiKey ?? process.env.OPENAI_API_KEY ?? ''
const apiKey =
this.providerOverride?.apiKey ?? process.env.OPENAI_API_KEY ?? ''
// Detect Azure endpoints by hostname (not raw URL) to prevent bypass via
// path segments like https://evil.com/cognitiveservices.azure.com/
let isAzure = false
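The comment above describes hostname-based Azure detection. A minimal sketch of that idea; the exact hostname suffixes are an assumption, since the actual check falls outside this hunk:

```ts
// Parse the URL and inspect the hostname, not the raw string, so a path like
// https://evil.com/cognitiveservices.azure.com/ cannot spoof the check.
function looksLikeAzureHost(baseUrl: string): boolean {
  try {
    const host = new URL(baseUrl).hostname.toLowerCase()
    // Suffixes below are illustrative, not taken from this diff.
    return host.endsWith('.openai.azure.com') || host.endsWith('.cognitiveservices.azure.com')
  } catch {
    return false
  }
}
```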
@@ -1290,17 +1121,15 @@ class OpenAIShimMessages {
const geminiCredential = await resolveGeminiCredential(process.env)
if (geminiCredential.kind !== 'none') {
headers.Authorization = `Bearer ${geminiCredential.credential}`
if (geminiCredential.kind !== 'api-key' && 'projectId' in geminiCredential && geminiCredential.projectId) {
if (geminiCredential.projectId) {
headers['x-goog-user-project'] = geminiCredential.projectId
}
}
}
if (isGithubCopilot) {
Object.assign(headers, COPILOT_HEADERS)
} else if (isGithubModels) {
headers['Accept'] = 'application/vnd.github+json'
headers['X-GitHub-Api-Version'] = '2022-11-28'
if (isGithub) {
headers.Accept = 'application/vnd.github.v3+json'
headers['X-GitHub-Api-Version'] = GITHUB_API_VERSION
}
// Build the chat completions URL // Build the chat completions URL
@@ -1352,83 +1181,9 @@ class OpenAIShimMessages {
await sleepMs(delaySec * 1000) await sleepMs(delaySec * 1000)
continue continue
} }
// Read body exactly once here — Response body is a stream that can only
// be consumed a single time.
const errorBody = await response.text().catch(() => 'unknown error') const errorBody = await response.text().catch(() => 'unknown error')
const rateHint = const rateHint =
isGithub && response.status === 429 ? formatRetryAfterHint(response) : '' isGithub && response.status === 429 ? formatRetryAfterHint(response) : ''
// If GitHub Copilot returns error about /chat/completions,
// try the /responses endpoint (needed for GPT-5+ models)
if (isGithub && response.status === 400) {
if (errorBody.includes('/chat/completions') || errorBody.includes('not accessible')) {
const responsesUrl = `${request.baseUrl}/responses`
const responsesBody: Record<string, unknown> = {
model: request.resolvedModel,
input: convertAnthropicMessagesToResponsesInput(
params.messages as Array<{
role?: string
message?: { role?: string; content?: unknown }
content?: unknown
}>,
),
stream: params.stream ?? false,
store: false,
}
if (!Array.isArray(responsesBody.input) || responsesBody.input.length === 0) {
responsesBody.input = [
{
type: 'message',
role: 'user',
content: [{ type: 'input_text', text: '' }],
},
]
}
const systemText = convertSystemPrompt(params.system)
if (systemText) {
responsesBody.instructions = systemText
}
if (body.max_tokens !== undefined) {
responsesBody.max_output_tokens = body.max_tokens
}
if (params.tools && params.tools.length > 0) {
const convertedTools = convertToolsToResponsesTools(
params.tools as Array<{
name?: string
description?: string
input_schema?: Record<string, unknown>
}>,
)
if (convertedTools.length > 0) {
responsesBody.tools = convertedTools
}
}
const responsesResponse = await fetch(responsesUrl, {
method: 'POST',
headers,
body: JSON.stringify(responsesBody),
signal: options?.signal,
})
if (responsesResponse.ok) {
return responsesResponse
}
const responsesErrorBody = await responsesResponse.text().catch(() => 'unknown error')
let responsesErrorResponse: object | undefined
try { responsesErrorResponse = JSON.parse(responsesErrorBody) } catch { /* raw text */ }
throw APIError.generate(
responsesResponse.status,
responsesErrorResponse,
`OpenAI API error ${responsesResponse.status}: ${responsesErrorBody}`,
responsesResponse.headers,
)
}
}
let errorResponse: object | undefined let errorResponse: object | undefined
try { errorResponse = JSON.parse(errorBody) } catch { /* raw text */ } try { errorResponse = JSON.parse(errorBody) } catch { /* raw text */ }
throw APIError.generate( throw APIError.generate(
@@ -1478,9 +1233,9 @@ class OpenAIShimMessages {
const choice = data.choices?.[0]
const content: Array<Record<string, unknown>> = []
// Some reasoning models (e.g. GLM-5) put their chain-of-thought in
// reasoning_content while content stays null. Preserve it as a thinking
// block, but do not surface it as visible assistant text.
// Some reasoning models (e.g. GLM-5) put their reply in reasoning_content
// while content stays null — emit reasoning as a thinking block, then
// fall back to it for visible text if content is empty.
const reasoningText = choice?.message?.reasoning_content
if (typeof reasoningText === 'string' && reasoningText) {
content.push({ type: 'thinking', thinking: reasoningText })
@@ -1488,12 +1243,9 @@
const rawContent =
choice?.message?.content !== '' && choice?.message?.content != null
? choice?.message?.content
: null
: choice?.message?.reasoning_content
if (typeof rawContent === 'string' && rawContent) {
content.push({
type: 'text',
text: stripLeakedReasoningPreamble(rawContent),
})
content.push({ type: 'text', text: rawContent })
} else if (Array.isArray(rawContent) && rawContent.length > 0) {
const parts: string[] = []
for (const part of rawContent) {
@@ -1508,10 +1260,7 @@
}
const joined = parts.join('\n')
if (joined) {
content.push({
type: 'text',
text: stripLeakedReasoningPreamble(joined),
})
content.push({ type: 'text', text: joined })
}
}
@@ -1601,15 +1350,8 @@ export function createOpenAIShimClient(options: {
if (process.env.GEMINI_MODEL && !process.env.OPENAI_MODEL) { if (process.env.GEMINI_MODEL && !process.env.OPENAI_MODEL) {
process.env.OPENAI_MODEL = process.env.GEMINI_MODEL process.env.OPENAI_MODEL = process.env.GEMINI_MODEL
} }
} else if (isEnvTruthy(process.env.CLAUDE_CODE_USE_MISTRAL)) {
process.env.OPENAI_BASE_URL =
process.env.MISTRAL_BASE_URL ?? 'https://api.mistral.ai/v1'
process.env.OPENAI_API_KEY = process.env.MISTRAL_API_KEY
if (process.env.MISTRAL_MODEL) {
process.env.OPENAI_MODEL = process.env.MISTRAL_MODEL
}
} else if (isEnvTruthy(process.env.CLAUDE_CODE_USE_GITHUB)) {
process.env.OPENAI_BASE_URL ??= GITHUB_COPILOT_BASE
process.env.OPENAI_BASE_URL ??= GITHUB_MODELS_DEFAULT_BASE
process.env.OPENAI_API_KEY ??=
process.env.GITHUB_TOKEN ?? process.env.GH_TOKEN ?? ''
}

View File

@@ -23,9 +23,6 @@ test.each([
['github:gpt-4o', 'gpt-4o'],
['gpt-4o', 'gpt-4o'],
['github:copilot?reasoning=high', DEFAULT_GITHUB_MODELS_API_MODEL],
// normalizeGithubModelsApiModel preserves provider prefix for models.github.ai compatibility
['github:openai/gpt-4.1', 'openai/gpt-4.1'],
['openai/gpt-4.1', 'openai/gpt-4.1'],
] as const)('normalizeGithubModelsApiModel(%s) -> %s', (input, expected) => {
expect(normalizeGithubModelsApiModel(input)).toBe(expected)
})
@@ -37,20 +34,6 @@ test('resolveProviderRequest applies GitHub normalization when CLAUDE_CODE_USE_G
expect(r.transport).toBe('chat_completions') expect(r.transport).toBe('chat_completions')
}) })
test('resolveProviderRequest routes GitHub GPT-5 codex models to responses transport', () => {
process.env.CLAUDE_CODE_USE_GITHUB = '1'
const r = resolveProviderRequest({ model: 'gpt-5.3-codex' })
expect(r.resolvedModel).toBe('gpt-5.3-codex')
expect(r.transport).toBe('codex_responses')
})
test('resolveProviderRequest keeps gpt-5-mini on chat_completions for GitHub', () => {
process.env.CLAUDE_CODE_USE_GITHUB = '1'
const r = resolveProviderRequest({ model: 'gpt-5-mini' })
expect(r.resolvedModel).toBe('gpt-5-mini')
expect(r.transport).toBe('chat_completions')
})
test('resolveProviderRequest leaves model unchanged without GitHub flag', () => { test('resolveProviderRequest leaves model unchanged without GitHub flag', () => {
delete process.env.CLAUDE_CODE_USE_GITHUB delete process.env.CLAUDE_CODE_USE_GITHUB
const r = resolveProviderRequest({ model: 'github:gpt-4o' }) const r = resolveProviderRequest({ model: 'github:gpt-4o' })

View File

@@ -7,9 +7,8 @@ import { isEnvTruthy } from '../../utils/envUtils.js'
export const DEFAULT_OPENAI_BASE_URL = 'https://api.openai.com/v1' export const DEFAULT_OPENAI_BASE_URL = 'https://api.openai.com/v1'
export const DEFAULT_CODEX_BASE_URL = 'https://chatgpt.com/backend-api/codex' export const DEFAULT_CODEX_BASE_URL = 'https://chatgpt.com/backend-api/codex'
export const DEFAULT_MISTRAL_BASE_URL = 'https://api.mistral.ai/v1'
/** Default GitHub Copilot API model when user selects copilot / github:copilot */
export const DEFAULT_GITHUB_MODELS_API_MODEL = 'gpt-4o'
/** Default GitHub Models API model when user selects copilot / github:copilot */
export const DEFAULT_GITHUB_MODELS_API_MODEL = 'openai/gpt-4.1'
const CODEX_ALIAS_MODELS: Record< const CODEX_ALIAS_MODELS: Record<
string, string,
@@ -228,21 +227,6 @@ export function shouldUseCodexTransport(
return isCodexBaseUrl(explicitBaseUrl) || (!explicitBaseUrl && isCodexAlias(model)) return isCodexBaseUrl(explicitBaseUrl) || (!explicitBaseUrl && isCodexAlias(model))
} }
function shouldUseGithubResponsesApi(model: string): boolean {
const normalized = model.trim().toLowerCase()
// Codex-branded models require /responses.
if (normalized.includes('codex')) return true
// GPT-5+ models use /responses, except gpt-5-mini.
const match = /^gpt-(\d+)/.exec(normalized)
if (!match) return false
const major = Number(match[1])
if (major < 5) return false
if (normalized.startsWith('gpt-5-mini')) return false
return true
}
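Concretely, the predicate removed here routes models as follows; each result follows directly from the regex and checks above:

```ts
shouldUseGithubResponsesApi('gpt-5.3-codex') // true: codex-branded models require /responses
shouldUseGithubResponsesApi('gpt-5')         // true: major version >= 5
shouldUseGithubResponsesApi('gpt-5-mini')    // false: explicit exception
shouldUseGithubResponsesApi('gpt-4o')        // false: major version < 5
```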
export function isLocalProviderUrl(baseUrl: string | undefined): boolean { export function isLocalProviderUrl(baseUrl: string | undefined): boolean {
if (!baseUrl) return false if (!baseUrl) return false
try { try {
@@ -296,61 +280,19 @@ export function isCodexBaseUrl(baseUrl: string | undefined): boolean {
} }
/**
 * Normalize user model string for GitHub Copilot API inference.
 * Mirrors how Copilot resolves model IDs internally.
 */
/**
 * Normalize user model string for GitHub Models inference (models.github.ai).
 * Mirrors the runtime `github._normalize_model_id` behavior.
 */
export function normalizeGithubCopilotModel(requestedModel: string): string {
const noQuery = requestedModel.split('?', 1)[0] ?? requestedModel
const segment =
noQuery.includes(':') ? noQuery.split(':', 2)[1]!.trim() : noQuery.trim()
if (!segment || segment.toLowerCase() === 'copilot') {
return DEFAULT_GITHUB_MODELS_API_MODEL
}
// Strip provider prefix if present (e.g., "openai/gpt-4o" -> "gpt-4o")
const slashIndex = segment.indexOf('/')
if (slashIndex !== -1) {
return segment.slice(slashIndex + 1)
}
return segment
}
/**
* Normalize user model string for GitHub Models API inference.
* Only normalizes the default alias, preserves provider-qualified models.
*/ */
export function normalizeGithubModelsApiModel(requestedModel: string): string {
const noQuery = requestedModel.split('?', 1)[0] ?? requestedModel
const segment =
noQuery.includes(':') ? noQuery.split(':', 2)[1]!.trim() : noQuery.trim()
// Only normalize the default alias for GitHub Models
if (!segment || segment.toLowerCase() === 'copilot') {
return DEFAULT_GITHUB_MODELS_API_MODEL
}
// Preserve provider prefix for GitHub Models (e.g., "openai/gpt-4.1" stays as-is)
return segment
} }
export const GITHUB_COPILOT_BASE_URL = 'https://api.githubcopilot.com'
export const GITHUB_MODELS_BASE_URL = 'https://models.github.ai/inference'
export function getGithubEndpointType(
baseUrl: string | undefined,
): 'copilot' | 'models' | 'custom' {
if (!baseUrl) return 'copilot'
try {
const hostname = new URL(baseUrl).hostname.toLowerCase()
if (hostname === 'api.githubcopilot.com') {
return 'copilot'
}
if (hostname === 'models.github.ai' || hostname.endsWith('.github.ai')) {
return 'models'
}
return 'custom'
} catch {
return 'copilot'
}
}
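For reference, the removed getGithubEndpointType helper classifies base URLs as follows; the results are derived from the hostname checks above:

```ts
getGithubEndpointType(undefined)                             // 'copilot' (default)
getGithubEndpointType('https://api.githubcopilot.com')       // 'copilot'
getGithubEndpointType('https://models.github.ai/inference')  // 'models'
getGithubEndpointType('https://my-proxy.example.com/v1')     // 'custom' (illustrative URL)
```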
export function resolveProviderRequest(options?: { export function resolveProviderRequest(options?: {
model?: string model?: string
baseUrl?: string baseUrl?: string
@@ -358,64 +300,41 @@ export function resolveProviderRequest(options?: {
reasoningEffortOverride?: ReasoningEffort
}): ResolvedProviderRequest {
const isGithubMode = isEnvTruthy(process.env.CLAUDE_CODE_USE_GITHUB)
const isMistralMode = isEnvTruthy(process.env.CLAUDE_CODE_USE_MISTRAL)
const requestedModel =
options?.model?.trim() ||
(isMistralMode
? process.env.MISTRAL_MODEL?.trim()
: process.env.OPENAI_MODEL?.trim()) ||
process.env.OPENAI_MODEL?.trim() ||
options?.fallbackModel?.trim() ||
(isGithubMode ? 'github:copilot' : 'gpt-4o')
const descriptor = parseModelDescriptor(requestedModel)
const rawBaseUrl =
asEnvUrl(options?.baseUrl) ??
asEnvUrl(
isMistralMode ? (process.env.MISTRAL_BASE_URL ?? DEFAULT_MISTRAL_BASE_URL) : process.env.OPENAI_BASE_URL,
) ??
asEnvUrl(process.env.OPENAI_BASE_URL) ??
asEnvUrl(process.env.OPENAI_API_BASE)
const githubEndpointType = isGithubMode
? getGithubEndpointType(rawBaseUrl)
: 'custom'
const isGithubCopilot = isGithubMode && githubEndpointType === 'copilot'
const isGithubModels = isGithubMode && githubEndpointType === 'models'
const isGithubCustom = isGithubMode && githubEndpointType === 'custom'
const githubResolvedModel = isGithubMode
? normalizeGithubModelsApiModel(requestedModel)
: requestedModel
const transport: ProviderTransport =
shouldUseCodexTransport(requestedModel, rawBaseUrl) ||
(isGithubCopilot && shouldUseGithubResponsesApi(githubResolvedModel))
shouldUseCodexTransport(requestedModel, rawBaseUrl)
? 'codex_responses'
: 'chat_completions'
// For GitHub Copilot API, normalize to real model ID (e.g., "github:copilot" -> "gpt-4o")
// For GitHub Models/custom endpoints:
// - Normalize default alias (github:copilot -> gpt-4o)
// - Preserve provider-qualified models (openai/gpt-4.1 stays as-is)
const resolvedModel = isGithubCopilot
? normalizeGithubCopilotModel(descriptor.baseModel)
: (isGithubModels || isGithubCustom
? normalizeGithubModelsApiModel(descriptor.baseModel)
: descriptor.baseModel)
const resolvedModel =
transport === 'chat_completions' &&
isEnvTruthy(process.env.CLAUDE_CODE_USE_GITHUB)
? normalizeGithubModelsApiModel(requestedModel)
: descriptor.baseModel
const reasoning = options?.reasoningEffortOverride
? { effort: options.reasoningEffortOverride }
: descriptor.reasoning
return {
transport,
requestedModel,
resolvedModel,
baseUrl:
(rawBaseUrl ??
(isGithubCopilot && transport === 'codex_responses'
? GITHUB_COPILOT_BASE_URL
: (isGithubMode
? GITHUB_COPILOT_BASE_URL
: DEFAULT_OPENAI_BASE_URL))
(transport === 'codex_responses'
? DEFAULT_CODEX_BASE_URL
: DEFAULT_OPENAI_BASE_URL)
).replace(/\/+$/, ''),
reasoning,
}
@@ -424,7 +343,6 @@ export function resolveProviderRequest(options?: {
export function getAdditionalModelOptionsCacheScope(): string | null { export function getAdditionalModelOptionsCacheScope(): string | null {
if (!isEnvTruthy(process.env.CLAUDE_CODE_USE_OPENAI)) { if (!isEnvTruthy(process.env.CLAUDE_CODE_USE_OPENAI)) {
if (!isEnvTruthy(process.env.CLAUDE_CODE_USE_GEMINI) && if (!isEnvTruthy(process.env.CLAUDE_CODE_USE_GEMINI) &&
!isEnvTruthy(process.env.CLAUDE_CODE_USE_MISTRAL) &&
!isEnvTruthy(process.env.CLAUDE_CODE_USE_GITHUB) && !isEnvTruthy(process.env.CLAUDE_CODE_USE_GITHUB) &&
!isEnvTruthy(process.env.CLAUDE_CODE_USE_BEDROCK) && !isEnvTruthy(process.env.CLAUDE_CODE_USE_BEDROCK) &&
!isEnvTruthy(process.env.CLAUDE_CODE_USE_VERTEX) && !isEnvTruthy(process.env.CLAUDE_CODE_USE_VERTEX) &&

View File

@@ -1,46 +0,0 @@
import { describe, expect, test } from 'bun:test'
import {
looksLikeLeakedReasoningPrefix,
shouldBufferPotentialReasoningPrefix,
stripLeakedReasoningPreamble,
} from './reasoningLeakSanitizer.ts'
describe('reasoning leak sanitizer', () => {
test('strips explicit internal reasoning preambles', () => {
const text =
'The user just said "hey" - a simple greeting. I should respond briefly and friendly.\n\nHey! How can I help you today?'
expect(looksLikeLeakedReasoningPrefix(text)).toBe(true)
expect(stripLeakedReasoningPreamble(text)).toBe(
'Hey! How can I help you today?',
)
})
test('does not strip normal user-facing advice that mentions "the user should"', () => {
const text =
'The user should reset their password immediately.\n\nHere are the steps...'
expect(looksLikeLeakedReasoningPrefix(text)).toBe(false)
expect(shouldBufferPotentialReasoningPrefix(text)).toBe(false)
expect(stripLeakedReasoningPreamble(text)).toBe(text)
})
test('does not strip legitimate first-person advice about responding to an incident', () => {
const text =
'I need to respond to this security incident immediately. The system is compromised.\n\nHere are the remediation steps...'
expect(looksLikeLeakedReasoningPrefix(text)).toBe(false)
expect(shouldBufferPotentialReasoningPrefix(text)).toBe(false)
expect(stripLeakedReasoningPreamble(text)).toBe(text)
})
test('does not strip legitimate first-person advice about answering a support ticket', () => {
const text =
'I need to answer the support ticket before end of day. The customer is waiting.\n\nHere is the response I drafted...'
expect(looksLikeLeakedReasoningPrefix(text)).toBe(false)
expect(shouldBufferPotentialReasoningPrefix(text)).toBe(false)
expect(stripLeakedReasoningPreamble(text)).toBe(text)
})
})

View File

@@ -1,54 +0,0 @@
const EXPLICIT_REASONING_START_RE =
/^\s*(i should\b|i need to\b|let me think\b|the task\b|the request\b)/i
const EXPLICIT_REASONING_META_RE =
/\b(user|request|question|prompt|message|task|greeting|small talk|briefly|friendly|concise)\b/i
const USER_META_START_RE =
/^\s*the user\s+(just\s+)?(said|asked|is asking|wants|wanted|mentioned|seems|appears)\b/i
const USER_REASONING_RE =
/^\s*the user\s+(just\s+)?(said|asked|is asking|wants|wanted|mentioned|seems|appears)\b[\s\S]*\b(i should|i need to|let me think|respond|reply|answer|greeting|small talk|briefly|friendly|concise)\b/i
export function shouldBufferPotentialReasoningPrefix(text: string): boolean {
const normalized = text.trim()
if (!normalized) return false
if (looksLikeLeakedReasoningPrefix(normalized)) {
return true
}
const hasParagraphBoundary = /\n\s*\n/.test(normalized)
if (hasParagraphBoundary) {
return false
}
return (
EXPLICIT_REASONING_START_RE.test(normalized) ||
USER_META_START_RE.test(normalized)
)
}
export function looksLikeLeakedReasoningPrefix(text: string): boolean {
const normalized = text.trim()
if (!normalized) return false
return (
(EXPLICIT_REASONING_START_RE.test(normalized) &&
EXPLICIT_REASONING_META_RE.test(normalized)) ||
USER_REASONING_RE.test(normalized)
)
}
export function stripLeakedReasoningPreamble(text: string): string {
const normalized = text.replace(/\r\n/g, '\n')
const parts = normalized.split(/\n\s*\n/)
if (parts.length < 2) return text
const first = parts[0]?.trim() ?? ''
if (!looksLikeLeakedReasoningPrefix(first)) {
return text
}
const remainder = parts.slice(1).join('\n\n').trim()
return remainder || text
}
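A rough sketch of how the stream converter in openaiShim used these helpers before the removal. This is simplified from the buffering logic deleted above; the two function names here are illustrative and not part of the module:

```ts
import {
  looksLikeLeakedReasoningPrefix,
  shouldBufferPotentialReasoningPrefix,
  stripLeakedReasoningPreamble,
} from './reasoningLeakSanitizer.js'

// Accumulate incoming text until we can tell whether it starts with leaked reasoning.
let buffered = ''

function onTextDelta(chunk: string): string | null {
  buffered += chunk
  if (looksLikeLeakedReasoningPrefix(buffered)) return null       // drop silently
  if (shouldBufferPotentialReasoningPrefix(buffered)) return null // wait for more text
  const out = buffered
  buffered = ''
  return out // safe to surface to the user
}

// At end of stream, emit whatever survives stripping the leaked preamble.
function onStreamEnd(): string {
  return stripLeakedReasoningPreamble(buffered)
}
```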

View File

@@ -1,4 +1,4 @@
import { afterEach, beforeEach, describe, expect, mock, test } from 'bun:test'
import { afterEach, describe, expect, mock, test } from 'bun:test'
import { APIError } from '@anthropic-ai/sdk' import { APIError } from '@anthropic-ai/sdk'
// Helper to build a mock APIError with specific headers // Helper to build a mock APIError with specific headers
@@ -15,27 +15,15 @@ function makeError(headers: Record<string, string>): APIError {
// Save/restore env vars between tests // Save/restore env vars between tests
const originalEnv = { ...process.env } const originalEnv = { ...process.env }
const envKeys = [
'CLAUDE_CODE_USE_OPENAI',
'CLAUDE_CODE_USE_GEMINI',
'CLAUDE_CODE_USE_GITHUB',
'CLAUDE_CODE_USE_BEDROCK',
'CLAUDE_CODE_USE_VERTEX',
'CLAUDE_CODE_USE_FOUNDRY',
'OPENAI_MODEL',
'OPENAI_BASE_URL',
'OPENAI_API_BASE',
] as const
beforeEach(() => {
for (const key of envKeys) {
delete process.env[key]
}
})
afterEach(() => {
for (const key of envKeys) {
for (const key of [
'CLAUDE_CODE_USE_OPENAI',
'CLAUDE_CODE_USE_GEMINI',
'CLAUDE_CODE_USE_GITHUB',
'CLAUDE_CODE_USE_BEDROCK',
'CLAUDE_CODE_USE_VERTEX',
'CLAUDE_CODE_USE_FOUNDRY',
]) {
if (originalEnv[key] === undefined) delete process.env[key]
else process.env[key] = originalEnv[key]
}

View File

@@ -1,106 +0,0 @@
import { describe, expect, test } from 'bun:test'
import { AutoFixConfigSchema, getAutoFixConfig, type AutoFixConfig } from './autoFixConfig.js'
describe('AutoFixConfigSchema', () => {
test('parses valid full config', () => {
const input = {
enabled: true,
lint: 'eslint . --fix',
test: 'bun test',
maxRetries: 3,
timeout: 30000,
}
const result = AutoFixConfigSchema.safeParse(input)
expect(result.success).toBe(true)
if (result.success) {
expect(result.data.enabled).toBe(true)
expect(result.data.lint).toBe('eslint . --fix')
expect(result.data.test).toBe('bun test')
expect(result.data.maxRetries).toBe(3)
expect(result.data.timeout).toBe(30000)
}
})
test('parses minimal config with defaults', () => {
const input = { enabled: true, lint: 'eslint .' }
const result = AutoFixConfigSchema.safeParse(input)
expect(result.success).toBe(true)
if (result.success) {
expect(result.data.maxRetries).toBe(3)
expect(result.data.timeout).toBe(30000)
expect(result.data.test).toBeUndefined()
}
})
test('rejects config with enabled but no lint or test', () => {
const input = { enabled: true }
const result = AutoFixConfigSchema.safeParse(input)
expect(result.success).toBe(false)
})
test('accepts disabled config without commands', () => {
const input = { enabled: false }
const result = AutoFixConfigSchema.safeParse(input)
expect(result.success).toBe(true)
})
test('rejects negative maxRetries', () => {
const input = { enabled: true, lint: 'eslint .', maxRetries: -1 }
const result = AutoFixConfigSchema.safeParse(input)
expect(result.success).toBe(false)
})
test('rejects maxRetries above 10', () => {
const input = { enabled: true, lint: 'eslint .', maxRetries: 11 }
const result = AutoFixConfigSchema.safeParse(input)
expect(result.success).toBe(false)
})
})
describe('getAutoFixConfig', () => {
test('returns null when settings have no autoFix', () => {
const result = getAutoFixConfig(undefined)
expect(result).toBeNull()
})
test('returns null when autoFix is disabled', () => {
const result = getAutoFixConfig({ enabled: false })
expect(result).toBeNull()
})
test('returns parsed config when valid and enabled', () => {
const result = getAutoFixConfig({ enabled: true, lint: 'eslint .' })
expect(result).not.toBeNull()
expect(result!.enabled).toBe(true)
expect(result!.lint).toBe('eslint .')
})
})
describe('SettingsSchema autoFix integration', () => {
test('SettingsSchema accepts autoFix field', async () => {
const { SettingsSchema } = await import('../../utils/settings/types.js')
const settings = {
autoFix: {
enabled: true,
lint: 'eslint .',
test: 'bun test',
maxRetries: 3,
timeout: 30000,
},
}
const result = SettingsSchema().safeParse(settings)
expect(result.success).toBe(true)
})
test('SettingsSchema rejects invalid autoFix', async () => {
const { SettingsSchema } = await import('../../utils/settings/types.js')
const settings = {
autoFix: {
enabled: true,
// missing lint and test - should fail refine
},
}
const result = SettingsSchema().safeParse(settings)
expect(result.success).toBe(false)
})
})

View File

@@ -1,52 +0,0 @@
import { z } from 'zod/v4'
export const AutoFixConfigSchema = z
.object({
enabled: z.boolean().describe('Whether auto-fix is enabled'),
lint: z
.string()
.optional()
.describe('Lint command to run after file edits (e.g. "eslint . --fix")'),
test: z
.string()
.optional()
.describe('Test command to run after file edits (e.g. "bun test")'),
maxRetries: z
.number()
.int()
.min(0)
.max(10)
.default(3)
.describe('Maximum number of auto-fix retry attempts (default: 3)'),
timeout: z
.number()
.int()
.min(1000)
.max(300000)
.default(30000)
.describe('Timeout in ms for each lint/test command (default: 30000)'),
})
.refine(
data => !data.enabled || data.lint !== undefined || data.test !== undefined,
{
message: 'At least one of "lint" or "test" must be set when enabled',
},
)
export type AutoFixConfig = z.infer<typeof AutoFixConfigSchema>
export function getAutoFixConfig(
rawConfig: unknown,
): AutoFixConfig | null {
if (!rawConfig || typeof rawConfig !== 'object') {
return null
}
const parsed = AutoFixConfigSchema.safeParse(rawConfig)
if (!parsed.success) {
return null
}
if (!parsed.data.enabled) {
return null
}
return parsed.data
}
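For context, a configuration object that this schema accepts looks like the following; the values are illustrative, and where the object lives in settings is covered by the SettingsSchema integration test above:

```ts
// Example config accepted by AutoFixConfigSchema (throws on invalid input).
const example = AutoFixConfigSchema.parse({
  enabled: true,
  lint: 'eslint . --fix',
  test: 'bun test',
  maxRetries: 3,  // defaults to 3 when omitted
  timeout: 30000, // defaults to 30000 ms when omitted
})
```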

View File

@@ -1,63 +0,0 @@
import { describe, expect, test } from 'bun:test'
import {
shouldRunAutoFix,
buildAutoFixContext,
} from './autoFixHook.js'
describe('shouldRunAutoFix', () => {
test('returns true for file_edit tool when autoFix enabled', () => {
const config = { enabled: true, lint: 'eslint .', maxRetries: 3, timeout: 30000 }
expect(shouldRunAutoFix('file_edit', config)).toBe(true)
})
test('returns true for file_write tool when autoFix enabled', () => {
const config = { enabled: true, lint: 'eslint .', maxRetries: 3, timeout: 30000 }
expect(shouldRunAutoFix('file_write', config)).toBe(true)
})
test('returns false for bash tool', () => {
const config = { enabled: true, lint: 'eslint .', maxRetries: 3, timeout: 30000 }
expect(shouldRunAutoFix('bash', config)).toBe(false)
})
test('returns false for file_read tool', () => {
const config = { enabled: true, lint: 'eslint .', maxRetries: 3, timeout: 30000 }
expect(shouldRunAutoFix('file_read', config)).toBe(false)
})
test('returns false when config is null', () => {
expect(shouldRunAutoFix('file_edit', null)).toBe(false)
})
})
describe('buildAutoFixContext', () => {
test('formats lint errors as AI-readable context', () => {
const context = buildAutoFixContext({
hasErrors: true,
lintOutput: 'src/foo.ts:10:5 error no-unused-vars',
lintExitCode: 1,
errorSummary: 'Lint errors (exit code 1):\nsrc/foo.ts:10:5 error no-unused-vars',
})
expect(context).toContain('AUTO-FIX')
expect(context).toContain('no-unused-vars')
expect(context).toContain('Please fix')
})
test('returns null when no errors', () => {
const context = buildAutoFixContext({
hasErrors: false,
})
expect(context).toBeNull()
})
test('formats test failures as AI-readable context', () => {
const context = buildAutoFixContext({
hasErrors: true,
testOutput: 'FAIL src/foo.test.ts\n expected true, got false',
testExitCode: 1,
errorSummary: 'Test failures (exit code 1):\nFAIL src/foo.test.ts',
})
expect(context).toContain('AUTO-FIX')
expect(context).toContain('FAIL')
})
})

View File

@@ -1,25 +0,0 @@
import type { AutoFixConfig } from './autoFixConfig.js'
import type { AutoFixResult } from './autoFixRunner.js'
const AUTO_FIX_TOOLS = new Set(['file_edit', 'file_write'])
export function shouldRunAutoFix(
toolName: string,
config: AutoFixConfig | null,
): boolean {
if (!config) return false
return AUTO_FIX_TOOLS.has(toolName)
}
export function buildAutoFixContext(result: AutoFixResult): string | null {
if (!result.hasErrors || !result.errorSummary) return null
return (
`<auto_fix_feedback>\n` +
`AUTO-FIX: The file you just edited has errors. Please fix them:\n\n` +
`${result.errorSummary}\n\n` +
`Please fix these errors in the files you just edited. ` +
`Do not ask the user — just apply the fix.\n` +
`</auto_fix_feedback>`
)
}
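A hedged sketch of how these pieces were intended to be wired into tool execution. The hook signature below is an assumption for illustration; only the three imported functions come from the deleted modules:

```ts
import { getAutoFixConfig } from './autoFixConfig.js'
import { buildAutoFixContext, shouldRunAutoFix } from './autoFixHook.js'
import { runAutoFixCheck } from './autoFixRunner.js'

// Hypothetical post-tool hook: returns feedback text to inject into the
// conversation, or null when nothing needs fixing.
export async function maybeAutoFix(
  toolName: string,
  cwd: string,
  rawAutoFixSettings: unknown,
): Promise<string | null> {
  const config = getAutoFixConfig(rawAutoFixSettings)
  if (!config || !shouldRunAutoFix(toolName, config)) return null
  const result = await runAutoFixCheck({
    lint: config.lint,
    test: config.test,
    timeout: config.timeout,
    cwd,
  })
  return buildAutoFixContext(result) // <auto_fix_feedback> block or null
}
```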

View File

@@ -1,48 +0,0 @@
import { describe, expect, test } from 'bun:test'
import { getAutoFixConfig } from './autoFixConfig.js'
import { shouldRunAutoFix, buildAutoFixContext } from './autoFixHook.js'
import { runAutoFixCheck } from './autoFixRunner.js'
describe('autoFix end-to-end flow', () => {
test('full flow: config → shouldRun → check → context', async () => {
const config = getAutoFixConfig({
enabled: true,
lint: 'echo "error: unused" && exit 1',
maxRetries: 2,
timeout: 5000,
})
expect(config).not.toBeNull()
expect(shouldRunAutoFix('file_edit', config)).toBe(true)
const result = await runAutoFixCheck({
lint: config!.lint,
test: config!.test,
timeout: config!.timeout,
cwd: '/tmp',
})
expect(result.hasErrors).toBe(true)
const context = buildAutoFixContext(result)
expect(context).not.toBeNull()
expect(context).toContain('AUTO-FIX')
expect(context).toContain('unused')
})
test('full flow: no errors = no context', async () => {
const config = getAutoFixConfig({
enabled: true,
lint: 'echo "all clean"',
timeout: 5000,
})
const result = await runAutoFixCheck({
lint: config!.lint,
timeout: config!.timeout,
cwd: '/tmp',
})
expect(result.hasErrors).toBe(false)
const context = buildAutoFixContext(result)
expect(context).toBeNull()
})
})

View File

@@ -1,103 +0,0 @@
import { describe, expect, test } from 'bun:test'
import {
runAutoFixCheck,
type AutoFixResult,
type AutoFixCheckOptions,
} from './autoFixRunner.js'
describe('runAutoFixCheck', () => {
test('returns success when lint command exits 0', async () => {
const result = await runAutoFixCheck({
lint: 'echo "all clean"',
timeout: 5000,
cwd: '/tmp',
})
expect(result.hasErrors).toBe(false)
expect(result.lintOutput).toContain('all clean')
expect(result.testOutput).toBeUndefined()
})
test('returns errors when lint command exits non-zero', async () => {
const result = await runAutoFixCheck({
lint: 'echo "error: unused var" && exit 1',
timeout: 5000,
cwd: '/tmp',
})
expect(result.hasErrors).toBe(true)
expect(result.lintOutput).toContain('unused var')
expect(result.lintExitCode).toBe(1)
})
test('returns errors when test command exits non-zero', async () => {
const result = await runAutoFixCheck({
test: 'echo "FAIL test_foo" && exit 1',
timeout: 5000,
cwd: '/tmp',
})
expect(result.hasErrors).toBe(true)
expect(result.testOutput).toContain('FAIL test_foo')
expect(result.testExitCode).toBe(1)
})
test('runs both lint and test commands', async () => {
const result = await runAutoFixCheck({
lint: 'echo "lint ok"',
test: 'echo "test ok"',
timeout: 5000,
cwd: '/tmp',
})
expect(result.hasErrors).toBe(false)
expect(result.lintOutput).toContain('lint ok')
expect(result.testOutput).toContain('test ok')
})
test('skips test if lint fails', async () => {
const result = await runAutoFixCheck({
lint: 'echo "lint error" && exit 1',
test: 'echo "should not run"',
timeout: 5000,
cwd: '/tmp',
})
expect(result.hasErrors).toBe(true)
expect(result.lintOutput).toContain('lint error')
expect(result.testOutput).toBeUndefined()
})
test('handles timeout gracefully', async () => {
const result = await runAutoFixCheck({
lint: 'sleep 10',
timeout: 100,
cwd: '/tmp',
})
expect(result.hasErrors).toBe(true)
expect(result.timedOut).toBe(true)
})
test('returns success with no commands configured', async () => {
const result = await runAutoFixCheck({
timeout: 5000,
cwd: '/tmp',
})
expect(result.hasErrors).toBe(false)
})
test('formats error summary for AI consumption', async () => {
const result = await runAutoFixCheck({
lint: 'echo "src/foo.ts:10:5 error no-unused-vars" && exit 1',
timeout: 5000,
cwd: '/tmp',
})
expect(result.hasErrors).toBe(true)
const summary = result.errorSummary
expect(summary).toContain('Lint errors')
expect(summary).toContain('no-unused-vars')
})
})

View File

@@ -1,169 +0,0 @@
import { spawn } from 'child_process'
export interface AutoFixCheckOptions {
lint?: string
test?: string
timeout: number
cwd: string
signal?: AbortSignal
}
export interface AutoFixResult {
hasErrors: boolean
lintOutput?: string
lintExitCode?: number
testOutput?: string
testExitCode?: number
timedOut?: boolean
errorSummary?: string
}
async function runCommand(
command: string,
cwd: string,
timeout: number,
signal?: AbortSignal,
): Promise<{ stdout: string; stderr: string; exitCode: number; timedOut: boolean }> {
return new Promise((resolve) => {
if (signal?.aborted) {
resolve({ stdout: '', stderr: 'Aborted', exitCode: 1, timedOut: false })
return
}
let timedOut = false
let stdout = ''
let stderr = ''
const isWindows = process.platform === 'win32'
const proc = spawn(command, [], {
cwd,
env: { ...process.env },
shell: true,
windowsHide: true,
// On Unix, create a process group so we can kill child processes on timeout/abort
detached: !isWindows,
})
const killTree = () => {
try {
if (!isWindows && proc.pid) {
// Kill the entire process group
process.kill(-proc.pid, 'SIGTERM')
} else {
proc.kill('SIGTERM')
}
} catch {
// Process may have already exited
}
}
const onAbort = () => {
killTree()
}
signal?.addEventListener('abort', onAbort, { once: true })
proc.stdout?.on('data', (data: Buffer) => {
stdout += data.toString()
})
proc.stderr?.on('data', (data: Buffer) => {
stderr += data.toString()
})
const timer = setTimeout(() => {
timedOut = true
killTree()
}, timeout)
proc.on('close', (code) => {
clearTimeout(timer)
signal?.removeEventListener('abort', onAbort)
resolve({
stdout: stdout.slice(0, 10000),
stderr: stderr.slice(0, 10000),
exitCode: code ?? 1,
timedOut,
})
})
proc.on('error', () => {
clearTimeout(timer)
signal?.removeEventListener('abort', onAbort)
resolve({
stdout,
stderr: stderr || 'Command failed to start',
exitCode: 1,
timedOut: false,
})
})
})
}
function buildErrorSummary(result: AutoFixResult): string | undefined {
if (!result.hasErrors) return undefined
const parts: string[] = []
if (result.timedOut) {
parts.push('Command timed out.')
}
if (result.lintExitCode !== undefined && result.lintExitCode !== 0) {
parts.push(`Lint errors (exit code ${result.lintExitCode}):\n${result.lintOutput ?? ''}`)
}
if (result.testExitCode !== undefined && result.testExitCode !== 0) {
parts.push(`Test failures (exit code ${result.testExitCode}):\n${result.testOutput ?? ''}`)
}
return parts.join('\n\n')
}
export async function runAutoFixCheck(
options: AutoFixCheckOptions,
): Promise<AutoFixResult> {
const { lint, test, timeout, cwd, signal } = options
if (!lint && !test) {
return { hasErrors: false }
}
if (signal?.aborted) {
return { hasErrors: false }
}
const result: AutoFixResult = { hasErrors: false }
// Run lint first
if (lint) {
const lintResult = await runCommand(lint, cwd, timeout, signal)
result.lintOutput = (lintResult.stdout + '\n' + lintResult.stderr).trim()
result.lintExitCode = lintResult.exitCode
if (lintResult.timedOut) {
result.hasErrors = true
result.timedOut = true
result.errorSummary = buildErrorSummary(result)
return result
}
if (lintResult.exitCode !== 0) {
result.hasErrors = true
result.errorSummary = buildErrorSummary(result)
return result
}
}
// Run tests only if lint passed (or no lint configured)
if (test) {
const testResult = await runCommand(test, cwd, timeout, signal)
result.testOutput = (testResult.stdout + '\n' + testResult.stderr).trim()
result.testExitCode = testResult.exitCode
if (testResult.timedOut) {
result.hasErrors = true
result.timedOut = true
} else if (testResult.exitCode !== 0) {
result.hasErrors = true
}
}
result.errorSummary = buildErrorSummary(result)
return result
}

View File

@@ -1,4 +1,4 @@
import { afterEach, beforeEach, describe, expect, mock, test } from 'bun:test'
import { afterEach, describe, expect, mock, test } from 'bun:test'
import { import {
DEFAULT_GITHUB_DEVICE_SCOPE, DEFAULT_GITHUB_DEVICE_SCOPE,
@@ -7,26 +7,14 @@ import {
requestDeviceCode, requestDeviceCode,
} from './deviceFlow.js' } from './deviceFlow.js'
async function importFreshModule() {
mock.restore()
return import(`./deviceFlow.ts?ts=${Date.now()}-${Math.random()}`)
}
describe('requestDeviceCode', () => { describe('requestDeviceCode', () => {
const originalFetch = globalThis.fetch const originalFetch = globalThis.fetch
beforeEach(() => {
mock.restore()
globalThis.fetch = originalFetch
})
afterEach(() => { afterEach(() => {
globalThis.fetch = originalFetch globalThis.fetch = originalFetch
}) })
test('parses successful device code response', async () => { test('parses successful device code response', async () => {
const { requestDeviceCode } = await importFreshModule()
globalThis.fetch = mock(() => globalThis.fetch = mock(() =>
Promise.resolve( Promise.resolve(
new Response( new Response(
@@ -54,9 +42,6 @@ describe('requestDeviceCode', () => {
}) })
test('throws on HTTP error', async () => { test('throws on HTTP error', async () => {
const { requestDeviceCode, GitHubDeviceFlowError } =
await importFreshModule()
globalThis.fetch = mock(() => globalThis.fetch = mock(() =>
Promise.resolve(new Response('bad', { status: 500 })), Promise.resolve(new Response('bad', { status: 500 })),
) )
@@ -149,8 +134,6 @@ describe('pollAccessToken', () => {
}) })
test('returns token when GitHub responds with access_token immediately', async () => { test('returns token when GitHub responds with access_token immediately', async () => {
const { pollAccessToken } = await importFreshModule()
let calls = 0 let calls = 0
globalThis.fetch = mock(() => { globalThis.fetch = mock(() => {
calls++ calls++
@@ -170,8 +153,6 @@ describe('pollAccessToken', () => {
}) })
test('throws on access_denied', async () => { test('throws on access_denied', async () => {
const { pollAccessToken } = await importFreshModule()
globalThis.fetch = mock(() => globalThis.fetch = mock(() =>
Promise.resolve( Promise.resolve(
new Response(JSON.stringify({ error: 'access_denied' }), { new Response(JSON.stringify({ error: 'access_denied' }), {
@@ -187,62 +168,3 @@ describe('pollAccessToken', () => {
).rejects.toThrow(/denied/) ).rejects.toThrow(/denied/)
}) })
}) })
describe('exchangeForCopilotToken', () => {
const originalFetch = globalThis.fetch
afterEach(() => {
globalThis.fetch = originalFetch
})
test('parses successful Copilot token response', async () => {
const { exchangeForCopilotToken } = await importFreshModule()
globalThis.fetch = mock(() =>
Promise.resolve(
new Response(
JSON.stringify({
token: 'copilot-token-xyz',
expires_at: 1700000000,
refresh_in: 3600,
endpoints: {
api: 'https://api.githubcopilot.com',
},
}),
{ status: 200 },
),
),
)
const result = await exchangeForCopilotToken('oauth-token', globalThis.fetch)
expect(result.token).toBe('copilot-token-xyz')
expect(result.expires_at).toBe(1700000000)
expect(result.refresh_in).toBe(3600)
expect(result.endpoints.api).toBe('https://api.githubcopilot.com')
})
test('throws on HTTP error', async () => {
const { exchangeForCopilotToken, GitHubDeviceFlowError } =
await importFreshModule()
globalThis.fetch = mock(() =>
Promise.resolve(new Response('unauthorized', { status: 401 })),
)
await expect(
exchangeForCopilotToken('bad-token', globalThis.fetch),
).rejects.toThrow(GitHubDeviceFlowError)
})
test('throws on malformed response', async () => {
const { exchangeForCopilotToken } = await importFreshModule()
globalThis.fetch = mock(() =>
Promise.resolve(
new Response(JSON.stringify({ invalid: 'data' }), { status: 200 }),
),
)
await expect(
exchangeForCopilotToken('oauth-token', globalThis.fetch),
).rejects.toThrow(/Malformed/)
})
})

View File

@@ -1,35 +1,19 @@
/**
 * GitHub OAuth device flow for CLI login (https://docs.github.com/en/apps/oauth-apps/building-oauth-apps/authorizing-oauth-apps#device-flow).
 * Uses GitHub Copilot's official OAuth app for device authentication.
 */
import { execFileNoThrow } from '../../utils/execFileNoThrow.js'
export const DEFAULT_GITHUB_DEVICE_FLOW_CLIENT_ID = 'Iv1.b507a08c87ecfe98'
export const DEFAULT_GITHUB_DEVICE_FLOW_CLIENT_ID = 'Ov23liXjWSSui6QIahPl'
export const GITHUB_DEVICE_CODE_URL = 'https://github.com/login/device/code'
export const GITHUB_DEVICE_ACCESS_TOKEN_URL =
'https://github.com/login/oauth/access_token'
export const COPILOT_TOKEN_URL = 'https://api.github.com/copilot_internal/v2/token'
/** Only read:user scope — required for Copilot OAuth */
export const DEFAULT_GITHUB_DEVICE_SCOPE = 'read:user'
// OAuth app device flow does not accept the GitHub Models permission token
// scope (models:read). Use an OAuth-safe default.
const OAUTH_SAFE_GITHUB_DEVICE_SCOPE = 'read:user'
export const DEFAULT_GITHUB_DEVICE_SCOPE = OAUTH_SAFE_GITHUB_DEVICE_SCOPE
export const COPILOT_HEADERS: Record<string, string> = {
'User-Agent': 'GitHubCopilotChat/0.26.7',
'Editor-Version': 'vscode/1.99.3',
'Editor-Plugin-Version': 'copilot-chat/0.26.7',
'Copilot-Integration-Id': 'vscode-chat',
}
export type CopilotTokenResponse = {
token: string
expires_at: number
refresh_in: number
endpoints: {
api: string
}
}
export class GitHubDeviceFlowError extends Error { export class GitHubDeviceFlowError extends Error {
constructor(message: string) { constructor(message: string) {
@@ -46,8 +30,6 @@ export type DeviceCodeResult = {
interval: number interval: number
} }
type FetchLike = (input: RequestInfo | URL, init?: RequestInit) => Promise<Response>
export function getGithubDeviceFlowClientId(): string { export function getGithubDeviceFlowClientId(): string {
return ( return (
process.env.GITHUB_DEVICE_FLOW_CLIENT_ID?.trim() || process.env.GITHUB_DEVICE_FLOW_CLIENT_ID?.trim() ||
@@ -62,21 +44,21 @@ function sleep(ms: number): Promise<void> {
export async function requestDeviceCode(options?: {
clientId?: string
scope?: string
fetchImpl?: FetchLike
fetchImpl?: typeof fetch
}): Promise<DeviceCodeResult> {
const clientId = options?.clientId ?? getGithubDeviceFlowClientId()
if (!clientId) {
throw new GitHubDeviceFlowError(
'No OAuth client ID: set GITHUB_DEVICE_FLOW_CLIENT_ID.',
'No OAuth client ID: set GITHUB_DEVICE_FLOW_CLIENT_ID or paste a PAT instead.',
)
}
const fetchFn = options?.fetchImpl ?? fetch
const requestedScope =
options?.scope?.trim() || DEFAULT_GITHUB_DEVICE_SCOPE
const scopesToTry =
requestedScope === DEFAULT_GITHUB_DEVICE_SCOPE
requestedScope === OAUTH_SAFE_GITHUB_DEVICE_SCOPE
? [requestedScope]
: [requestedScope, DEFAULT_GITHUB_DEVICE_SCOPE]
: [requestedScope, OAUTH_SAFE_GITHUB_DEVICE_SCOPE]
let lastError = 'Device code request failed.'
@@ -95,7 +77,7 @@ export async function requestDeviceCode(options?: {
lastError = `Device code request failed: ${res.status} ${text}` lastError = `Device code request failed: ${res.status} ${text}`
const isInvalidScope = /invalid_scope/i.test(text) const isInvalidScope = /invalid_scope/i.test(text)
const canRetryWithFallback = const canRetryWithFallback =
scope !== DEFAULT_GITHUB_DEVICE_SCOPE && isInvalidScope scope !== OAUTH_SAFE_GITHUB_DEVICE_SCOPE && isInvalidScope
if (canRetryWithFallback) { if (canRetryWithFallback) {
continue continue
} }
@@ -132,7 +114,7 @@ export type PollOptions = {
clientId?: string clientId?: string
initialInterval?: number initialInterval?: number
timeoutSeconds?: number timeoutSeconds?: number
fetchImpl?: FetchLike fetchImpl?: typeof fetch
} }
export async function pollAccessToken( export async function pollAccessToken(
@@ -215,49 +197,3 @@ export async function openVerificationUri(uri: string): Promise<void> {
// User can open the URL manually // User can open the URL manually
} }
} }
/**
* Exchange an OAuth access token for a Copilot API token.
* The OAuth token alone cannot be used with the Copilot API endpoint.
*/
export async function exchangeForCopilotToken(
oauthToken: string,
fetchImpl?: FetchLike,
): Promise<CopilotTokenResponse> {
const fetchFn = fetchImpl ?? fetch
const res = await fetchFn(COPILOT_TOKEN_URL, {
method: 'GET',
headers: {
Accept: 'application/json',
Authorization: `Bearer ${oauthToken}`,
...COPILOT_HEADERS,
},
})
if (!res.ok) {
const text = await res.text().catch(() => '')
throw new GitHubDeviceFlowError(
`Copilot token exchange failed: ${res.status} ${text}`,
)
}
const data = (await res.json()) as Record<string, unknown>
const token = data.token
const expires_at = data.expires_at
const refresh_in = data.refresh_in
const endpoints = data.endpoints
if (
typeof token !== 'string' ||
typeof expires_at !== 'number' ||
typeof refresh_in !== 'number' ||
!endpoints ||
typeof endpoints !== 'object' ||
typeof (endpoints as Record<string, unknown>).api !== 'string'
) {
throw new GitHubDeviceFlowError('Malformed Copilot token response')
}
return {
token,
expires_at,
refresh_in,
endpoints: endpoints as { api: string },
}
}

View File

@@ -1,11 +1,6 @@
// Mock rate limits for testing [internal-only] // Mock rate limits for testing [internal-only]
// The external build keeps this module as a stable no-op surface so imports // The external build keeps this module as a stable no-op surface so imports
// remain valid without exposing internal-only rate-limit simulation behavior. // remain valid without exposing internal-only rate-limit simulation behavior.
// This allows testing various rate limit scenarios without hitting actual limits
//
// WARNING: This is for internal testing/demo purposes only!
// The mock headers may not exactly match the API specification or real-world behavior.
// Always validate against actual API responses before relying on this for production features.
import { setMockBillingAccessOverride } from '../utils/billing.js' import { setMockBillingAccessOverride } from '../utils/billing.js'
import type { OverageDisabledReason } from './claudeAiLimits.js' import type { OverageDisabledReason } from './claudeAiLimits.js'

View File

@@ -645,7 +645,7 @@ const internalOnlyTips: Tip[] =
{ {
id: 'skillify', id: 'skillify',
content: async () => content: async () =>
'[internal] Use /skillify to turn repeatable recurring workflows into reusable project skills', '[internal] Turn repeatable workflows into reusable project skills when they keep recurring',
cooldownSessions: 15, cooldownSessions: 15,
isRelevant: async () => true, isRelevant: async () => true,
}, },

View File

@@ -29,13 +29,6 @@ import {
} from '../../utils/permissions/PermissionResult.js' } from '../../utils/permissions/PermissionResult.js'
import { checkRuleBasedPermissions } from '../../utils/permissions/permissions.js' import { checkRuleBasedPermissions } from '../../utils/permissions/permissions.js'
import { formatError } from '../../utils/toolErrors.js' import { formatError } from '../../utils/toolErrors.js'
import { getAutoFixConfig } from '../autoFix/autoFixConfig.js'
import { shouldRunAutoFix, buildAutoFixContext } from '../autoFix/autoFixHook.js'
import { runAutoFixCheck } from '../autoFix/autoFixRunner.js'
// Track auto-fix retry count per query chain to enforce maxRetries cap.
// Key: queryChainId (or 'default'), Value: number of auto-fix attempts used.
const autoFixRetryCount = new Map<string, number>()
import { isMcpTool } from '../mcp/utils.js' import { isMcpTool } from '../mcp/utils.js'
import type { McpServerType, MessageUpdateLazy } from './toolExecution.js' import type { McpServerType, MessageUpdateLazy } from './toolExecution.js'
@@ -192,65 +185,6 @@ export async function* runPostToolUseHooks<Input extends AnyObject, Output>(
} }
} }
} }
// Auto-fix: run lint/test if configured for this tool
const autoFixSettings = toolUseContext.getAppState().settings
const autoFixConfig = getAutoFixConfig(
autoFixSettings && typeof autoFixSettings === 'object' && 'autoFix' in autoFixSettings
? (autoFixSettings as Record<string, unknown>).autoFix
: undefined,
)
if (shouldRunAutoFix(tool.name, autoFixConfig) && autoFixConfig) {
// Enforce maxRetries cap to prevent unbounded auto-fix loops.
// Uses queryChainId to scope the counter to the current conversation turn.
const chainKey = (toolUseContext.queryTracking?.chainId as string) ?? 'default'
const currentRetries = autoFixRetryCount.get(chainKey) ?? 0
if (currentRetries >= autoFixConfig.maxRetries) {
// Max retries reached — skip auto-fix and let the user know
yield {
message: createAttachmentMessage({
type: 'hook_additional_context',
content: [
`<auto_fix_feedback>\nAUTO-FIX: Maximum retry limit (${autoFixConfig.maxRetries}) reached. ` +
`Skipping further auto-fix attempts. Please review the errors manually.\n</auto_fix_feedback>`,
],
hookName: `AutoFix:${tool.name}`,
toolUseID,
hookEvent: 'PostToolUse',
}),
}
} else {
try {
const cwd = toolUseContext.options?.cwd ?? process.cwd()
const autoFixResult = await runAutoFixCheck({
lint: autoFixConfig.lint,
test: autoFixConfig.test,
timeout: autoFixConfig.timeout,
cwd,
signal: toolUseContext.abortController.signal,
})
const autoFixContext = buildAutoFixContext(autoFixResult)
if (autoFixContext) {
autoFixRetryCount.set(chainKey, currentRetries + 1)
yield {
message: createAttachmentMessage({
type: 'hook_additional_context',
content: [autoFixContext],
hookName: `AutoFix:${tool.name}`,
toolUseID,
hookEvent: 'PostToolUse',
}),
}
} else {
// Lint/test passed — reset the retry counter for this chain
autoFixRetryCount.delete(chainKey)
}
} catch (autoFixError) {
logError(autoFixError)
}
}
}
} catch (error) { } catch (error) {
logError(error) logError(error)
} }

View File

@@ -1,68 +0,0 @@
import { readdir, readFile, writeFile } from 'fs/promises'
import { basename, relative } from 'path'
import { getWikiPaths } from './paths.js'
async function listMarkdownFiles(dir: string): Promise<string[]> {
const entries = await readdir(dir, { withFileTypes: true })
const files: string[] = []
for (const entry of entries) {
const fullPath = `${dir}/${entry.name}`
if (entry.isDirectory()) {
files.push(...(await listMarkdownFiles(fullPath)))
} else if (entry.isFile() && entry.name.endsWith('.md')) {
files.push(fullPath)
}
}
return files.sort()
}
async function getPageTitle(path: string): Promise<string> {
const content = await readFile(path, 'utf8')
const titleLine = content
.split('\n')
.map(line => line.trim())
.find(line => line.startsWith('# '))
return titleLine ? titleLine.replace(/^#\s+/, '') : basename(path, '.md')
}
export async function rebuildWikiIndex(cwd: string): Promise<void> {
const paths = getWikiPaths(cwd)
const pageFiles = await listMarkdownFiles(paths.pagesDir)
const sourceFiles = await listMarkdownFiles(paths.sourcesDir)
const pageLinks = await Promise.all(
pageFiles.map(async file => {
const rel = relative(paths.root, file)
const title = await getPageTitle(file)
return `- [${title}](./${rel.replace(/\\/g, '/')})`
}),
)
const sourceLinks = sourceFiles.map(file => {
const rel = relative(paths.root, file).replace(/\\/g, '/')
const title = basename(file, '.md')
return `- [${title}](./${rel})`
})
const content = `# ${basename(cwd)} Wiki
This wiki is maintained by OpenClaude as a durable project knowledge layer.
## Core Pages
${pageLinks.length > 0 ? pageLinks.join('\n') : '- No pages yet'}
## Sources
${sourceLinks.length > 0 ? sourceLinks.join('\n') : '- No sources yet'}
## Recent Updates
- See [log.md](./log.md)
`
await writeFile(paths.indexFile, content, 'utf8')
}

View File

@@ -1,48 +0,0 @@
import { afterEach, expect, test } from 'bun:test'
import { mkdtemp, readFile, rm, writeFile } from 'fs/promises'
import { tmpdir } from 'os'
import { join } from 'path'
import { ingestLocalWikiSource } from './ingest.js'
import { getWikiPaths } from './paths.js'
const tempDirs: string[] = []
afterEach(async () => {
await Promise.all(
tempDirs.splice(0).map(dir => rm(dir, { recursive: true, force: true })),
)
})
async function makeProjectDir(): Promise<string> {
const dir = await mkdtemp(join(tmpdir(), 'openclaude-wiki-ingest-'))
tempDirs.push(dir)
return dir
}
test('ingestLocalWikiSource creates a source note and updates log/index', async () => {
const cwd = await makeProjectDir()
const sourcePath = join(cwd, 'notes.md')
await writeFile(
sourcePath,
'# Design Notes\n\nThis subsystem coordinates provider routing and session state.\nIt should be documented for future contributors.\n',
'utf8',
)
const result = await ingestLocalWikiSource(cwd, 'notes.md')
const paths = getWikiPaths(cwd)
expect(result.sourceFile).toBe('notes.md')
expect(result.title).toBe('Design Notes')
expect(result.sourceNote.startsWith('.openclaude/wiki/sources/')).toBe(true)
const sourceNote = await readFile(join(cwd, result.sourceNote), 'utf8')
expect(sourceNote).toContain('# Design Notes')
expect(sourceNote).toContain('Path: `notes.md`')
const log = await readFile(paths.logFile, 'utf8')
expect(log).toContain('Ingested `notes.md`')
const index = await readFile(paths.indexFile, 'utf8')
expect(index).toContain('./sources/')
expect(index).toContain(result.sourceNote.replace('.openclaude/wiki/', './'))
})

View File

@@ -1,93 +0,0 @@
import { appendFile, readFile, stat, writeFile } from 'fs/promises'
import { basename, extname, isAbsolute, relative, resolve } from 'path'
import { initializeWiki } from './init.js'
import { rebuildWikiIndex } from './indexBuilder.js'
import { getWikiPaths } from './paths.js'
import type { WikiIngestResult } from './types.js'
import {
extractTitleFromText,
sanitizeWikiSlug,
summarizeText,
} from './utils.js'
function buildSourceNote(params: {
title: string
sourcePath: string
ingestedAt: string
summary: string
excerpt: string
}): string {
const { title, sourcePath, ingestedAt, summary, excerpt } = params
return `# ${title}
## Source
- Path: \`${sourcePath}\`
- Ingested at: ${ingestedAt}
## Summary
${summary}
## Excerpt
\`\`\`
${excerpt}
\`\`\`
## Linked Pages
- [Architecture](../pages/architecture.md)
`
}
function buildLogEntry(sourcePath: string, title: string, ingestedAt: string): string {
return `- ${ingestedAt}: Ingested \`${sourcePath}\` into source note "${title}"`
}
export async function ingestLocalWikiSource(
cwd: string,
rawPath: string,
): Promise<WikiIngestResult> {
await initializeWiki(cwd)
const resolvedPath = isAbsolute(rawPath) ? rawPath : resolve(cwd, rawPath)
const fileInfo = await stat(resolvedPath)
if (!fileInfo.isFile()) {
throw new Error(`Not a file: ${resolvedPath}`)
}
const content = await readFile(resolvedPath, 'utf8')
const relSourcePath = relative(cwd, resolvedPath).replace(/\\/g, '/')
const ingestedAt = new Date().toISOString()
const baseName = basename(resolvedPath, extname(resolvedPath))
const title = extractTitleFromText(baseName, content)
const summary = summarizeText(content)
const excerpt = content.split('\n').slice(0, 20).join('\n').trim()
const slug = sanitizeWikiSlug(`${baseName}-${Date.now()}`) || `source-${Date.now()}`
const paths = getWikiPaths(cwd)
const sourceNotePath = `${paths.sourcesDir}/${slug}.md`
await writeFile(
sourceNotePath,
buildSourceNote({
title,
sourcePath: relSourcePath,
ingestedAt,
summary,
excerpt,
}),
'utf8',
)
await appendFile(paths.logFile, `${buildLogEntry(relSourcePath, title, ingestedAt)}\n`, 'utf8')
await rebuildWikiIndex(cwd)
return {
sourceFile: relSourcePath,
sourceNote: relative(cwd, sourceNotePath).replace(/\\/g, '/'),
summary,
title,
}
}

View File

@@ -1,54 +0,0 @@
import { afterEach, expect, test } from 'bun:test'
import { mkdtemp, readFile, rm } from 'fs/promises'
import { tmpdir } from 'os'
import { join } from 'path'
import { initializeWiki } from './init.js'
import { getWikiPaths } from './paths.js'
const tempDirs: string[] = []
afterEach(async () => {
await Promise.all(
tempDirs.splice(0).map(dir => rm(dir, { recursive: true, force: true })),
)
})
async function makeProjectDir(): Promise<string> {
const dir = await mkdtemp(join(tmpdir(), 'openclaude-wiki-init-'))
tempDirs.push(dir)
return dir
}
test('initializeWiki creates the expected wiki scaffold', async () => {
const cwd = await makeProjectDir()
const result = await initializeWiki(cwd)
const paths = getWikiPaths(cwd)
expect(result.alreadyExisted).toBe(false)
expect(result.createdFiles).toEqual([
'.openclaude/wiki/schema.md',
'.openclaude/wiki/index.md',
'.openclaude/wiki/log.md',
'.openclaude/wiki/pages/architecture.md',
])
expect(await readFile(paths.schemaFile, 'utf8')).toContain(
'# OpenClaude Wiki Schema',
)
expect(await readFile(paths.indexFile, 'utf8')).toContain('Wiki')
expect(await readFile(paths.logFile, 'utf8')).toContain(
'Wiki initialized by OpenClaude',
)
expect(await readFile(join(paths.pagesDir, 'architecture.md'), 'utf8')).toContain(
'# Architecture',
)
})
test('initializeWiki is idempotent and preserves existing files', async () => {
const cwd = await makeProjectDir()
await initializeWiki(cwd)
const second = await initializeWiki(cwd)
expect(second.alreadyExisted).toBe(true)
expect(second.createdFiles).toEqual([])
})

View File

@@ -1,140 +0,0 @@
import { mkdir, writeFile } from 'fs/promises'
import { basename, relative } from 'path'
import { getWikiPaths } from './paths.js'
import type { WikiInitResult } from './types.js'
function buildSchemaTemplate(projectName: string): string {
return `# OpenClaude Wiki Schema
This wiki stores durable, human-readable project knowledge for ${projectName}.
## Goals
- Keep useful project knowledge in markdown, not only in chat history
- Prefer synthesized facts over raw copy-paste
- Keep source attribution explicit
- Make pages easy for both humans and agents to update
## Structure
- \`index.md\`: top-level navigation and major topics
- \`log.md\`: append-only update log
- \`pages/\`: durable topic and architecture pages
- \`sources/\`: source ingestion notes and summaries
## Page Rules
- Keep pages focused on one topic
- Use stable headings such as:
- \`## Summary\`
- \`## Key Facts\`
- \`## Relationships\`
- \`## Open Questions\`
- \`## Sources\`
- Add or update facts only when they are grounded in project files or explicit source notes
- Prefer editing an existing page over creating duplicates
`
}
function buildIndexTemplate(projectName: string): string {
return `# ${projectName} Wiki
This wiki is maintained by OpenClaude as a durable project knowledge layer.
## Core Pages
- [Architecture](./pages/architecture.md)
## Sources
- Source notes live in [sources/](./sources/)
## Recent Updates
- See [log.md](./log.md)
`
}
function buildLogTemplate(timestamp: string): string {
return `# Wiki Update Log
- ${timestamp}: Wiki initialized by OpenClaude
`
}
function buildArchitectureTemplate(projectName: string): string {
return `# Architecture
## Summary
High-level architecture notes for ${projectName}.
## Key Facts
- This page is the starting point for durable architecture knowledge.
## Relationships
- Link this page to major subsystems as the wiki grows.
## Open Questions
- What are the most important runtime subsystems?
- Which files best represent the system architecture?
## Sources
- Wiki bootstrap
`
}
async function ensureFile(
filePath: string,
content: string,
createdFiles: string[],
): Promise<void> {
try {
await writeFile(filePath, content, { encoding: 'utf8', flag: 'wx' })
createdFiles.push(filePath)
} catch (error: unknown) {
if (
typeof error === 'object' &&
error !== null &&
'code' in error &&
error.code === 'EEXIST'
) {
return
}
throw error
}
}
export async function initializeWiki(cwd: string): Promise<WikiInitResult> {
const paths = getWikiPaths(cwd)
const createdDirectories: string[] = []
const createdFiles: string[] = []
for (const dir of [paths.root, paths.pagesDir, paths.sourcesDir]) {
await mkdir(dir, { recursive: true })
createdDirectories.push(dir)
}
const projectName = basename(cwd)
const timestamp = new Date().toISOString()
await ensureFile(paths.schemaFile, buildSchemaTemplate(projectName), createdFiles)
await ensureFile(paths.indexFile, buildIndexTemplate(projectName), createdFiles)
await ensureFile(paths.logFile, buildLogTemplate(timestamp), createdFiles)
await ensureFile(
`${paths.pagesDir}/architecture.md`,
buildArchitectureTemplate(projectName),
createdFiles,
)
return {
root: paths.root,
createdFiles: createdFiles.map(file => relative(cwd, file)),
createdDirectories: createdDirectories.map(dir => relative(cwd, dir)),
alreadyExisted: createdFiles.length === 0,
}
}

View File

@@ -1,18 +0,0 @@
import { join } from 'path'
import type { WikiPaths } from './types.js'
export const OPENCLAUDE_DIRNAME = '.openclaude'
export const WIKI_DIRNAME = 'wiki'
export function getWikiPaths(cwd: string): WikiPaths {
const root = join(cwd, OPENCLAUDE_DIRNAME, WIKI_DIRNAME)
return {
root,
pagesDir: join(root, 'pages'),
sourcesDir: join(root, 'sources'),
schemaFile: join(root, 'schema.md'),
indexFile: join(root, 'index.md'),
logFile: join(root, 'log.md'),
}
}

View File

@@ -1,55 +0,0 @@
import { afterEach, expect, test } from 'bun:test'
import { mkdtemp, mkdir, rm, writeFile } from 'fs/promises'
import { tmpdir } from 'os'
import { join } from 'path'
import { initializeWiki } from './init.js'
import { getWikiPaths } from './paths.js'
import { getWikiStatus } from './status.js'
const tempDirs: string[] = []
afterEach(async () => {
await Promise.all(
tempDirs.splice(0).map(dir => rm(dir, { recursive: true, force: true })),
)
})
async function makeProjectDir(): Promise<string> {
const dir = await mkdtemp(join(tmpdir(), 'openclaude-wiki-status-'))
tempDirs.push(dir)
return dir
}
test('getWikiStatus reports uninitialized wiki state', async () => {
const cwd = await makeProjectDir()
const status = await getWikiStatus(cwd)
expect(status.initialized).toBe(false)
expect(status.pageCount).toBe(0)
expect(status.sourceCount).toBe(0)
expect(status.lastUpdatedAt).toBeNull()
})
test('getWikiStatus counts pages and sources for initialized wiki', async () => {
const cwd = await makeProjectDir()
await initializeWiki(cwd)
const paths = getWikiPaths(cwd)
await writeFile(join(paths.pagesDir, 'commands.md'), '# Commands\n', 'utf8')
await mkdir(join(paths.sourcesDir, 'external'), { recursive: true })
await writeFile(
join(paths.sourcesDir, 'external', 'spec.md'),
'# Spec\n',
'utf8',
)
const status = await getWikiStatus(cwd)
expect(status.initialized).toBe(true)
expect(status.pageCount).toBe(2)
expect(status.sourceCount).toBe(1)
expect(status.hasSchema).toBe(true)
expect(status.hasIndex).toBe(true)
expect(status.hasLog).toBe(true)
expect(status.lastUpdatedAt).not.toBeNull()
})

View File

@@ -1,82 +0,0 @@
import { readdir, stat } from 'fs/promises'
import { getWikiPaths } from './paths.js'
import type { WikiStatus } from './types.js'
async function pathExists(path: string): Promise<boolean> {
try {
await stat(path)
return true
} catch {
return false
}
}
async function listMarkdownFiles(dir: string): Promise<string[]> {
if (!(await pathExists(dir))) {
return []
}
const entries = await readdir(dir, { withFileTypes: true })
const files: string[] = []
for (const entry of entries) {
const fullPath = `${dir}/${entry.name}`
if (entry.isDirectory()) {
files.push(...(await listMarkdownFiles(fullPath)))
} else if (entry.isFile() && entry.name.endsWith('.md')) {
files.push(fullPath)
}
}
return files
}
async function getLastUpdatedAt(pathsToCheck: string[]): Promise<string | null> {
const mtimes: number[] = []
for (const path of pathsToCheck) {
try {
const info = await stat(path)
mtimes.push(info.mtimeMs)
} catch {
continue
}
}
if (mtimes.length === 0) {
return null
}
return new Date(Math.max(...mtimes)).toISOString()
}
export async function getWikiStatus(cwd: string): Promise<WikiStatus> {
const paths = getWikiPaths(cwd)
const [hasRoot, hasSchema, hasIndex, hasLog, pages, sources] =
await Promise.all([
pathExists(paths.root),
pathExists(paths.schemaFile),
pathExists(paths.indexFile),
pathExists(paths.logFile),
listMarkdownFiles(paths.pagesDir),
listMarkdownFiles(paths.sourcesDir),
])
return {
initialized: hasRoot && hasSchema && hasIndex && hasLog,
root: paths.root,
pageCount: pages.length,
sourceCount: sources.length,
hasSchema,
hasIndex,
hasLog,
lastUpdatedAt: await getLastUpdatedAt([
paths.schemaFile,
paths.indexFile,
paths.logFile,
...pages,
...sources,
]),
}
}

View File

@@ -1,33 +0,0 @@
export type WikiPaths = {
root: string
pagesDir: string
sourcesDir: string
schemaFile: string
indexFile: string
logFile: string
}
export type WikiInitResult = {
root: string
createdFiles: string[]
createdDirectories: string[]
alreadyExisted: boolean
}
export type WikiStatus = {
initialized: boolean
root: string
pageCount: number
sourceCount: number
hasSchema: boolean
hasIndex: boolean
hasLog: boolean
lastUpdatedAt: string | null
}
export type WikiIngestResult = {
sourceFile: string
sourceNote: string
summary: string
title: string
}

View File

@@ -1,36 +0,0 @@
export function sanitizeWikiSlug(value: string): string {
return value
.toLowerCase()
.replace(/[^a-z0-9]+/g, '-')
.replace(/^-+|-+$/g, '')
.replace(/-{2,}/g, '-')
}
export function summarizeText(input: string, maxLength = 280): string {
const normalized = input.replace(/\s+/g, ' ').trim()
if (!normalized) {
return 'No summary available.'
}
if (normalized.length <= maxLength) {
return normalized
}
return `${normalized.slice(0, maxLength - 1).trimEnd()}…`
}
export function extractTitleFromText(
fallbackName: string,
content: string,
): string {
const firstNonEmptyLine = content
.split('\n')
.map(line => line.trim())
.find(Boolean)
if (!firstNonEmptyLine) {
return fallbackName
}
return firstNonEmptyLine.replace(/^#+\s*/, '') || fallbackName
}

View File

@@ -1,13 +0,0 @@
import type { Command } from '../commands.js'
import { createStore } from './store.js'
const pluginCommandsStore = createStore<Command[]>([])
export const getPluginCommandsState = (): Command[] =>
pluginCommandsStore.getState()
export const subscribePluginCommands = pluginCommandsStore.subscribe
export function setPluginCommandsState(commands: Command[]): void {
pluginCommandsStore.setState(() => [...commands])
}

View File

@@ -27,19 +27,19 @@ function getClaudeCodeGuideBasePrompt(): string {
? `${FILE_READ_TOOL_NAME}, \`find\`, and \`grep\`` ? `${FILE_READ_TOOL_NAME}, \`find\`, and \`grep\``
: `${FILE_READ_TOOL_NAME}, ${GLOB_TOOL_NAME}, and ${GREP_TOOL_NAME}` : `${FILE_READ_TOOL_NAME}, ${GLOB_TOOL_NAME}, and ${GREP_TOOL_NAME}`
return `You are the OpenClaude guide agent. Your primary responsibility is helping users understand and use OpenClaude, the Claude Agent SDK, and the Claude API (formerly the Anthropic API) effectively. return `You are the Claude guide agent. Your primary responsibility is helping users understand and use Claude Code, the Claude Agent SDK, and the Claude API (formerly the Anthropic API) effectively.
**Your expertise spans three domains:** **Your expertise spans three domains:**
1. **OpenClaude** (the CLI tool): Installation, configuration, hooks, skills, MCP servers, keyboard shortcuts, IDE integrations, settings, and workflows. 1. **Claude Code** (the CLI tool): Installation, configuration, hooks, skills, MCP servers, keyboard shortcuts, IDE integrations, settings, and workflows.
2. **Claude Agent SDK**: A framework for building custom AI agents. Available for Node.js/TypeScript and Python. 2. **Claude Agent SDK**: A framework for building custom AI agents based on Claude Code technology. Available for Node.js/TypeScript and Python.
3. **Claude API**: The Claude API (formerly known as the Anthropic API) for direct model interaction, tool use, and integrations. 3. **Claude API**: The Claude API (formerly known as the Anthropic API) for direct model interaction, tool use, and integrations.
**Documentation sources:** **Documentation sources:**
- **Claude Code docs** (${CLAUDE_CODE_DOCS_MAP_URL}): Use these as the compatibility reference for questions about the OpenClaude CLI tool, including: - **Claude Code docs** (${CLAUDE_CODE_DOCS_MAP_URL}): Fetch this for questions about the Claude Code CLI tool, including:
- Installation, setup, and getting started - Installation, setup, and getting started
- Hooks (pre/post command execution) - Hooks (pre/post command execution)
- Custom skills - Custom skills
@@ -97,7 +97,7 @@ function getFeedbackGuideline(): string {
export const CLAUDE_CODE_GUIDE_AGENT: BuiltInAgentDefinition = { export const CLAUDE_CODE_GUIDE_AGENT: BuiltInAgentDefinition = {
agentType: CLAUDE_CODE_GUIDE_AGENT_TYPE, agentType: CLAUDE_CODE_GUIDE_AGENT_TYPE,
whenToUse: `Use this agent when the user asks questions ("Can OpenClaude...", "Does OpenClaude...", "How do I...") about: (1) OpenClaude (the CLI tool) - features, hooks, slash commands, MCP servers, settings, IDE integrations, keyboard shortcuts; (2) Claude Agent SDK - building custom agents; (3) Claude API (formerly Anthropic API) - API usage, tool use, Anthropic SDK usage. **IMPORTANT:** Before spawning a new agent, check if there is already a running or recently completed claude-code-guide agent that you can continue via ${SEND_MESSAGE_TOOL_NAME}.`, whenToUse: `Use this agent when the user asks questions ("Can Claude...", "Does Claude...", "How do I...") about: (1) Claude Code (the CLI tool) - features, hooks, slash commands, MCP servers, settings, IDE integrations, keyboard shortcuts; (2) Claude Agent SDK - building custom agents; (3) Claude API (formerly Anthropic API) - API usage, tool use, Anthropic SDK usage. **IMPORTANT:** Before spawning a new agent, check if there is already a running or recently completed claude-code-guide agent that you can continue via ${SEND_MESSAGE_TOOL_NAME}.`,
// Ant-native builds: Glob/Grep tools are removed; use Bash (with embedded // Ant-native builds: Glob/Grep tools are removed; use Bash (with embedded
// bfs/ugrep via find/grep aliases) for local file search instead. // bfs/ugrep via find/grep aliases) for local file search instead.
tools: hasEmbeddedSearchTools() tools: hasEmbeddedSearchTools()

View File

@@ -21,7 +21,7 @@ function getExploreSystemPrompt(): string {
? `- Use \`grep\` via ${BASH_TOOL_NAME} for searching file contents with regex` ? `- Use \`grep\` via ${BASH_TOOL_NAME} for searching file contents with regex`
: `- Use ${GREP_TOOL_NAME} for searching file contents with regex` : `- Use ${GREP_TOOL_NAME} for searching file contents with regex`
return `You are a file search specialist for OpenClaude. You excel at thoroughly navigating and exploring codebases. return `You are a file search specialist for OpenClaude, an open-source fork of Claude Code. You excel at thoroughly navigating and exploring codebases.
=== CRITICAL: READ-ONLY MODE - NO FILE MODIFICATIONS === === CRITICAL: READ-ONLY MODE - NO FILE MODIFICATIONS ===
This is a READ-ONLY exploration task. You are STRICTLY PROHIBITED from: This is a READ-ONLY exploration task. You are STRICTLY PROHIBITED from:

View File

@@ -1,6 +1,6 @@
import type { BuiltInAgentDefinition } from '../loadAgentsDir.js' import type { BuiltInAgentDefinition } from '../loadAgentsDir.js'
const SHARED_PREFIX = `You are an agent for OpenClaude, an open-source coding agent and CLI. Given the user's message, you should use the tools available to complete the task. Complete the task fully—don't gold-plate, but don't leave it half-done.` const SHARED_PREFIX = `You are an agent for OpenClaude, an open-source fork of Claude Code. Given the user's message, you should use the tools available to complete the task. Complete the task fully—don't gold-plate, but don't leave it half-done.`
const SHARED_GUIDELINES = `Your strengths: const SHARED_GUIDELINES = `Your strengths:
- Searching for code, configurations, and patterns across large codebases - Searching for code, configurations, and patterns across large codebases

View File

@@ -18,7 +18,7 @@ function getPlanV2SystemPrompt(): string {
? `\`find\`, \`grep\`, and ${FILE_READ_TOOL_NAME}` ? `\`find\`, \`grep\`, and ${FILE_READ_TOOL_NAME}`
: `${GLOB_TOOL_NAME}, ${GREP_TOOL_NAME}, and ${FILE_READ_TOOL_NAME}` : `${GLOB_TOOL_NAME}, ${GREP_TOOL_NAME}, and ${FILE_READ_TOOL_NAME}`
return `You are a software architect and planning specialist for OpenClaude. Your role is to explore the codebase and design implementation plans. return `You are a software architect and planning specialist for Claude Code. Your role is to explore the codebase and design implementation plans.
=== CRITICAL: READ-ONLY MODE - NO FILE MODIFICATIONS === === CRITICAL: READ-ONLY MODE - NO FILE MODIFICATIONS ===
This is a READ-ONLY planning task. You are STRICTLY PROHIBITED from: This is a READ-ONLY planning task. You are STRICTLY PROHIBITED from:

View File

@@ -1,6 +1,6 @@
import type { BuiltInAgentDefinition } from '../loadAgentsDir.js' import type { BuiltInAgentDefinition } from '../loadAgentsDir.js'
const STATUSLINE_SYSTEM_PROMPT = `You are a status line setup agent for OpenClaude. Your job is to create or update the statusLine command in the user's OpenClaude settings. const STATUSLINE_SYSTEM_PROMPT = `You are a status line setup agent for Claude Code. Your job is to create or update the statusLine command in the user's Claude Code settings.
When asked to convert the user's shell PS1 configuration, follow these steps: When asked to convert the user's shell PS1 configuration, follow these steps:
1. Read the user's shell configuration files in this order of preference: 1. Read the user's shell configuration files in this order of preference:
@@ -47,7 +47,7 @@ How to use the statusLine command:
"project_dir": "string", // Project root directory path "project_dir": "string", // Project root directory path
"added_dirs": ["string"] // Directories added via /add-dir "added_dirs": ["string"] // Directories added via /add-dir
}, },
"version": "string", // OpenClaude app version (e.g., "1.0.71") "version": "string", // Claude Code app version (e.g., "1.0.71")
"output_style": { "output_style": {
"name": "string", // Output style name (e.g., "default", "Explanatory", "Learning") "name": "string", // Output style name (e.g., "default", "Explanatory", "Learning")
}, },
@@ -110,11 +110,10 @@ How to use the statusLine command:
To display both 5-hour and 7-day limits when available: To display both 5-hour and 7-day limits when available:
- input=$(cat); five=$(echo "$input" | jq -r '.rate_limits.five_hour.used_percentage // empty'); week=$(echo "$input" | jq -r '.rate_limits.seven_day.used_percentage // empty'); out=""; [ -n "$five" ] && out="5h:$(printf '%.0f' "$five")%"; [ -n "$week" ] && out="$out 7d:$(printf '%.0f' "$week")%"; echo "$out" - input=$(cat); five=$(echo "$input" | jq -r '.rate_limits.five_hour.used_percentage // empty'); week=$(echo "$input" | jq -r '.rate_limits.seven_day.used_percentage // empty'); out=""; [ -n "$five" ] && out="5h:$(printf '%.0f' "$five")%"; [ -n "$week" ] && out="$out 7d:$(printf '%.0f' "$week")%"; echo "$out"
2. For longer commands, save a new file alongside the user's existing settings home instead of creating a new config directory. 2. For longer commands, you can save a new file in the user's ~/.claude directory, e.g.:
- Prefer the currently used config home if one already exists. - ~/.claude/statusline-command.sh and reference that file in the settings.
- In most existing setups this will be ~/.claude/statusline-command.sh.
3. Update the user's resolved settings file with: 3. Update the user's ~/.claude/settings.json with:
{ {
"statusLine": { "statusLine": {
"type": "command", "type": "command",
@@ -122,23 +121,20 @@ How to use the statusLine command:
} }
} }
4. Use the existing config home already in use by the installation. 4. If ~/.claude/settings.json is a symlink, update the target file instead.
- Prefer the settings file that already exists.
- If the settings file is a symlink, update the target file instead.
- Do not create a second parallel settings home just because another directory name also exists.
Guidelines: Guidelines:
- Preserve existing settings when updating - Preserve existing settings when updating
- Return a summary of what was configured, including the name of the script file if used - Return a summary of what was configured, including the name of the script file if used
- If the script includes git commands, they should skip optional locks - If the script includes git commands, they should skip optional locks
- IMPORTANT: At the end of your response, inform the parent agent that this "statusline-setup" agent must be used for further status line changes. - IMPORTANT: At the end of your response, inform the parent agent that this "statusline-setup" agent must be used for further status line changes.
Also ensure that the user is informed that they can ask OpenClaude to continue to make changes to the status line. Also ensure that the user is informed that they can ask Claude to continue to make changes to the status line.
` `
export const STATUSLINE_SETUP_AGENT: BuiltInAgentDefinition = { export const STATUSLINE_SETUP_AGENT: BuiltInAgentDefinition = {
agentType: 'statusline-setup', agentType: 'statusline-setup',
whenToUse: whenToUse:
"Use this agent to configure the user's OpenClaude status line setting.", "Use this agent to configure the user's Claude Code status line setting.",
tools: ['Read', 'Edit'], tools: ['Read', 'Edit'],
source: 'built-in', source: 'built-in',
baseDir: 'built-in', baseDir: 'built-in',

View File

@@ -14,21 +14,8 @@ import {
export const inputSchema = lazySchema(() => z.object({}).passthrough()) export const inputSchema = lazySchema(() => z.object({}).passthrough())
type InputSchema = ReturnType<typeof inputSchema> type InputSchema = ReturnType<typeof inputSchema>
// MCP tools can return either a plain string or an array of content blocks
// (text, images, etc.). The outputSchema must reflect both shapes so the model
// knows rich content is possible.
export const outputSchema = lazySchema(() => export const outputSchema = lazySchema(() =>
z.union([ z.string().describe('MCP tool execution result'),
z.string().describe('MCP tool execution result as text'),
z
.array(
z.object({
type: z.string(),
text: z.string().optional(),
}),
)
.describe('MCP tool execution result as content blocks'),
]),
) )
type OutputSchema = ReturnType<typeof outputSchema> type OutputSchema = ReturnType<typeof outputSchema>
@@ -78,19 +65,7 @@ export const MCPTool = buildTool({
renderToolUseProgressMessage, renderToolUseProgressMessage,
renderToolResultMessage, renderToolResultMessage,
isResultTruncated(output: Output): boolean { isResultTruncated(output: Output): boolean {
if (typeof output === 'string') { return isOutputLineTruncated(output)
return isOutputLineTruncated(output)
}
// Array of content blocks — check if any text block exceeds the display limit
if (Array.isArray(output)) {
return output.some(
block =>
block?.type === 'text' &&
typeof block.text === 'string' &&
isOutputLineTruncated(block.text),
)
}
return false
}, },
mapToolResultToToolResultBlockParam(content, toolUseID) { mapToolResultToToolResultBlockParam(content, toolUseID) {
return { return {

View File

@@ -1,29 +1,6 @@
import { describe, expect, test } from 'bun:test' import { describe, expect, test } from 'bun:test'
import type { Command } from '../../commands.js'
import { SkillTool } from './SkillTool.js' import { SkillTool } from './SkillTool.js'
import { renderToolUseMessage } from './UI.js'
function createPromptCommand(
name: string,
options: {
source?: 'builtin' | 'plugin' | 'mcp' | 'bundled'
loadedFrom?: Command['loadedFrom']
} = {},
): Command {
return {
type: 'prompt',
name,
description: `${name} description`,
progressMessage: `${name} progress`,
contentLength: 0,
source: options.source ?? 'builtin',
loadedFrom: options.loadedFrom,
async getPromptForCommand() {
return []
},
}
}
describe('SkillTool missing parameter handling', () => { describe('SkillTool missing parameter handling', () => {
test('missing skill stays required at the schema level', async () => { test('missing skill stays required at the schema level', async () => {
@@ -52,47 +29,3 @@ describe('SkillTool missing parameter handling', () => {
expect(parsed.success).toBe(true) expect(parsed.success).toBe(true)
}) })
}) })
describe('SkillTool renderToolUseMessage', () => {
test('plugin skills render correctly without plugin command metadata', () => {
const pluginSkillName = 'plugin:review-pr'
expect(
renderToolUseMessage(
{ skill: pluginSkillName },
{
commands: [],
},
),
).toBe(pluginSkillName)
expect(
renderToolUseMessage(
{ skill: pluginSkillName },
{
commands: [
createPromptCommand(pluginSkillName, {
source: 'plugin',
loadedFrom: 'plugin',
}),
],
},
),
).toBe(pluginSkillName)
})
test('legacy commands still render with a slash prefix when metadata is present', () => {
expect(
renderToolUseMessage(
{ skill: 'legacy-command' },
{
commands: [
createPromptCommand('legacy-command', {
loadedFrom: 'commands_DEPRECATED',
}),
],
},
),
).toBe('/legacy-command')
})
})

View File

@@ -54,10 +54,7 @@ export function renderToolUseMessage({
if (!skill) { if (!skill) {
return null; return null;
} }
// Only legacy /commands_DEPRECATED entries need the command lookup so we can // Look up the command to check if it came from the legacy /commands folder
// preserve the slash-prefixed display. Plugin skills already carry the
// invoked skill name in `skill`, so transcript/history rendering does not
// need plugin command metadata.
const command = commands?.find(c => c.name === skill); const command = commands?.find(c => c.name === skill);
const displayName = command?.loadedFrom === 'commands_DEPRECATED' ? `/${skill}` : skill; const displayName = command?.loadedFrom === 'commands_DEPRECATED' ? `/${skill}` : skill;
return displayName; return displayName;

View File

@@ -1,518 +0,0 @@
# Web Search Providers
OpenClaude supports multiple search backends through a provider adapter system.
## Supported Providers
| Provider | Env Var | Auth Header | Method |
|---|---|---|---|
| Custom API | `WEB_SEARCH_API` | Configurable | GET/POST |
| SearXNG | `WEB_PROVIDER=searxng` | — | GET |
| Google | `WEB_PROVIDER=google` | `Authorization: Bearer` | GET |
| Brave | `WEB_PROVIDER=brave` | `X-Subscription-Token` | GET |
| SerpAPI | `WEB_PROVIDER=serpapi` | `Authorization: Bearer` | GET |
| Firecrawl | `FIRECRAWL_API_KEY` | Internal | SDK |
| Tavily | `TAVILY_API_KEY` | `Authorization: Bearer` | POST |
| Exa | `EXA_API_KEY` | `x-api-key` | POST |
| You.com | `YOU_API_KEY` | `X-API-Key` | GET |
| Jina | `JINA_API_KEY` | `Authorization: Bearer` | GET |
| Bing | `BING_API_KEY` | `Ocp-Apim-Subscription-Key` | GET |
| Mojeek | `MOJEEK_API_KEY` | `Authorization: Bearer` | GET |
| Linkup | `LINKUP_API_KEY` | `Authorization: Bearer` | POST |
| DuckDuckGo | *(default)* | — | SDK |
## Quick Start
```bash
# Tavily (recommended for AI — fast, RAG-ready)
export TAVILY_API_KEY=tvly-your-key
# Exa (neural search, semantic queries)
export EXA_API_KEY=your-exa-key
# Brave (traditional web search, good coverage)
export WEB_PROVIDER=brave
export WEB_KEY=your-brave-key
# Bing
export BING_API_KEY=your-bing-key
# Self-hosted SearXNG (free, private)
export WEB_PROVIDER=searxng
export WEB_SEARCH_API=https://search.example.com/search
```
## Provider Selection Mode
`WEB_SEARCH_PROVIDER` controls fallback behavior:
| Mode | Behavior |
|---|---|
| `auto` (default) | Try all configured providers in order, fall through on failure |
| `tavily` | Tavily only — throws on failure |
| `exa` | Exa only — throws on failure |
| `custom` | Custom API only — throws on failure. **Not in the auto chain** — must be explicitly selected |
| `firecrawl` | Firecrawl only — throws on failure |
| `ddg` | DuckDuckGo only — throws on failure |
| `native` | Anthropic native / Codex only |
**Auto mode priority:** firecrawl → tavily → exa → you → jina → bing → mojeek → linkup → ddg
> **Note:** The `custom` provider is excluded from the `auto` chain. It is only used when `WEB_SEARCH_PROVIDER=custom` is explicitly set. This prevents the generic outbound provider from silently becoming the default backend.
```bash
# Fail loudly if Tavily is down (don't silently switch backends)
export WEB_SEARCH_PROVIDER=tavily
# Try everything, fall through gracefully
export WEB_SEARCH_PROVIDER=auto
```
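To make the fallback semantics concrete, here is an illustrative TypeScript sketch of the selection loop. It is not the shipped implementation; the `SearchProvider` shape mirrors the one shown in "Adding a Provider" below, and names such as `runSearch` and `AUTO_CHAIN` are hypothetical.
```typescript
// Illustrative sketch only, not the shipped implementation.
// SearchProvider mirrors the shape in "Adding a Provider" below;
// runSearch and AUTO_CHAIN are hypothetical names for this example.
type SearchHit = { title: string; url: string; description?: string }
type ProviderOutput = { hits: SearchHit[]; providerName: string; durationSeconds: number }
type SearchProvider = {
  name: string
  isConfigured(): boolean
  search(input: { query: string }): Promise<ProviderOutput>
}

// Auto-mode priority from above; note that `custom` is deliberately absent.
const AUTO_CHAIN = ['firecrawl', 'tavily', 'exa', 'you', 'jina', 'bing', 'mojeek', 'linkup', 'ddg']

async function runSearch(
  providers: Record<string, SearchProvider>,
  query: string,
  mode: string = process.env.WEB_SEARCH_PROVIDER ?? 'auto',
): Promise<ProviderOutput> {
  if (mode !== 'auto') {
    // Explicit mode: exactly one provider, and fail loudly if it cannot be used.
    const provider = providers[mode]
    if (!provider || !provider.isConfigured()) {
      throw new Error(`Web search provider "${mode}" is not configured`)
    }
    return provider.search({ query })
  }
  // Auto mode: walk the chain, skip unconfigured providers, fall through on failure.
  let lastError: unknown
  for (const name of AUTO_CHAIN) {
    const provider = providers[name]
    if (!provider || !provider.isConfigured()) continue
    try {
      return await provider.search({ query })
    } catch (error) {
      lastError = error
    }
  }
  throw new Error(`All configured web search providers failed: ${String(lastError)}`)
}
```
Explicit modes fail loudly by design, while auto mode only surfaces an error once every configured provider in the chain has failed.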
## Provider Request & Response Formats
### Tavily
```bash
export TAVILY_API_KEY=tvly-your-key
```
**Request:**
```
POST https://api.tavily.com/search
Authorization: Bearer tvly-your-key
Content-Type: application/json
{"query": "search terms", "max_results": 10, "include_answer": false}
```
**Response:**
```json
{
"results": [
{
"title": "Result Title",
"url": "https://example.com/page",
"content": "Full text snippet from the page...",
"score": 0.95
}
]
}
```
### Exa
```bash
export EXA_API_KEY=your-exa-key
```
**Request:**
```
POST https://api.exa.ai/search
x-api-key: your-exa-key
Content-Type: application/json
{"query": "search terms", "numResults": 10, "type": "auto"}
```
**Response:**
```json
{
"results": [
{
"title": "Result Title",
"url": "https://example.com/page",
"snippet": "A short summary of the page content...",
"score": 0.89
}
]
}
```
### You.com
```bash
export YOU_API_KEY=your-you-key
```
**Request:**
```
GET https://api.ydc-index.io/v1/search?query=search+terms
X-API-Key: your-you-key
```
**Response:**
```json
{
"results": {
"web": [
{
"title": "Result Title",
"url": "https://example.com/page",
"snippets": ["First snippet from the page...", "Second snippet..."],
"description": "Page description"
}
]
}
}
```
### Jina
```bash
export JINA_API_KEY=your-jina-key
```
**Request:**
```
GET https://s.jina.ai/?q=search+terms
Authorization: Bearer your-jina-key
Accept: application/json
```
**Response:**
```json
{
"data": [
{
"title": "Result Title",
"url": "https://example.com/page",
"description": "Snippet from the page..."
}
]
}
```
### Bing
```bash
export BING_API_KEY=your-bing-key
```
**Request:**
```
GET https://api.bing.microsoft.com/v7.0/search?q=search+terms&count=10
Ocp-Apim-Subscription-Key: your-bing-key
```
**Response:**
```json
{
"webPages": {
"value": [
{
"name": "Result Title",
"url": "https://example.com/page",
"snippet": "A short excerpt from the page...",
"displayUrl": "example.com/page"
}
]
}
}
```
### Mojeek
```bash
export MOJEEK_API_KEY=your-mojeek-key
```
**Request:**
```
GET https://www.mojeek.com/search?q=search+terms&fmt=json
Authorization: Bearer your-mojeek-key
```
**Response:**
```json
{
"response": {
"results": [
{
"title": "Result Title",
"url": "https://example.com/page",
"snippet": "Excerpt from the page..."
}
]
}
}
```
### Linkup
```bash
export LINKUP_API_KEY=your-linkup-key
```
**Request:**
```
POST https://api.linkup.so/v1/search
Authorization: Bearer your-linkup-key
Content-Type: application/json
{"q": "search terms", "search_type": "standard"}
```
**Response:**
```json
{
"results": [
{
"name": "Result Title",
"url": "https://example.com/page",
"snippet": "A short description of the result..."
}
]
}
```
### SearXNG (Built-in Preset)
```bash
export WEB_PROVIDER=searxng
export WEB_SEARCH_API=https://search.example.com/search
```
**Request:**
```
GET https://search.example.com/search?q=search+terms
```
**Response:**
```json
{
"results": [
{
"title": "Result Title",
"url": "https://example.com/page",
"content": "Snippet from the page...",
"engine": "google"
}
]
}
```
### Google Custom Search (Built-in Preset)
```bash
export WEB_PROVIDER=google
export WEB_KEY=your-google-api-key
```
**Request:**
```
GET https://www.googleapis.com/customsearch/v1?q=search+terms
Authorization: Bearer your-google-api-key
```
**Response:**
```json
{
"items": [
{
"title": "Result Title",
"link": "https://example.com/page",
"snippet": "A short excerpt...",
"displayLink": "example.com"
}
]
}
```
### Brave (Built-in Preset)
```bash
export WEB_PROVIDER=brave
export WEB_KEY=your-brave-key
```
**Request:**
```
GET https://api.search.brave.com/res/v1/web/search?q=search+terms
X-Subscription-Token: your-brave-key
```
**Response:**
```json
{
"web": {
"results": [
{
"title": "Result Title",
"url": "https://example.com/page",
"description": "Page description..."
}
]
}
}
```
### SerpAPI (Built-in Preset)
```bash
export WEB_PROVIDER=serpapi
export WEB_KEY=your-serpapi-key
```
**Request:**
```
GET https://serpapi.com/search.json?q=search+terms
Authorization: Bearer your-serpapi-key
```
**Response:**
```json
{
"organic_results": [
{
"title": "Result Title",
"link": "https://example.com/page",
"snippet": "A short excerpt...",
"displayed_link": "example.com"
}
]
}
```
### DuckDuckGo (Default Fallback)
No configuration needed. Uses the `duck-duck-scrape` npm package.
```bash
# Set as explicit-only backend
export WEB_SEARCH_PROVIDER=ddg
```
---
## Custom API Configuration
### Standard GET
```
GET https://api.example.com/search?q=hello
```
```bash
export WEB_SEARCH_API=https://api.example.com/search
export WEB_QUERY_PARAM=q
```
### Query in URL Path
```
GET https://api.example.com/v2/search/hello
```
```bash
export WEB_URL_TEMPLATE=https://api.example.com/v2/search/{query}
```
### POST with Custom Body
```
POST https://api.example.com/v1/query
Content-Type: application/json
{"input": {"text": "hello"}}
```
```bash
export WEB_SEARCH_API=https://api.example.com/v1/query
export WEB_METHOD=POST
export WEB_BODY_TEMPLATE='{"input":{"text":"{query}"}}'
```
### Extra Static Params
```bash
export WEB_PARAMS='{"lang":"en","count":"10"}'
```
## Auth
API keys are sent in HTTP headers, **never** in query strings.
```bash
# Default: Authorization: Bearer <key>
export WEB_KEY=your-key
# Custom header
export WEB_AUTH_HEADER=X-Api-Key
export WEB_AUTH_SCHEME=""
# Extra headers
export WEB_HEADERS="X-Tenant: acme; Accept: application/json"
```
## Response Parsing
The tool auto-detects many response formats:
```jsonc
{ "results": [{ "title": "...", "url": "..." }] } // flat array
{ "items": [{ "title": "...", "link": "..." }] } // Google-style
{ "results": { "engine": [{ "title": "...", "url": "..." }] } } // nested map
[{ "title": "...", "url": "..." }] // bare array
```
Field name aliases: `title`/`headline`/`name`, `url`/`link`/`href`, `description`/`snippet`/`content`
For deeply nested responses:
```bash
export WEB_JSON_PATH=response.payload.results
```
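A rough sketch of that auto-detection, assuming a hypothetical `extractHits` helper (the real parser recognizes more shapes and edge cases than shown here):
```typescript
// Illustrative normalization sketch; extractHits is a hypothetical helper.
type SearchHit = { title: string; url: string; description: string }

function firstString(row: Record<string, unknown>, keys: string[]): string {
  for (const key of keys) {
    const value = row[key]
    if (typeof value === 'string' && value) return value
  }
  return ''
}

function extractHits(payload: unknown, jsonPath = process.env.WEB_JSON_PATH): SearchHit[] {
  // Optional explicit path for deeply nested responses, e.g. "response.payload.results".
  let node: unknown = payload
  if (jsonPath) {
    for (const segment of jsonPath.split('.')) {
      node = (node as Record<string, unknown> | undefined)?.[segment]
    }
  }
  // Auto-detect: bare array, { results: [...] }, { items: [...] }, or a nested map of arrays.
  const candidates = Array.isArray(node)
    ? node
    : (node as Record<string, unknown> | undefined)?.results ??
      (node as Record<string, unknown> | undefined)?.items
  const rows = Array.isArray(candidates)
    ? candidates
    : Object.values((candidates ?? {}) as Record<string, unknown>).find(Array.isArray) ?? []
  // Map aliased field names onto a single normalized shape.
  return (rows as Record<string, unknown>[]).map(row => ({
    title: firstString(row, ['title', 'headline', 'name']),
    url: firstString(row, ['url', 'link', 'href']),
    description: firstString(row, ['description', 'snippet', 'content']),
  }))
}
```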
## Retry
Failed requests (network errors, 5xx) are retried once after 500ms. Client errors (4xx) are not retried. Custom requests have a default 120s timeout.
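A minimal sketch of that policy (hypothetical helper, not the shipped code):
```typescript
// Illustrative retry-once policy for custom provider requests (hypothetical helper).
const RETRY_DELAY_MS = 500
const DEFAULT_TIMEOUT_MS = 120_000

const sleep = (ms: number) => new Promise<void>(resolve => setTimeout(resolve, ms))

async function fetchWithRetry(url: string, init: RequestInit = {}): Promise<Response> {
  const attempt = () => fetch(url, { ...init, signal: AbortSignal.timeout(DEFAULT_TIMEOUT_MS) })
  let res: Response
  try {
    res = await attempt()
  } catch {
    // Network error: retry once after a short delay.
    await sleep(RETRY_DELAY_MS)
    return attempt()
  }
  if (res.status >= 500) {
    // Server error (5xx): also retry once; 4xx client errors are returned as-is.
    await sleep(RETRY_DELAY_MS)
    return attempt()
  }
  return res
}
```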
## Custom Provider Security Guardrails
The custom provider enforces the following guardrails by default:
| Guardrail | Default | Override |
|-----------|---------|----------|
| HTTPS-only | ✅ | `WEB_CUSTOM_ALLOW_HTTP=true` |
| Block private IPs / localhost | ✅ | `WEB_CUSTOM_ALLOW_PRIVATE=true` |
| Header allowlist | ✅ | `WEB_CUSTOM_ALLOW_ARBITRARY_HEADERS=true` |
| Max POST body | 300 KB | `WEB_CUSTOM_MAX_BODY_KB=<kb>` |
| Request timeout | 120s | `WEB_CUSTOM_TIMEOUT_SEC=<seconds>` |
| Audit log (one-time warning) | ✅ | — |
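In practice this means the custom provider validates the target URL and request headers before anything goes out. A minimal sketch of those checks, using the override variables from the table above (the helper name is hypothetical and the private-range detection is simplified):
```typescript
// Illustrative guardrail checks for the custom provider (hypothetical helper;
// the private-range detection here is simplified).
const ALLOWED_HEADERS = new Set([
  'accept', 'accept-encoding', 'accept-language', 'authorization', 'cache-control',
  'content-type', 'if-modified-since', 'if-none-match', 'ocp-apim-subscription-key',
  'user-agent', 'x-api-key', 'x-subscription-token', 'x-tenant-id',
])

function isPrivateHost(hostname: string): boolean {
  return (
    hostname === 'localhost' ||
    hostname === '127.0.0.1' ||
    /^10\./.test(hostname) ||
    /^192\.168\./.test(hostname) ||
    /^172\.(1[6-9]|2\d|3[01])\./.test(hostname)
  )
}

function assertCustomRequestAllowed(rawUrl: string, headers: Record<string, string>): void {
  const url = new URL(rawUrl)
  if (url.protocol !== 'https:' && process.env.WEB_CUSTOM_ALLOW_HTTP !== 'true') {
    throw new Error('Custom search endpoint must use HTTPS (set WEB_CUSTOM_ALLOW_HTTP=true to override)')
  }
  if (isPrivateHost(url.hostname) && process.env.WEB_CUSTOM_ALLOW_PRIVATE !== 'true') {
    throw new Error('Refusing to call a private/localhost address (set WEB_CUSTOM_ALLOW_PRIVATE=true to override)')
  }
  if (process.env.WEB_CUSTOM_ALLOW_ARBITRARY_HEADERS !== 'true') {
    for (const name of Object.keys(headers)) {
      if (!ALLOWED_HEADERS.has(name.toLowerCase())) {
        throw new Error(`Header "${name}" is not on the custom provider allowlist`)
      }
    }
  }
}
```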
### Self-hosted SearXNG example
```bash
export WEB_PROVIDER=searxng
export WEB_SEARCH_API=https://search.mydomain.com/search
export WEB_CUSTOM_ALLOW_PRIVATE=true # needed if SearXNG is on a private IP
```
### Header allowlist
By default only these headers are permitted:
`accept`, `accept-encoding`, `accept-language`, `authorization`, `cache-control`, `content-type`, `if-modified-since`, `if-none-match`, `ocp-apim-subscription-key`, `user-agent`, `x-api-key`, `x-subscription-token`, `x-tenant-id`
## Adding a Provider
1. Create `providers/myprovider.ts`:
```typescript
import type { SearchInput, SearchProvider } from './types.js'
import { applyDomainFilters, type ProviderOutput } from './types.js'
export const myProvider: SearchProvider = {
name: 'myprovider',
isConfigured() { return Boolean(process.env.MYPROVIDER_API_KEY) },
async search(input: SearchInput): Promise<ProviderOutput> {
const start = performance.now()
// ... call API, map to SearchHit[] ...
return {
hits: applyDomainFilters(hits, input),
providerName: 'myprovider',
durationSeconds: (performance.now() - start) / 1000,
}
},
}
```
2. Register in `providers/index.ts` — add import and push to `ALL_PROVIDERS`.
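For step 2, the registration can be as small as the following sketch (assuming `ALL_PROVIDERS` is a plain exported array; the real file layout may differ):
```typescript
// providers/index.ts (sketch; the actual file layout may differ)
import type { SearchProvider } from './types.js'
import { myProvider } from './myprovider.js'
// ...imports for the existing providers...

export const ALL_PROVIDERS: SearchProvider[] = [
  // ...existing providers, in auto-chain priority order...
  myProvider,
]
```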

Some files were not shown because too many files have changed in this diff.