Compare commits

...

26 Commits

Author SHA1 Message Date
root
2f4a06dd40 fix(theme): remove stale React Compiler memo wrappers from theme hooks
Rebase on current main (includes #589 reconciler fix).

The React Compiler memo caches (_c) in useTheme() and usePreviewTheme()
use referential equality checks on destructured context values. These
caches can return stale references when the ThemeProvider's useMemo
recreates the context value object but the individual property
references (setThemeSetting, setPreviewTheme, etc.) compare equal —
the memo short-circuits and returns a cached tuple/object that still
holds the old closure captures.

This is a distinct bug from #589 (which fixed the ink reconciler's
commitUpdate path for host prop updates). #589 ensures that when
React _does_ re-render a component with new props, those props actually
reach the DOM node. But the memo wrappers here prevent React from
_even seeing_ the new context value in the first place — the hook
returns the stale cached result.

Removing the memo wrappers ensures useTheme() and usePreviewTheme()
always read the current context value, eliminating the stale-reference
path entirely.
2026-04-12 07:39:39 +00:00
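A minimal sketch of the memo-free hook shape the commit above describes (context name and fields are assumed here, not the repo's actual theme module):

```ts
import { createContext, useContext } from 'react'

// Assumed minimal shape; the real context value carries more setters.
interface ThemeContextValue {
  themeSetting: string
  setThemeSetting: (theme: string) => void
}

const ThemeContext = createContext<ThemeContextValue | null>(null)

export function useTheme(): ThemeContextValue {
  const ctx = useContext(ThemeContext)
  if (!ctx) throw new Error('useTheme must be used within a ThemeProvider')
  // Plain context read on every render: no compiler memo cache can hand
  // back a stale tuple when the provider recreates its value object.
  return ctx
}
```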
lunamonke
4c50977f3c Decouple and fix mistral (#595)
* decouple and fix mistral

* fix wrong variable for currentBaseUrl and buildAPIProviderProperties
2026-04-12 15:26:14 +08:00
euxaristia
b126e38b1a fix: display selected model in startup screen instead of hardcoded sonnet 4.6 (#587) 2026-04-11 21:20:00 +08:00
Alina Lisova
6e94dd9136 fix(ink): restore host prop updates in React 19 reconciler (#589)
React 19's react-reconciler@0.33 mutation path calls commitUpdate with
(instance, type, oldProps, newProps, fiber), but our Ink host config
still expected an updatePayload from prepareUpdate. That left mounted
ink-* nodes with stale onKeyDown, tabIndex, and textStyles, making menu
navigation and highlights appear stuck until remount.

Diff old/new props directly inside commitUpdate and add regression tests
covering in-place updates for ink-box handlers/attributes and ink-text
styles.
2026-04-11 21:19:39 +08:00
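A sketch of the React 19 mutation-path hook described above, assuming a minimal Ink host-node interface; the real host config special-cases ink-box handlers and ink-text styles:

```ts
interface InkNode {
  setAttribute(name: string, value: unknown): void
}

function commitUpdate(
  instance: InkNode,
  _type: string,
  oldProps: Record<string, unknown>,
  newProps: Record<string, unknown>,
): void {
  // react-reconciler@0.33 no longer supplies an updatePayload from
  // prepareUpdate, so diff old vs. new props directly here.
  const keys = new Set([...Object.keys(oldProps), ...Object.keys(newProps)])
  for (const key of keys) {
    if (oldProps[key] !== newProps[key]) {
      instance.setAttribute(key, newProps[key])
    }
  }
}
```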
FluxLuFFy
91e4cfb15b fix: WebSearch providers + MCPTool bugs (#593)
* fix: WebSearch providers + MCPTool bugs

WebSearchTool:
- custom.ts: fix buildAuthHeadersForPreset WEB_AUTH_HEADER opt-out
- custom.ts: fix WEB_AUTH_SCHEME empty string handling
- custom.ts: fix walkJsonPath null safety for jsonPath parsing
- duckduckgo.ts: use SafeSearchType enum instead of raw 0
- mojeek.ts: always send Accept: application/json header
- README: fix timeout documentation (15s -> 120s to match code)
- custom.test.ts: add tests for auth header behavior

MCPTool:
- MCPTool.ts: fix outputSchema to accept ContentBlockParam[] (not just string)
- MCPTool.ts: fix isResultTruncated for array output (iterates text blocks)

* fix: address PR #593 review feedback

1. Export buildAuthHeadersForPreset and add direct tests for:
   - WEB_AUTH_HEADER="" explicit opt-out behavior
   - WEB_AUTH_SCHEME="" stripping scheme prefix
   - Preset defaults (authHeader + authScheme)
   - No WEB_KEY returns empty headers

2. Add duckduckgo.test.ts verifying SafeSearchType.STRICT === 0,
   confirming the enum change is semantically identical to the
   previous raw value.

Addresses review by @Vasanthdev2004 at
pullrequestreview-4093533095

---------

Co-authored-by: FluxLuFFy <flux@openclaude.dev>
Co-authored-by: Fix Bot <fix@openclaude.local>
2026-04-11 21:07:20 +08:00
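A sketch of the auth-header contract the tests above describe (env variable names come from the commit; the preset shape and internals are assumed):

```ts
interface PresetAuth {
  authHeader: string // preset default, e.g. 'Authorization'
  authScheme: string // preset default, e.g. 'Bearer'
}

export function buildAuthHeadersForPreset(
  preset: PresetAuth,
  env: NodeJS.ProcessEnv = process.env,
): Record<string, string> {
  const key = env.WEB_KEY
  if (!key) return {} // no WEB_KEY -> empty headers

  // WEB_AUTH_HEADER="" is an explicit opt-out, distinct from unset.
  const header = env.WEB_AUTH_HEADER ?? preset.authHeader
  if (header === '') return {}

  // WEB_AUTH_SCHEME="" strips the scheme prefix entirely.
  const scheme = env.WEB_AUTH_SCHEME ?? preset.authScheme
  return { [header]: scheme === '' ? key : `${scheme} ${key}` }
}
```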
Zartris
f4ac709fa6 fix: report cache reads in streaming and correct cost calculation (#577)
* fix: report cache reads in streaming and correct cost calculation

Fix two bugs in how the OpenAI-to-Anthropic shim handles cached tokens:

1. codexShim: streaming message_delta missing cache_read_input_tokens
   The codexStreamToAnthropic() function builds the final message_delta
   usage object inline (not through makeUsage()), and only included
   input_tokens and output_tokens. cache_read_input_tokens was always 0,
   so /cost never showed cache reads for Responses API models (GPT-5+).

   Also fix makeUsage() to read input_tokens_details.cached_tokens and
   prompt_tokens_details.cached_tokens for the non-streaming path.

2. Both shims: cost double-counting from convention mismatch
   OpenAI includes cached tokens in input_tokens/prompt_tokens (i.e.,
   input_tokens = uncached + cached). Anthropic treats input_tokens as
   uncached only. The cost formula was:
     cost = input_tokens * inputRate + cache_read * cacheRate
   This double-counts cached tokens. Fix by subtracting cached from
   input during the conversion:
     input_tokens = prompt_tokens - cached_tokens

   In practice this was inflating reported costs by ~2x for sessions
   with high cache hit rates (which is most sessions, since Copilot
   auto-caches server-side).

Fixes #515

* fix: omit zero cache read/write fields from /cost output

Only show "cache read" and "cache write" in /cost per-model usage when
the value is > 0. Providers like GitHub Copilot never report
cache_creation_input_tokens (the server manages its own cache), so
showing "0 cache write" on every line is misleading — it implies caching
is not working when it actually is.

Before:
  claude-haiku:  2.6k input, 151 output, 39.8k cache read, 0 cache write ($0.04)

After:
  claude-haiku:  2.6k input, 151 output, 39.8k cache read ($0.04)

---------

Co-authored-by: Zartris <14197299+Zartris@users.noreply.github.com>
2026-04-10 23:40:42 +08:00
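The convention fix above, as a sketch (field names follow OpenAI's usage object; the helper name is illustrative):

```ts
interface OpenAIUsage {
  prompt_tokens: number
  completion_tokens: number
  prompt_tokens_details?: { cached_tokens?: number }
}

function toAnthropicUsage(u: OpenAIUsage) {
  const cached = u.prompt_tokens_details?.cached_tokens ?? 0
  return {
    // OpenAI: prompt_tokens = uncached + cached. Anthropic: input_tokens
    // is uncached only, so subtract to avoid double-counting in
    //   cost = input_tokens * inputRate + cache_read * cacheRate
    input_tokens: u.prompt_tokens - cached,
    output_tokens: u.completion_tokens,
    cache_read_input_tokens: cached,
  }
}
```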
Zartris
8aaa4f22ac fix: add store:false to Chat Completions and /responses fallback (#578)
Set store: false in the request body for both the Chat Completions path
and the /responses fallback path in openaiShim.ts.

The codexShim (Responses API primary path) already sets store: false.
The Chat Completions path and the /responses fallback in openaiShim were
missing it.

store: false tells the API provider not to persist conversation data for
model training, logging, or other non-operational purposes. This is a
privacy measure — it does not affect caching or functionality.

Note: Whether third-party proxies (e.g. GitHub Copilot) honour this
parameter is provider-dependent, but setting it is a reasonable default
for user privacy.

Co-authored-by: Zartris <14197299+Zartris@users.noreply.github.com>
2026-04-10 23:40:09 +08:00
Zartris
a7f5982f64 fix: add GitHub Copilot model context windows and output limits (#576)
Add context_window and max_output_tokens entries for all models available
through the GitHub Copilot proxy (Claude, GPT, Gemini, Grok), sourced from
https://api.githubcopilot.com/models.

Models are namespaced as "github:copilot:<model>" to avoid collisions with
the same model names served by other providers (which may have different
limits). A new lookupByKey() helper and qualified-key lookup in
lookupByModel() ensures the correct limits are selected when
OPENAI_MODEL=github:copilot.

Without this, Claude models on Copilot would use default context/output
limits that may not match the proxy's actual constraints, causing 400 errors
like "max_tokens is too large".

Related: #515

Co-authored-by: Zartris <14197299+Zartris@users.noreply.github.com>
2026-04-10 22:00:26 +08:00
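A sketch of the qualified-key lookup described above (the table entry and limits here are placeholders, not the values sourced from the Copilot /models endpoint):

```ts
interface ModelLimits {
  context_window: number
  max_output_tokens: number
}

// Placeholder entry; the real table is sourced from api.githubcopilot.com/models.
const LIMITS: Record<string, ModelLimits> = {
  'github:copilot:claude-sonnet-4': { context_window: 128_000, max_output_tokens: 16_000 },
}

function lookupByKey(key: string): ModelLimits | null {
  return LIMITS[key] ?? null
}

function lookupByModel(model: string, providerPrefix?: string): ModelLimits | null {
  // Namespaced key first, so the Copilot proxy's limits win over the
  // same model name served by another provider.
  if (providerPrefix) {
    const qualified = lookupByKey(`${providerPrefix}:${model}`)
    if (qualified) return qualified
  }
  return lookupByKey(model)
}
```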
Juan Camilo Auriti
cb8f8b7ac2 fix: let saved provider profiles win on restart (#513)
Treat profile-managed env as restart state rather than explicit user intent so saved OpenAI-compatible profiles can replace stale Ollama values on startup and persist correctly across restarts.

Co-authored-by: Claude Opus 4.6 <noreply@openclaude.dev>
2026-04-10 21:58:33 +08:00
ibaaaaal
07621a6f8d fix: scrub canonical Anthropic headers from 3P shim requests (#499)
* Stop canonical Anthropic headers from leaking into 3P shim requests

The remaining blocker from PR #268 was that canonical Anthropic headers such as
`anthropic-version` and `anthropic-beta` could still ride through supported 3P
paths even after the earlier x-anthropic/x-claude scrubber work. This tightens
header filtering inside the shim itself so direct defaultHeaders, env-driven
client setup, providerOverride routing, and per-request header injection all
share the same scrubber.

Constraint: Preserve non-Anthropic custom headers and provider auth while stripping only Anthropic/OpenClaude-internal headers from 3P requests
Rejected: Rely on client.ts filtering alone | direct shim construction and per-request headers would still leave gaps
Confidence: high
Scope-risk: narrow
Reversibility: clean
Directive: Keep header scrubbing centralized in the shim so new call paths do not reopen 3P leakage bugs
Tested: bun test src/services/api/openaiShim.test.ts src/services/api/client.test.ts src/utils/context.test.ts
Tested: bun run test:provider
Tested: bun run build && node dist/cli.mjs --version
Not-tested: bun run typecheck (repository baseline currently fails in many unrelated files)

* Keep OpenAI client tests from restoring undefined env as strings

The new header-leak regression tests in client.test.ts restored environment
variables via direct assignment, which can leave literal "undefined" strings in
process.env when the original value was unset. This switches the teardown over
to the same restore helper pattern already used in openaiShim.test.ts.

Constraint: Keep the fix limited to test hygiene without altering runtime behavior
Rejected: Restore only the two env vars Copilot called out | using one helper for all test env restores is simpler and less error-prone
Confidence: high
Scope-risk: narrow
Reversibility: clean
Directive: Use restore helpers for env teardown in tests so unset values stay deleted instead of becoming the string "undefined"
Tested: bun test src/services/api/client.test.ts src/services/api/openaiShim.test.ts src/utils/context.test.ts
Not-tested: Full provider suite (unchanged runtime path)

* Prevent GitHub Codex requests from forwarding unsanitized Anthropic headers

A base-sync with upstream exposed a separate GitHub+Codex transport branch
that still merged per-request headers raw before adding Copilot headers.
This keeps the filter aligned across Codex-family paths and adds explicit
regression tests for GitHub Codex routing, including providerOverride.

Constraint: Must not push or modify GitHub state while validating the reviewer concern
Rejected: Leave the GitHub Codex path unchanged | runtime repro showed anthropic-* headers still leaked after the upstream sync
Confidence: high
Scope-risk: narrow
Directive: Keep header scrubbing consistent across every Codex-family transport branch when provider routing changes
Tested: bun test src/services/api/openaiShim.test.ts
Tested: bun test src/services/api/client.test.ts src/services/api/codexShim.test.ts src/services/api/providerConfig.github.test.ts
Tested: bun run build
Not-tested: Full repository test suite
2026-04-10 21:56:40 +08:00
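A sketch of the centralized scrubber idea from the commit above (the prefix list is assumed; the real filter also covers OpenClaude-internal headers):

```ts
const BLOCKED_PREFIXES = ['anthropic-', 'x-anthropic-', 'x-claude-']

export function scrubAnthropicHeaders(
  headers: Record<string, string>,
): Record<string, string> {
  const out: Record<string, string> = {}
  for (const [name, value] of Object.entries(headers)) {
    const lower = name.toLowerCase()
    // Drops anthropic-version, anthropic-beta, x-claude-*, etc., while
    // leaving provider auth and user-defined custom headers intact.
    if (BLOCKED_PREFIXES.some(prefix => lower.startsWith(prefix))) continue
    out[name] = value
  }
  return out
}
```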
Anandan
692471850f fix: update theme preview on focus change (#562)
Treat default select focus as initial state so /theme and first-run previews follow keyboard navigation again.

Co-authored-by: anandh8x <test@example.com>
2026-04-10 21:55:15 +08:00
Anandan
68c296833d fix: restore Ollama auto-detect in first-run setup (#561)
Co-authored-by: anandh8x <test@example.com>
2026-04-10 21:53:30 +08:00
Zartris
9ccaa7a675 feat: add /cache-probe diagnostic command (#580)
Add a /cache-probe slash command for debugging prompt caching behaviour
on OpenAI-compatible providers (GitHub Copilot, OpenAI direct).

The command sends two identical API requests in sequence and compares the
raw server response usage stats, showing:
- Input/output token counts
- Cache read tokens (from prompt_tokens_details or input_tokens_details)
- Latency for each request
- Cache hit rate percentage

Usage:
  /cache-probe                    # test default model
  /cache-probe claude-sonnet-4    # test specific model
  /cache-probe gpt-5.4 --no-key  # test without prompt_cache_key

The --no-key flag omits prompt_cache_key/prompt_cache_retention/store to
test whether the server does content-based auto-caching (it does on
GitHub Copilot).

This is a debugging/diagnostic tool, not intended for regular use. It was
instrumental in discovering that:
1. Copilot auto-caches server-side based on content hash
2. prompt_cache_key is ignored by the proxy
3. The streaming path was not reporting cached tokens

Only enabled when the provider is OpenAI or GitHub (not for firstParty
Anthropic which has different caching semantics).

Related: #515

Co-authored-by: Zartris <14197299+Zartris@users.noreply.github.com>
2026-04-10 21:34:38 +08:00
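The probe's core idea as a self-contained sketch; endpoint handling and result shapes are simplified from the implementation shown in the diff further down this page:

```ts
async function probeOnce(url: string, headers: Record<string, string>, body: unknown) {
  const started = Date.now()
  const res = await fetch(url, {
    method: 'POST',
    headers: { 'Content-Type': 'application/json', ...headers },
    body: JSON.stringify(body),
  })
  const json = (await res.json()) as {
    usage?: { prompt_tokens?: number; prompt_tokens_details?: { cached_tokens?: number } }
  }
  return { elapsedMs: Date.now() - started, usage: json.usage }
}

async function probeCache(url: string, headers: Record<string, string>, body: unknown) {
  const first = await probeOnce(url, headers, body)
  await new Promise(resolve => setTimeout(resolve, 3000)) // let the server register the cache entry
  const second = await probeOnce(url, headers, body) // identical request
  const cached = second.usage?.prompt_tokens_details?.cached_tokens ?? 0
  const prompt = second.usage?.prompt_tokens ?? 0
  return { first, second, cacheHitRate: prompt > 0 ? cached / prompt : 0 }
}
```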
Kevin Codex
598651f423 fix: rebrand prompt identity to openclaude (#496)
* fix: rebrand prompt identity to openclaude

* fix prompt branding

* fix: align prompt branding with config compatibility
2026-04-10 01:20:05 +08:00
KRATOS
c385047abb feat: add auto-fix service — auto-lint and test after AI file edits (#508)
* feat: add AutoFix config schema and reader module

Implements AutoFixConfigSchema (Zod v4) with validation for lint/test
commands, maxRetries (0-10, default 3), and timeout (1000-300000ms,
default 30000). Adds getAutoFixConfig helper that returns null for
disabled or invalid configs. All 9 unit tests pass.

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>

* feat: add autoFix runner with lint/test command execution

Implements AutoFixRunner (Task 2) - executes lint and test shell commands
sequentially, short-circuits on lint failure, handles timeouts, and
produces structured AutoFixResult with AI-friendly error summaries.

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>

* feat: add autoFix field to SettingsSchema with integration tests

Integrates AutoFixConfigSchema into SettingsSchema so autoFix settings
are validated at the settings layer. Adds two integration tests verifying
that valid configs are accepted and invalid configs (enabled with no
commands) are rejected.

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>

* feat: add autoFix hook integration helpers (Task 4)

Implements shouldRunAutoFix and buildAutoFixContext functions used by
the PostToolUse hook to determine when to run auto-fix and format
errors as AI-readable context for injection.

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>

* feat: wire autoFix into PostToolUse hook flow (Task 5)

Add auto-fix lint/test check after existing PostToolUse hooks in
runPostToolUseHooks. When autoFix is configured in settings, runs
lint/test commands after file_edit/file_write tools and yields
errors as hook_additional_context for the model to act on.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* feat: add /auto-fix slash command

Adds the /auto-fix prompt command that helps users configure autoFix settings
(lint/test commands, maxRetries, timeout) in .claude/settings.json.

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>

* fix: remove unused imports in autoFixRunner test

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* fix: address review feedback — enforce maxRetries, wire abort signal, use cross-platform shell

1. Enforce maxRetries: track auto-fix attempts per query chain in toolHooks.ts
   and stop feeding errors back after the configured limit is reached.

2. Wire abort signal to subprocess: subscribe to AbortController signal in
   runCommand() and kill the process tree on abort. Uses detached process
   groups on Unix to ensure child processes are also terminated.

3. Replace hardcoded bash with shell:true: use Node's cross-platform shell
   resolution instead of spawn('bash', ['-c', ...]) so auto-fix commands
   work on Windows and non-bash environments.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

---------

Co-authored-by: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-04-09 21:18:57 +08:00
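A sketch of the described config schema in Zod (field bounds come from the first commit message above; exact field names beyond those are assumed):

```ts
import { z } from 'zod'

export const AutoFixConfigSchema = z.object({
  enabled: z.boolean().default(false),
  lint: z.string().optional(), // shell command, e.g. "eslint . --fix"
  test: z.string().optional(), // shell command, e.g. "bun test"
  maxRetries: z.number().int().min(0).max(10).default(3),
  timeout: z.number().int().min(1000).max(300000).default(30000), // ms
})

export type AutoFixConfig = z.infer<typeof AutoFixConfigSchema>
```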
Kevin Codex
42b121bd0d Fix/openclaude diagnostics settings (#483)
* fix: use openclaude paths in diagnostics and settings

* fix: strip leaked reasoning from assistant output

* fix: preserve legacy claude config compatibility

* fix: tighten path and reasoning compatibility

* fix: buffer streamed reasoning leak preambles

* test: cover openclaude migration and reasoning fixes

* test: isolate execFileNoThrow from cross-file mocks
2026-04-09 20:42:51 +08:00
FluxLuFFy
32fbd0c7b4 fix: custom web search — WEB_URL_TEMPLATE not recognized, timeout too short, silent native fallback (#537)
* fix: custom web search — WEB_URL_TEMPLATE not recognized, timeout too short, silent native fallback

1. custom.ts: Add WEB_URL_TEMPLATE to isConfigured() so the custom provider
   is recognized when configured via URL template alone.

2. custom.ts: Bump DEFAULT_TIMEOUT_SECONDS from 15s to 120s.
   Self-hosted search APIs (SearXNG, internal) commonly need 30-90s.

3. WebSearchTool.ts: When an explicit adapter is selected via
   WEB_SEARCH_PROVIDER=custom, do not silently fall through to the
   native Anthropic path on adapter errors or 0-hit results.
   - 0 hits: return directly (no fallback)
   - Error: throw the real error (no fallback)
   - Auto mode: existing fallback behavior preserved

* fix: tighten auto-mode adapter fallback — only swallow transient errors

Address review feedback: in auto mode, only fall through to native on
transient errors (network failure, timeout, HTTP 5xx). Config and
guardrail errors (SSRF, HTTPS, bad URL, header allowlist, etc.) now
surface properly instead of being silently swallowed.

---------

Co-authored-by: FluxLuFFy <fluxluffy@users.noreply.github.com>
2026-04-09 20:41:58 +08:00
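A sketch of the transient-vs-config split from the review fix above (error shapes assumed; the real classifier inspects the adapter's own error types):

```ts
function isTransientSearchError(err: unknown): boolean {
  const e = err as { status?: number; name?: string }
  // HTTP 5xx from the search backend: transient, fall through in auto mode.
  if (typeof e?.status === 'number') return e.status >= 500 && e.status < 600
  // Request timeout: transient.
  if (e?.name === 'TimeoutError') return true
  // fetch reports network failure as a TypeError: transient.
  if (err instanceof TypeError) return true
  // Everything else (SSRF guard, HTTPS requirement, bad URL, header
  // allowlist) is a config/guardrail error and must surface to the user.
  return false
}
```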
sooth
e30ad17ae0 fix(tui): restore prompt rendering on startup (#498)
* fix(tui): restore prompt rendering on startup

* test(tui): document render-time command split

* fix(tui): reduce ghostty prompt repaint scope
2026-04-09 20:40:06 +08:00
Kevin Codex
c328fdf9e2 feat: add wiki mvp commands (#532) 2026-04-09 14:54:38 +08:00
FluxLuFFy
4ad6bc50c1 refactor: provider adapter system + 7 new search providers (bug-fixed) (#512)
* refactor: provider adapter system + 7 new search providers

Architecture:
- Each search backend is a small adapter implementing SearchProvider
- 12 providers: custom, tavily, exa, you, jina, bing, mojeek, linkup, firecrawl, duckduckgo + native
- WEB_SEARCH_PROVIDER controls selection: auto (fallback chain) or specific provider
- Auth always in headers, never in query strings

Bug fixes from review feedback:
- Fix applyDomainFilters catch block: keep hits with malformed URLs on blocked_domains
  (can't confirm blocked), drop on allowed_domains (can't confirm allowed)
- Add safeHostname() helper: safely extract hostname from URLs without throwing
- Replace unsafe new URL(r.url).hostname in 7 providers with safeHostname()
- Remove dead code: buildAllHeaders, buildAuthHeaders, parseExtraHeaders from types.ts
- Fix WEB_PARMS typo: consistently use WEB_QUERY_PARAM everywhere
- AbortSignal forwarded to fetch() in all 12 providers
- DuckDuckGo: wrap dynamic import in try/catch for graceful error
- Exa: remove double domain filtering (server-side already)
- runSearch(): aggregate all provider errors instead of throwing only the last one
- Retry logic: check numeric status code directly, retry 5xx/network, skip 4xx

Test coverage (44 tests, all passing):
- types.test.ts: safeHostname, normalizeHit, applyDomainFilters (20 tests)
- index.test.ts: getProviderMode, getProviderChain, getAvailableProviders (13 tests)
- custom.test.ts: extractHits flexible response parsing (11 tests)

Co-authored-by: FluxLuFFy <195792511+FluxLuFFy@users.noreply.github.com>

* security: add guardrails to custom search provider (Option B)

- HTTPS-only by default (opt-out: WEB_CUSTOM_ALLOW_HTTP=true)
- Private/localhost IPs blocked by default (opt-out: WEB_CUSTOM_ALLOW_PRIVATE=true)
- Header allowlist: only known-safe headers allowed unless WEB_CUSTOM_ALLOW_ARBITRARY_HEADERS=true
- Configurable timeout in seconds (WEB_CUSTOM_TIMEOUT_SEC, default 15)
- Configurable POST body limit (WEB_CUSTOM_MAX_BODY_KB, default 300)
- Removed max URL size restriction
- Audit log warning on first custom search call
- Updated .env.example and README_SEARCH_PROVIDERS.md with all new options

* fix: remove custom provider from auto chain (Option 1)

Remove customProvider from the auto fallback chain so it is only
available when WEB_SEARCH_PROVIDER=custom is explicitly selected.

Changes:
- Remove customProvider from ALL_PROVIDERS array in providers/index.ts
- Add 3 new tests verifying custom is excluded from auto chain
- Update README_SEARCH_PROVIDERS.md: auto priority, mode table, note
- Update .env.example: auto priority comment, custom mode annotation

All 47 tests pass (44 existing + 3 new).

Co-Authored-By: @Vasanthdev2004

* fix: address review blockers (routing, abort, config check, domain matching)

1. Native/Codex routing precedence in auto mode
   shouldUseAdapterProvider() now checks if native/first-party/vertex/foundry
   or Codex paths are available before falling back to adapter providers.
   Auto mode: native paths take precedence; adapter is fallback only.

2. AbortError stops provider chain immediately
   runSearch() now checks for AbortError/aborted signal before continuing
   the fallback chain. Cancelled searches don't create extra outbound requests.

3. Explicit provider mode fails fast on missing credentials
   runSearch() validates isConfigured() for explicit modes before attempting
   requests. Throws clear error: 'Search provider "X" is not configured.'

4. Domain filter exact-or-subdomain matching (fixes suffix collision)
   New hostMatchesDomain() helper: exact match or .subdomain match.
   badexample.com no longer matches example.com.

5. Tests: 56 pass (9 new) covering all 4 fixes

Co-Authored-By: @Vasanthdev2004

---------

Co-authored-by: Claude Fix <fix@openclaude.local>
Co-authored-by: FluxLuFFy <195792511+FluxLuFFy@users.noreply.github.com>
Co-authored-by: bot <bot@openclaw.ai>
2026-04-09 02:51:25 +08:00
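Two of the small helpers named in the commit above, sketched (behavior from the message; exact signatures assumed):

```ts
// safeHostname: extract a hostname without throwing on malformed URLs.
function safeHostname(url: string): string | null {
  try {
    return new URL(url).hostname
  } catch {
    return null
  }
}

// hostMatchesDomain: exact match or true subdomain only, so
// badexample.com no longer matches example.com but api.example.com does.
function hostMatchesDomain(host: string, domain: string): boolean {
  const h = host.toLowerCase()
  const d = domain.toLowerCase()
  return h === d || h.endsWith('.' + d)
}
```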
José Zechel
284d9bda36 fix: image in the conversation exceeds the dimension limit for many-image requests (2000px) (#520)
Root cause: IMAGE_MAX_WIDTH and IMAGE_MAX_HEIGHT were set to 2000 — exactly the API's many-image dimension limit. Images resized to exactly 2000px would get rejected once the conversation accumulated enough images to trigger the API's many-image mode.

Fix: Changed both constants from 2000 to 1568 in src/constants/apiLimits.ts:42-43. This is the resolution the API internally downscales to anyway (documented in the API's encoding/full_encoding.py), so there is zero effective quality loss. All images now stay safely below the many-image threshold.

  export const IMAGE_MAX_WIDTH = 1568
  export const IMAGE_MAX_HEIGHT = 1568

Impact: The single constant change propagates everywhere — imageResizer.ts uses IMAGE_MAX_WIDTH/IMAGE_MAX_HEIGHT for all resize decisions, and the error messages reference these constants dynamically. No other files need changes.
2026-04-08 22:12:57 +08:00
Vasanth T
537c469c3a fix: replace isDeepStrictEqual with navigation-aware options comparison (#507)
The select cursor highlight was broken because isDeepStrictEqual in
use-select-navigation.ts and use-multi-select-state.ts would fail when
options contained identity-unstable properties (JSX label elements,
function onChange callbacks, computed disabled booleans). This caused
the reset logic to fire on every re-render, resetting focusedValue
back to the first option.

Replace isDeepStrictEqual with optionsNavigateEqual which only compares
properties that affect navigation behavior: value, disabled, and type.
ReactNode labels and function callbacks are intentionally excluded as
they are identity-unstable but don't change navigation semantics.

Fixes #472

Co-authored-by: OpenClaude Worker 3 <worker-3@openclaude.local>
2026-04-08 16:44:42 +08:00
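A sketch of the replacement comparison described above (option shape assumed from the message):

```ts
interface NavOption {
  value: string
  disabled?: boolean
  type?: string
}

function optionsNavigateEqual(a: readonly NavOption[], b: readonly NavOption[]): boolean {
  if (a.length !== b.length) return false
  // Only navigation-relevant fields are compared; JSX labels and
  // onChange callbacks are identity-unstable and intentionally ignored.
  return a.every(
    (opt, i) =>
      opt.value === b[i].value &&
      opt.disabled === b[i].disabled &&
      opt.type === b[i].type,
  )
}
```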
Juan Camilo Auriti
ccaa193eec fix: preserve only originally-required properties in strict tool schemas (#471)
Fixes #430. In normalizeSchemaForOpenAI(), the strict branch was adding every
property key to required[], including optional ones. This caused providers like
Groq, Azure OpenAI, and others to reject valid tool calls with a 400 /
tool_use_failed error because the model correctly omits optional arguments but
the provider sees them as missing required fields.

Root cause: the strict branch used `[...existingRequired, ...allKeys]` instead
of `existingRequired.filter(k => k in normalizedProps)`. The Gemini branch
already had the correct logic.

Fix: align the strict branch with the Gemini branch — only keep properties that
were already marked required in the original schema. The additionalProperties:
false constraint is preserved as strict-mode providers still require it.

Add regression test covering the Read tool schema (file_path required,
offset/limit/pages optional).
2026-04-08 16:42:11 +08:00
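The one-line logic change above, sketched in isolation (the surrounding schema normalization is elided):

```ts
function strictRequired(
  existingRequired: string[],
  normalizedProps: Record<string, unknown>,
): string[] {
  // Before: [...existingRequired, ...Object.keys(normalizedProps)]
  // forced every property to be required. After: keep only keys that
  // were required in the original schema and survived normalization.
  return existingRequired.filter(k => k in normalizedProps)
}
```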
Vasanth T
2caf2fd982 fix: defer startup checks and suppress recommendation dialogs during startup window (issue #363) (#504)
* fix: defer startup plugin checks and suppress recommendation dialogs during startup window (issue #363)

Root cause: performStartupChecks() fires immediately on REPL mount,
triggering plugin loading which populates trackedFiles, which triggers
useLspPluginRecommendation to surface an LSP recommendation dialog.
Since promptTypingSuppressionActive is false before any user input,
getFocusedInputDialog() returns the dialog, unmounting PromptInput
entirely and making the CLI appear frozen.

Fix: Two-pronged approach:
1. Defer performStartupChecks by 1500ms and gate on
   promptTypingSuppressionActive so startup checks dont run while
   the user is typing or has early input buffered
2. Suppress lower-priority startup dialogs (LSP recommendation,
   plugin hint, desktop upsell) until startupChecksStartedRef is
   true, preventing them from stealing focus during the vulnerable
   startup window

This also explains why --bare mode and disabling plugins work:
--bare mode skips plugin loading entirely, and disabling the
autoresearch plugin eliminates the LSP match, so lspRecommendation
stays null and PromptInput renders normally.

* fix: move startup checks effect after promptTypingSuppressionActive declaration

Fixes temporal dead zone warning flagged by code-quality bot.
promptTypingSuppressionActive is declared on line ~1340 but the
useEffect was on line ~800, causing a reference-before-declaration.
Also adds missing semicolons for style consistency.

* fix: gate startup checks on prompt readiness, not just a timeout (issue #363)

The previous approach used a fixed 1500ms timeout, but as gnanam1990
pointed out, if a user pauses for >1.5s before typing the timer can
still fire and recommendation dialogs can steal focus. This is a
timing mitigation, not a reliable fix.

New approach: gate startup checks on actual prompt readiness:
1. After first message submission (submitCount > 0) — always safe
2. After grace period (3s) elapsed AND user is idle — safe because
   no dialog will interrupt an idle user who hasn't started typing
3. While the user is actively typing — deferred until they stop

This ensures startup checks never steal focus from a prompt the user
is about to type into, regardless of how long they pause before typing.

Also removes the old STARTUP_CHECK_DELAY_MS constant in favor of
STARTUP_GRACE_PERIOD_MS with clearer semantics.

* fix: move startup checks after submitCount declaration to avoid temporal dead zone

Code quality bot flagged that submitCount was used before its declaration.
Moved the entire startup checks block to after the submitCount useState
declaration. Also added nullish coalescing (submitCount ?? 0) per bot
suggestion.

* fix: gate startup checks strictly on first submission, remove grace period (issue #363)

As gnanam1990 pointed out, the 3s grace period still allows the failure
mode: if a user pauses for a few seconds before typing, startup checks
fire and recommendation dialogs steal focus. A grace period is still a
timing mitigation, not a reliable fix.

New approach: startup checks only run after the user has submitted their
first message (submitCount > 0). No grace period, no timeout. This
guarantees the prompt gets first interaction — no dialog can steal focus
before the user has actually used the CLI.

If the user never submits a message, startup checks never run. That's
acceptable because with no user interaction there's no need for plugin
installations or marketplace seeding.

---------

Co-authored-by: OpenClaude Worker 3 <worker-3@openclaude.local>
2026-04-08 16:08:36 +08:00
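The final gating rule above, as a hook sketch (hook and ref names assumed, not the REPL's actual code):

```ts
import { useEffect, useRef } from 'react'

function useDeferredStartupChecks(
  submitCount: number,
  performStartupChecks: () => Promise<void>,
): void {
  const startedRef = useRef(false)
  useEffect(() => {
    if (startedRef.current) return
    // Only after the user's first submission: no dialog can steal focus
    // before the prompt has actually been used.
    if (submitCount > 0) {
      startedRef.current = true
      void performStartupChecks()
    }
  }, [submitCount, performStartupChecks])
}
```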
Meetpatel006
ad724dc3a4 Improve GitHub Copilot provider: official OAuth onboarding, Copilot API routing, test hardening, and token auto-refresh logic (#288)
* update GitHub Copilot API with official client ID and update model configurations

* test: add unit tests for exchangeForCopilotToken and enhance GitHub model normalization

* remove PAT token feature

* test(api): harden provider tests against env leakage

* Added back trimmed github auth token

* added auto-refresh logic for the Copilot token along with tests

* fix: remove forked provider validation in cli.tsx and clear stale provider env vars in /onboard-github

* refactor: streamline environment variable handling in mergeUserSettingsEnv

* fix: clear stale provider env vars to ensure correct GH routing

* Remove internal-only tooling from the external build (#352)

* Remove internal-only tooling without changing external runtime contracts

This trims the lowest-risk internal-only surfaces first: deleted internal
modules are replaced by build-time no-op stubs, the bundled stuck skill is
removed, and the insights S3 upload path now stays local-only. The privacy
verifier is expanded and the remaining bundled internal Slack/Artifactory
strings are neutralized without broad repo-wide renames.

Constraint: Keep the first PR deletion-heavy and avoid mass rewrites of USER_TYPE, tengu, or claude_code identifiers
Rejected: One-shot DMCA cleanup branch | too much semantic risk for a first PR
Confidence: medium
Scope-risk: moderate
Reversibility: clean
Directive: Treat full-repo typecheck as a baseline issue on this upstream snapshot; do not claim this commit introduced the existing non-Phase-A errors without isolating them first
Tested: bun run build
Tested: bun run smoke
Tested: bun run verify:privacy
Not-tested: Full repo typecheck (currently fails on widespread pre-existing upstream errors outside this change set)

* Keep minimal source shims so CI can import Phase A cleanup paths

The first PR removed internal-only source files entirely, but CI provider
and context tests import those modules directly from source rather than
through the build-time no-telemetry stubs. This restores tiny no-op source
shims so tests and local source imports resolve while preserving the same
external runtime behavior.

Constraint: GitHub Actions runs source-level tests in addition to bundled build/privacy checks
Rejected: Revert the entire deletion pass | unnecessary once the import contract is satisfied by small shims
Confidence: high
Scope-risk: narrow
Reversibility: clean
Directive: For later cleanup phases, treat build-time stubs and source-test imports as separate compatibility surfaces
Tested: bun run build
Tested: bun run smoke
Tested: bun run verify:privacy
Tested: bun run test:provider
Tested: bun run test:provider-recommendation
Not-tested: Full repo typecheck (still noisy on this upstream snapshot)

---------

Co-authored-by: anandh8x <test@example.com>

* Reduce internal-only labeling noise in source comments (#355)

This pass rewrites comment-only ANT-ONLY markers to neutral internal-only
language across the source tree without changing runtime strings, flags,
commands, or protocol identifiers. The goal is to lower obvious internal
prose leakage while keeping the diff mechanically safe and easy to review.

Constraint: Phase B is limited to comments/prose only; runtime strings and user-facing labels remain deferred
Rejected: Broad search-and-replace across strings and command descriptions | too risky for a prose-only pass
Confidence: high
Scope-risk: narrow
Reversibility: clean
Directive: Remaining ANT-ONLY hits are mostly runtime/user-facing strings and should be handled separately from comment cleanup
Tested: bun run build
Tested: bun run smoke
Tested: bun run verify:privacy
Tested: bun run test:provider
Tested: bun run test:provider-recommendation
Not-tested: Full repo typecheck (upstream baseline remains noisy)

Co-authored-by: anandh8x <test@example.com>

* Neutralize internal Anthropic prose in explanatory comments (#357)

This is a small prose-only follow-up that rewrites clearly internal or
explanatory Anthropic comment language to neutral wording in a handful of
high-confidence files. It avoids runtime strings, flags, command labels,
protocol identifiers, and provider-facing references.

Constraint: Keep this pass narrowly scoped to comments/documentation only
Rejected: Broader Anthropic comment sweep across functional API/protocol references | too ambiguous for a safe prose-only PR
Confidence: high
Scope-risk: narrow
Reversibility: clean
Directive: Leave functional Anthropic references (API behavior, SDKs, URLs, provider labels, protocol docs) for separate reviewed passes
Tested: bun run build
Tested: bun run smoke
Tested: bun run verify:privacy
Tested: bun run test:provider
Tested: bun run test:provider-recommendation
Not-tested: Full repo typecheck (upstream baseline remains noisy)

Co-authored-by: anandh8x <test@example.com>

* Neutralize remaining internal-only diagnostic labels (#359)

This pass rewrites a small set of ant-only diagnostic and UI labels to
neutral internal wording while leaving command definitions, flags, and
runtime logic untouched. It focuses on internal debug output, dead UI
branches, and noninteractive headings rather than broader product text.

Constraint: Label cleanup only; do not change command semantics or ant-only logic gates
Rejected: Renaming ant-only command descriptions in main.tsx | broader UX surface better handled in a separate reviewed pass
Confidence: high
Scope-risk: narrow
Reversibility: clean
Directive: Remaining ANT-ONLY hits are mostly command descriptions and intentionally deferred user-facing strings
Tested: bun run build
Tested: bun run smoke
Tested: bun run verify:privacy
Tested: bun run test:provider
Tested: bun run test:provider-recommendation
Not-tested: Full repo typecheck (upstream baseline remains noisy)

Co-authored-by: anandh8x <test@example.com>

* Finish eliminating remaining ANT-ONLY source labels (#360)

This extends the label-only cleanup to the remaining internal-only command,
debug, and heading strings so the source tree no longer contains ANT-ONLY
markers. The pass still avoids logic changes and only renames labels shown
in internal or gated surfaces.

Constraint: Update the existing label-cleanup PR without widening scope into behavior changes
Rejected: Leave the last ANT-ONLY strings for a later pass | low-cost cleanup while the branch is already focused on labels
Confidence: high
Scope-risk: narrow
Reversibility: clean
Directive: The next phase should move off label cleanup and onto a separately scoped logic or rebrand slice
Tested: bun run build
Tested: bun run smoke
Tested: bun run verify:privacy
Tested: bun run test:provider
Tested: bun run test:provider-recommendation
Not-tested: Full repo typecheck (upstream baseline remains noisy)

Co-authored-by: anandh8x <test@example.com>

* Stub internal-only recording and model capability helpers (#377)

This follow-up Phase C-lite slice replaces purely internal helper modules
with stable external no-op surfaces and collapses internal elevated error
logging to a no-op. The change removes additional USER_TYPE-gated helper
behavior without touching product-facing runtime flows.

Constraint: Keep this PR limited to isolated helper modules that are already external no-ops in practice
Rejected: Pulling in broader speculation or logging sink changes | less isolated and easier to debate during review
Confidence: high
Scope-risk: narrow
Reversibility: clean
Directive: Continue Phase C with similarly isolated helpers before moving into mixed behavior files
Tested: bun run build
Tested: bun run smoke
Tested: bun run verify:privacy
Tested: bun run test:provider
Tested: bun run test:provider-recommendation
Not-tested: Full repo typecheck (upstream baseline remains noisy)

Co-authored-by: anandh8x <test@example.com>

* Remove internal-only bundled skills and mock helpers (#376)

* Remove internal-only bundled skills and mock rate-limit behavior

This takes the next planned Phase C-lite slice by deleting bundled skills
that only ever registered for internal users and replacing the internal
mock rate-limit helper with a stable no-op external stub. The external
build keeps the same behavior while removing a concentrated block of
USER_TYPE-gated dead code.

Constraint: Limit this PR to isolated internal-only helpers and avoid bridge, oauth, or rebrand behavior
Rejected: Broad USER_TYPE cleanup across mixed runtime surfaces | too risky for the next medium-sized PR
Confidence: high
Scope-risk: moderate
Reversibility: clean
Directive: The next cleanup pass should continue with similarly isolated USER_TYPE helpers before touching main.tsx or protocol-heavy code
Tested: bun run build
Tested: bun run smoke
Tested: bun run verify:privacy
Tested: bun run test:provider
Tested: bun run test:provider-recommendation
Not-tested: Full repo typecheck (upstream baseline remains noisy)

* Align internal-only helper removal with remaining user guidance

This follow-up fixes the mock billing stub to be a true no-op and removes
stale user-facing references to /verify and /skillify from the same PR.
It also leaves a clearer paper trail for review: the deleted verify skill
was explicitly ant-gated before removal, and the remaining mock helper
callers still resolve to safe no-op returns in the external build.

Constraint: Keep the PR focused on consistency fixes and reviewer-requested evidence, not new cleanup scope
Rejected: Leave stale guidance for a later PR | would make this branch internally inconsistent after skill removal
Confidence: high
Scope-risk: narrow
Reversibility: clean
Directive: When deleting gated features, always sweep user guidance and coordinator prompts in the same pass
Tested: bun run build
Tested: bun run smoke
Tested: bun run verify:privacy
Tested: bun run test:provider
Tested: bun run test:provider-recommendation
Not-tested: Full repo typecheck (upstream baseline remains noisy; changed-file scan still shows only pre-existing tipRegistry errors outside edited lines)

* Clarify generic workflow wording after skill removal

This removes the last generic verification-skill wording that could still
be read as pointing at a deleted bundled command. The guidance now talks
about project workflows rather than a specific bundled verify skill.

Constraint: Keep the follow-up limited to reviewer-facing wording cleanup on the same PR
Rejected: Leave generic wording as-is | still too easy to misread after the explicit /verify references were removed
Confidence: high
Scope-risk: narrow
Reversibility: clean
Directive: When removing bundled commands, scrub both explicit and generic references in the same branch
Tested: bun run build
Tested: bun run smoke
Not-tested: Additional checks unchanged by wording-only follow-up

---------

Co-authored-by: anandh8x <test@example.com>

* test(api): add GEMINI_AUTH_MODE to environment setup in tests

* test: isolate GitHub/Gemini credential tests with fresh module imports and explicit non-bare env setup to prevent cross-test mock/cache leaks

* fix: update GitHub Copilot base URL and model defaults for improved compatibility

* fix: enhance error handling in OpenAI API response processing

* fix: improve error handling for GitHub Copilot API responses and streamline error body consumption

* fix: enhance response handling in OpenAI API shim for better error reporting and support for streaming responses

* feat: enhance GitHub device flow with fresh module import and token validation improvements

* fix: separate Copilot API routing from GitHub Models, clear stale env vars, honor providerOverride.apiKey

* fix: route GitHub GPT-5/Codex to Copilot API, show all Copilot models in picker, clear stale env vars

* fix GitHub Models API regression

* feat: update GitHub authentication to require OAuth tokens, normalize model handling for Copilot and GitHub Models

* fix: update GitHub token validation to support OAuth tokens and improve endpoint type handling

---------

Co-authored-by: Anandan <anandan.8x@gmail.com>
Co-authored-by: anandh8x <test@example.com>
2026-04-08 16:03:31 +08:00
Urvish Lanje
648ae8053b ci: run python provider tests in pr-checks (#477)
* Add WakaTime extension to devcontainer configuration

* ci: run python provider tests in pr-checks

* Delete .devcontainer directory

* ci: added requirements.txt for pip caching

* ci: addressed security and maintenance issues

* ci: updated release tag

* Update .github/workflows/pr-checks.yml

Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>

* ci: added full commit SHA for python setup

---------

Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>
2026-04-08 15:18:04 +08:00
155 changed files with 9789 additions and 850 deletions

View File

@@ -248,3 +248,93 @@ ANTHROPIC_API_KEY=sk-ant-your-key-here
# Enable debug logging
# CLAUDE_DEBUG=1
# =============================================================================
# WEB SEARCH (OPTIONAL)
# =============================================================================
# OpenClaude includes a web search tool. By default it uses DuckDuckGo (free)
# or the provider's native search (Anthropic firstParty / vertex).
#
# Set one API key below to enable a provider. That's it.
# ── Provider API keys — set ONE of these ────────────────────────────
# Tavily (AI-optimized search, recommended)
# TAVILY_API_KEY=tvly-your-key-here
# Exa (neural/semantic search)
# EXA_API_KEY=your-exa-key-here
# You.com (RAG-ready snippets)
# YOU_API_KEY=your-you-key-here
# Jina (s.jina.ai endpoint)
# JINA_API_KEY=your-jina-key-here
# Bing Web Search
# BING_API_KEY=your-bing-key-here
# Mojeek (privacy-focused)
# MOJEEK_API_KEY=your-mojeek-key-here
# Linkup
# LINKUP_API_KEY=your-linkup-key-here
# Firecrawl (premium, uses @mendable/firecrawl-js)
# FIRECRAWL_API_KEY=fc-your-key-here
# ── Provider selection mode ─────────────────────────────────────────
#
# WEB_SEARCH_PROVIDER controls fallback behavior:
#
# "auto" (default) — try all configured providers, fall through on failure
# "custom" — custom API only, throw on failure (NOT in auto chain)
# "firecrawl" — firecrawl only
# "tavily" — tavily only
# "exa" — exa only
# "you" — you.com only
# "jina" — jina only
# "bing" — bing only
# "mojeek" — mojeek only
# "linkup" — linkup only
# "ddg" — duckduckgo only
# "native" — anthropic native / codex only
#
# Auto mode priority: firecrawl → tavily → exa → you → jina → bing → mojeek →
# linkup → ddg
# Note: "custom" is NOT in the auto chain. To use the custom API provider,
# you must explicitly set WEB_SEARCH_PROVIDER=custom.
#
# WEB_SEARCH_PROVIDER=auto
# ── Built-in custom API presets ─────────────────────────────────────
#
# Use with WEB_KEY for the API key:
# WEB_PROVIDER=searxng|google|brave|serpapi
# WEB_KEY=your-api-key-here
# ── Custom API endpoint (advanced) ──────────────────────────────────
#
# WEB_SEARCH_API — base URL of your search endpoint
# WEB_QUERY_PARAM — query parameter name (default: "q")
# WEB_METHOD — GET or POST (default: GET)
# WEB_PARAMS — extra static query params as JSON: {"lang":"en","count":"10"}
# WEB_URL_TEMPLATE — URL template with {query} for path embedding
# WEB_BODY_TEMPLATE — custom POST body with {query} placeholder
# WEB_AUTH_HEADER — header name for API key (default: "Authorization")
# WEB_AUTH_SCHEME — prefix before key (default: "Bearer")
# WEB_HEADERS — extra headers as "Name: value; Name2: value2"
# WEB_JSON_PATH — dot-path to results array in response
# ── Custom API security guardrails ──────────────────────────────────
#
# The custom provider enforces security guardrails by default.
# Override these only if you understand the risks.
#
# WEB_CUSTOM_TIMEOUT_SEC=15 — request timeout in seconds (default 15)
# WEB_CUSTOM_MAX_BODY_KB=300 — max POST body size in KB (default 300)
# WEB_CUSTOM_ALLOW_ARBITRARY_HEADERS=false — set "true" to use non-standard headers
# WEB_CUSTOM_ALLOW_HTTP=false — set "true" to allow http:// URLs
# WEB_CUSTOM_ALLOW_PRIVATE=false — set "true" to target localhost/private IPs
# (needed for self-hosted SearXNG)

View File

@@ -29,6 +29,13 @@ jobs:
with:
bun-version: 1.3.11
- name: Set up Python
uses: actions/setup-python@0a5c61591373683505ea898e09a3ea4f39ef2b9c # v5.0.0
with:
python-version: "3.12"
cache: "pip"
cache-dependency-path: python/requirements.txt
- name: Install dependencies
run: bun install --frozen-lockfile
@@ -38,6 +45,12 @@ jobs:
- name: Full unit test suite
run: bun test --max-concurrency=1
- name: Install Python test dependencies
run: python -m pip install -r python/requirements.txt
- name: Python unit tests
run: python -m pytest -q python/tests
- name: Suspicious PR intent scan
run: bun run security:pr-scan -- --base ${{ github.event.pull_request.base.sha || 'origin/main' }}
- name: Provider tests

View File

@@ -137,10 +137,9 @@ export OPENAI_MODEL=llama-3.3-70b-versatile
### Mistral
```bash
export CLAUDE_CODE_USE_OPENAI=1
export OPENAI_API_KEY=...
export OPENAI_BASE_URL=https://api.mistral.ai/v1
export OPENAI_MODEL=mistral-large-latest
export CLAUDE_CODE_USE_MISTRAL=1
export MISTRAL_API_KEY=...
export MISTRAL_MODEL=mistral-large-latest
```
### Azure OpenAI

python/requirements.txt Normal file
View File

@@ -0,0 +1,3 @@
pytest==7.4.4
pytest-asyncio==0.23.3
httpx==0.25.2

View File

@@ -112,6 +112,14 @@ def build_default_providers() -> list[Provider]:
big_model=big if "gemini" in big else "gemini-2.5-pro",
small_model=small if "gemini" in small else "gemini-2.0-flash",
),
Provider(
name="mistral",
ping_url="",
api_key_env="MISTRAL_API_KEY",
cost_per_1k_tokens=0.0001,
big_model=big if "mistral" in big else "devstral-latest",
small_model=small if "mistral" in small else "ministral-3b-latest",
),
Provider(
name="ollama",
ping_url=f"{ollama_url}/api/tags",

View File

@@ -11,6 +11,7 @@ import {
buildAtomicChatProfileEnv,
buildCodexProfileEnv,
buildGeminiProfileEnv,
buildMistralProfileEnv,
buildOllamaProfileEnv,
buildOpenAIProfileEnv,
createProfileFile,
@@ -37,7 +38,7 @@ function parseArg(name: string): string | null {
function parseProviderArg(): ProviderProfile | 'auto' {
const p = parseArg('--provider')?.toLowerCase()
if (p === 'openai' || p === 'ollama' || p === 'codex' || p === 'gemini' || p === 'atomic-chat') return p
if (p === 'openai' || p === 'ollama' || p === 'codex' || p === 'gemini' || p === 'mistral' || p === 'atomic-chat') return p
return 'auto'
}
@@ -90,6 +91,21 @@ async function main(): Promise<void> {
process.exit(1)
}
env = builtEnv
} else if (selected === 'mistral') {
const builtEnv = buildMistralProfileEnv({
model: argModel || null,
baseUrl: argBaseUrl || null,
apiKey: argApiKey || null,
processEnv: process.env,
})
if (!builtEnv) {
console.error('Mistral profile requires an API key. Use --api-key or set MISTRAL_API_KEY.')
console.error('Get a free key at: https://admin.mistral.ai/organization/api-keys')
process.exit(1)
}
env = builtEnv
} else if (selected === 'ollama') {
resolvedOllamaModel ??= await resolveOllamaModel(argModel, argBaseUrl, goal)
@@ -169,7 +185,7 @@ async function main(): Promise<void> {
console.log(`Saved profile: ${selected}`)
console.log(`Goal: ${goal}`)
console.log(`Model: ${profile.env.GEMINI_MODEL || profile.env.OPENAI_MODEL || getGoalDefaultOpenAIModel(goal)}`)
console.log(`Model: ${profile.env.GEMINI_MODEL || profile.env.MISTRAL_MODEL || profile.env.OPENAI_MODEL || getGoalDefaultOpenAIModel(goal)}`)
console.log(`Path: ${outputPath}`)
console.log('Next: bun run dev:profile')
}

View File

@@ -50,7 +50,7 @@ function parseLaunchOptions(argv: string[]): LaunchOptions {
continue
}
if ((lower === 'auto' || lower === 'openai' || lower === 'ollama' || lower === 'codex' || lower === 'gemini' || lower === 'atomic-chat') && requestedProfile === 'auto') {
if ((lower === 'auto' || lower === 'openai' || lower === 'ollama' || lower === 'codex' || lower === 'gemini' || lower === 'mistral' || lower === 'atomic-chat') && requestedProfile === 'auto') {
requestedProfile = lower as ProviderProfile | 'auto'
continue
}
@@ -124,6 +124,8 @@ function printSummary(profile: ProviderProfile): void {
console.log(`Launching profile: ${profile}`)
if (profile === 'gemini') {
console.log('Using configured Gemini provider settings.')
} else if (profile === 'mistral') {
console.log('Using configured Mistral provider settings.')
} else if (profile === 'codex') {
console.log('Using configured Codex/OpenAI-compatible provider settings.')
} else if (profile === 'atomic-chat') {
@@ -139,7 +141,7 @@ async function main(): Promise<void> {
const options = parseLaunchOptions(process.argv.slice(2))
const requestedProfile = options.requestedProfile
if (!requestedProfile) {
console.error('Usage: bun run scripts/provider-launch.ts [openai|ollama|codex|gemini|atomic-chat|auto] [--fast] [--goal <latency|balanced|coding>] [-- <cli args>]')
console.error('Usage: bun run scripts/provider-launch.ts [openai|ollama|codex|gemini|mistral|atomic-chat|auto] [--fast] [--goal <latency|balanced|coding>] [-- <cli args>]')
process.exit(1)
}
@@ -205,6 +207,11 @@ async function main(): Promise<void> {
process.exit(1)
}
if (profile === 'mistral' && !env.MISTRAL_API_KEY) {
console.error('MISTRAL_API_KEY is required for mistral profile. Run: bun run profile:init -- --provider mistral --api-key <key>')
process.exit(1)
}
if (profile === 'openai' && (!env.OPENAI_API_KEY || env.OPENAI_API_KEY === 'SUA_CHAVE')) {
console.error('OPENAI_API_KEY is required for openai profile and cannot be SUA_CHAVE. Run: bun run profile:init -- --provider openai --api-key <key>')
process.exit(1)

View File

@@ -118,14 +118,18 @@ function isLocalBaseUrl(baseUrl: string): boolean {
}
const GEMINI_DEFAULT_BASE_URL = 'https://generativelanguage.googleapis.com/v1beta/openai'
const GITHUB_MODELS_DEFAULT_BASE = 'https://models.github.ai/inference'
const MISTRAL_DEFAULT_BASE_URL = 'https://api.mistral.ai/v1'
const GITHUB_COPILOT_BASE = 'https://api.githubcopilot.com'
function currentBaseUrl(): string {
if (isTruthy(process.env.CLAUDE_CODE_USE_GEMINI)) {
return process.env.GEMINI_BASE_URL ?? GEMINI_DEFAULT_BASE_URL
}
if (isTruthy(process.env.CLAUDE_CODE_USE_MISTRAL)) {
return process.env.MISTRAL_BASE_URL ?? MISTRAL_DEFAULT_BASE_URL
}
if (isTruthy(process.env.CLAUDE_CODE_USE_GITHUB)) {
return process.env.OPENAI_BASE_URL ?? GITHUB_MODELS_DEFAULT_BASE
return process.env.OPENAI_BASE_URL ?? GITHUB_COPILOT_BASE
}
return process.env.OPENAI_BASE_URL ?? 'https://api.openai.com/v1'
}
@@ -155,9 +159,34 @@ function checkGeminiEnv(): CheckResult[] {
return results
}
function checkMistralEnv(): CheckResult[] {
const results: CheckResult[] = []
const model = process.env.MISTRAL_MODEL
const key = process.env.MISTRAL_API_KEY
const baseUrl = process.env.MISTRAL_BASE_URL ?? MISTRAL_DEFAULT_BASE_URL
results.push(pass('Provider mode', 'Mistral provider enabled.'))
if (!model) {
results.push(pass('MISTRAL_MODEL', 'Not set. Default will be used at runtime.'))
} else {
results.push(pass('MISTRAL_MODEL', model))
}
results.push(pass('MISTRAL_BASE_URL', baseUrl))
if (!key) {
results.push(fail('MISTRAL_API_KEY', 'Missing. Set MISTRAL_API_KEY.'))
} else {
results.push(pass('MISTRAL_API_KEY', 'Configured.'))
}
return results
}
function checkGithubEnv(): CheckResult[] {
const results: CheckResult[] = []
const baseUrl = process.env.OPENAI_BASE_URL ?? GITHUB_MODELS_DEFAULT_BASE
const baseUrl = process.env.OPENAI_BASE_URL ?? GITHUB_COPILOT_BASE
results.push(pass('Provider mode', 'GitHub Models provider enabled.'))
const token = process.env.GITHUB_TOKEN ?? process.env.GH_TOKEN
@@ -186,12 +215,17 @@ function checkOpenAIEnv(): CheckResult[] {
const results: CheckResult[] = []
const useGemini = isTruthy(process.env.CLAUDE_CODE_USE_GEMINI)
const useGithub = isTruthy(process.env.CLAUDE_CODE_USE_GITHUB)
const useMistral = isTruthy(process.env.CLAUDE_CODE_USE_MISTRAL)
const useOpenAI = isTruthy(process.env.CLAUDE_CODE_USE_OPENAI)
if (useGemini) {
return checkGeminiEnv()
}
if (useMistral) {
return checkMistralEnv()
}
if (useGithub && !useOpenAI) {
return checkGithubEnv()
}
@@ -268,8 +302,9 @@ async function checkBaseUrlReachability(): Promise<CheckResult> {
const useGemini = isTruthy(process.env.CLAUDE_CODE_USE_GEMINI)
const useOpenAI = isTruthy(process.env.CLAUDE_CODE_USE_OPENAI)
const useGithub = isTruthy(process.env.CLAUDE_CODE_USE_GITHUB)
const useMistral = isTruthy(process.env.CLAUDE_CODE_USE_MISTRAL)
if (!useGemini && !useOpenAI && !useGithub) {
if (!useGemini && !useOpenAI && !useGithub && !useMistral) {
return pass('Provider reachability', 'Skipped (OpenAI-compatible mode disabled).')
}
@@ -326,6 +361,8 @@ async function checkBaseUrlReachability(): Promise<CheckResult> {
})
} else if (useGemini && (process.env.GEMINI_API_KEY ?? process.env.GOOGLE_API_KEY)) {
headers.Authorization = `Bearer ${process.env.GEMINI_API_KEY ?? process.env.GOOGLE_API_KEY}`
} else if (useMistral && process.env.MISTRAL_API_KEY) {
headers.Authorization = `Bearer ${process.env.MISTRAL_API_KEY}`
} else if (process.env.OPENAI_API_KEY) {
headers.Authorization = `Bearer ${process.env.OPENAI_API_KEY}`
}
@@ -373,7 +410,8 @@ function checkOllamaProcessorMode(): CheckResult {
if (
!isTruthy(process.env.CLAUDE_CODE_USE_OPENAI) ||
isTruthy(process.env.CLAUDE_CODE_USE_GEMINI) ||
isTruthy(process.env.CLAUDE_CODE_USE_GITHUB)
isTruthy(process.env.CLAUDE_CODE_USE_GITHUB) ||
isTruthy(process.env.CLAUDE_CODE_USE_MISTRAL)
) {
return pass('Ollama processor mode', 'Skipped (OpenAI-compatible mode disabled).')
}
@@ -425,6 +463,14 @@ function serializeSafeEnvSummary(): Record<string, string | boolean> {
GEMINI_API_KEY_SET: Boolean(process.env.GEMINI_API_KEY ?? process.env.GOOGLE_API_KEY),
}
}
if (isTruthy(process.env.CLAUDE_CODE_USE_MISTRAL)) {
return {
CLAUDE_CODE_USE_MISTRAL: true,
MISTRAL_MODEL: process.env.MISTRAL_MODEL ?? '(unset, default: devstral-latest)',
MISTRAL_BASE_URL: process.env.MISTRAL_BASE_URL ?? 'https://api.mistral.ai/v1',
MISTRAL_API_KEY_SET: Boolean(process.env.MISTRAL_API_KEY),
}
}
if (
isTruthy(process.env.CLAUDE_CODE_USE_GITHUB) &&
!isTruthy(process.env.CLAUDE_CODE_USE_OPENAI)
@@ -435,7 +481,7 @@ function serializeSafeEnvSummary(): Record<string, string | boolean> {
process.env.OPENAI_MODEL ??
'(unset, default: github:copilot → openai/gpt-4.1)',
OPENAI_BASE_URL:
process.env.OPENAI_BASE_URL ?? GITHUB_MODELS_DEFAULT_BASE,
process.env.OPENAI_BASE_URL ?? GITHUB_COPILOT_BASE,
GITHUB_TOKEN_SET: Boolean(
process.env.GITHUB_TOKEN ?? process.env.GH_TOKEN,
),

View File

@@ -400,12 +400,12 @@ export async function update() {
if (useLocalUpdate) {
process.stderr.write('Try manually updating with:\n')
process.stderr.write(
` cd ~/.claude/local && npm update ${MACRO.PACKAGE_URL}\n`,
` cd ~/.openclaude/local && npm update ${MACRO.PACKAGE_URL}\n`,
)
} else {
process.stderr.write('Try running with sudo or fix npm permissions\n')
process.stderr.write(
'Or consider using native installation with: claude install\n',
'Or consider using native installation with: openclaude install\n',
)
}
await gracefulShutdown(1)
@@ -415,11 +415,11 @@ export async function update() {
if (useLocalUpdate) {
process.stderr.write('Try manually updating with:\n')
process.stderr.write(
` cd ~/.claude/local && npm update ${MACRO.PACKAGE_URL}\n`,
` cd ~/.openclaude/local && npm update ${MACRO.PACKAGE_URL}\n`,
)
} else {
process.stderr.write(
'Or consider using native installation with: claude install\n',
'Or consider using native installation with: openclaude install\n',
)
}
await gracefulShutdown(1)

View File

@@ -32,6 +32,7 @@ import logout from './commands/logout/index.js'
import installGitHubApp from './commands/install-github-app/index.js'
import installSlackApp from './commands/install-slack-app/index.js'
import breakCache from './commands/break-cache/index.js'
import cacheProbe from './commands/cache-probe/index.js'
import mcp from './commands/mcp/index.js'
import mobile from './commands/mobile/index.js'
import onboarding from './commands/onboarding/index.js'
@@ -136,6 +137,7 @@ import hooks from './commands/hooks/index.js'
import files from './commands/files/index.js'
import branch from './commands/branch/index.js'
import agents from './commands/agents/index.js'
import autoFix from './commands/auto-fix.js'
import plugin from './commands/plugin/index.js'
import reloadPlugins from './commands/reload-plugins/index.js'
import rewind from './commands/rewind/index.js'
@@ -143,6 +145,7 @@ import heapDump from './commands/heapdump/index.js'
import mockLimits from './commands/mock-limits/index.js'
import bridgeKick from './commands/bridge-kick.js'
import version from './commands/version.js'
import wiki from './commands/wiki/index.js'
import summary from './commands/summary/index.js'
import {
resetLimits,
@@ -263,8 +266,10 @@ const COMMANDS = memoize((): Command[] => [
addDir,
advisor,
agents,
autoFix,
branch,
btw,
cacheProbe,
chrome,
clear,
color,
@@ -324,6 +329,7 @@ const COMMANDS = memoize((): Command[] => [
usage,
usageReport,
vim,
wiki,
...(webCmd ? [webCmd] : []),
...(forkCmd ? [forkCmd] : []),
...(buddy ? [buddy] : []),

src/commands/auto-fix.ts (new file)

@@ -0,0 +1,25 @@
import type { Command } from '../types/command.js'
const command: Command = {
name: 'auto-fix',
description: 'Configure auto-fix: run lint/test after AI edits',
isEnabled: () => true,
type: 'prompt',
progressMessage: 'Configuring auto-fix...',
contentLength: 0,
source: 'builtin',
async getPromptForCommand() {
return [
{
type: 'text',
text:
'The user wants to configure auto-fix settings. Auto-fix automatically runs lint and test commands after AI file edits, feeding errors back for self-repair.\n\n' +
'Current settings location: `.claude/settings.json` or `.claude/settings.local.json`\n\n' +
'Example configuration:\n```json\n{\n "autoFix": {\n "enabled": true,\n "lint": "eslint . --fix",\n "test": "bun test",\n "maxRetries": 3,\n "timeout": 30000\n }\n}\n```\n\n' +
'Ask the user what lint and test commands they use, then help them set up the configuration.',
},
]
},
}
export default command


@@ -0,0 +1,413 @@
import { getSessionId } from '../../bootstrap/state.js'
import { resolveProviderRequest } from '../../services/api/providerConfig.js'
import type { LocalCommandCall } from '../../types/command.js'
import { logForDebugging } from '../../utils/debug.js'
import { isEnvTruthy } from '../../utils/envUtils.js'
import { hydrateGithubModelsTokenFromSecureStorage } from '../../utils/githubModelsCredentials.js'
import { getMainLoopModel } from '../../utils/model/model.js'
const COPILOT_HEADERS: Record<string, string> = {
'User-Agent': 'GitHubCopilotChat/0.26.7',
'Editor-Version': 'vscode/1.99.3',
'Editor-Plugin-Version': 'copilot-chat/0.26.7',
'Copilot-Integration-Id': 'vscode-chat',
}
// Large system prompt (~6000 chars, ~1500 tokens) to cross the 1024-token cache threshold
const SYSTEM_PROMPT = [
'You are a coding assistant. Answer concisely.',
'CONTEXT: User is working on a TypeScript project with Bun runtime.',
...Array.from(
{ length: 80 },
(_, i) =>
`Rule ${i + 1}: Follow best practices for TypeScript including strict typing, error handling, testing, and clean code. Prefer explicit types over any. Use const assertions. Await all async operations.`,
),
].join('\n\n')
const USER_MESSAGE = 'Say "hello" and nothing else.'
const DELAY_MS = 3000
/**
* Extract model family from a versioned model string.
* e.g. "gpt-5.4-0626" → "gpt-5.4", "codex-mini-latest" → "codex-mini"
*/
function getModelFamily(model: string | undefined): string {
if (!model) return 'unknown'
return model
.replace(/-\d{4,}$/, '')
.replace(/-latest$/, '')
.replace(/-preview$/, '')
}
function getField(obj: unknown, path: string): unknown {
return path
.split('.')
.reduce((o: any, k: string) => (o != null ? o[k] : undefined), obj)
}
interface ProbeResult {
label: string
status: number
elapsed: number
headers: Record<string, string>
usage: Record<string, unknown> | null
responseId: string | null
error: string | null
}
async function sendProbe(
url: string,
headers: Record<string, string>,
body: Record<string, unknown>,
label: string,
): Promise<ProbeResult> {
const start = Date.now()
let response: Response
try {
response = await fetch(url, {
method: 'POST',
headers,
body: JSON.stringify(body),
})
} catch (err: any) {
return {
label,
status: 0,
elapsed: Date.now() - start,
headers: {},
usage: null,
responseId: null,
error: err.message,
}
}
const elapsed = Date.now() - start
const respHeaders: Record<string, string> = {}
response.headers.forEach((value, key) => {
respHeaders[key] = value
})
if (!response.ok) {
const errorBody = await response.text().catch(() => '')
return {
label,
status: response.status,
elapsed,
headers: respHeaders,
usage: null,
responseId: null,
error: errorBody,
}
}
// Parse SSE stream for usage data
const text = await response.text()
let usage: Record<string, unknown> | null = null
let responseId: string | null = null
const isResponses = url.endsWith('/responses')
for (const chunk of text.split('\n\n')) {
const lines = chunk
.split('\n')
.map((l) => l.trim())
.filter(Boolean)
if (isResponses) {
const eventLine = lines.find((l) => l.startsWith('event: '))
const dataLines = lines.filter((l) => l.startsWith('data: '))
if (!eventLine || !dataLines.length) continue
const event = eventLine.slice(7).trim()
if (
event === 'response.completed' ||
event === 'response.incomplete'
) {
try {
const data = JSON.parse(
dataLines.map((l) => l.slice(6)).join('\n'),
)
usage = (data?.response?.usage as Record<string, unknown>) ?? null
responseId = (data?.response?.id as string) ?? null
} catch {}
}
} else {
for (const line of lines) {
if (!line.startsWith('data: ')) continue
const raw = line.slice(6).trim()
if (raw === '[DONE]') continue
try {
const data = JSON.parse(raw) as Record<string, unknown>
if (data.usage) {
usage = data.usage as Record<string, unknown>
responseId = (data.id as string) ?? null
}
} catch {}
}
}
}
return { label, status: response.status, elapsed, headers: respHeaders, usage, responseId, error: null }
}
function formatResult(r: ProbeResult): string {
const lines: string[] = [`--- ${r.label} ---`]
if (r.error) {
lines.push(` ERROR (HTTP ${r.status}): ${r.error.slice(0, 200)}`)
return lines.join('\n')
}
lines.push(` HTTP ${r.status} (${r.elapsed}ms)`)
if (r.responseId) lines.push(` response.id: ${r.responseId}`)
if (r.usage) {
lines.push(' Usage:')
lines.push(` ${JSON.stringify(r.usage, null, 2).replace(/\n/g, '\n ')}`)
} else {
lines.push(' Usage: null')
}
// Interesting headers
for (const h of [
'openai-processing-ms',
'x-ratelimit-remaining',
'x-ratelimit-limit',
'x-ms-region',
'x-github-request-id',
'x-request-id',
]) {
if (r.headers[h]) lines.push(` ${h}: ${r.headers[h]}`)
}
return lines.join('\n')
}
export const call: LocalCommandCall = async (args) => {
const parts = (args ?? '').trim().split(/\s+/).filter(Boolean)
const noKey = parts.includes('--no-key')
const modelOverride = parts.find((p) => !p.startsWith('--')) || undefined
const modelStr = modelOverride ?? getMainLoopModel()
const request = resolveProviderRequest({ model: modelStr })
const isGithub = isEnvTruthy(process.env.CLAUDE_CODE_USE_GITHUB)
// Resolve API key the same way the OpenAI shim does
let apiKey = process.env.OPENAI_API_KEY ?? ''
if (!apiKey && isGithub) {
hydrateGithubModelsTokenFromSecureStorage()
apiKey =
process.env.OPENAI_API_KEY ??
process.env.GITHUB_TOKEN ??
process.env.GH_TOKEN ??
''
}
if (!apiKey) {
return {
type: 'text',
value:
'No API key found. Make sure you are in an active OpenAI-compatible or GitHub Copilot session.\n' +
'For GitHub Copilot: run /onboard-github first.\n' +
'For OpenAI-compatible: set OPENAI_API_KEY.',
}
}
const useResponses = request.transport === 'codex_responses'
const endpoint = useResponses ? '/responses' : '/chat/completions'
const url = `${request.baseUrl}${endpoint}`
const family = getModelFamily(request.resolvedModel)
const cacheKey = `${getSessionId()}:${family}`
const headers: Record<string, string> = {
'Content-Type': 'application/json',
Authorization: `Bearer ${apiKey}`,
originator: 'openclaude',
}
if (isGithub) {
Object.assign(headers, COPILOT_HEADERS)
}
let body: Record<string, unknown>
if (useResponses) {
body = {
model: request.resolvedModel,
instructions: SYSTEM_PROMPT,
input: [
{
type: 'message',
role: 'user',
content: [{ type: 'input_text', text: USER_MESSAGE }],
},
],
stream: true,
...(noKey ? {} : {
store: false,
prompt_cache_key: cacheKey,
prompt_cache_retention: '24h',
}),
}
} else {
body = {
model: request.resolvedModel,
messages: [
{ role: 'system', content: SYSTEM_PROMPT },
{ role: 'user', content: USER_MESSAGE },
],
stream: true,
stream_options: { include_usage: true },
max_tokens: 20,
...(noKey ? {} : {
store: false,
prompt_cache_key: cacheKey,
}),
}
}
// Log configuration
const config = [
`[cache-probe] Starting cache probe${noKey ? ' (--no-key: cache params OMITTED)' : ''}`,
` model: ${request.resolvedModel} (family: ${family})`,
` transport: ${request.transport}`,
` endpoint: ${url}`,
` prompt_cache_key: ${noKey ? 'NOT SENT' : cacheKey}`,
` store: ${noKey ? 'NOT SENT' : 'false'}`,
` system prompt: ~${Math.round(SYSTEM_PROMPT.length / 4)} tokens`,
` delay between calls: ${DELAY_MS}ms`,
].join('\n')
logForDebugging(config)
// Call 1 — Cold
const r1 = await sendProbe(url, headers, body, 'CALL 1 — Cold (no cache)')
logForDebugging(`[cache-probe]\n${formatResult(r1)}`)
if (r1.error) {
return {
type: 'text',
value: `Cache probe failed on first call: HTTP ${r1.status}\n${r1.error.slice(0, 300)}\n\nFull details in debug log.`,
}
}
// Wait
await new Promise((r) => setTimeout(r, DELAY_MS))
// Call 2 — Warm
const r2 = await sendProbe(url, headers, body, 'CALL 2 — Warm (cache expected)')
logForDebugging(`[cache-probe]\n${formatResult(r2)}`)
// --- Comparison ---
const fields = [
'input_tokens',
'output_tokens',
'total_tokens',
'prompt_tokens',
'completion_tokens',
'input_tokens_details.cached_tokens',
'prompt_tokens_details.cached_tokens',
'output_tokens_details.reasoning_tokens',
]
const comparison: string[] = ['[cache-probe] COMPARISON']
comparison.push(
` ${'Field'.padEnd(42)} ${'Call 1'.padStart(8)} ${'Call 2'.padStart(8)} ${'Delta'.padStart(8)}`,
)
comparison.push(` ${'-'.repeat(72)}`)
for (const f of fields) {
const v1 = getField(r1.usage, f)
const v2 = getField(r2.usage, f)
if (v1 === undefined && v2 === undefined) continue
const d =
typeof v1 === 'number' && typeof v2 === 'number' ? v2 - v1 : ''
comparison.push(
` ${f.padEnd(42)} ${String(v1 ?? '-').padStart(8)} ${String(v2 ?? '-').padStart(8)} ${String(d).padStart(8)}`,
)
}
comparison.push('')
comparison.push(
` Latency: ${r1.elapsed}ms → ${r2.elapsed}ms (${r2.elapsed - r1.elapsed > 0 ? '+' : ''}${r2.elapsed - r1.elapsed}ms)`,
)
// Header comparison
for (const h of ['openai-processing-ms', 'x-ms-region', 'x-ratelimit-remaining']) {
const v1 = r1.headers[h]
const v2 = r2.headers[h]
if (v1 || v2) {
comparison.push(` ${h}: ${v1 ?? '-'} → ${v2 ?? '-'}`)
}
}
// Verdict
const cached2 =
(getField(r2.usage, 'input_tokens_details.cached_tokens') as number) ??
(getField(r2.usage, 'prompt_tokens_details.cached_tokens') as number) ??
0
const input1 =
((r1.usage?.input_tokens ?? r1.usage?.prompt_tokens) as number) ?? 0
const input2 =
((r2.usage?.input_tokens ?? r2.usage?.prompt_tokens) as number) ?? 0
let verdict: string
if (cached2 > 0) {
const rate = input2 > 0 ? Math.round((cached2 / input2) * 100) : '?'
verdict = `CACHE HIT: ${cached2} cached tokens (${rate}% of input)`
} else if (input1 === 0 && input2 === 0) {
verdict = 'INCONCLUSIVE: Server returns 0 input_tokens — cannot measure'
} else if (r2.elapsed < r1.elapsed * 0.6 && input1 > 100) {
verdict = `POSSIBLE SILENT CACHING: Call 2 was ${Math.round((1 - r2.elapsed / r1.elapsed) * 100)}% faster but no cached_tokens reported`
} else {
verdict = 'NO CACHE DETECTED'
}
comparison.push(`\n Verdict: ${verdict}`)
// --- Simulate what main's shim code does with this usage ---
// codexShim.ts makeUsage() — used for Responses API (GPT-5+/Codex)
function mainMakeUsage(u: any) {
return {
input_tokens: u?.input_tokens ?? 0,
output_tokens: u?.output_tokens ?? 0,
cache_creation_input_tokens: 0,
cache_read_input_tokens: 0, // ← main hardcodes this to 0
}
}
// openaiShim.ts convertChunkUsage() — used for Chat Completions
function mainConvertChunkUsage(u: any) {
return {
input_tokens: u?.prompt_tokens ?? 0,
output_tokens: u?.completion_tokens ?? 0,
cache_creation_input_tokens: 0,
cache_read_input_tokens: u?.prompt_tokens_details?.cached_tokens ?? 0,
}
}
const shimFn = useResponses ? mainMakeUsage : mainConvertChunkUsage
const shim1 = shimFn(r1.usage)
const shim2 = shimFn(r2.usage)
comparison.push('')
comparison.push(` --- What main's shim reports (${useResponses ? 'codexShim.makeUsage' : 'openaiShim.convertChunkUsage'}) ---`)
comparison.push(` Call 1: cache_read_input_tokens=${shim1.cache_read_input_tokens}`)
comparison.push(` Call 2: cache_read_input_tokens=${shim2.cache_read_input_tokens}`)
if (useResponses && cached2 > 0) {
comparison.push(` BUG: Server returned ${cached2} cached tokens but main's makeUsage() drops it → reports 0`)
} else if (!useResponses && shim2.cache_read_input_tokens > 0) {
comparison.push(` OK: Chat Completions path on main correctly reads cached_tokens`)
}
logForDebugging(comparison.join('\n'))
// User-facing summary
const mode = noKey ? ' (NO cache key sent)' : ''
const shimLabel = useResponses ? 'codexShim.makeUsage()' : 'openaiShim.convertChunkUsage()'
const summary = [
`Cache Probe — ${request.resolvedModel} via ${useResponses ? 'Responses API' : 'Chat Completions'}${mode}`,
'',
`Call 1: ${r1.elapsed}ms, input=${input1}, cached=${(getField(r1.usage, 'input_tokens_details.cached_tokens') as number) ?? (getField(r1.usage, 'prompt_tokens_details.cached_tokens') as number) ?? 0}`,
`Call 2: ${r2.elapsed}ms, input=${input2}, cached=${cached2}`,
'',
verdict,
'',
`What main's ${shimLabel} reports:`,
` Call 2 cache_read_input_tokens = ${shim2.cache_read_input_tokens}${useResponses && cached2 > 0 ? ' ← BUG: server sent ' + cached2 + ' but main drops it' : ''}`,
'',
'Full details written to debug log.',
].join('\n')
return { type: 'text', value: summary }
}
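To ground the shim comparison above, a worked example of how mainMakeUsage() treats a warm Responses API call; the usage payload is invented for illustration:

// Hypothetical usage payload from a warm Responses API call:
const warmUsage = {
  input_tokens: 1500,
  output_tokens: 8,
  input_tokens_details: { cached_tokens: 1408 },
}
// mainMakeUsage(warmUsage) returns:
// { input_tokens: 1500, output_tokens: 8,
//   cache_creation_input_tokens: 0, cache_read_input_tokens: 0 }
// The 1408 cached tokens are dropped, which is exactly the BUG line the
// comparison above prints for the Responses path.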


@@ -0,0 +1,17 @@
import type { Command } from '../../commands.js'
import { isEnvTruthy } from '../../utils/envUtils.js'
const cacheProbe: Command = {
type: 'local',
name: 'cache-probe',
description:
'Send identical requests to test prompt caching (results in debug log)',
argumentHint: '[model] [--no-key]',
isEnabled: () =>
isEnvTruthy(process.env.CLAUDE_CODE_USE_OPENAI) ||
isEnvTruthy(process.env.CLAUDE_CODE_USE_GITHUB),
supportsNonInteractive: false,
load: () => import('./cache-probe.js'),
}
export default cacheProbe


@@ -39,16 +39,16 @@ type InstallState = {
message: string;
warnings?: string[];
};
function getInstallationPath(): string {
export function getInstallationPath(): string {
const isWindows = env.platform === 'win32';
const homeDir = homedir();
if (isWindows) {
// Convert to Windows-style path
const windowsPath = join(homeDir, '.local', 'bin', 'claude.exe');
const windowsPath = join(homeDir, '.local', 'bin', 'openclaude.exe');
// Replace forward slashes with backslashes for Windows display
return windowsPath.replace(/\//g, '\\');
}
return '~/.local/bin/claude';
return '~/.local/bin/openclaude';
}
function SetupNotes(t0) {
const $ = _c(5);


@@ -1,20 +1,44 @@
import { afterEach, expect, mock, test } from 'bun:test'
import { getAdditionalModelOptionsCacheScope } from '../../services/api/providerConfig.js'
import { getAPIProvider } from '../../utils/model/providers.js'
const originalEnv = {
CLAUDE_CODE_USE_OPENAI: process.env.CLAUDE_CODE_USE_OPENAI,
CLAUDE_CODE_USE_GEMINI: process.env.CLAUDE_CODE_USE_GEMINI,
CLAUDE_CODE_USE_GITHUB: process.env.CLAUDE_CODE_USE_GITHUB,
CLAUDE_CODE_USE_MISTRAL: process.env.CLAUDE_CODE_USE_MISTRAL,
CLAUDE_CODE_USE_BEDROCK: process.env.CLAUDE_CODE_USE_BEDROCK,
CLAUDE_CODE_USE_VERTEX: process.env.CLAUDE_CODE_USE_VERTEX,
CLAUDE_CODE_USE_FOUNDRY: process.env.CLAUDE_CODE_USE_FOUNDRY,
OPENAI_BASE_URL: process.env.OPENAI_BASE_URL,
OPENAI_API_BASE: process.env.OPENAI_API_BASE,
OPENAI_MODEL: process.env.OPENAI_MODEL,
}
afterEach(() => {
mock.restore()
process.env.CLAUDE_CODE_USE_OPENAI = originalEnv.CLAUDE_CODE_USE_OPENAI
process.env.CLAUDE_CODE_USE_GEMINI = originalEnv.CLAUDE_CODE_USE_GEMINI
process.env.CLAUDE_CODE_USE_GITHUB = originalEnv.CLAUDE_CODE_USE_GITHUB
process.env.CLAUDE_CODE_USE_MISTRAL = originalEnv.CLAUDE_CODE_USE_MISTRAL
process.env.CLAUDE_CODE_USE_BEDROCK = originalEnv.CLAUDE_CODE_USE_BEDROCK
process.env.CLAUDE_CODE_USE_VERTEX = originalEnv.CLAUDE_CODE_USE_VERTEX
process.env.CLAUDE_CODE_USE_FOUNDRY = originalEnv.CLAUDE_CODE_USE_FOUNDRY
process.env.OPENAI_BASE_URL = originalEnv.OPENAI_BASE_URL
process.env.OPENAI_API_BASE = originalEnv.OPENAI_API_BASE
process.env.OPENAI_MODEL = originalEnv.OPENAI_MODEL
})
test('opens the model picker without awaiting local model discovery refresh', async () => {
process.env.CLAUDE_CODE_USE_OPENAI = '1'
delete process.env.CLAUDE_CODE_USE_GEMINI
delete process.env.CLAUDE_CODE_USE_GITHUB
delete process.env.CLAUDE_CODE_USE_MISTRAL
delete process.env.CLAUDE_CODE_USE_BEDROCK
delete process.env.CLAUDE_CODE_USE_VERTEX
delete process.env.CLAUDE_CODE_USE_FOUNDRY
delete process.env.OPENAI_API_BASE
process.env.OPENAI_BASE_URL = 'http://127.0.0.1:8080/v1'
process.env.OPENAI_MODEL = 'qwen2.5-coder-7b-instruct'
@@ -30,7 +54,9 @@ test('opens the model picker without awaiting local model discovery refresh', as
discoverOpenAICompatibleModelOptions,
}))
const { call } = await import(`./model.js?ts=${Date.now()}-${Math.random()}`)
expect(getAdditionalModelOptionsCacheScope()).toBe('openai:http://127.0.0.1:8080/v1')
const { call } = await import('./model.js')
const result = await Promise.race([
call(() => {}, {} as never, ''),
new Promise(resolve => setTimeout(() => resolve('timeout'), 50)),


@@ -284,7 +284,7 @@ function haveSameModelOptions(left: ModelOption[], right: ModelOption[]): boolea
});
}
async function refreshOpenAIModelOptionsCache(): Promise<void> {
if (getAPIProvider() !== 'openai') {
if (!getAdditionalModelOptionsCacheScope()?.startsWith('openai:')) {
return;
}
try {
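The scope string encodes the provider and base URL (the test above asserts 'openai:http://127.0.0.1:8080/v1'), so the new guard is a prefix check rather than a provider-enum comparison. A minimal sketch; the non-OpenAI behavior is an assumption based on the scope format:

// Scope format taken from the test above:
const scope = 'openai:http://127.0.0.1:8080/v1'
scope.startsWith('openai:') // true: the model-options cache refresh runs
// A non-OpenAI session presumably yields a different prefix or no scope at
// all, in which case the optional chain makes the refresh a no-op.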


@@ -4,7 +4,7 @@ const onboardGithub: Command = {
name: 'onboard-github',
aliases: ['onboarding-github', 'onboardgithub', 'onboardinggithub'],
description:
'Interactive setup for GitHub Models: device login or PAT, saved to secure storage',
'Interactive setup for GitHub Copilot: OAuth device login stored in secure storage',
type: 'local-jsx',
load: () => import('./onboard-github.js'),
}


@@ -2,9 +2,9 @@ import * as React from 'react'
import { useCallback, useState } from 'react'
import { Select } from '../../components/CustomSelect/select.js'
import { Spinner } from '../../components/Spinner.js'
import TextInput from '../../components/TextInput.js'
import { Box, Text } from '../../ink.js'
import {
exchangeForCopilotToken,
openVerificationUri,
pollAccessToken,
requestDeviceCode,
@@ -15,7 +15,7 @@ import {
readGithubModelsToken,
saveGithubModelsToken,
} from '../../utils/githubModelsCredentials.js'
import { updateSettingsForSource } from '../../utils/settings/settings.js'
import { getSettingsForSource, updateSettingsForSource } from '../../utils/settings/settings.js'
const DEFAULT_MODEL = 'github:copilot'
const FORCE_RELOGIN_ARGS = new Set([
@@ -27,11 +27,25 @@ const FORCE_RELOGIN_ARGS = new Set([
'--reauth',
])
type Step =
| 'menu'
| 'device-busy'
| 'pat'
| 'error'
type Step = 'menu' | 'device-busy' | 'error'
const PROVIDER_SPECIFIC_KEYS = new Set([
'CLAUDE_CODE_USE_OPENAI',
'CLAUDE_CODE_USE_GEMINI',
'CLAUDE_CODE_USE_BEDROCK',
'CLAUDE_CODE_USE_VERTEX',
'CLAUDE_CODE_USE_FOUNDRY',
'OPENAI_BASE_URL',
'OPENAI_API_BASE',
'OPENAI_API_KEY',
'OPENAI_MODEL',
'GEMINI_API_KEY',
'GOOGLE_API_KEY',
'GEMINI_BASE_URL',
'GEMINI_MODEL',
'GEMINI_ACCESS_TOKEN',
'GEMINI_AUTH_MODE',
])
export function shouldForceGithubRelogin(args?: string): boolean {
const normalized = (args ?? '').trim().toLowerCase()
@@ -41,15 +55,29 @@ export function shouldForceGithubRelogin(args?: string): boolean {
return normalized.split(/\s+/).some(arg => FORCE_RELOGIN_ARGS.has(arg))
}
const GITHUB_PAT_PREFIXES = ['ghp_', 'gho_', 'ghs_', 'ghr_', 'github_pat_']
function isGithubPat(token: string): boolean {
return GITHUB_PAT_PREFIXES.some(prefix => token.startsWith(prefix))
}
export function hasExistingGithubModelsLoginToken(
env: NodeJS.ProcessEnv = process.env,
storedToken?: string,
): boolean {
const envToken = env.GITHUB_TOKEN?.trim() || env.GH_TOKEN?.trim()
if (envToken) {
// PATs are no longer supported - require OAuth re-auth
if (isGithubPat(envToken)) {
return false
}
return true
}
const persisted = (storedToken ?? readGithubModelsToken())?.trim()
// PATs are no longer supported - require OAuth re-auth
if (persisted && isGithubPat(persisted)) {
return false
}
return Boolean(persisted)
}
@@ -97,8 +125,21 @@ export function applyGithubOnboardingProcessEnv(
}
function mergeUserSettingsEnv(model: string): { ok: boolean; detail?: string } {
const currentSettings = getSettingsForSource('userSettings')
const currentEnv = currentSettings?.env ?? {}
const newEnv: Record<string, string> = {}
for (const [key, value] of Object.entries(currentEnv)) {
if (!PROVIDER_SPECIFIC_KEYS.has(key)) {
newEnv[key] = value
}
}
newEnv.CLAUDE_CODE_USE_GITHUB = '1'
newEnv.OPENAI_MODEL = model
const { error } = updateSettingsForSource('userSettings', {
env: buildGithubOnboardingSettingsEnv(model) as any,
env: newEnv,
})
if (error) {
return { ok: false, detail: error.message }
@@ -143,12 +184,14 @@ function OnboardGithub(props: {
user_code: string
verification_uri: string
} | null>(null)
const [patDraft, setPatDraft] = useState('')
const [cursorOffset, setCursorOffset] = useState(0)
const finalize = useCallback(
async (token: string, model: string = DEFAULT_MODEL) => {
const saved = saveGithubModelsToken(token)
async (
token: string,
model: string = DEFAULT_MODEL,
oauthToken?: string,
) => {
const saved = saveGithubModelsToken(token, oauthToken)
if (!saved.success) {
setErrorMsg(saved.warning ?? 'Could not save token to secure storage.')
setStep('error')
@@ -165,8 +208,18 @@ function OnboardGithub(props: {
setStep('error')
return
}
// Clear stale provider-specific env vars from the current session
// so resolveProviderRequest() doesn't pick up a previous provider's
// base URL or key after onboarding completes.
for (const key of PROVIDER_SPECIFIC_KEYS) {
delete process.env[key]
}
process.env.CLAUDE_CODE_USE_GITHUB = '1'
process.env.OPENAI_MODEL = model.trim() || DEFAULT_MODEL
hydrateGithubModelsTokenFromSecureStorage()
onChangeAPIKey()
onDone(
'GitHub Models onboard complete. Token stored in secure storage; user settings updated. Restart if the model does not switch.',
'GitHub Copilot onboard complete. Copilot token and OAuth token stored in secure storage (Windows/Linux: ~/.claude/.credentials.json, macOS: Keychain fallback to ~/.claude/.credentials.json); user settings updated. Restart if the model does not switch.',
{ display: 'user' },
)
},
@@ -184,11 +237,12 @@ function OnboardGithub(props: {
verification_uri: device.verification_uri,
})
await openVerificationUri(device.verification_uri)
const token = await pollAccessToken(device.device_code, {
const oauthToken = await pollAccessToken(device.device_code, {
initialInterval: device.interval,
timeoutSeconds: device.expires_in,
})
await finalize(token, DEFAULT_MODEL)
const copilotToken = await exchangeForCopilotToken(oauthToken)
await finalize(copilotToken.token, DEFAULT_MODEL, oauthToken)
} catch (e) {
setErrorMsg(e instanceof Error ? e.message : String(e))
setStep('error')
@@ -227,7 +281,7 @@ function OnboardGithub(props: {
if (step === 'device-busy') {
return (
<Box flexDirection="column" gap={1}>
<Text>GitHub device login</Text>
<Text>GitHub Copilot sign-in</Text>
{deviceHint ? (
<>
<Text>
@@ -246,43 +300,11 @@ function OnboardGithub(props: {
)
}
if (step === 'pat') {
return (
<Box flexDirection="column" gap={1}>
<Text>Paste a GitHub personal access token with access to GitHub Models.</Text>
<Text dimColor>Input is masked. Enter to submit; Esc to go back.</Text>
<TextInput
value={patDraft}
mask="*"
onChange={setPatDraft}
onSubmit={async (value: string) => {
const t = value.trim()
if (!t) {
return
}
await finalize(t, DEFAULT_MODEL)
}}
onExit={() => {
setStep('menu')
setPatDraft('')
}}
columns={80}
cursorOffset={cursorOffset}
onChangeCursorOffset={setCursorOffset}
/>
</Box>
)
}
const menuOptions = [
{
label: 'Sign in with browser (device code)',
label: 'Sign in with browser',
value: 'device' as const,
},
{
label: 'Paste personal access token',
value: 'pat' as const,
},
{
label: 'Cancel',
value: 'cancel' as const,
@@ -291,7 +313,7 @@ function OnboardGithub(props: {
return (
<Box flexDirection="column" gap={1}>
<Text bold>GitHub Models setup</Text>
<Text bold>GitHub Copilot setup</Text>
<Text dimColor>
Stores your token in the OS credential store (macOS Keychain when available)
and enables CLAUDE_CODE_USE_GITHUB in your user settings - no export
@@ -304,10 +326,6 @@ function OnboardGithub(props: {
onDone('GitHub onboard cancelled', { display: 'system' })
return
}
if (v === 'pat') {
setStep('pat')
return
}
void runDeviceFlow()
}}
/>
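Pulled out of the interleaved hunk above, the new device-flow sequence reads as follows (names exactly as in the diff):

const device = await requestDeviceCode()
await openVerificationUri(device.verification_uri)
const oauthToken = await pollAccessToken(device.device_code, {
  initialInterval: device.interval,
  timeoutSeconds: device.expires_in,
})
// New step: the GitHub OAuth token is exchanged for a short-lived Copilot
// token; both are handed to finalize() so each can be stored.
const copilotToken = await exchangeForCopilotToken(oauthToken)
await finalize(copilotToken.token, DEFAULT_MODEL, oauthToken)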


@@ -22,11 +22,14 @@ import {
import {
buildCodexProfileEnv,
buildGeminiProfileEnv,
buildMistralProfileEnv,
buildOllamaProfileEnv,
buildOpenAIProfileEnv,
createProfileFile,
DEFAULT_GEMINI_BASE_URL,
DEFAULT_GEMINI_MODEL,
DEFAULT_MISTRAL_BASE_URL,
DEFAULT_MISTRAL_MODEL,
deleteProfileFile,
loadProfileFile,
maskSecretForDisplay,
@@ -74,6 +77,14 @@ type Step =
baseUrl: string | null
defaultModel: string
}
| { name: 'mistral-key'; defaultModel: string }
| { name: 'mistral-base'; apiKey: string; defaultModel: string }
| {
name: 'mistral-model'
apiKey: string
baseUrl: string | null
defaultModel: string
}
| { name: 'gemini-auth-method' }
| { name: 'gemini-key' }
| { name: 'gemini-access-token' }
@@ -116,6 +127,8 @@ type ProviderWizardDefaults = {
openAIModel: string
openAIBaseUrl: string
geminiModel: string
mistralModel: string
mistralBaseUrl: string
}
function isEnvTruthy(value: string | undefined): boolean {
@@ -147,11 +160,19 @@ export function getProviderWizardDefaults(
const safeGeminiModel =
sanitizeProviderConfigValue(processEnv.GEMINI_MODEL, processEnv) ||
DEFAULT_GEMINI_MODEL
const safeMistralModel =
sanitizeProviderConfigValue(processEnv.MISTRAL_MODEL, processEnv) ||
DEFAULT_MISTRAL_MODEL
const safeMistralBaseUrl =
sanitizeProviderConfigValue(processEnv.MISTRAL_BASE_URL, processEnv) ||
DEFAULT_MISTRAL_BASE_URL
return {
openAIModel: safeOpenAIModel,
openAIBaseUrl: safeOpenAIBaseUrl,
geminiModel: safeGeminiModel,
mistralModel: safeMistralModel,
mistralBaseUrl: safeMistralBaseUrl,
}
}
@@ -178,6 +199,21 @@ export function buildCurrentProviderSummary(options?: {
}
}
if (isEnvTruthy(processEnv.CLAUDE_CODE_USE_MISTRAL)) {
return {
providerLabel: 'Mistral',
modelLabel: getSafeDisplayValue(
processEnv.MISTRAL_MODEL ?? DEFAULT_MISTRAL_MODEL,
processEnv
),
endpointLabel: getSafeDisplayValue(
processEnv.MISTRAL_BASE_URL ?? DEFAULT_MISTRAL_BASE_URL,
processEnv
),
savedProfileLabel,
}
}
if (isEnvTruthy(processEnv.CLAUDE_CODE_USE_GITHUB)) {
return {
providerLabel: 'GitHub Models',
@@ -259,6 +295,24 @@ function buildSavedProfileSummary(
? 'configured'
: undefined,
}
case 'mistral':
return {
providerLabel: 'Mistral',
modelLabel: getSafeDisplayValue(
env.MISTRAL_MODEL ?? DEFAULT_MISTRAL_MODEL,
process.env,
env,
),
endpointLabel: getSafeDisplayValue(
env.MISTRAL_BASE_URL ?? DEFAULT_MISTRAL_BASE_URL,
process.env,
env,
),
credentialLabel:
maskSecretForDisplay(env.MISTRAL_API_KEY) !== undefined
? 'configured'
: undefined,
}
case 'codex':
return {
providerLabel: 'Codex',
@@ -473,6 +527,11 @@ function ProviderChooser({
value: 'gemini',
description: 'Use Google Gemini with API key, access token, or local ADC',
},
{
label: 'Mistral',
value: 'mistral',
description: 'Use Mistral with API key'
},
{
label: 'Codex',
value: 'codex',
@@ -971,6 +1030,11 @@ export function ProviderWizard({
})
} else if (value === 'gemini') {
setStep({ name: 'gemini-auth-method' })
} else if (value === 'mistral') {
setStep({
name: 'mistral-key',
defaultModel: defaults.mistralModel,
})
} else if (value === 'clear') {
const filePath = deleteProfileFile()
onDone(`Removed saved provider profile at ${filePath}. Restart OpenClaude to go back to normal startup.`, {
@@ -1110,6 +1174,101 @@ export function ProviderWizard({
/>
)
case 'mistral-key':
return (
<TextEntryDialog
resetStateKey={step.name}
title="Mistral setup"
subtitle="Step 1 of 3"
description={
process.env.MISTRAL_API_KEY
? 'Enter an API key, or leave this blank to reuse the current MISTRAL_API_KEY from this session.'
: 'Enter the API key for your Mistral provider.'
}
initialValue=""
placeholder="..."
mask="*"
allowEmpty={Boolean(process.env.MISTRAL_API_KEY)}
validate={value => {
const candidate = value.trim() || process.env.MISTRAL_API_KEY || ''
return sanitizeApiKey(candidate)
? null
: 'Enter a real API key. Placeholder values like SUA_CHAVE are not valid.'
}}
onSubmit={value => {
const apiKey = value.trim() || process.env.MISTRAL_API_KEY || ''
setStep({
name: 'mistral-base',
apiKey,
defaultModel: step.defaultModel,
})
}}
onCancel={() => setStep({ name: 'choose' })}
/>
)
case 'mistral-base':
return (
<TextEntryDialog
resetStateKey={step.name}
title="Mistral setup"
subtitle="Step 2 of 3"
description={`Optionally enter a base URL. Leave blank for ${DEFAULT_MISTRAL_BASE_URL}.`}
initialValue={
defaults.mistralBaseUrl === DEFAULT_MISTRAL_BASE_URL
? ''
: defaults.mistralBaseUrl
}
placeholder={DEFAULT_MISTRAL_BASE_URL}
allowEmpty
onSubmit={value => {
setStep({
name: 'mistral-model',
apiKey: step.apiKey,
baseUrl: value.trim() || null,
defaultModel: step.defaultModel,
})
}}
onCancel={() =>
setStep({
name: 'mistral-key',
defaultModel: step.defaultModel,
})
}
/>
)
case 'mistral-model':
return (
<TextEntryDialog
resetStateKey={step.name}
title="Mistral setup"
subtitle="Step 3 of 3"
description={`Enter a model name. Leave blank for ${step.defaultModel}.`}
initialValue={defaults.mistralModel ?? step.defaultModel}
placeholder={step.defaultModel}
allowEmpty
onSubmit={value => {
const env = buildMistralProfileEnv({
model: value.trim() || step.defaultModel,
baseUrl: step.baseUrl,
apiKey: step.apiKey,
processEnv: process.env,
})
if (env) {
finishProfileSave(onDone, 'mistral', env)
}
}}
onCancel={() =>
setStep({
name: 'mistral-base',
apiKey: step.apiKey,
defaultModel: step.defaultModel,
})
}
/>
)
case 'gemini-auth-method': {
const hasShellGeminiKey = Boolean(
process.env.GEMINI_API_KEY || process.env.GOOGLE_API_KEY,


@@ -65,7 +65,7 @@ export async function call(onDone: (result?: string) => void, _context: unknown,
// Get the local settings path and make it relative to cwd
const localSettingsPath = getSettingsFilePathForSource('localSettings');
const relativePath = localSettingsPath ? relative(getCwdState(), localSettingsPath) : '.claude/settings.local.json';
const relativePath = localSettingsPath ? relative(getCwdState(), localSettingsPath) : '.openclaude/settings.local.json';
const message = color('success', themeName)(`Added "${cleanPattern}" to excluded commands in ${relativePath}`);
onDone(message);
return null;


@@ -0,0 +1,12 @@
import type { Command } from '../../commands.js'
const wiki = {
type: 'local-jsx',
name: 'wiki',
description: 'Initialize and inspect the OpenClaude project wiki',
argumentHint: '[init|status]',
immediate: true,
load: () => import('./wiki.js'),
} satisfies Command
export default wiki

src/commands/wiki/wiki.tsx (new file)

@@ -0,0 +1,123 @@
import React from 'react'
import { COMMON_HELP_ARGS, COMMON_INFO_ARGS } from '../../constants/xml.js'
import { ingestLocalWikiSource } from '../../services/wiki/ingest.js'
import { initializeWiki } from '../../services/wiki/init.js'
import { getWikiStatus } from '../../services/wiki/status.js'
import type {
LocalJSXCommandCall,
LocalJSXCommandOnDone,
} from '../../types/command.js'
import { getCwd } from '../../utils/cwd.js'
function renderHelp(): string {
return `Usage: /wiki [init|status|ingest <path>]
Manage the OpenClaude project wiki stored in .openclaude/wiki.
Commands:
/wiki init Initialize the wiki structure in the current project
/wiki status Show wiki status and page/source counts
/wiki ingest Ingest a local file into wiki sources
Examples:
/wiki init
/wiki status
/wiki ingest README.md`
}
function formatInitResult(result: Awaited<ReturnType<typeof initializeWiki>>): string {
const lines = [`Initialized OpenClaude wiki at ${result.root}`]
if (result.alreadyExisted) {
lines.push('', 'Wiki already existed. No new files were created.')
return lines.join('\n')
}
if (result.createdFiles.length > 0) {
lines.push('', 'Created files:')
for (const file of result.createdFiles) {
lines.push(`- ${file}`)
}
}
return lines.join('\n')
}
function formatStatus(status: Awaited<ReturnType<typeof getWikiStatus>>): string {
if (!status.initialized) {
return `OpenClaude wiki is not initialized in this project.\n\nRun /wiki init to create ${status.root}.`
}
return [
'OpenClaude wiki status',
'',
`Root: ${status.root}`,
`Pages: ${status.pageCount}`,
`Sources: ${status.sourceCount}`,
`Schema: ${status.hasSchema ? 'present' : 'missing'}`,
`Index: ${status.hasIndex ? 'present' : 'missing'}`,
`Log: ${status.hasLog ? 'present' : 'missing'}`,
`Last updated: ${status.lastUpdatedAt ?? 'unknown'}`,
].join('\n')
}
function formatIngestResult(
result: Awaited<ReturnType<typeof ingestLocalWikiSource>>,
): string {
return [
`Ingested ${result.sourceFile} into the OpenClaude wiki.`,
'',
`Title: ${result.title}`,
`Source note: ${result.sourceNote}`,
`Summary: ${result.summary}`,
].join('\n')
}
async function runWikiCommand(
onDone: LocalJSXCommandOnDone,
args: string,
): Promise<void> {
const cwd = getCwd()
const normalized = args.trim().toLowerCase()
if (COMMON_HELP_ARGS.includes(normalized) || COMMON_INFO_ARGS.includes(normalized)) {
onDone(renderHelp(), { display: 'system' })
return
}
if (!normalized || normalized === 'status') {
onDone(formatStatus(await getWikiStatus(cwd)), { display: 'system' })
return
}
if (normalized === 'init') {
onDone(formatInitResult(await initializeWiki(cwd)), { display: 'system' })
return
}
if (normalized.startsWith('ingest')) {
const pathArg = args.trim().slice('ingest'.length).trim()
if (!pathArg) {
onDone('Usage: /wiki ingest <local-file-path>', { display: 'system' })
return
}
onDone(formatIngestResult(await ingestLocalWikiSource(cwd, pathArg)), {
display: 'system',
})
return
}
onDone(`Unknown wiki subcommand: ${args.trim()}\n\n${renderHelp()}`, {
display: 'system',
})
}
export const call: LocalJSXCommandCall = async (
onDone,
_context,
args,
): Promise<React.ReactNode> => {
await runWikiCommand(onDone, args ?? '')
return null
}


@@ -188,9 +188,9 @@ export function AutoUpdater({
Update installed · Restart to apply
</Text>}
{(autoUpdaterResult?.status === 'install_failed' || autoUpdaterResult?.status === 'no_permissions') && <Text color="error" wrap="truncate">
Auto-update failed &middot; Try <Text bold>claude doctor</Text> or{' '}
Auto-update failed &middot; Try <Text bold>openclaude doctor</Text> or{' '}
<Text bold>
{hasLocalInstall ? `cd ~/.claude/local && npm update ${MACRO.PACKAGE_URL}` : `npm i -g ${MACRO.PACKAGE_URL}`}
{hasLocalInstall ? `cd ~/.openclaude/local && npm update ${MACRO.PACKAGE_URL}` : `npm i -g ${MACRO.PACKAGE_URL}`}
</Text>
</Text>}
</Box>;


@@ -31,9 +31,11 @@ export function BaseTextInput(t0) {
} = t0;
const {
onInput,
value,
renderedValue,
cursorLine,
cursorColumn
cursorColumn,
offset,
} = inputState;
const t1 = Boolean(props.focus && props.showCursor && terminalFocus);
let t2;
@@ -78,7 +80,7 @@ export function BaseTextInput(t0) {
renderedPlaceholder
} = renderPlaceholder({
placeholder: props.placeholder,
value: props.value,
value,
showCursor: props.showCursor,
focus: props.focus,
terminalFocus,
@@ -88,9 +90,9 @@ export function BaseTextInput(t0) {
useInput(wrappedOnInput, {
isActive: props.focus
});
const commandWithoutArgs = props.value && props.value.trim().indexOf(" ") === -1 || props.value && props.value.endsWith(" ");
const showArgumentHint = Boolean(props.argumentHint && props.value && commandWithoutArgs && props.value.startsWith("/"));
const cursorFiltered = props.showCursor && props.highlights ? props.highlights.filter(h => h.dimColor || props.cursorOffset < h.start || props.cursorOffset >= h.end) : props.highlights;
const commandWithoutArgs = value && value.trim().indexOf(" ") === -1 || value && value.endsWith(" ");
const showArgumentHint = Boolean(props.argumentHint && value && commandWithoutArgs && value.startsWith("/"));
const cursorFiltered = props.showCursor && props.highlights ? props.highlights.filter(h => h.dimColor || offset < h.start || offset >= h.end) : props.highlights;
const {
viewportCharOffset,
viewportCharEnd
@@ -102,13 +104,13 @@ export function BaseTextInput(t0) {
})) : cursorFiltered;
const hasHighlights = filteredHighlights && filteredHighlights.length > 0;
if (hasHighlights) {
return <Box ref={cursorRef}><HighlightedInput text={renderedValue} highlights={filteredHighlights} />{showArgumentHint && <Text dimColor={true}>{props.value?.endsWith(" ") ? "" : " "}{props.argumentHint}</Text>}{children}</Box>;
return <Box ref={cursorRef}><HighlightedInput text={renderedValue} highlights={filteredHighlights} />{showArgumentHint && <Text dimColor={true}>{value.endsWith(" ") ? "" : " "}{props.argumentHint}</Text>}{children}</Box>;
}
const T0 = Box;
const T1 = Text;
const t4 = "truncate-end";
const t5 = showPlaceholder && props.placeholderElement ? props.placeholderElement : showPlaceholder && renderedPlaceholder ? <Ansi>{renderedPlaceholder}</Ansi> : <Ansi>{renderedValue}</Ansi>;
const t6 = showArgumentHint && <Text dimColor={true}>{props.value?.endsWith(" ") ? "" : " "}{props.argumentHint}</Text>;
const t6 = showArgumentHint && <Text dimColor={true}>{value.endsWith(" ") ? "" : " "}{props.argumentHint}</Text>;
let t7;
if ($[4] !== T1 || $[5] !== children || $[6] !== props || $[7] !== t5 || $[8] !== t6) {
t7 = <T1 wrap={t4} dimColor={props.dimColor}>{t5}{t6}{children}</T1>;


@@ -103,7 +103,7 @@ test('login picker shows the third-party platform option', async () => {
expect(output).toContain('3rd-party platform')
})
test('third-party provider branch opens the provider wizard', async () => {
test('third-party provider branch opens the first-run provider manager', async () => {
const output = await renderFrame(
<ConsoleOAuthFlow
initialStatus={{ state: 'platform_setup' }}
@@ -111,7 +111,9 @@ test('third-party provider branch opens the provider wizard', async () => {
/>,
)
expect(output).toContain('Set up a provider profile')
expect(output).toContain('OpenAI-compatible')
expect(output).toContain('Set up provider')
expect(output).toContain('Anthropic')
expect(output).toContain('OpenAI')
expect(output).toContain('Ollama')
expect(output).toContain('LM Studio')
})


@@ -12,7 +12,7 @@ import { OAuthService } from '../services/oauth/index.js';
import { getOauthAccountInfo, validateForceLoginOrg } from '../utils/auth.js';
import { logError } from '../utils/log.js';
import { getSettings_DEPRECATED } from '../utils/settings/settings.js';
import { ProviderWizard } from '../commands/provider/provider.js';
import { ProviderManager } from './ProviderManager.js';
import { Select } from './CustomSelect/select.js';
import { KeyboardShortcutHint } from './design-system/KeyboardShortcutHint.js';
import { Spinner } from './Spinner.js';
@@ -450,16 +450,17 @@ function OAuthStatusMessage({
case 'platform_setup':
return (
<ProviderWizard
<ProviderManager
mode="first-run"
onDone={result => {
if (!result) {
if (!result || result.action !== 'saved' || !result.message) {
setOAuthStatus({ state: 'idle' })
return
}
setOAuthStatus({
state: 'platform_setup_complete',
message: result,
message: result.message,
})
}}
/>


@@ -285,7 +285,7 @@ export function Select(t0) {
onChange,
onCancel,
onFocus,
focusValue: defaultFocusValue
defaultFocusValue,
};
$[7] = defaultFocusValue;
$[8] = defaultValue;


@@ -1,5 +1,4 @@
import { useCallback, useState } from 'react'
import { isDeepStrictEqual } from 'util'
import { useRegisterOverlay } from '../../context/overlayContext.js'
import type { InputEvent } from '../../ink/events/input-event.js'
// eslint-disable-next-line custom-rules/prefer-use-keybindings -- raw space/arrow multiselect input
@@ -9,6 +8,7 @@ import {
normalizeFullWidthSpace,
} from '../../utils/stringUtils.js'
import type { OptionWithDescription } from './select.js'
import { optionsNavigateEqual } from './use-select-navigation.js'
import { useSelectNavigation } from './use-select-navigation.js'
export type UseMultiSelectStateProps<T> = {
@@ -174,7 +174,7 @@ export function useMultiSelectState<T>({
// and the deleted ui/useMultiSelectState.ts — without this, MCPServerDesktopImportDialog
// keeps colliding servers checked after getAllMcpConfigs() resolves.
const [lastOptions, setLastOptions] = useState(options)
if (options !== lastOptions && !isDeepStrictEqual(options, lastOptions)) {
if (options !== lastOptions && !optionsNavigateEqual(options, lastOptions)) {
setSelectedValues(defaultValue)
setLastOptions(options)
}


@@ -6,10 +6,34 @@ import {
useRef,
useState,
} from 'react'
import { isDeepStrictEqual } from 'util'
import OptionMap from './option-map.js'
import type { OptionWithDescription } from './select.js'
/**
* Compare two option arrays for structural equality on properties that
* affect navigation behavior. ReactNode `label` and function `onChange`
* are intentionally excluded — they are identity-unstable (new reference
* each render) but don't change navigation semantics.
*/
export function optionsNavigateEqual<T>(
a: OptionWithDescription<T>[],
b: OptionWithDescription<T>[],
): boolean {
if (a.length !== b.length) return false
for (let i = 0; i < a.length; i++) {
const ao = a[i]!
const bo = b[i]!
if (
ao.value !== bo.value ||
ao.disabled !== bo.disabled ||
ao.type !== bo.type
) {
return false
}
}
return true
}
type State<T> = {
/**
* Map where key is option's value and value is option's index.
@@ -524,7 +548,7 @@ export function useSelectNavigation<T>({
const [lastOptions, setLastOptions] = useState(options)
if (options !== lastOptions && !isDeepStrictEqual(options, lastOptions)) {
if (options !== lastOptions && !optionsNavigateEqual(options, lastOptions)) {
dispatch({
type: 'reset',
state: createDefaultState({
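A quick illustration of the equality semantics above, using hypothetical option arrays (shapes simplified to the fields the comparison reads):

// Same value/disabled/type, different label: still treated as equal, because
// labels are identity-unstable across renders and intentionally ignored.
optionsNavigateEqual(
  [{ value: 'a', label: 'Option A' }],
  [{ value: 'a', label: 'Option A (new node)' }],
) // true: navigation state survives the re-render
optionsNavigateEqual(
  [{ value: 'a', label: 'Option A' }],
  [{ value: 'a', label: 'Option A', disabled: true }],
) // false: disabled affects navigation, so state resets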


@@ -35,6 +35,11 @@ export type UseSelectStateProps<T> = {
*/
onFocus?: (value: T) => void
/**
* Initial value to focus when the component mounts.
*/
defaultFocusValue?: T
/**
* Value to focus
*/
@@ -131,6 +136,7 @@ export function useSelectState<T>({
onChange,
onCancel,
onFocus,
defaultFocusValue,
focusValue,
}: UseSelectStateProps<T>): SelectState<T> {
const [value, setValue] = useState<T | undefined>(defaultValue)
@@ -138,7 +144,7 @@ export function useSelectState<T>({
const navigation = useSelectNavigation<T>({
visibleOptionCount,
options,
initialFocusValue: undefined,
initialFocusValue: defaultFocusValue,
onFocus,
focusValue,
})


@@ -112,7 +112,7 @@ export function HelpV2(t0) {
}
tabs.push(t6);
if (false && antOnlyCommands.length > 0) {
let t7;
let t7;
if ($[26] !== antOnlyCommands || $[27] !== close || $[28] !== columns || $[29] !== maxHeight) {
t7 = <Tab key="internal-only" title="[internal-only]"><Commands commands={antOnlyCommands} maxHeight={maxHeight} columns={columns} title="Browse internal-only commands:" onCancel={close} /></Tab>;
$[26] = antOnlyCommands;


@@ -252,14 +252,24 @@ function PromptInput({
show: false
});
const [cursorOffset, setCursorOffset] = useState<number>(input.length);
// Track the last input value set via internal handlers so we can detect
// external input changes (e.g. speech-to-text injection) and move cursor to end.
// Track the last input value set via internal handlers so external updates
// (for example speech-to-text injection) can still move the cursor to end
// without clobbering a pending internal keystroke during render.
const lastInternalInputRef = React.useRef(input);
if (input !== lastInternalInputRef.current) {
// Input changed externally (not through any internal handler) — move cursor to end
setCursorOffset(input.length);
const lastPropInputRef = React.useRef(input);
React.useLayoutEffect(() => {
if (input === lastPropInputRef.current) {
return;
}
lastPropInputRef.current = input;
if (input === lastInternalInputRef.current) {
return;
}
lastInternalInputRef.current = input;
}
setCursorOffset(prev => prev === input.length ? prev : input.length);
}, [input]);
// Wrap onInputChange to track internal changes before they trigger re-render
const trackAndSetInput = React.useCallback((value: string) => {
lastInternalInputRef.current = value;
@@ -2201,7 +2211,7 @@ function PromptInput({
multiline: true,
onSubmit,
onChange,
value: historyMatch ? getValueFromInput(typeof historyMatch === 'string' ? historyMatch : historyMatch.display) : input,
value: isSearchingHistory && historyMatch ? getValueFromInput(typeof historyMatch === 'string' ? historyMatch : historyMatch.display) : input,
// History navigation is handled via TextInput props (onHistoryUp/onHistoryDown),
// NOT via useKeybindings. This allows useTextInput's upOrHistoryUp/downOrHistoryDown
// to try cursor movement first and only fall through to history navigation when the
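Because the old and new lines are interleaved above, here is the resulting cursor-sync pattern on its own, a simplified reconstruction using the same ref names as the diff:

const lastInternalInputRef = React.useRef(input)
const lastPropInputRef = React.useRef(input)
React.useLayoutEffect(() => {
  if (input === lastPropInputRef.current) return // prop unchanged: nothing to do
  lastPropInputRef.current = input
  if (input === lastInternalInputRef.current) return // change came from an internal handler
  lastInternalInputRef.current = input
  // External update (e.g. speech-to-text injection): move the cursor to the end.
  setCursorOffset(prev => (prev === input.length ? prev : input.length))
}, [input])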


@@ -6,6 +6,7 @@ import stripAnsi from 'strip-ansi'
import { createRoot } from '../ink.js'
import { AppStateProvider } from '../state/AppState.js'
import { KeybindingSetup } from '../keybindings/KeybindingProviderSetup.js'
const SYNC_START = '\x1B[?2026h'
const SYNC_END = '\x1B[?2026l'
@@ -106,19 +107,30 @@ function createDeferred<T>(): {
return { promise, resolve }
}
function mockProviderProfilesModule(): void {
function mockProviderProfilesModule(options?: {
addProviderProfile?: (...args: unknown[]) => unknown
}): void {
mock.module('../utils/providerProfiles.js', () => ({
addProviderProfile: () => null,
addProviderProfile: options?.addProviderProfile ?? (() => null),
applyActiveProviderProfileFromConfig: () => {},
deleteProviderProfile: () => ({ removed: false, activeProfileId: null }),
getActiveProviderProfile: () => null,
getProviderPresetDefaults: () => ({
provider: 'openai',
name: 'Mock provider',
baseUrl: 'http://localhost:11434/v1',
model: 'mock-model',
apiKey: '',
}),
getProviderPresetDefaults: (preset: string) =>
preset === 'ollama'
? {
provider: 'openai',
name: 'Ollama',
baseUrl: 'http://localhost:11434/v1',
model: 'llama3.1:8b',
apiKey: '',
}
: {
provider: 'openai',
name: 'Mock provider',
baseUrl: 'http://localhost:11434/v1',
model: 'mock-model',
apiKey: '',
},
getProviderProfiles: () => [],
setActiveProviderProfile: () => null,
updateProviderProfile: () => null,
@@ -128,8 +140,27 @@ function mockProviderProfilesModule(): void {
function mockProviderManagerDependencies(
syncRead: () => string | undefined,
asyncRead: () => Promise<string | undefined>,
options?: {
addProviderProfile?: (...args: unknown[]) => unknown
hasLocalOllama?: () => Promise<boolean>
listOllamaModels?: () => Promise<
Array<{
name: string
sizeBytes?: number | null
family?: string | null
families?: string[]
parameterSize?: string | null
quantizationLevel?: string | null
}>
>
},
): void {
mockProviderProfilesModule()
mockProviderProfilesModule({ addProviderProfile: options?.addProviderProfile })
mock.module('../utils/providerDiscovery.js', () => ({
hasLocalOllama: options?.hasLocalOllama ?? (async () => false),
listOllamaModels: options?.listOllamaModels ?? (async () => []),
}))
mock.module('../utils/githubModelsCredentials.js', () => ({
clearGithubModelsToken: () => ({ success: true }),
@@ -162,9 +193,14 @@ async function waitForFrameOutput(
async function mountProviderManager(
ProviderManager: React.ComponentType<{
mode: 'first-run' | 'manage'
onDone: () => void
onDone: (result?: unknown) => void
}>,
options?: {
mode?: 'first-run' | 'manage'
onDone?: (result?: unknown) => void
},
): Promise<{
stdin: PassThrough
getOutput: () => string
dispose: () => Promise<void>
}> {
@@ -177,14 +213,17 @@ async function mountProviderManager(
root.render(
<AppStateProvider>
<ProviderManager
mode="manage"
onDone={() => {}}
/>
<KeybindingSetup>
<ProviderManager
mode={options?.mode ?? 'manage'}
onDone={options?.onDone ?? (() => {})}
/>
</KeybindingSetup>
</AppStateProvider>,
)
return {
stdin,
getOutput,
dispose: async () => {
root.unmount()
@@ -198,14 +237,17 @@ async function mountProviderManager(
async function renderProviderManagerFrame(
ProviderManager: React.ComponentType<{
mode: 'first-run' | 'manage'
onDone: () => void
onDone: (result?: unknown) => void
}>,
options?: {
waitForOutput?: (output: string) => boolean
timeoutMs?: number
mode?: 'first-run' | 'manage'
},
): Promise<string> {
const mounted = await mountProviderManager(ProviderManager)
const mounted = await mountProviderManager(ProviderManager, {
mode: options?.mode,
})
const output = await waitForFrameOutput(
mounted.getOutput,
frame => {
@@ -263,6 +305,96 @@ test('ProviderManager resolves GitHub virtual provider from async storage withou
expect(asyncRead).toHaveBeenCalled()
})
test('ProviderManager first-run Ollama preset auto-detects installed models', async () => {
delete process.env.CLAUDE_CODE_USE_GITHUB
delete process.env.GITHUB_TOKEN
delete process.env.GH_TOKEN
const onDone = mock(() => {})
const addProviderProfile = mock((payload: {
provider: string
name: string
baseUrl: string
model: string
apiKey?: string
}) => ({
id: 'provider_ollama',
provider: payload.provider,
name: payload.name,
baseUrl: payload.baseUrl,
model: payload.model,
apiKey: payload.apiKey,
}))
mockProviderManagerDependencies(
() => undefined,
async () => undefined,
{
addProviderProfile,
hasLocalOllama: async () => true,
listOllamaModels: async () => [
{
name: 'gemma4:31b-cloud',
family: 'gemma',
parameterSize: '31b',
},
{
name: 'kimi-k2.5:cloud',
family: 'kimi',
parameterSize: '2.5b',
},
],
},
)
const nonce = `${Date.now()}-${Math.random()}`
const { ProviderManager } = await import(`./ProviderManager.js?ts=${nonce}`)
const mounted = await mountProviderManager(ProviderManager, {
mode: 'first-run',
onDone,
})
await waitForFrameOutput(
mounted.getOutput,
frame => frame.includes('Set up provider') && frame.includes('Ollama'),
)
mounted.stdin.write('j')
await Bun.sleep(50)
mounted.stdin.write('\r')
const modelFrame = await waitForFrameOutput(
mounted.getOutput,
frame =>
frame.includes('Choose an Ollama model') &&
frame.includes('gemma4:31b-cloud') &&
frame.includes('kimi-k2.5:cloud'),
)
expect(modelFrame).toContain('Choose an Ollama model')
expect(modelFrame).toContain('gemma4:31b-cloud')
await Bun.sleep(25)
mounted.stdin.write('\r')
await waitForCondition(() => onDone.mock.calls.length > 0)
expect(addProviderProfile).toHaveBeenCalled()
expect(addProviderProfile.mock.calls[0]?.[0]).toMatchObject({
name: 'Ollama',
baseUrl: 'http://localhost:11434/v1',
model: 'gemma4:31b-cloud',
})
expect(onDone).toHaveBeenCalledWith(
expect.objectContaining({
action: 'saved',
message: 'Provider configured: Ollama',
}),
)
await mounted.dispose()
})
test('ProviderManager avoids first-frame false negative while stored-token lookup is pending', async () => {
delete process.env.CLAUDE_CODE_USE_GITHUB
delete process.env.GITHUB_TOKEN


@@ -3,6 +3,7 @@ import * as React from 'react'
import { Box, Text } from '../ink.js'
import { useKeybinding } from '../keybindings/useKeybinding.js'
import type { ProviderProfile } from '../utils/config.js'
import { hasLocalOllama, listOllamaModels } from '../utils/providerDiscovery.js'
import {
addProviderProfile,
applyActiveProviderProfileFromConfig,
@@ -15,6 +16,10 @@ import {
type ProviderProfileInput,
updateProviderProfile,
} from '../utils/providerProfiles.js'
import {
rankOllamaModels,
recommendOllamaModel,
} from '../utils/providerRecommendation.js'
import {
clearGithubModelsToken,
GITHUB_MODELS_HYDRATED_ENV_MARKER,
@@ -24,7 +29,7 @@ import {
} from '../utils/githubModelsCredentials.js'
import { isEnvTruthy } from '../utils/envUtils.js'
import { updateSettingsForSource } from '../utils/settings/settings.js'
import { Select } from './CustomSelect/index.js'
import { type OptionWithDescription, Select } from './CustomSelect/index.js'
import { Pane } from './design-system/Pane.js'
import TextInput from './TextInput.js'
@@ -42,6 +47,7 @@ type Props = {
type Screen =
| 'menu'
| 'select-preset'
| 'select-ollama-model'
| 'form'
| 'select-active'
| 'select-edit'
@@ -51,6 +57,16 @@ type DraftField = 'name' | 'baseUrl' | 'model' | 'apiKey'
type ProviderDraft = Record<DraftField, string>
type OllamaSelectionState =
| { state: 'idle' }
| { state: 'loading' }
| {
state: 'ready'
options: OptionWithDescription<string>[]
defaultValue?: string
}
| { state: 'unavailable'; message: string }
const FORM_STEPS: Array<{
key: DraftField
label: string
@@ -210,6 +226,9 @@ export function ProviderManager({ mode, onDone }: Props): React.ReactNode {
const [cursorOffset, setCursorOffset] = React.useState(0)
const [statusMessage, setStatusMessage] = React.useState<string | undefined>()
const [errorMessage, setErrorMessage] = React.useState<string | undefined>()
const [ollamaSelection, setOllamaSelection] = React.useState<OllamaSelectionState>({
state: 'idle',
})
const currentStep = FORM_STEPS[formStepIndex] ?? FORM_STEPS[0]
const currentStepKey = currentStep.key
@@ -364,6 +383,59 @@ export function ProviderManager({ mode, onDone }: Props): React.ReactNode {
return null
}
React.useEffect(() => {
if (screen !== 'select-ollama-model') {
return
}
let cancelled = false
setOllamaSelection({ state: 'loading' })
void (async () => {
const available = await hasLocalOllama(draft.baseUrl)
if (!available) {
if (!cancelled) {
setOllamaSelection({
state: 'unavailable',
message:
'Could not reach Ollama. Start Ollama first, or enter the endpoint manually.',
})
}
return
}
const models = await listOllamaModels(draft.baseUrl)
if (models.length === 0) {
if (!cancelled) {
setOllamaSelection({
state: 'unavailable',
message:
'Ollama is running, but no installed models were found. Pull a chat model such as qwen2.5-coder:7b or llama3.1:8b first, or enter details manually.',
})
}
return
}
const ranked = rankOllamaModels(models, 'balanced')
const recommended = recommendOllamaModel(models, 'balanced')
if (!cancelled) {
setOllamaSelection({
state: 'ready',
defaultValue: recommended?.name ?? ranked[0]?.name,
options: ranked.map(model => ({
label: model.name,
value: model.name,
description: model.summary,
})),
})
}
})()
return () => {
cancelled = true
}
}, [draft.baseUrl, screen])
function startCreateFromPreset(preset: ProviderPreset): void {
const defaults = getProviderPresetDefaults(preset)
const nextDraft = {
@@ -378,6 +450,13 @@ export function ProviderManager({ mode, onDone }: Props): React.ReactNode {
setFormStepIndex(0)
setCursorOffset(nextDraft.name.length)
setErrorMessage(undefined)
if (preset === 'ollama') {
setOllamaSelection({ state: 'loading' })
setScreen('select-ollama-model')
return
}
setScreen('form')
}
@@ -397,13 +476,13 @@ export function ProviderManager({ mode, onDone }: Props): React.ReactNode {
setScreen('form')
}
function persistDraft(): void {
function persistDraft(nextDraft: ProviderDraft = draft): void {
const payload: ProviderProfileInput = {
provider: draftProvider,
name: draft.name,
baseUrl: draft.baseUrl,
model: draft.model,
apiKey: draft.apiKey,
name: nextDraft.name,
baseUrl: nextDraft.baseUrl,
model: nextDraft.model,
apiKey: nextDraft.apiKey,
}
const saved = editingProfileId
@@ -446,6 +525,83 @@ export function ProviderManager({ mode, onDone }: Props): React.ReactNode {
setScreen('menu')
}
function renderOllamaSelection(): React.ReactNode {
if (ollamaSelection.state === 'loading' || ollamaSelection.state === 'idle') {
return (
<Box flexDirection="column" gap={1}>
<Text color="remember" bold>
Checking Ollama
</Text>
<Text dimColor>Looking for installed Ollama models...</Text>
</Box>
)
}
if (ollamaSelection.state === 'unavailable') {
return (
<Box flexDirection="column" gap={1}>
<Text color="remember" bold>
Ollama setup
</Text>
<Text dimColor>{ollamaSelection.message}</Text>
<Select
options={[
{
value: 'manual',
label: 'Enter manually',
description: 'Fill in the base URL and model yourself',
},
{
value: 'back',
label: 'Back',
description: 'Choose another provider preset',
},
]}
onChange={value => {
if (value === 'manual') {
setFormStepIndex(0)
setCursorOffset(draft.name.length)
setScreen('form')
return
}
setScreen('select-preset')
}}
onCancel={() => setScreen('select-preset')}
visibleOptionCount={2}
/>
</Box>
)
}
return (
<Box flexDirection="column" gap={1}>
<Text color="remember" bold>
Choose an Ollama model
</Text>
<Text dimColor>
Pick one of the installed Ollama models to save into a local provider
profile.
</Text>
<Select
options={ollamaSelection.options}
defaultValue={ollamaSelection.defaultValue}
defaultFocusValue={ollamaSelection.defaultValue}
inlineDescriptions
visibleOptionCount={Math.min(8, ollamaSelection.options.length)}
onChange={value => {
const nextDraft = {
...draft,
model: value,
}
setDraft(nextDraft)
persistDraft(nextDraft)
}}
onCancel={() => setScreen('select-preset')}
/>
</Box>
)
}
function handleFormSubmit(value: string): void {
const trimmed = value.trim()
@@ -470,7 +626,7 @@ export function ProviderManager({ mode, onDone }: Props): React.ReactNode {
return
}
persistDraft()
persistDraft(nextDraft)
}
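Threading nextDraft through persistDraft works around React's asynchronous state commits: setDraft(nextDraft) does not change draft until the next render, so a persistDraft() that read draft from the closure would save the previous values. A minimal sketch of the pitfall this signature change avoids (persistDraftFromState is hypothetical):

// Buggy: `draft` in this render's closure is still the old object.
setDraft(nextDraft)
persistDraftFromState() // reads `draft` -> persists stale values

// Fixed: pass the freshly computed value explicitly.
setDraft(nextDraft)
persistDraft(nextDraft) // reads the argument, not the stale closure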
function handleBackFromForm(): void {
@@ -819,13 +975,16 @@ export function ProviderManager({ mode, onDone }: Props): React.ReactNode {
let content: React.ReactNode
switch (screen) {
case 'select-preset':
content = renderPresetSelection()
break
case 'form':
content = renderForm()
break
switch (screen) {
case 'select-preset':
content = renderPresetSelection()
break
case 'select-ollama-model':
content = renderOllamaSelection()
break
case 'form':
content = renderForm()
break
case 'select-active':
content = renderProfileSelection(
'Set active provider',

View File

@@ -7,6 +7,8 @@
import { isLocalProviderUrl } from '../services/api/providerConfig.js'
import { getLocalOpenAICompatibleProviderLabel } from '../utils/providerDiscovery.js'
import { getSettings_DEPRECATED } from '../utils/settings/settings.js'
import { parseUserSpecifiedModel } from '../utils/model/model.js'
declare const MACRO: { VERSION: string; DISPLAY_VERSION?: string }
@@ -85,6 +87,7 @@ function detectProvider(): { name: string; model: string; baseUrl: string; isLoc
const useGemini = process.env.CLAUDE_CODE_USE_GEMINI === '1' || process.env.CLAUDE_CODE_USE_GEMINI === 'true'
const useGithub = process.env.CLAUDE_CODE_USE_GITHUB === '1' || process.env.CLAUDE_CODE_USE_GITHUB === 'true'
const useOpenAI = process.env.CLAUDE_CODE_USE_OPENAI === '1' || process.env.CLAUDE_CODE_USE_OPENAI === 'true'
const useMistral = process.env.CLAUDE_CODE_USE_MISTRAL === '1' || process.env.CLAUDE_CODE_USE_MISTRAL === 'true'
if (useGemini) {
const model = process.env.GEMINI_MODEL || 'gemini-2.0-flash'
@@ -92,11 +95,17 @@ function detectProvider(): { name: string; model: string; baseUrl: string; isLoc
return { name: 'Google Gemini', model, baseUrl, isLocal: false }
}
if (useMistral) {
const model = process.env.MISTRAL_MODEL || 'devstral-latest'
const baseUrl = process.env.MISTRAL_BASE_URL || 'https://api.mistral.ai/v1'
return { name: 'Mistral', model, baseUrl, isLocal: false }
}
if (useGithub) {
const model = process.env.OPENAI_MODEL || 'github:copilot'
const baseUrl =
process.env.OPENAI_BASE_URL || 'https://models.github.ai/inference'
return { name: 'GitHub Models', model, baseUrl, isLocal: false }
process.env.OPENAI_BASE_URL || 'https://api.githubcopilot.com'
return { name: 'GitHub Copilot', model, baseUrl, isLocal: false }
}
if (useOpenAI) {
@@ -139,9 +148,11 @@ function detectProvider(): { name: string; model: string; baseUrl: string; isLoc
return { name, model: displayModel, baseUrl, isLocal }
}
// Default: Anthropic
const model = process.env.ANTHROPIC_MODEL || process.env.CLAUDE_MODEL || 'claude-sonnet-4-6'
return { name: 'Anthropic', model, baseUrl: 'https://api.anthropic.com', isLocal: false }
// Default: Anthropic - check settings.model first, then env vars
const settings = getSettings_DEPRECATED() || {}
const modelSetting = settings.model || process.env.ANTHROPIC_MODEL || process.env.CLAUDE_MODEL || 'claude-sonnet-4-6'
const resolvedModel = parseUserSpecifiedModel(modelSetting)
return { name: 'Anthropic', model: resolvedModel, baseUrl: 'https://api.anthropic.com', isLocal: false }
}
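Note that detectProvider tests the CLAUDE_CODE_USE_* flags in a fixed order (Gemini, then Mistral, then GitHub, then OpenAI), so when more than one flag is set the earliest branch wins, and only when none match does it fall through to the Anthropic default, which now consults settings.model before the env vars. For example, per the Mistral branch shown above:

// With CLAUDE_CODE_USE_MISTRAL=1 and MISTRAL_MODEL unset:
// detectProvider() -> { name: 'Mistral', model: 'devstral-latest',
//                       baseUrl: 'https://api.mistral.ai/v1', isLocal: false }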
// ─── Box drawing ──────────────────────────────────────────────────────────────

View File

@@ -0,0 +1,231 @@
import { PassThrough } from 'node:stream'
import { expect, test } from 'bun:test'
import React from 'react'
import stripAnsi from 'strip-ansi'
import { createRoot } from '../ink.js'
import { AppStateProvider } from '../state/AppState.js'
import TextInput from './TextInput.js'
import VimTextInput from './VimTextInput.js'
const SYNC_START = '\x1B[?2026h'
const SYNC_END = '\x1B[?2026l'
function extractLastFrame(output: string): string {
let lastFrame: string | null = null
let cursor = 0
while (cursor < output.length) {
const start = output.indexOf(SYNC_START, cursor)
if (start === -1) {
break
}
const contentStart = start + SYNC_START.length
const end = output.indexOf(SYNC_END, contentStart)
if (end === -1) {
break
}
const frame = output.slice(contentStart, end)
if (frame.trim().length > 0) {
lastFrame = frame
}
cursor = end + SYNC_END.length
}
return lastFrame ?? output
}
function createTestStreams(): {
stdout: PassThrough
stdin: PassThrough & {
isTTY: boolean
setRawMode: (mode: boolean) => void
ref: () => void
unref: () => void
}
getOutput: () => string
} {
let output = ''
const stdout = new PassThrough()
const stdin = new PassThrough() as PassThrough & {
isTTY: boolean
setRawMode: (mode: boolean) => void
ref: () => void
unref: () => void
}
stdin.isTTY = true
stdin.setRawMode = () => {}
stdin.ref = () => {}
stdin.unref = () => {}
;(stdout as unknown as { columns: number }).columns = 120
stdout.on('data', chunk => {
output += chunk.toString()
})
return {
stdout,
stdin,
getOutput: () => output,
}
}
function DelayedControlledTextInput(): React.ReactNode {
const [value, setValue] = React.useState('')
const [cursorOffset, setCursorOffset] = React.useState(0)
const valueTimerRef = React.useRef<ReturnType<typeof setTimeout> | null>(null)
const offsetTimerRef = React.useRef<ReturnType<typeof setTimeout> | null>(null)
React.useEffect(() => {
return () => {
if (valueTimerRef.current) {
clearTimeout(valueTimerRef.current)
}
if (offsetTimerRef.current) {
clearTimeout(offsetTimerRef.current)
}
}
}, [])
return (
<AppStateProvider>
<TextInput
value={value}
onChange={nextValue => {
if (valueTimerRef.current) {
clearTimeout(valueTimerRef.current)
}
valueTimerRef.current = setTimeout(() => {
setValue(nextValue)
}, 200)
}}
onSubmit={() => {}}
placeholder="Type here..."
columns={60}
cursorOffset={cursorOffset}
onChangeCursorOffset={nextOffset => {
if (offsetTimerRef.current) {
clearTimeout(offsetTimerRef.current)
}
offsetTimerRef.current = setTimeout(() => {
setCursorOffset(nextOffset)
}, 200)
}}
focus
showCursor
multiline
/>
</AppStateProvider>
)
}
function DelayedControlledVimTextInput(): React.ReactNode {
const [value, setValue] = React.useState('')
const [cursorOffset, setCursorOffset] = React.useState(0)
const valueTimerRef = React.useRef<ReturnType<typeof setTimeout> | null>(null)
const offsetTimerRef = React.useRef<ReturnType<typeof setTimeout> | null>(null)
React.useEffect(() => {
return () => {
if (valueTimerRef.current) {
clearTimeout(valueTimerRef.current)
}
if (offsetTimerRef.current) {
clearTimeout(offsetTimerRef.current)
}
}
}, [])
return (
<AppStateProvider>
<VimTextInput
value={value}
onChange={nextValue => {
if (valueTimerRef.current) {
clearTimeout(valueTimerRef.current)
}
valueTimerRef.current = setTimeout(() => {
setValue(nextValue)
}, 200)
}}
onSubmit={() => {}}
placeholder="Type here..."
columns={60}
cursorOffset={cursorOffset}
onChangeCursorOffset={nextOffset => {
if (offsetTimerRef.current) {
clearTimeout(offsetTimerRef.current)
}
offsetTimerRef.current = setTimeout(() => {
setCursorOffset(nextOffset)
}, 200)
}}
initialMode="INSERT"
focus
showCursor
multiline
/>
</AppStateProvider>
)
}
test('TextInput renders typed characters before delayed parent value commits', async () => {
const { stdout, stdin, getOutput } = createTestStreams()
const root = await createRoot({
stdout: stdout as unknown as NodeJS.WriteStream,
stdin: stdin as unknown as NodeJS.ReadStream,
patchConsole: false,
})
root.render(<DelayedControlledTextInput />)
await Bun.sleep(50)
stdin.write('a')
await Bun.sleep(25)
stdin.write('b')
await Bun.sleep(25)
const output = stripAnsi(extractLastFrame(getOutput()))
root.unmount()
stdin.end()
stdout.end()
await Bun.sleep(25)
expect(output).toContain('ab')
expect(output).not.toContain('Type here...')
})
test('VimTextInput preserves rapid typed characters before delayed parent value commits', async () => {
const { stdout, stdin, getOutput } = createTestStreams()
const root = await createRoot({
stdout: stdout as unknown as NodeJS.WriteStream,
stdin: stdin as unknown as NodeJS.ReadStream,
patchConsole: false,
})
root.render(<DelayedControlledVimTextInput />)
await Bun.sleep(50)
stdin.write('a')
await Bun.sleep(25)
stdin.write('s')
await Bun.sleep(25)
stdin.write('d')
await Bun.sleep(25)
stdin.write('f')
await Bun.sleep(25)
const output = stripAnsi(extractLastFrame(getOutput()))
root.unmount()
stdin.end()
stdout.end()
await Bun.sleep(25)
expect(output).toContain('asdf')
expect(output).not.toContain('Type here...')
})
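extractLastFrame relies on DEC private mode 2026 (synchronized output): the renderer brackets each frame with \x1B[?2026h ... \x1B[?2026l, so the test can slice the raw byte stream into frames and assert against the last non-empty one. A worked example with illustrative frame contents:

const stream = '\x1B[?2026h> a\x1B[?2026l' + '\x1B[?2026h> ab\x1B[?2026l'
extractLastFrame(stream) // -> '> ab' (last non-empty bracketed frame)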

View File

@@ -1,113 +1,161 @@
import { describe, expect, it, mock } from 'bun:test'
import { PassThrough } from 'node:stream'
// We can't fully render ThemePicker due to complex dependencies
// But we can test the theme options generation logic
describe('ThemePicker', () => {
describe('theme options', () => {
it('generates correct theme options without AUTO_THEME feature flag', () => {
// Since we can't easily mock bun:bundle, test the options structure
// The real test would require integration testing
const expectedOptions = [
{ label: "Dark mode", value: "dark" },
{ label: "Light mode", value: "light" },
{ label: "Dark mode (colorblind-friendly)", value: "dark-daltonized" },
{ label: "Light mode (colorblind-friendly)", value: "light-daltonized" },
{ label: "Dark mode (ANSI colors only)", value: "dark-ansi" },
{ label: "Light mode (ANSI colors only)", value: "light-ansi" },
]
expect(expectedOptions.length).toBe(6)
})
import { afterEach, expect, mock, test } from 'bun:test'
import React from 'react'
import stripAnsi from 'strip-ansi'
it('includes auto theme when AUTO_THEME feature is enabled', () => {
// Test the structure when auto is present
const optionsWithAuto = [
{ label: "Auto (match terminal)", value: "auto" },
{ label: "Dark mode", value: "dark" },
]
expect(optionsWithAuto[0].value).toBe('auto')
})
import { createRoot, Text, useTheme } from '../ink.js'
import { KeybindingSetup } from '../keybindings/KeybindingProviderSetup.js'
import { AppStateProvider } from '../state/AppState.js'
import { ThemeProvider } from './design-system/ThemeProvider.js'
mock.module('./StructuredDiff.js', () => ({
StructuredDiff: function StructuredDiffPreview(): React.ReactNode {
const [theme] = useTheme()
return <Text>{`Preview theme: ${theme}`}</Text>
},
}))
mock.module('./StructuredDiff/colorDiff.js', () => ({
getColorModuleUnavailableReason: () => 'env',
getSyntaxTheme: () => null,
}))
const SYNC_START = '\x1B[?2026h'
const SYNC_END = '\x1B[?2026l'
function extractLastFrame(output: string): string {
let lastFrame: string | null = null
let cursor = 0
while (cursor < output.length) {
const start = output.indexOf(SYNC_START, cursor)
if (start === -1) {
break
}
const contentStart = start + SYNC_START.length
const end = output.indexOf(SYNC_END, contentStart)
if (end === -1) {
break
}
const frame = output.slice(contentStart, end)
if (frame.trim().length > 0) {
lastFrame = frame
}
cursor = end + SYNC_END.length
}
return lastFrame ?? output
}
function createTestStreams(): {
stdout: PassThrough
stdin: PassThrough & {
isTTY: boolean
setRawMode: (mode: boolean) => void
ref: () => void
unref: () => void
}
getOutput: () => string
} {
let output = ''
const stdout = new PassThrough()
const stdin = new PassThrough() as PassThrough & {
isTTY: boolean
setRawMode: (mode: boolean) => void
ref: () => void
unref: () => void
}
stdin.isTTY = true
stdin.setRawMode = () => {}
stdin.ref = () => {}
stdin.unref = () => {}
;(stdout as unknown as { columns: number }).columns = 120
stdout.on('data', chunk => {
output += chunk.toString()
})
describe('handleRowFocus callback', () => {
it('setPreviewTheme is called with theme setting', () => {
const setPreviewTheme = mock()
const handleRowFocus = (setting: string) => setPreviewTheme(setting)
handleRowFocus('dark')
expect(setPreviewTheme).toHaveBeenCalledWith('dark')
})
return {
stdout,
stdin,
getOutput: () => output,
}
}
async function waitForCondition(
predicate: () => boolean,
timeoutMs = 2000,
): Promise<void> {
const startedAt = Date.now()
while (Date.now() - startedAt < timeoutMs) {
if (predicate()) {
return
}
await Bun.sleep(10)
}
throw new Error('Timed out waiting for ThemePicker test condition')
}
async function waitForFrame(
getOutput: () => string,
predicate: (frame: string) => boolean,
): Promise<string> {
let frame = ''
await waitForCondition(() => {
frame = stripAnsi(extractLastFrame(getOutput()))
return predicate(frame)
})
describe('handleSelect callback', () => {
it('calls savePreview and onThemeSelect', () => {
const savePreview = mock()
const onThemeSelect = mock()
const handleSelect = (setting: string) => {
savePreview()
onThemeSelect(setting)
}
handleSelect('light')
expect(savePreview).toHaveBeenCalled()
expect(onThemeSelect).toHaveBeenCalledWith('light')
})
})
return frame
}
describe('handleCancel callback', () => {
it('calls cancelPreview and gracefulShutdown when not skipExitHandling', () => {
const cancelPreview = mock()
const gracefulShutdown = mock()
const handleCancel = (skipExitHandling: boolean, onCancelProp?: () => void) => {
cancelPreview()
if (skipExitHandling) {
onCancelProp?.()
} else {
gracefulShutdown(0)
}
}
handleCancel(false)
expect(cancelPreview).toHaveBeenCalled()
expect(gracefulShutdown).toHaveBeenCalledWith(0)
})
it('calls onCancelProp when skipExitHandling is true', () => {
const cancelPreview = mock()
const onCancelProp = mock()
const handleCancel = (skipExitHandling: boolean, onCancelProp?: () => void) => {
cancelPreview()
if (skipExitHandling) {
onCancelProp?.()
}
}
handleCancel(true, onCancelProp)
expect(cancelPreview).toHaveBeenCalled()
expect(onCancelProp).toHaveBeenCalled()
})
})
describe('syntax hint logic', () => {
it('shows disabled hint when syntax highlighting is disabled', () => {
const syntaxHighlightingDisabled = true
const syntaxToggleShortcut = 'Ctrl+T'
const hint = syntaxHighlightingDisabled
? `Syntax highlighting disabled (${syntaxToggleShortcut} to enable)`
: `Syntax highlighting enabled (${syntaxToggleShortcut} to disable)`
expect(hint).toContain('disabled')
})
it('shows enabled hint when syntax highlighting is active', () => {
const syntaxHighlightingDisabled = false
const syntaxToggleShortcut = 'Ctrl+T'
const hint = !syntaxHighlightingDisabled
? `Syntax highlighting enabled (${syntaxToggleShortcut} to disable)`
: `Syntax highlighting disabled (${syntaxToggleShortcut} to enable)`
expect(hint).toContain('enabled')
})
})
afterEach(() => {
mock.restore()
})
test('updates the preview when keyboard focus moves to another theme', async () => {
const { ThemePicker } = await import('./ThemePicker.js')
const { stdout, stdin, getOutput } = createTestStreams()
const root = await createRoot({
stdout: stdout as unknown as NodeJS.WriteStream,
stdin: stdin as unknown as NodeJS.ReadStream,
patchConsole: false,
})
root.render(
<AppStateProvider>
<KeybindingSetup>
<ThemeProvider initialState="dark">
<ThemePicker onThemeSelect={() => {}} />
</ThemeProvider>
</KeybindingSetup>
</AppStateProvider>,
)
try {
const initialFrame = await waitForFrame(
getOutput,
frame => frame.includes('Preview theme: dark'),
)
expect(initialFrame).toContain('Preview theme: dark')
stdin.write('j')
const updatedFrame = await waitForFrame(
getOutput,
frame => frame.includes('Preview theme: light'),
)
expect(updatedFrame).toContain('Preview theme: light')
} finally {
root.unmount()
stdin.end()
stdout.end()
await Bun.sleep(0)
}
})

View File

@@ -1,4 +1,3 @@
import { c as _c } from "react-compiler-runtime";
import { feature } from 'bun:bundle';
import React, { createContext, useContext, useEffect, useMemo, useState } from 'react';
import useStdin from '../../ink/hooks/use-stdin.js';
@@ -120,21 +119,8 @@ export function ThemeProvider({
* accepts any ThemeSetting (including 'auto').
*/
export function useTheme() {
const $ = _c(3);
const {
currentTheme,
setThemeSetting
} = useContext(ThemeContext);
let t0;
if ($[0] !== currentTheme || $[1] !== setThemeSetting) {
t0 = [currentTheme, setThemeSetting];
$[0] = currentTheme;
$[1] = setThemeSetting;
$[2] = t0;
} else {
t0 = $[2];
}
return t0;
const { currentTheme, setThemeSetting } = useContext(ThemeContext);
return [currentTheme, setThemeSetting] as const;
}
/**
@@ -145,25 +131,10 @@ export function useThemeSetting() {
return useContext(ThemeContext).themeSetting;
}
export function usePreviewTheme() {
const $ = _c(4);
const {
const { setPreviewTheme, savePreview, cancelPreview } = useContext(ThemeContext);
return {
setPreviewTheme,
savePreview,
cancelPreview
} = useContext(ThemeContext);
let t0;
if ($[0] !== cancelPreview || $[1] !== savePreview || $[2] !== setPreviewTheme) {
t0 = {
setPreviewTheme,
savePreview,
cancelPreview
};
$[0] = cancelPreview;
$[1] = savePreview;
$[2] = setPreviewTheme;
$[3] = t0;
} else {
t0 = $[3];
}
return t0;
cancelPreview,
};
}
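A consequence of dropping the caches: useTheme() now allocates a fresh tuple on every render, so consumers that key effects or memos on the hook's result should depend on the destructured members rather than the tuple itself. A hedged sketch (applyTheme is hypothetical):

const [theme, setThemeSetting] = useTheme()
useEffect(() => {
  applyTheme(theme) // hypothetical side effect
}, [theme]) // depend on the member, not on the tuple identity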

View File

@@ -32,7 +32,7 @@ export function optionForPermissionSaveDestination(saveDestination: EditableSett
case 'userSettings':
return {
label: 'User settings',
description: `Saved in at ~/.claude/settings.json`,
description: `Saved in ~/.openclaude/settings.json`,
value: saveDestination
};
}

View File

@@ -33,14 +33,14 @@ export const IMAGE_TARGET_RAW_SIZE = (API_IMAGE_MAX_BASE64_SIZE * 3) / 4 // 3.75
*
* Note: The API internally resizes images larger than 1568px (source:
* encoding/full_encoding.py), but this is handled server-side and doesn't
* cause errors. These client-side limits (2000px) are slightly larger to
cause errors. These client-side limits (1568px) now match that threshold to
* preserve quality when beneficial.
*
* The API_IMAGE_MAX_BASE64_SIZE (5MB) is the actual hard limit that causes
* API errors if exceeded.
*/
export const IMAGE_MAX_WIDTH = 2000
export const IMAGE_MAX_HEIGHT = 2000
export const IMAGE_MAX_WIDTH = 1568
export const IMAGE_MAX_HEIGHT = 1568
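These pixel constants sit alongside the raw-size budget computed at the top of the file, which is plain base64 arithmetic: every 3 raw bytes encode to 4 base64 characters, so a 5 MB base64 ceiling leaves 5 MB × 3/4 of raw image data.

// rawBudget = base64Limit * 3 / 4
//           = 5 * 1024 * 1024 * 0.75 = 3,932,160 bytes ≈ 3.75 MB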
// =============================================================================
// PDF LIMITS

View File

@@ -2,8 +2,11 @@ import { afterEach, expect, test } from 'bun:test'
import { getSystemPrompt, DEFAULT_AGENT_PROMPT } from './prompts.js'
import { CLI_SYSPROMPT_PREFIXES, getCLISyspromptPrefix } from './system.js'
import { CLAUDE_CODE_GUIDE_AGENT } from '../tools/AgentTool/built-in/claudeCodeGuideAgent.js'
import { GENERAL_PURPOSE_AGENT } from '../tools/AgentTool/built-in/generalPurposeAgent.js'
import { EXPLORE_AGENT } from '../tools/AgentTool/built-in/exploreAgent.js'
import { PLAN_AGENT } from '../tools/AgentTool/built-in/planAgent.js'
import { STATUSLINE_SETUP_AGENT } from '../tools/AgentTool/built-in/statuslineSetup.js'
const originalSimpleEnv = process.env.CLAUDE_CODE_SIMPLE
@@ -13,10 +16,12 @@ afterEach(() => {
test('CLI identity prefixes describe OpenClaude instead of Claude Code', () => {
expect(getCLISyspromptPrefix()).toContain('OpenClaude')
expect(getCLISyspromptPrefix()).not.toContain('Claude Code')
expect(getCLISyspromptPrefix()).not.toContain("Anthropic's official CLI for Claude")
for (const prefix of CLI_SYSPROMPT_PREFIXES) {
expect(prefix).toContain('OpenClaude')
expect(prefix).not.toContain('Claude Code')
expect(prefix).not.toContain("Anthropic's official CLI for Claude")
}
})
@@ -27,22 +32,53 @@ test('simple mode identity describes OpenClaude instead of Claude Code', async (
const prompt = await getSystemPrompt([], 'gpt-4o')
expect(prompt[0]).toContain('OpenClaude')
expect(prompt[0]).not.toContain('Claude Code')
expect(prompt[0]).not.toContain("Anthropic's official CLI for Claude")
})
test('built-in agent prompts describe OpenClaude instead of Claude Code', () => {
expect(DEFAULT_AGENT_PROMPT).toContain('OpenClaude')
expect(DEFAULT_AGENT_PROMPT).not.toContain('Claude Code')
expect(DEFAULT_AGENT_PROMPT).not.toContain("Anthropic's official CLI for Claude")
const generalPrompt = GENERAL_PURPOSE_AGENT.getSystemPrompt({
toolUseContext: { options: {} as never },
})
expect(generalPrompt).toContain('OpenClaude')
expect(generalPrompt).not.toContain('Claude Code')
expect(generalPrompt).not.toContain("Anthropic's official CLI for Claude")
const explorePrompt = EXPLORE_AGENT.getSystemPrompt({
toolUseContext: { options: {} as never },
})
expect(explorePrompt).toContain('OpenClaude')
expect(explorePrompt).not.toContain('Claude Code')
expect(explorePrompt).not.toContain("Anthropic's official CLI for Claude")
const planPrompt = PLAN_AGENT.getSystemPrompt({
toolUseContext: { options: {} as never },
})
expect(planPrompt).toContain('OpenClaude')
expect(planPrompt).not.toContain('Claude Code')
const statuslinePrompt = STATUSLINE_SETUP_AGENT.getSystemPrompt({
toolUseContext: { options: {} as never },
})
expect(statuslinePrompt).toContain('OpenClaude')
expect(statuslinePrompt).not.toContain('Claude Code')
const guidePrompt = CLAUDE_CODE_GUIDE_AGENT.getSystemPrompt({
toolUseContext: {
options: {
commands: [],
agentDefinitions: { activeAgents: [] },
mcpClients: [],
} as never,
},
})
expect(guidePrompt).toContain('OpenClaude')
expect(guidePrompt).toContain('You are the OpenClaude guide agent.')
expect(guidePrompt).toContain('**OpenClaude** (the CLI tool)')
expect(guidePrompt).not.toContain('You are the Claude guide agent.')
expect(guidePrompt).not.toContain('**Claude Code** (the CLI tool)')
})

View File

@@ -214,7 +214,7 @@ function getSimpleDoingTasksSection(): string {
]
const userHelpSubitems = [
`/help: Get help with using Claude Code`,
`/help: Get help with using OpenClaude`,
`To give feedback, users should ${MACRO.ISSUES_EXPLAINER}`,
]
@@ -242,7 +242,7 @@ function getSimpleDoingTasksSection(): string {
: []),
...(process.env.USER_TYPE === 'ant'
? [
`If the user reports a bug, slowness, or unexpected behavior with Claude Code itself (as opposed to asking you to fix their own code), recommend the appropriate slash command: /issue for model-related problems (odd outputs, wrong tool choices, hallucinations, refusals), or /share to upload the full session transcript for product bugs, crashes, slowness, or general issues. Only recommend these when the user is describing a problem with Claude Code.`,
`If the user reports a bug, slowness, or unexpected behavior with OpenClaude itself (as opposed to asking you to fix their own code), recommend the appropriate slash command: /issue for model-related problems (odd outputs, wrong tool choices, hallucinations, refusals), or /share to upload the full session transcript for product bugs, crashes, slowness, or general issues. Only recommend these when the user is describing a problem with OpenClaude.`,
]
: []),
`If the user asks for help or wants to give feedback inform them of the following:`,
@@ -449,7 +449,7 @@ export async function getSystemPrompt(
): Promise<string[]> {
if (isEnvTruthy(process.env.CLAUDE_CODE_SIMPLE)) {
return [
`You are OpenClaude, an open-source fork of Claude Code.\n\nCWD: ${getCwd()}\nDate: ${getSessionStartDate()}`,
`You are OpenClaude, an open-source coding agent and CLI.\n\nCWD: ${getCwd()}\nDate: ${getSessionStartDate()}`,
]
}
@@ -696,10 +696,10 @@ export async function computeSimpleEnvInfo(
: `The most recent Claude model family is Claude 4.5/4.6. Model IDs — Opus 4.6: '${CLAUDE_4_5_OR_4_6_MODEL_IDS.opus}', Sonnet 4.6: '${CLAUDE_4_5_OR_4_6_MODEL_IDS.sonnet}', Haiku 4.5: '${CLAUDE_4_5_OR_4_6_MODEL_IDS.haiku}'. When building AI applications, default to the latest and most capable Claude models.`,
process.env.USER_TYPE === 'ant' && isUndercover()
? null
: `Claude Code is available as a CLI in the terminal, desktop app (Mac/Windows), web app (claude.ai/code), and IDE extensions (VS Code, JetBrains).`,
: `OpenClaude is available as a CLI in the terminal and can be used across local development environments and IDE workflows.`,
process.env.USER_TYPE === 'ant' && isUndercover()
? null
: `Fast mode for Claude Code uses the same ${FRONTIER_MODEL_NAME} model with faster output. It does NOT switch to a different model. It can be toggled with /fast.`,
: `Fast mode for OpenClaude uses the same ${FRONTIER_MODEL_NAME} model with faster output. It does NOT switch to a different model. It can be toggled with /fast.`,
].filter(item => item !== null)
return [
@@ -755,7 +755,7 @@ export function getUnameSR(): string {
return `${osType()} ${osRelease()}`
}
export const DEFAULT_AGENT_PROMPT = `You are an agent for OpenClaude, an open-source fork of Claude Code. Given the user's message, you should use the tools available to complete the task. Complete the task fully—don't gold-plate, but don't leave it half-done. When you complete the task, respond with a concise report covering what was done and any key findings — the caller will relay this to the user, so it only needs the essentials.`
export const DEFAULT_AGENT_PROMPT = `You are an agent for OpenClaude, an open-source coding agent and CLI. Given the user's message, you should use the tools available to complete the task. Complete the task fully—don't gold-plate, but don't leave it half-done. When you complete the task, respond with a concise report covering what was done and any key findings — the caller will relay this to the user, so it only needs the essentials.`
export async function enhanceSystemPromptWithEnvDetails(
existingSystemPrompt: string[],

View File

@@ -8,11 +8,11 @@ import { getAPIProvider } from '../utils/model/providers.js'
import { getWorkload } from '../utils/workloadContext.js'
const DEFAULT_PREFIX =
`You are OpenClaude, an open-source fork of Claude Code.`
`You are OpenClaude, an open-source coding agent and CLI.`
const AGENT_SDK_CLAUDE_CODE_PRESET_PREFIX =
`You are OpenClaude, an open-source fork of Claude Code, running within the Claude Agent SDK.`
`You are OpenClaude, an open-source coding agent and CLI running within the Claude Agent SDK.`
const AGENT_SDK_PREFIX =
`You are a Claude agent running in OpenClaude, built on the Claude Agent SDK.`
`You are OpenClaude, built on the Claude Agent SDK.`
const CLI_SYSPROMPT_PREFIX_VALUES = [
DEFAULT_PREFIX,

View File

@@ -181,7 +181,7 @@ function formatCost(cost: number, maxDecimalPlaces: number = 4): string {
function formatModelUsage(): string {
const modelUsageMap = getModelUsage()
if (Object.keys(modelUsageMap).length === 0) {
return 'Usage: 0 input, 0 output, 0 cache read, 0 cache write'
return 'Usage: 0 input, 0 output'
}
// Accumulate usage by short name
@@ -211,15 +211,19 @@ function formatModelUsage(): string {
let result = 'Usage by model:'
for (const [shortName, usage] of Object.entries(usageByShortName)) {
const usageString =
let usageString =
` ${formatNumber(usage.inputTokens)} input, ` +
`${formatNumber(usage.outputTokens)} output, ` +
`${formatNumber(usage.cacheReadInputTokens)} cache read, ` +
`${formatNumber(usage.cacheCreationInputTokens)} cache write` +
(usage.webSearchRequests > 0
? `, ${formatNumber(usage.webSearchRequests)} web search`
: '') +
` (${formatCost(usage.costUSD)})`
`${formatNumber(usage.outputTokens)} output`
if (usage.cacheReadInputTokens > 0) {
usageString += `, ${formatNumber(usage.cacheReadInputTokens)} cache read`
}
if (usage.cacheCreationInputTokens > 0) {
usageString += `, ${formatNumber(usage.cacheCreationInputTokens)} cache write`
}
if (usage.webSearchRequests > 0) {
usageString += `, ${formatNumber(usage.webSearchRequests)} web search`
}
usageString += ` (${formatCost(usage.costUSD)})`
result += `\n` + `${shortName}:`.padStart(21) + usageString
}
return result
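With zero-valued cache fields elided, a session that exercised only cache reads would now render along these lines (numbers, abbreviations, and alignment are hypothetical; formatNumber's exact output is not shown in this diff):

Usage by model:
          sonnet-4.6: 1.2k input, 3.4k output, 56.0k cache read ($0.0421)
           haiku-4.5: 800 input, 150 output ($0.0009)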

View File

@@ -96,15 +96,16 @@ async function main(): Promise<void> {
}
}
// Enable configs first so we can read settings
{
const { enableConfigs } = await import('../utils/config.js')
enableConfigs()
}
// Apply settings.env from user settings (includes GitHub provider settings from /onboard-github)
{
const { applySafeConfigEnvironmentVariables } = await import('../utils/managedEnv.js')
applySafeConfigEnvironmentVariables()
const { hydrateGeminiAccessTokenFromSecureStorage } = await import('../utils/geminiCredentials.js')
hydrateGeminiAccessTokenFromSecureStorage()
const { hydrateGithubModelsTokenFromSecureStorage } = await import('../utils/githubModelsCredentials.js')
hydrateGithubModelsTokenFromSecureStorage()
}
const startupEnv = await buildStartupEnvFromProfile({
@@ -121,6 +122,16 @@ async function main(): Promise<void> {
}
}
// Hydrate GitHub credentials after profile is applied so CLAUDE_CODE_USE_GITHUB from profile is available
{
const {
hydrateGithubModelsTokenFromSecureStorage,
refreshGithubModelsTokenIfNeeded,
} = await import('../utils/githubModelsCredentials.js')
await refreshGithubModelsTokenIfNeeded()
hydrateGithubModelsTokenFromSecureStorage()
}
await validateProviderEnvOrExit()
// Print the gradient startup screen before the Ink UI loads

View File

@@ -1,4 +1,4 @@
import { useCallback, useEffect } from 'react'
import { useCallback, useEffect, useSyncExternalStore } from 'react'
import type { Command } from '../commands.js'
import { useNotifications } from '../context/notifications.js'
import {
@@ -7,6 +7,11 @@ import {
} from '../services/analytics/index.js'
import { reinitializeLspServerManager } from '../services/lsp/manager.js'
import { useAppState, useSetAppState } from '../state/AppState.js'
import {
getPluginCommandsState,
setPluginCommandsState,
subscribePluginCommands,
} from '../state/pluginCommandsStore.js'
import type { AgentDefinition } from '../tools/AgentTool/loadAgentsDir.js'
import { count } from '../utils/array.js'
import { logForDebugging } from '../utils/debug.js'
@@ -39,6 +44,11 @@ export function useManagePlugins({
}: {
enabled?: boolean
} = {}) {
const pluginCommands = useSyncExternalStore(
subscribePluginCommands,
getPluginCommandsState,
getPluginCommandsState,
)
const setAppState = useSetAppState()
const needsRefresh = useAppState(s => s.plugins.needsRefresh)
const { addNotification } = useNotifications()
@@ -74,6 +84,7 @@ export function useManagePlugins({
try {
commands = await getPluginCommands()
setPluginCommandsState(commands)
} catch (error) {
const errorMessage =
error instanceof Error ? error.message : String(error)
@@ -82,6 +93,7 @@ export function useManagePlugins({
source: 'plugin-commands',
error: `Failed to load plugin commands: ${errorMessage}`,
})
setPluginCommandsState([])
}
try {
@@ -173,7 +185,7 @@ export function useManagePlugins({
...prevState.plugins,
enabled,
disabled,
commands,
commands: [],
errors: mergedErrors,
},
}
@@ -226,6 +238,7 @@ export function useManagePlugins({
logError(errorObj)
logForDebugging(`Error loading plugins: ${error}`)
// Set empty state on error, but preserve LSP errors and add the new error
setPluginCommandsState([])
setAppState(prevState => {
// Keep existing LSP/non-plugin-loading errors
const existingLspErrors = prevState.plugins.errors.filter(
@@ -284,6 +297,11 @@ export function useManagePlugins({
})
}, [initialPluginLoad, enabled])
useEffect(() => {
if (enabled) return
setPluginCommandsState([])
}, [enabled])
// Plugin state changed on disk (background reconcile, /plugin menu,
// external settings edit). Show a notification; user runs /reload-plugins
// to apply. The previous auto-refresh here had a stale-cache bug (only
@@ -301,4 +319,6 @@ export function useManagePlugins({
// Do NOT auto-refresh. Do NOT reset needsRefresh — /reload-plugins
// consumes it via refreshActivePlugins().
}, [enabled, needsRefresh, addNotification])
return enabled ? pluginCommands : []
}
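The hook reads plugin commands through useSyncExternalStore, which needs three things from pluginCommandsStore: a subscribe function that registers a listener and returns an unsubscribe, plus a snapshot getter (passed twice, as the client and server snapshot). The store module itself is not part of this diff; a minimal sketch of the shape it would need, assumed rather than taken from the actual implementation:

import type { Command } from '../commands.js'

let state: Command[] = []
const listeners = new Set<() => void>()

export function getPluginCommandsState(): Command[] {
  return state // must stay referentially stable between writes
}

export function setPluginCommandsState(next: Command[]): void {
  state = next
  for (const listener of listeners) listener() // tell React to re-read the snapshot
}

export function subscribePluginCommands(listener: () => void): () => void {
  listeners.add(listener)
  return () => listeners.delete(listener)
}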

View File

@@ -1,3 +1,4 @@
import { useLayoutEffect, useRef, useState } from 'react'
import { isInputModeCharacter } from 'src/components/PromptInput/inputModes.js'
import { useNotifications } from 'src/context/notifications.js'
import stripAnsi from 'strip-ansi'
@@ -100,9 +101,74 @@ export function useTextInput({
prewarmModifiers()
}
const offset = externalOffset
const setOffset = onOffsetChange
const cursor = Cursor.fromText(originalValue, columns, offset)
// Keep a local text/cursor mirror so consecutive keystrokes can advance
// immediately even if the controlled parent value hasn't committed yet.
const [renderState, setRenderState] = useState(() => ({
value: originalValue,
offset: externalOffset,
}))
const liveValueRef = useRef(originalValue)
const liveOffsetRef = useRef(externalOffset)
const lastSeenPropsRef = useRef({
value: originalValue,
offset: externalOffset,
})
const updateRenderedInput = (nextValue: string, nextOffset: number): void => {
liveValueRef.current = nextValue
liveOffsetRef.current = nextOffset
setRenderState(prev =>
prev.value === nextValue && prev.offset === nextOffset
? prev
: { value: nextValue, offset: nextOffset },
)
}
useLayoutEffect(() => {
if (
lastSeenPropsRef.current.value === originalValue &&
lastSeenPropsRef.current.offset === externalOffset
) {
return
}
lastSeenPropsRef.current = {
value: originalValue,
offset: externalOffset,
}
updateRenderedInput(originalValue, externalOffset)
}, [originalValue, externalOffset])
const value = renderState.value
const offset = renderState.offset
const getLiveValue = (): string => liveValueRef.current
const getLiveCursor = (): Cursor =>
Cursor.fromText(liveValueRef.current, columns, liveOffsetRef.current)
const setValue = (nextValue: string, nextOffset = liveOffsetRef.current): void => {
const previousValue = liveValueRef.current
const previousOffset = liveOffsetRef.current
if (previousValue === nextValue && previousOffset === nextOffset) {
return
}
updateRenderedInput(nextValue, nextOffset)
if (previousValue !== nextValue) {
onChange(nextValue)
}
if (previousOffset !== nextOffset) {
onOffsetChange(nextOffset)
}
}
const setOffset = (nextOffset: number): void => {
if (nextOffset === liveOffsetRef.current) {
return
}
updateRenderedInput(liveValueRef.current, nextOffset)
onOffsetChange(nextOffset)
}
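// Taken together this is a write-through mirror for a controlled input: the
// refs hold the authoritative in-flight text so synchronous key handling never
// reads a stale render, renderState repaints immediately, and the layout
// effect above reconciles once the parent commits new props. A hypothetical
// keystroke timeline against a slow parent (200 ms commit, as in the tests):
//   t0: type 'a' -> setValue('a', 1): refs + renderState update now;
//       onChange('a') notifies the slow parent
//   t1: type 'b' -> getLiveValue() returns 'a' from the ref (not stale props),
//       so setValue('ab', 2) builds on the latest text
//   t2: parent commits -> props change; the layout effect syncs the mirror
//       (usually a no-op, since the mirror is already ahead)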
const cursor = Cursor.fromText(value, columns, offset)
const { addNotification, removeNotification } = useNotifications()
const handleCtrlC = useDoublePress(
@@ -111,9 +177,11 @@ export function useTextInput({
},
() => onExit?.(),
() => {
if (originalValue) {
const currentValue = getLiveValue()
if (currentValue) {
updateRenderedInput('', 0)
onChange('')
setOffset(0)
onOffsetChange(0)
onHistoryReset?.()
}
},
@@ -125,7 +193,8 @@ export function useTextInput({
// not dialog dismissal, and needs the double-press safety mechanism.
const handleEscape = useDoublePress(
(show: boolean) => {
if (!originalValue || !show) {
const currentValue = getLiveValue()
if (!currentValue || !show) {
return
}
addNotification({
@@ -136,17 +205,19 @@ export function useTextInput({
})
},
() => {
const currentValue = getLiveValue()
// Remove the "Esc again to clear" notification immediately
removeNotification('escape-again-to-clear')
onClearInput?.()
if (originalValue) {
if (currentValue) {
// Track double-escape usage for feature discovery
// Save to history before clearing
if (originalValue.trim() !== '') {
addToHistory(originalValue)
if (currentValue.trim() !== '') {
addToHistory(currentValue)
}
updateRenderedInput('', 0)
onChange('')
setOffset(0)
onOffsetChange(0)
onHistoryReset?.()
}
},
@@ -154,13 +225,13 @@ export function useTextInput({
const handleEmptyCtrlD = useDoublePress(
show => {
if (originalValue !== '') {
if (getLiveValue() !== '') {
return
}
onExitMessage?.(show, 'Ctrl-D')
},
() => {
if (originalValue !== '') {
if (getLiveValue() !== '') {
return
}
onExit?.()
@@ -168,6 +239,7 @@ export function useTextInput({
)
function handleCtrlD(): MaybeCursor {
const cursor = getLiveCursor()
if (cursor.text === '') {
// When input is empty, handle double-press
handleEmptyCtrlD()
@@ -178,24 +250,28 @@ export function useTextInput({
}
function killToLineEnd(): Cursor {
const cursor = getLiveCursor()
const { cursor: newCursor, killed } = cursor.deleteToLineEnd()
pushToKillRing(killed, 'append')
return newCursor
}
function killToLineStart(): Cursor {
const cursor = getLiveCursor()
const { cursor: newCursor, killed } = cursor.deleteToLineStart()
pushToKillRing(killed, 'prepend')
return newCursor
}
function killWordBefore(): Cursor {
const cursor = getLiveCursor()
const { cursor: newCursor, killed } = cursor.deleteWordBefore()
pushToKillRing(killed, 'prepend')
return newCursor
}
function yank(): Cursor {
const cursor = getLiveCursor()
const text = getLastKill()
if (text.length > 0) {
const startOffset = cursor.offset
@@ -207,6 +283,7 @@ export function useTextInput({
}
function handleYankPop(): Cursor {
const cursor = getLiveCursor()
const popResult = yankPop()
if (!popResult) {
return cursor
@@ -222,13 +299,16 @@ export function useTextInput({
}
const handleCtrl = mapInput([
['a', () => cursor.startOfLine()],
['b', () => cursor.left()],
['a', () => getLiveCursor().startOfLine()],
['b', () => getLiveCursor().left()],
['c', handleCtrlC],
['d', handleCtrlD],
['e', () => cursor.endOfLine()],
['f', () => cursor.right()],
['h', () => cursor.deleteTokenBefore() ?? cursor.backspace()],
['e', () => getLiveCursor().endOfLine()],
['f', () => getLiveCursor().right()],
['h', () => {
const cursor = getLiveCursor()
return cursor.deleteTokenBefore() ?? cursor.backspace()
}],
['k', killToLineEnd],
['n', () => downOrHistoryDown()],
['p', () => upOrHistoryUp()],
@@ -238,13 +318,15 @@ export function useTextInput({
])
const handleMeta = mapInput([
['b', () => cursor.prevWord()],
['f', () => cursor.nextWord()],
['d', () => cursor.deleteWordAfter()],
['b', () => getLiveCursor().prevWord()],
['f', () => getLiveCursor().nextWord()],
['d', () => getLiveCursor().deleteWordAfter()],
['y', handleYankPop],
])
function handleEnter(key: Key) {
const cursor = getLiveCursor()
const currentValue = getLiveValue()
if (
multiline &&
cursor.offset > 0 &&
@@ -263,10 +345,11 @@ export function useTextInput({
if (env.terminal === 'Apple_Terminal' && isModifierPressed('shift')) {
return cursor.insert('\n')
}
onSubmit?.(originalValue)
onSubmit?.(currentValue)
}
function upOrHistoryUp() {
const cursor = getLiveCursor()
if (disableCursorMovementForUpDownKeys) {
onHistoryUp?.()
return cursor
@@ -291,6 +374,7 @@ export function useTextInput({
return cursor
}
function downOrHistoryDown() {
const cursor = getLiveCursor()
if (disableCursorMovementForUpDownKeys) {
onHistoryDown?.()
return cursor
@@ -315,7 +399,7 @@ export function useTextInput({
return cursor
}
function mapKey(key: Key): InputMapper {
function mapKey(key: Key, cursor: Cursor): InputMapper {
switch (true) {
case key.escape:
return () => {
@@ -429,6 +513,7 @@ export function useTextInput({
}
function onInput(input: string, key: Key): void {
const currentCursor = getLiveCursor()
// Note: Image paste shortcut (chat:imagePaste) is handled via useKeybindings in PromptInput
// Apply filter if provided
@@ -446,18 +531,15 @@ export function useTextInput({
// Apply all DEL characters as backspace operations synchronously
// Try to delete tokens first, fall back to character backspace
let currentCursor = cursor
let nextCursor = currentCursor
for (let i = 0; i < delCount; i++) {
currentCursor =
currentCursor.deleteTokenBefore() ?? currentCursor.backspace()
nextCursor =
nextCursor.deleteTokenBefore() ?? nextCursor.backspace()
}
// Update state once with the final result
if (!cursor.equals(currentCursor)) {
if (cursor.text !== currentCursor.text) {
onChange(currentCursor.text)
}
setOffset(currentCursor.offset)
if (!currentCursor.equals(nextCursor)) {
setValue(nextCursor.text, nextCursor.offset)
}
resetKillAccumulation()
resetYankState()
@@ -474,13 +556,10 @@ export function useTextInput({
resetYankState()
}
const nextCursor = mapKey(key)(filteredInput)
const nextCursor = mapKey(key, currentCursor)(filteredInput)
if (nextCursor) {
if (!cursor.equals(nextCursor)) {
if (cursor.text !== nextCursor.text) {
onChange(nextCursor.text)
}
setOffset(nextCursor.offset)
if (!currentCursor.equals(nextCursor)) {
setValue(nextCursor.text, nextCursor.offset)
}
// SSH-coalesced Enter: on slow links, "o" + Enter can arrive as one
// chunk "o\r". parseKeypress only matches s === '\r', so it hit the
@@ -512,6 +591,7 @@ export function useTextInput({
return {
onInput,
value,
renderedValue: cursor.render(
cursorChar,
mask,
@@ -520,6 +600,7 @@ export function useTextInput({
maxVisibleLines,
),
offset,
setValue,
setOffset,
cursorLine: cursorPos.line - cursor.getViewportStartLine(maxVisibleLines),
cursorColumn: cursorPos.column,

View File

@@ -70,14 +70,14 @@ export function useVimInput(props: UseVimInputProps): VimInputState {
// Vim behavior: move cursor left by 1 when exiting insert mode
// (unless at beginning of line or at offset 0)
const offset = textInput.offset
if (offset > 0 && props.value[offset - 1] !== '\n') {
if (offset > 0 && textInput.value[offset - 1] !== '\n') {
textInput.setOffset(offset - 1)
}
vimStateRef.current = { mode: 'NORMAL', command: { type: 'idle' } }
setMode('NORMAL')
onModeChange?.('NORMAL')
}, [onModeChange, textInput, props.value])
}, [onModeChange, textInput])
function createOperatorContext(
cursor: Cursor,
@@ -85,8 +85,8 @@ export function useVimInput(props: UseVimInputProps): VimInputState {
): OperatorContext {
return {
cursor,
text: props.value,
setText: (newText: string) => props.onChange(newText),
text: textInput.value,
setText: (newText: string) => textInput.setValue(newText),
setOffset: (offset: number) => textInput.setOffset(offset),
enterInsert: (offset: number) => switchToInsertMode(offset),
getRegister: () => persistentRef.current.register,
@@ -110,15 +110,18 @@ export function useVimInput(props: UseVimInputProps): VimInputState {
const change = persistentRef.current.lastChange
if (!change) return
const cursor = Cursor.fromText(props.value, props.columns, textInput.offset)
const cursor = Cursor.fromText(
textInput.value,
props.columns,
textInput.offset,
)
const ctx = createOperatorContext(cursor, true)
switch (change.type) {
case 'insert':
if (change.text) {
const newCursor = cursor.insert(change.text)
props.onChange(newCursor.text)
textInput.setOffset(newCursor.offset)
textInput.setValue(newCursor.text, newCursor.offset)
}
break
@@ -179,7 +182,11 @@ export function useVimInput(props: UseVimInputProps): VimInputState {
// lookups expect single chars and a prepended space would break them.
const filtered = inputFilter ? inputFilter(rawInput, key) : rawInput
const input = state.mode === 'INSERT' ? filtered : rawInput
const cursor = Cursor.fromText(props.value, props.columns, textInput.offset)
const cursor = Cursor.fromText(
textInput.value,
props.columns,
textInput.offset,
)
if (key.ctrl) {
textInput.onInput(input, key)

View File

@@ -115,7 +115,10 @@ export default class App extends PureComponent<Props, State> {
keyParseState = INITIAL_STATE;
// Timer for flushing incomplete escape sequences
incompleteEscapeTimer: NodeJS.Timeout | null = null;
stdinMode: 'readable' | 'data' = process.env.OPENCLAUDE_USE_READABLE_STDIN === '1' ? 'readable' : 'data';
// Default to readable-mode stdin (legacy Ink behavior). The data-mode path
// is kept as an explicit opt-in because some terminals can enter a state
// where startup input appears frozen when data mode is the default.
stdinMode: 'readable' | 'data' = process.env.OPENCLAUDE_USE_DATA_STDIN === '1' || process.env.OPENCLAUDE_USE_READABLE_STDIN === '0' ? 'data' : 'readable';
// Timeout durations for incomplete sequences (ms)
readonly NORMAL_TIMEOUT = 50; // Short timeout for regular esc sequences
readonly PASTE_TIMEOUT = 500; // Longer timeout for paste operations
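The data-mode path stays reachable as an explicit opt-in; only two env settings select it, and everything else (including the old OPENCLAUDE_USE_READABLE_STDIN=1) lands on the readable default:

// OPENCLAUDE_USE_DATA_STDIN=1     -> 'data'
// OPENCLAUDE_USE_READABLE_STDIN=0 -> 'data'
// unset / anything else           -> 'readable'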

View File

@@ -33,7 +33,7 @@ import createRenderer, { type Renderer } from './renderer.js';
import { CellWidth, CharPool, cellAt, createScreen, HyperlinkPool, isEmptyCellAt, migrateScreenPools, StylePool } from './screen.js';
import { applySearchHighlight } from './searchHighlight.js';
import { applySelectionOverlay, captureScrolledRows, clearSelection, createSelectionState, extendSelection, type FocusMove, findPlainTextUrlAt, getSelectedText, hasSelection, moveFocus, type SelectionState, selectLineAt, selectWordAt, shiftAnchor, shiftSelection, shiftSelectionForFollow, startSelection, updateSelection } from './selection.js';
import { SYNC_OUTPUT_SUPPORTED, supportsExtendedKeys, type Terminal, writeDiffToTerminal } from './terminal.js';
import { shouldSkipMainScreenSyncMarkers, shouldUseMainScreenRewrite, SYNC_OUTPUT_SUPPORTED, supportsExtendedKeys, type Terminal, writeDiffToTerminal } from './terminal.js';
import { CURSOR_HOME, cursorMove, cursorPosition, DISABLE_KITTY_KEYBOARD, DISABLE_MODIFY_OTHER_KEYS, ENABLE_KITTY_KEYBOARD, ENABLE_MODIFY_OTHER_KEYS, ERASE_SCREEN } from './termio/csi.js';
import { DBP, DFE, DISABLE_MOUSE_TRACKING, ENABLE_MOUSE_TRACKING, ENTER_ALT_SCREEN, EXIT_ALT_SCREEN, SHOW_CURSOR } from './termio/dec.js';
import { CLEAR_ITERM2_PROGRESS, CLEAR_TAB_STATUS, setClipboard, supportsTabStatus, wrapForMultiplexer } from './termio/osc.js';
@@ -609,12 +609,13 @@ export default class Ink {
};
}
const tDiff = performance.now();
const rewriteMainScreen = !this.altScreenActive && shouldUseMainScreenRewrite();
const diff = this.log.render(prevFrame, frame, this.altScreenActive,
// DECSTBM needs BSU/ESU atomicity — without it the outer terminal
// renders the scrolled-but-not-yet-repainted intermediate state.
// tmux is the main case (re-emits DECSTBM with its own timing and
// doesn't implement DEC 2026, so SYNC_OUTPUT_SUPPORTED is false).
SYNC_OUTPUT_SUPPORTED);
SYNC_OUTPUT_SUPPORTED, rewriteMainScreen);
const diffMs = performance.now() - tDiff;
// Swap buffers
this.backFrame = this.frontFrame;
@@ -759,7 +760,8 @@ export default class Ink {
}
}
const tWrite = performance.now();
writeDiffToTerminal(this.terminal, optimized, this.altScreenActive && !SYNC_OUTPUT_SUPPORTED);
const skipSyncMarkers = this.altScreenActive ? !SYNC_OUTPUT_SUPPORTED : rewriteMainScreen || shouldSkipMainScreenSyncMarkers();
writeDiffToTerminal(this.terminal, optimized, skipSyncMarkers);
const writeMs = performance.now() - tWrite;
// Update blit safety for the NEXT frame. The frame just rendered

src/ink/log-update.test.ts (new file, +125 lines)
View File

@@ -0,0 +1,125 @@
import { expect, test } from 'bun:test'
import type { Frame } from './frame.ts'
import { LogUpdate } from './log-update.ts'
import {
CellWidth,
CharPool,
createScreen,
HyperlinkPool,
setCellAt,
StylePool,
} from './screen.ts'
function collectStdout(diff: ReturnType<LogUpdate['render']>): string {
return diff
.filter((patch): patch is Extract<(typeof diff)[number], { type: 'stdout' }> => patch.type === 'stdout')
.map(patch => patch.content)
.join('')
}
function createHarness() {
const stylePool = new StylePool()
const charPool = new CharPool()
const hyperlinkPool = new HyperlinkPool()
return {
stylePool,
charPool,
hyperlinkPool,
log: new LogUpdate({ isTTY: true, stylePool }),
}
}
function frameFromLines(
stylePool: StylePool,
charPool: CharPool,
hyperlinkPool: HyperlinkPool,
lines: string[],
cursor = { x: 0, y: lines.length, visible: true },
): Frame {
const width = lines.reduce((max, line) => Math.max(max, line.length), 0)
const screen = createScreen(width, lines.length, stylePool, charPool, hyperlinkPool)
for (const [y, line] of lines.entries()) {
for (const [x, char] of [...line].entries()) {
setCellAt(screen, x, y, {
char,
styleId: stylePool.none,
width: CellWidth.Narrow,
})
}
}
return {
screen,
viewport: {
width: Math.max(width, 1),
height: 10,
},
cursor,
}
}
test('ghostty main-screen rewrite paints prompt content without full terminal reset when width is stable', () => {
const { stylePool, charPool, hyperlinkPool, log } = createHarness()
const prev = frameFromLines(stylePool, charPool, hyperlinkPool, [' '])
const next = frameFromLines(stylePool, charPool, hyperlinkPool, ['prompt'])
const diff = log.render(prev, next, false, true, true)
const stdout = collectStdout(diff)
expect(diff.some(patch => patch.type === 'clearTerminal')).toBe(false)
expect(diff.some(patch => patch.type === 'clear' && patch.count === 1)).toBe(
true,
)
expect(stdout).toContain('prompt')
})
test('ghostty main-screen rewrite clears only the changed prompt tail before repainting', () => {
const { stylePool, charPool, hyperlinkPool, log } = createHarness()
const prev = frameFromLines(
stylePool,
charPool,
hyperlinkPool,
['status', '> abc'],
)
const next = frameFromLines(
stylePool,
charPool,
hyperlinkPool,
['status', '> abcd'],
)
const diff = log.render(prev, next, false, true, true)
const stdout = collectStdout(diff)
expect(diff.some(patch => patch.type === 'clearTerminal')).toBe(false)
expect(diff.some(patch => patch.type === 'clear' && patch.count === 1)).toBe(
true,
)
expect(stdout).toContain('abcd')
})
test('ghostty main-screen rewrite falls back to incremental diff for larger changes', () => {
const { stylePool, charPool, hyperlinkPool, log } = createHarness()
const prev = frameFromLines(
stylePool,
charPool,
hyperlinkPool,
['row 0', 'row 1', 'row 2', 'row 3', 'row 4', '> abc'],
)
const next = frameFromLines(
stylePool,
charPool,
hyperlinkPool,
['row 0 updated', 'row 1', 'row 2', 'row 3', 'row 4', '> abcd'],
)
const diff = log.render(prev, next, false, true, true)
const stdout = collectStdout(diff)
expect(diff.some(patch => patch.type === 'clear')).toBe(false)
expect(stdout).toContain('updated')
expect(stdout).toContain('abcd')
})

View File

@@ -125,6 +125,7 @@ export class LogUpdate {
next: Frame,
altScreen = false,
decstbmSafe = true,
rewriteMainScreen = false,
): Diff {
if (!this.options.isTTY) {
return this.renderFullFrame(next)
@@ -146,6 +147,13 @@ export class LogUpdate {
return fullResetSequence_CAUSES_FLICKER(next, 'resize', stylePool)
}
if (!altScreen && rewriteMainScreen) {
const rewriteStartY = findMainScreenRewriteStart(prev.screen, next.screen)
if (rewriteStartY !== null) {
return rewriteMainScreenFrame(prev, next, stylePool, rewriteStartY)
}
}
// DECSTBM scroll optimization: when a ScrollBox's scrollTop changed,
// shift content with a hardware scroll (CSI top;bot r + CSI n S/T)
// instead of rewriting the whole scroll region. The shiftRows on
@@ -420,34 +428,8 @@ export class LogUpdate {
// Main screen: if cursor needs to be past the last line of content
// (typical: cursor.y = screen.height), emit \n to create that line
// since cursor movement can't create new lines.
if (altScreen) {
// no-op; next frame's CSI H anchors cursor
} else if (next.cursor.y >= next.screen.height) {
// Move to column 0 of current line, then emit newlines to reach target row
screen.txn(prev => {
const rowsToCreate = next.cursor.y - prev.y
if (rowsToCreate > 0) {
// Use CR to resolve pending wrap (if any) without advancing
// to the next line, then LF to create each new row.
const patches: Diff = new Array<Diff[number]>(1 + rowsToCreate)
patches[0] = CARRIAGE_RETURN
for (let i = 0; i < rowsToCreate; i++) {
patches[1 + i] = NEWLINE
}
return [patches, { dx: -prev.x, dy: rowsToCreate }]
}
// At or past target row - need to move cursor to correct position
const dy = next.cursor.y - prev.y
if (dy !== 0 || prev.x !== next.cursor.x) {
// Use CR to clear pending wrap (if any), then cursor move
const patches: Diff = [CARRIAGE_RETURN]
patches.push({ type: 'cursorMove', x: next.cursor.x, y: dy })
return [patches, { dx: next.cursor.x - prev.x, dy }]
}
return [[], { dx: 0, dy: 0 }]
})
} else {
moveCursorTo(screen, next.cursor.x, next.cursor.y)
if (!altScreen) {
restoreMainScreenCursor(screen, next)
}
const elapsed = performance.now() - startTime
@@ -467,6 +449,77 @@ export class LogUpdate {
}
}
function rewriteMainScreenFrame(
prev: Frame,
next: Frame,
stylePool: StylePool,
startY: number,
): Diff {
const diff: Diff = []
const clearCount = prev.screen.height - startY
if (clearCount > 0) {
const clearStartY = prev.screen.height - 1
const clearCursor = new VirtualScreen(prev.cursor, next.viewport.width)
moveCursorTo(clearCursor, 0, clearStartY)
diff.push(...clearCursor.diff)
diff.push({ type: 'clear', count: clearCount })
}
const screen = new VirtualScreen(
clearCount > 0 ? { x: 0, y: startY } : prev.cursor,
next.viewport.width,
)
renderFrameSlice(screen, next, startY, next.screen.height, stylePool)
restoreMainScreenCursor(screen, next)
return [...diff, ...screen.diff]
}
const MAX_MAIN_SCREEN_REWRITE_ROWS = 6
function findMainScreenRewriteStart(prev: Screen, next: Screen): number | null {
const commonHeight = Math.min(prev.height, next.height)
let firstChangedY = commonHeight
for (let y = 0; y < commonHeight; y += 1) {
if (!rowsEqual(prev, next, y)) {
firstChangedY = y
break
}
}
const rewriteRows = Math.max(prev.height, next.height) - firstChangedY
if (rewriteRows <= 0) {
return null
}
return rewriteRows <= MAX_MAIN_SCREEN_REWRITE_ROWS ? firstChangedY : null
}
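Concretely, the heuristic rewrites only a small tail of the screen: it finds the first row that differs and, if the span from there to the bottom fits within MAX_MAIN_SCREEN_REWRITE_ROWS, returns that row; otherwise it returns null and the caller falls back to the incremental diff. For a hypothetical 30-row frame:

// prompt edit:      first change at row 29 -> span 1  -> rewrite the tail
// status + prompt:  first change at row 24 -> span 6  -> still rewritten
// scrollback churn: first change at row 5  -> span 25 -> null (fall back)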
function rowsEqual(prev: Screen, next: Screen, y: number): boolean {
if (prev.width !== next.width) {
return false
}
if (prev.softWrap[y] !== next.softWrap[y]) {
return false
}
const rowStart = y * prev.width
const rowEnd = rowStart + prev.width
for (let index = rowStart; index < rowEnd; index += 1) {
if (
prev.cells64[index] !== next.cells64[index] ||
prev.noSelect[index] !== next.noSelect[index]
) {
return false
}
}
return true
}
function transitionHyperlink(
diff: Diff,
current: Hyperlink,
@@ -622,6 +675,37 @@ function renderFrameSlice(
return screen
}
function restoreMainScreenCursor(screen: VirtualScreen, next: Frame): void {
if (next.cursor.y >= next.screen.height) {
// Move to column 0 of current line, then emit newlines to reach target row
screen.txn(prev => {
const rowsToCreate = next.cursor.y - prev.y
if (rowsToCreate > 0) {
// Use CR to resolve pending wrap (if any) without advancing
// to the next line, then LF to create each new row.
const patches: Diff = new Array<Diff[number]>(1 + rowsToCreate)
patches[0] = CARRIAGE_RETURN
for (let i = 0; i < rowsToCreate; i++) {
patches[1 + i] = NEWLINE
}
return [patches, { dx: -prev.x, dy: rowsToCreate }]
}
// At or past target row - need to move cursor to correct position
const dy = next.cursor.y - prev.y
if (dy !== 0 || prev.x !== next.cursor.x) {
// Use CR to clear pending wrap (if any), then cursor move
const patches: Diff = [CARRIAGE_RETURN]
patches.push({ type: 'cursorMove', x: next.cursor.x, y: dy })
return [patches, { dx: next.cursor.x - prev.x, dy }]
}
return [[], { dx: 0, dy: 0 }]
})
return
}
moveCursorTo(screen, next.cursor.x, next.cursor.y)
}
type Delta = { dx: number; dy: number }
/**

src/ink/reconciler.test.ts (new file, +369 lines)
View File

@@ -0,0 +1,369 @@
import { PassThrough } from 'node:stream'
import { expect, test } from 'bun:test'
import React from 'react'
import type { DOMElement, ElementNames } from './dom.ts'
import instances from './instances.ts'
import { LayoutEdge } from './layout/node.ts'
import type { ParsedKey } from './parse-keypress.ts'
import { createRoot } from './root.ts'
type TestStdin = PassThrough & {
isTTY: boolean
setRawMode: (mode: boolean) => void
ref: () => void
unref: () => void
}
const RAW_TEXT_STYLE = {
flexDirection: 'row',
flexGrow: 0,
flexShrink: 1,
textWrap: 'wrap',
} as const
function createTestStreams(): {
stdout: PassThrough
stdin: TestStdin
} {
const stdout = new PassThrough()
const stdin = new PassThrough() as TestStdin
stdin.isTTY = true
stdin.setRawMode = () => {}
stdin.ref = () => {}
stdin.unref = () => {}
;(stdout as unknown as { columns: number }).columns = 120
;(stdout as unknown as { rows: number }).rows = 24
;(stdout as unknown as { isTTY: boolean }).isTTY = true
return { stdout, stdin }
}
async function waitForCondition(
predicate: () => boolean,
errorMessage: string,
timeoutMs = 2000,
): Promise<void> {
const startedAt = Date.now()
while (Date.now() - startedAt < timeoutMs) {
if (predicate()) {
return
}
await Bun.sleep(10)
}
throw new Error(errorMessage)
}
function getRootNode(stdout: PassThrough): DOMElement {
const instance = getInkInstance(stdout)
if (!instance.rootNode) {
throw new Error('Ink instance root node not found')
}
return instance.rootNode
}
function getInkInstance(stdout: PassThrough): {
rootNode?: DOMElement
dispatchKeyboardEvent: (parsedKey: ParsedKey) => void
} {
const instance = instances.get(
stdout as unknown as NodeJS.WriteStream,
) as
| {
rootNode?: DOMElement
dispatchKeyboardEvent: (parsedKey: ParsedKey) => void
}
| undefined
if (!instance) {
throw new Error('Ink instance not found')
}
return instance
}
function findElement(
node: DOMElement,
nodeName: ElementNames,
): DOMElement | undefined {
if (node.nodeName === nodeName) {
return node
}
for (const child of node.childNodes) {
if (child.nodeName === '#text') {
continue
}
const found = findElement(child, nodeName)
if (found) {
return found
}
}
return undefined
}
function requireElement(stdout: PassThrough, nodeName: ElementNames): DOMElement {
const found = findElement(getRootNode(stdout), nodeName)
if (!found) {
throw new Error(`Expected to find ${nodeName} in Ink root tree`)
}
return found
}
async function createHarness(): Promise<{
stdout: PassThrough
stdin: TestStdin
root: Awaited<ReturnType<typeof createRoot>>
dispose: () => Promise<void>
}> {
const { stdout, stdin } = createTestStreams()
const root = await createRoot({
stdout: stdout as unknown as NodeJS.WriteStream,
stdin: stdin as unknown as NodeJS.ReadStream,
patchConsole: false,
})
return {
stdout,
stdin,
root,
dispose: async () => {
root.unmount()
stdin.end()
stdout.end()
await Bun.sleep(25)
},
}
}
test('raw ink-box updates keyboard handlers and attributes in place across rerenders', async () => {
const calls: string[] = []
const firstHandler = () => calls.push('first')
const secondHandler = () => calls.push('second')
const harness = await createHarness()
try {
harness.root.render(
React.createElement(
'ink-box',
{
autoFocus: true,
onKeyDown: firstHandler,
tabIndex: 0,
},
'first render',
),
)
await Bun.sleep(25)
const firstBox = requireElement(harness.stdout, 'ink-box')
expect(firstBox.attributes.tabIndex).toBe(0)
expect(firstBox._eventHandlers?.onKeyDown).toBe(firstHandler)
harness.root.render(
React.createElement(
'ink-box',
{
autoFocus: true,
onKeyDown: secondHandler,
tabIndex: 1,
},
'second render',
),
)
await Bun.sleep(25)
const secondBox = requireElement(harness.stdout, 'ink-box')
expect(secondBox).toBe(firstBox)
expect(secondBox.attributes.tabIndex).toBe(1)
expect(secondBox._eventHandlers?.onKeyDown).toBe(secondHandler)
getInkInstance(harness.stdout).dispatchKeyboardEvent({
kind: 'key',
name: 'a',
fn: false,
ctrl: false,
meta: false,
shift: false,
option: false,
super: false,
sequence: 'a',
raw: 'a',
isPasted: false,
})
await waitForCondition(
() => calls.length === 1,
'Timed out waiting for rerendered onKeyDown handler to fire',
)
expect(calls).toEqual(['second'])
} finally {
await harness.dispose()
}
})
test('raw ink-text updates textStyles in place across rerenders', async () => {
const harness = await createHarness()
try {
harness.root.render(
React.createElement(
'ink-text',
{
style: RAW_TEXT_STYLE,
textStyles: { color: 'ansi:red' },
},
'host text',
),
)
await Bun.sleep(25)
const firstText = requireElement(harness.stdout, 'ink-text')
expect(firstText.textStyles).toEqual({ color: 'ansi:red' })
harness.root.render(
React.createElement(
'ink-text',
{
style: RAW_TEXT_STYLE,
textStyles: { color: 'ansi:blue' },
},
'host text',
),
)
await Bun.sleep(25)
const secondText = requireElement(harness.stdout, 'ink-text')
expect(secondText).toBe(firstText)
expect(secondText.textStyles).toEqual({ color: 'ansi:blue' })
} finally {
await harness.dispose()
}
})
test('raw ink-box removes event handler when set to undefined', async () => {
const calls: string[] = []
const handler = () => calls.push('fired')
const harness = await createHarness()
try {
harness.root.render(
React.createElement(
'ink-box',
{
autoFocus: true,
onKeyDown: handler,
tabIndex: 0,
},
'with handler',
),
)
await Bun.sleep(25)
const box = requireElement(harness.stdout, 'ink-box')
expect(box._eventHandlers?.onKeyDown).toBe(handler)
// Remove the handler
harness.root.render(
React.createElement(
'ink-box',
{
autoFocus: true,
tabIndex: 0,
},
'without handler',
),
)
await Bun.sleep(25)
const sameBox = requireElement(harness.stdout, 'ink-box')
expect(sameBox).toBe(box)
expect(sameBox._eventHandlers?.onKeyDown).toBeUndefined()
// Dispatch a key event and verify the removed handler is NOT called
getInkInstance(harness.stdout).dispatchKeyboardEvent({
kind: 'key',
name: 'a',
fn: false,
ctrl: false,
meta: false,
shift: false,
option: false,
super: false,
sequence: 'a',
raw: 'a',
isPasted: false,
})
await Bun.sleep(50)
expect(calls).toEqual([])
} finally {
await harness.dispose()
}
})
test('raw ink-box updates layout style in place across rerenders', async () => {
const harness = await createHarness()
try {
harness.root.render(
React.createElement(
'ink-box',
{
style: { flexDirection: 'row', paddingLeft: 1 },
},
'styled box',
),
)
await Bun.sleep(25)
const box = requireElement(harness.stdout, 'ink-box')
expect(box.style.flexDirection).toBe('row')
expect(box.style.paddingLeft).toBe(1)
harness.root.render(
React.createElement(
'ink-box',
{
style: { flexDirection: 'column', paddingLeft: 2 },
},
'styled box',
),
)
await Bun.sleep(25)
const sameBox = requireElement(harness.stdout, 'ink-box')
expect(sameBox).toBe(box)
expect(sameBox.style.flexDirection).toBe('column')
expect(sameBox.style.paddingLeft).toBe(2)
// Verify the update reached the layout engine, not just the style object
const yogaNode = sameBox.yogaNode!
expect(yogaNode).toBeDefined()
yogaNode.calculateLayout(120)
expect(yogaNode.getComputedPadding(LayoutEdge.Left)).toBe(2)
} finally {
await harness.dispose()
}
})

View File

@@ -449,17 +449,25 @@ const reconciler = createReconciler<
},
commitUpdate(
node: DOMElement,
updatePayload: UpdatePayload | null,
_type: ElementNames,
_oldProps: Props,
_newProps: Props,
oldProps: Props,
newProps: Props,
): void {
if (!updatePayload) {
// React 19 mutation mode calls commitUpdate as
// (instance, type, oldProps, newProps, fiber) and does not pass the
// prepareUpdate() payload here. This renderer used to treat the second
// argument as updatePayload, which left mounted ink-* nodes with stale
// attributes, event handlers, and textStyles until something forced a
// remount. Recompute the prop/style diff here so host nodes update
// correctly in place on rerender.
const props = diff(oldProps, newProps)
const style = diff(oldProps['style'] as Styles, newProps['style'] as Styles)
const nextStyle = newProps['style'] as Styles | undefined
if (!props && !style) {
return
}
const { props, style, nextStyle } = updatePayload
if (props) {
for (const [key, value] of Object.entries(props)) {
if (key === 'style') {

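// A minimal sketch (hypothetical shape, not this renderer's actual diff()
// helper) of the React 19 contract the hunk above adapts to: commitUpdate
// now receives (instance, type, oldProps, newProps, fiber) and must compute
// the prop delta itself, since no prepareUpdate() payload is threaded
// through anymore.
function commitUpdateSketch(
  instance: { attributes: Record<string, unknown> },
  oldProps: Record<string, unknown>,
  newProps: Record<string, unknown>,
): void {
  const keys = new Set([...Object.keys(oldProps), ...Object.keys(newProps)])
  for (const key of keys) {
    if (oldProps[key] !== newProps[key]) {
      // undefined in newProps clears the stale value in place
      instance.attributes[key] = newProps[key]
    }
  }
}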
View File

@@ -135,6 +135,13 @@ export function setXtversionName(name: string): void {
if (xtversionName === undefined) xtversionName = name
}
export function isGhosttyTerminal(): boolean {
if (process.env.NODE_ENV === 'test') return false
if (process.env.TERM_PROGRAM === 'ghostty') return true
if (process.env.TERM === 'xterm-ghostty') return true
return xtversionName?.toLowerCase().startsWith('ghostty') ?? false
}
/** True if running in an xterm.js-based terminal (VS Code, Cursor, Windsurf
* integrated terminals). Combines TERM_PROGRAM env check (fast, sync, but
* not forwarded over SSH) with the XTVERSION probe result (async, survives
@@ -145,6 +152,20 @@ export function isXtermJs(): boolean {
return xtversionName?.startsWith('xterm.js') ?? false
}
/** Ghostty currently repaints main-screen prompt updates more reliably
* without DEC 2026 synchronized output. Prefer explicit terminal identity
* (TERM_PROGRAM/TERM or XTVERSION) in real sessions, but keep tests
* deterministic by disabling the env-based detection under NODE_ENV=test. */
export function shouldSkipMainScreenSyncMarkers(): boolean {
return isGhosttyTerminal()
}
/** Ghostty's main-screen prompt updates are currently more reliable when we
* bypass the incremental diff path and rewrite the visible prompt block. */
export function shouldUseMainScreenRewrite(): boolean {
return isGhosttyTerminal()
}
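// Illustrative sketch (the write() sink is assumed) of how a frame flush
// can consume the gate above. DEC private mode 2026 brackets the frame so
// the terminal repaints it atomically; on Ghostty we skip the markers.
function flushFrameSketch(write: (s: string) => void, frame: string): void {
  const sync = !shouldSkipMainScreenSyncMarkers()
  if (sync) write('\x1b[?2026h') // begin synchronized update
  write(frame)
  if (sync) write('\x1b[?2026l') // end synchronized update
}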
// Terminals known to correctly implement the Kitty keyboard protocol
// (CSI >1u) and/or xterm modifyOtherKeys (CSI >4;2m) for ctrl+shift+<letter>
// disambiguation. We previously enabled unconditionally (#23350), assuming

View File

@@ -13,6 +13,7 @@ const execFileNoThrowMock = mock(
mock.module('../../utils/execFileNoThrow.js', () => ({
execFileNoThrow: execFileNoThrowMock,
execFileNoThrowWithCwd: execFileNoThrowMock,
}))
mock.module('../../utils/tempfile.js', () => ({

View File

@@ -238,6 +238,7 @@ import { usePromptsFromClaudeInChrome } from 'src/hooks/usePromptsFromClaudeInCh
import { getTipToShowOnSpinner, recordShownTip } from 'src/services/tips/tipScheduler.js';
import type { Theme } from 'src/utils/theme.js';
import { isPromptTypingSuppressionActive } from './replInputSuppression.js';
import { shouldRunStartupChecks } from './replStartupGates.js';
import { checkAndDisableBypassPermissionsIfNeeded, checkAndDisableAutoModeIfNeeded, useKickOffCheckAndDisableBypassPermissionsIfNeeded, useKickOffCheckAndDisableAutoModeIfNeeded } from 'src/utils/permissions/bypassPermissionsKillswitch.js';
import { SandboxManager } from 'src/utils/sandbox/sandbox-adapter.js';
import { SANDBOX_NETWORK_ACCESS_TOOL_NAME } from 'src/cli/structuredIO.js';
@@ -616,7 +617,6 @@ export function REPL({
const toolPermissionContext = useAppState(s => s.toolPermissionContext);
const verbose = useAppState(s => s.verbose);
const mcp = useAppState(s => s.mcp);
const plugins = useAppState(s => s.plugins);
const agentDefinitions = useAppState(s => s.agentDefinitions);
const fileHistory = useAppState(s => s.fileHistory);
const initialMessage = useAppState(s => s.initialMessage);
@@ -779,7 +779,7 @@ export function REPL({
}, [localTools, initialTools]);
// Initialize plugin management
useManagePlugins({
const pluginCommands = useManagePlugins({
enabled: !isRemoteSession
});
const tasksV2 = useTasksV2WithCollapseEffect();
@@ -792,10 +792,8 @@ export function REPL({
// accepts, and only then is the REPL component mounted and this effect runs.
// This ensures that plugin installations from repository and user settings only
// happen after explicit user consent to trust the current working directory.
useEffect(() => {
if (isRemoteSession) return;
void performStartupChecks(setAppState);
}, [setAppState, isRemoteSession]);
// Deferring startup checks is handled below (after promptTypingSuppressionActive
// is declared) to avoid temporal dead zone issues.
// Allow Claude in Chrome MCP to send prompts through MCP notifications
// and sync permission mode changes to the Chrome extension
@@ -827,10 +825,16 @@ export function REPL({
}, [mainThreadAgentDefinition, mergedTools]);
// Merge commands from local state, plugins, and MCP
const commandsWithPlugins = useMergedCommands(localCommands, plugins.commands as Command[]);
const commandsWithPlugins = useMergedCommands(localCommands, pluginCommands as Command[]);
const mergedCommands = useMergedCommands(commandsWithPlugins, mcp.commands as Command[]);
// Keep plugin commands out of render-time command props. Feeding the full
// execution set into PromptInput/Messages reintroduced the startup repaint
// freeze, while transcript rendering still round-trips plugin skills via the
// SkillTool's `skill` payload without needing plugin command objects here.
const renderMergedCommands = useMergedCommands(localCommands, mcp.commands as Command[]);
// Filter out all commands if disableSlashCommands is true
const commands = useMemo(() => disableSlashCommands ? [] : mergedCommands, [disableSlashCommands, mergedCommands]);
const renderCommands = useMemo(() => disableSlashCommands ? [] : renderMergedCommands, [disableSlashCommands, renderMergedCommands]);
useIdeLogging(isRemoteSession ? EMPTY_MCP_CLIENTS : mcp.clients);
useIdeSelection(isRemoteSession ? EMPTY_MCP_CLIENTS : mcp.clients, setIDESelection);
const [streamMode, setStreamMode] = useState<SpinnerMode>('responding');
@@ -1429,6 +1433,25 @@ export function REPL({
const activeRemote = sshRemote.isRemoteMode ? sshRemote : directConnect.isRemoteMode ? directConnect : remoteSession;
const [pastedContents, setPastedContents] = useState<Record<number, PastedContent>>({});
const [submitCount, setSubmitCount] = useState(0);
// Defer startup checks until the user has submitted their first message.
// A timeout or grace period is insufficient (issue #363): if the user pauses
// before typing, startup checks can still fire and recommendation dialogs
// steal focus. Only the user's first submission guarantees the prompt was
// the first thing they interacted with.
const startupChecksStartedRef = React.useRef(false);
const hasHadFirstSubmission = (submitCount ?? 0) > 0;
useEffect(() => {
if (isRemoteSession) return;
if (startupChecksStartedRef.current) return;
if (!shouldRunStartupChecks({
isRemoteSession,
hasStarted: startupChecksStartedRef.current,
hasHadFirstSubmission,
})) return;
startupChecksStartedRef.current = true;
void performStartupChecks(setAppState);
}, [setAppState, isRemoteSession, hasHadFirstSubmission]);
// Ref instead of state to avoid triggering React re-renders on every
// streaming text_delta. The spinner reads this via its animation timer.
const responseLengthRef = useRef(0);
@@ -2061,13 +2084,14 @@ export function REPL({
if (allowDialogsWithAnimation && showRemoteCallout) return 'remote-callout';
// LSP plugin recommendation (lowest priority - non-blocking suggestion)
if (allowDialogsWithAnimation && lspRecommendation) return 'lsp-recommendation';
// Suppress during startup window to prevent stealing focus from the prompt (issue #363)
if (allowDialogsWithAnimation && lspRecommendation && startupChecksStartedRef.current) return 'lsp-recommendation';
// Plugin hint from CLI/SDK stderr (same priority band as LSP rec)
if (allowDialogsWithAnimation && hintRecommendation) return 'plugin-hint';
if (allowDialogsWithAnimation && hintRecommendation && startupChecksStartedRef.current) return 'plugin-hint';
// Desktop app upsell (max 3 launches, lowest priority)
if (allowDialogsWithAnimation && showDesktopUpsellStartup) return 'desktop-upsell';
if (allowDialogsWithAnimation && showDesktopUpsellStartup && startupChecksStartedRef.current) return 'desktop-upsell';
return undefined;
}
const focusedInputDialog = getFocusedInputDialog();
@@ -4408,7 +4432,7 @@ export function REPL({
// and transcript-mode are mutually exclusive (this early return), so
// only one ScrollBox is ever mounted at a time.
const transcriptScrollRef = isFullscreenEnvEnabled() && !disableVirtualScroll && !dumpMode ? scrollRef : undefined;
const transcriptMessagesElement = <Messages messages={transcriptMessages} tools={tools} commands={commands} verbose={true} toolJSX={null} toolUseConfirmQueue={[]} inProgressToolUseIDs={inProgressToolUseIDs} isMessageSelectorVisible={false} conversationId={conversationId} screen={screen} agentDefinitions={agentDefinitions} streamingToolUses={transcriptStreamingToolUses} showAllInTranscript={showAllInTranscript} onOpenRateLimitOptions={handleOpenRateLimitOptions} isLoading={isLoading} hidePastThinking={true} streamingThinking={streamingThinking} scrollRef={transcriptScrollRef} jumpRef={jumpRef} onSearchMatchesChange={onSearchMatchesChange} scanElement={scanElement} setPositions={setPositions} disableRenderCap={dumpMode} />;
const transcriptMessagesElement = <Messages messages={transcriptMessages} tools={tools} commands={renderCommands} verbose={true} toolJSX={null} toolUseConfirmQueue={[]} inProgressToolUseIDs={inProgressToolUseIDs} isMessageSelectorVisible={false} conversationId={conversationId} screen={screen} agentDefinitions={agentDefinitions} streamingToolUses={transcriptStreamingToolUses} showAllInTranscript={showAllInTranscript} onOpenRateLimitOptions={handleOpenRateLimitOptions} isLoading={isLoading} hidePastThinking={true} streamingThinking={streamingThinking} scrollRef={transcriptScrollRef} jumpRef={jumpRef} onSearchMatchesChange={onSearchMatchesChange} scanElement={scanElement} setPositions={setPositions} disableRenderCap={dumpMode} />;
const transcriptToolJSX = toolJSX && <Box flexDirection="column" width="100%">
{toolJSX.jsx}
</Box>;
@@ -4576,7 +4600,7 @@ export function REPL({
jumpToNew(scrollRef.current);
}} scrollable={<>
<TeammateViewHeader />
<Messages messages={displayedMessages} tools={tools} commands={commands} verbose={verbose} toolJSX={toolJSX} toolUseConfirmQueue={toolUseConfirmQueue} inProgressToolUseIDs={viewedTeammateTask ? viewedTeammateTask.inProgressToolUseIDs ?? new Set() : inProgressToolUseIDs} isMessageSelectorVisible={isMessageSelectorVisible} conversationId={conversationId} screen={screen} streamingToolUses={streamingToolUses} showAllInTranscript={showAllInTranscript} agentDefinitions={agentDefinitions} onOpenRateLimitOptions={handleOpenRateLimitOptions} isLoading={isLoading} streamingText={isLoading && !viewedAgentTask ? visibleStreamingText : null} isBriefOnly={viewedAgentTask ? false : isBriefOnly} unseenDivider={viewedAgentTask ? undefined : unseenDivider} scrollRef={isFullscreenEnvEnabled() ? scrollRef : undefined} trackStickyPrompt={isFullscreenEnvEnabled() ? true : undefined} cursor={cursor} setCursor={setCursor} cursorNavRef={cursorNavRef} />
<Messages messages={displayedMessages} tools={tools} commands={renderCommands} verbose={verbose} toolJSX={toolJSX} toolUseConfirmQueue={toolUseConfirmQueue} inProgressToolUseIDs={viewedTeammateTask ? viewedTeammateTask.inProgressToolUseIDs ?? new Set() : inProgressToolUseIDs} isMessageSelectorVisible={isMessageSelectorVisible} conversationId={conversationId} screen={screen} streamingToolUses={streamingToolUses} showAllInTranscript={showAllInTranscript} agentDefinitions={agentDefinitions} onOpenRateLimitOptions={handleOpenRateLimitOptions} isLoading={isLoading} streamingText={isLoading && !viewedAgentTask ? visibleStreamingText : null} isBriefOnly={viewedAgentTask ? false : isBriefOnly} unseenDivider={viewedAgentTask ? undefined : unseenDivider} scrollRef={isFullscreenEnvEnabled() ? scrollRef : undefined} trackStickyPrompt={isFullscreenEnvEnabled() ? true : undefined} cursor={cursor} setCursor={setCursor} cursorNavRef={cursorNavRef} />
<AwsAuthStatusBox />
{/* Hide the processing placeholder while a modal is showing —
it would sit at the last visible transcript row right above
@@ -4909,7 +4933,7 @@ export function REPL({
{"external" === 'ant' && skillImprovementSurvey.suggestion && <SkillImprovementSurvey isOpen={skillImprovementSurvey.isOpen} skillName={skillImprovementSurvey.suggestion.skillName} updates={skillImprovementSurvey.suggestion.updates} handleSelect={skillImprovementSurvey.handleSelect} inputValue={inputValue} setInputValue={setInputValue} />}
{showIssueFlagBanner && <IssueFlagBanner />}
{ }
<PromptInput debug={debug} ideSelection={ideSelection} hasSuppressedDialogs={!!hasSuppressedDialogs} isLocalJSXCommandActive={isShowingLocalJSXCommand} getToolUseContext={getToolUseContext} toolPermissionContext={toolPermissionContext} setToolPermissionContext={setToolPermissionContext} apiKeyStatus={apiKeyStatus} commands={commands} agents={agentDefinitions.activeAgents} isLoading={isLoading} onExit={handleExit} verbose={verbose} messages={messages} onAutoUpdaterResult={setAutoUpdaterResult} autoUpdaterResult={autoUpdaterResult} input={inputValue} onInputChange={setInputValue} mode={inputMode} onModeChange={setInputMode} stashedPrompt={stashedPrompt} setStashedPrompt={setStashedPrompt} submitCount={submitCount} onShowMessageSelector={handleShowMessageSelector} onMessageActionsEnter={
<PromptInput debug={debug} ideSelection={ideSelection} hasSuppressedDialogs={!!hasSuppressedDialogs} isLocalJSXCommandActive={isShowingLocalJSXCommand} getToolUseContext={getToolUseContext} toolPermissionContext={toolPermissionContext} setToolPermissionContext={setToolPermissionContext} apiKeyStatus={apiKeyStatus} commands={renderCommands} agents={agentDefinitions.activeAgents} isLoading={isLoading} onExit={handleExit} verbose={verbose} messages={messages} onAutoUpdaterResult={setAutoUpdaterResult} autoUpdaterResult={autoUpdaterResult} input={inputValue} onInputChange={setInputValue} mode={inputMode} onModeChange={setInputMode} stashedPrompt={stashedPrompt} setStashedPrompt={setStashedPrompt} submitCount={submitCount} onShowMessageSelector={handleShowMessageSelector} onMessageActionsEnter={
// Works during isLoading — edit cancels first; uuid selection survives appends.
feature('MESSAGE_ACTIONS') && isFullscreenEnvEnabled() && !disableMessageActions ? enterMessageActions : undefined} mcpClients={mcpClients} pastedContents={pastedContents} setPastedContents={setPastedContents} vimMode={vimMode} setVimMode={setVimMode} showBashesDialog={showBashesDialog} setShowBashesDialog={setShowBashesDialog} onSubmit={onSubmit} onAgentSubmit={onAgentSubmit} isSearchingHistory={isSearchingHistory} setIsSearchingHistory={setIsSearchingHistory} helpOpen={isHelpOpen} setHelpOpen={setIsHelpOpen} insertTextRef={feature('VOICE_MODE') ? insertTextRef : undefined} voiceInterimRange={voice.interimRange} />
<SessionBackgroundHint onBackgroundSession={handleBackgroundSession} isLoading={isLoading} />

View File

@@ -0,0 +1,53 @@
import { describe, expect, test } from 'bun:test'
import { shouldRunStartupChecks } from './replStartupGates.js'
describe('shouldRunStartupChecks', () => {
test('runs checks after first message submission', () => {
expect(shouldRunStartupChecks({
isRemoteSession: false,
hasStarted: false,
hasHadFirstSubmission: true,
})).toBe(true)
})
test('skips checks in remote sessions even after submission', () => {
expect(shouldRunStartupChecks({
isRemoteSession: true,
hasStarted: false,
hasHadFirstSubmission: true,
})).toBe(false)
})
test('skips checks if already started', () => {
expect(shouldRunStartupChecks({
isRemoteSession: false,
hasStarted: true,
hasHadFirstSubmission: true,
})).toBe(false)
})
test('does not run checks before first submission', () => {
expect(shouldRunStartupChecks({
isRemoteSession: false,
hasStarted: false,
hasHadFirstSubmission: false,
})).toBe(false)
})
test('does not run checks when idle before first submission', () => {
expect(shouldRunStartupChecks({
isRemoteSession: false,
hasStarted: false,
hasHadFirstSubmission: false,
})).toBe(false)
})
test('skips checks in remote session regardless of other conditions', () => {
expect(shouldRunStartupChecks({
isRemoteSession: true,
hasStarted: false,
hasHadFirstSubmission: false,
})).toBe(false)
})
})

View File

@@ -0,0 +1,35 @@
/**
* Startup gates for the REPL.
*
* Prevents startup plugin checks and recommendation dialogs from stealing
* focus before the user has interacted with the prompt.
*
* This addresses the root cause of issue #363: on mount, performStartupChecks
* triggers plugin loading, which populates trackedFiles, which triggers
* useLspPluginRecommendation to surface an LSP recommendation dialog. Since
* promptTypingSuppressionActive is false before the user has typed anything,
* getFocusedInputDialog() returns the dialog, unmounting PromptInput entirely.
*
* The fix gates startup checks on actual prompt interaction. A pure timeout
* or grace period is insufficient because pausing before typing would still
* allow dialogs to steal focus. Only the user's first submission guarantees
* the prompt is no longer in the vulnerable pre-interaction window.
*/
/**
* Determines whether startup checks should run.
*
* Startup checks are deferred until the user has submitted their first
* message. This guarantees the prompt was the first thing the user interacted
* with, so no recommendation dialog can steal focus before the first keystroke.
*/
export function shouldRunStartupChecks(options: {
isRemoteSession: boolean;
hasStarted: boolean;
hasHadFirstSubmission: boolean;
}): boolean {
if (options.isRemoteSession) return false;
if (options.hasStarted) return false;
if (!options.hasHadFirstSubmission) return false;
return true;
}
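// Example: on the user's first submit, submitCount goes 0 -> 1, so
// shouldRunStartupChecks({ isRemoteSession: false, hasStarted: false,
// hasHadFirstSubmission: true }) returns true exactly once; the caller
// then latches hasStarted via a ref so subsequent submits are no-ops.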

View File

@@ -14,16 +14,27 @@ type ShimClient = {
const originalFetch = globalThis.fetch
const originalMacro = (globalThis as Record<string, unknown>).MACRO
const originalEnv = {
CLAUDE_CODE_USE_OPENAI: process.env.CLAUDE_CODE_USE_OPENAI,
CLAUDE_CODE_USE_GEMINI: process.env.CLAUDE_CODE_USE_GEMINI,
GEMINI_API_KEY: process.env.GEMINI_API_KEY,
GEMINI_MODEL: process.env.GEMINI_MODEL,
GEMINI_BASE_URL: process.env.GEMINI_BASE_URL,
GEMINI_AUTH_MODE: process.env.GEMINI_AUTH_MODE,
GOOGLE_API_KEY: process.env.GOOGLE_API_KEY,
OPENAI_API_KEY: process.env.OPENAI_API_KEY,
OPENAI_BASE_URL: process.env.OPENAI_BASE_URL,
OPENAI_MODEL: process.env.OPENAI_MODEL,
ANTHROPIC_API_KEY: process.env.ANTHROPIC_API_KEY,
ANTHROPIC_AUTH_TOKEN: process.env.ANTHROPIC_AUTH_TOKEN,
ANTHROPIC_CUSTOM_HEADERS: process.env.ANTHROPIC_CUSTOM_HEADERS,
}
function restoreEnv(key: string, value: string | undefined): void {
if (value === undefined) {
delete process.env[key]
} else {
process.env[key] = value
}
}
beforeEach(() => {
@@ -32,27 +43,33 @@ beforeEach(() => {
process.env.GEMINI_API_KEY = 'gemini-test-key'
process.env.GEMINI_MODEL = 'gemini-2.0-flash'
process.env.GEMINI_BASE_URL = 'https://gemini.example/v1beta/openai'
process.env.GEMINI_AUTH_MODE = 'api-key'
delete process.env.CLAUDE_CODE_USE_OPENAI
delete process.env.GOOGLE_API_KEY
delete process.env.OPENAI_API_KEY
delete process.env.OPENAI_BASE_URL
delete process.env.OPENAI_MODEL
delete process.env.ANTHROPIC_API_KEY
delete process.env.ANTHROPIC_AUTH_TOKEN
delete process.env.ANTHROPIC_CUSTOM_HEADERS
})
afterEach(() => {
;(globalThis as Record<string, unknown>).MACRO = originalMacro
process.env.CLAUDE_CODE_USE_GEMINI = originalEnv.CLAUDE_CODE_USE_GEMINI
process.env.GEMINI_API_KEY = originalEnv.GEMINI_API_KEY
process.env.GEMINI_MODEL = originalEnv.GEMINI_MODEL
process.env.GEMINI_BASE_URL = originalEnv.GEMINI_BASE_URL
process.env.GOOGLE_API_KEY = originalEnv.GOOGLE_API_KEY
process.env.OPENAI_API_KEY = originalEnv.OPENAI_API_KEY
process.env.OPENAI_BASE_URL = originalEnv.OPENAI_BASE_URL
process.env.OPENAI_MODEL = originalEnv.OPENAI_MODEL
process.env.ANTHROPIC_API_KEY = originalEnv.ANTHROPIC_API_KEY
process.env.ANTHROPIC_AUTH_TOKEN = originalEnv.ANTHROPIC_AUTH_TOKEN
restoreEnv('CLAUDE_CODE_USE_OPENAI', originalEnv.CLAUDE_CODE_USE_OPENAI)
restoreEnv('CLAUDE_CODE_USE_GEMINI', originalEnv.CLAUDE_CODE_USE_GEMINI)
restoreEnv('GEMINI_API_KEY', originalEnv.GEMINI_API_KEY)
restoreEnv('GEMINI_MODEL', originalEnv.GEMINI_MODEL)
restoreEnv('GEMINI_BASE_URL', originalEnv.GEMINI_BASE_URL)
restoreEnv('GEMINI_AUTH_MODE', originalEnv.GEMINI_AUTH_MODE)
restoreEnv('GOOGLE_API_KEY', originalEnv.GOOGLE_API_KEY)
restoreEnv('OPENAI_API_KEY', originalEnv.OPENAI_API_KEY)
restoreEnv('OPENAI_BASE_URL', originalEnv.OPENAI_BASE_URL)
restoreEnv('OPENAI_MODEL', originalEnv.OPENAI_MODEL)
restoreEnv('ANTHROPIC_API_KEY', originalEnv.ANTHROPIC_API_KEY)
restoreEnv('ANTHROPIC_AUTH_TOKEN', originalEnv.ANTHROPIC_AUTH_TOKEN)
restoreEnv('ANTHROPIC_CUSTOM_HEADERS', originalEnv.ANTHROPIC_CUSTOM_HEADERS)
globalThis.fetch = originalFetch
})
@@ -119,3 +136,135 @@ test('routes Gemini provider requests through the OpenAI-compatible shim', async
model: 'gemini-2.0-flash',
})
})
test('strips Anthropic-specific custom headers before sending OpenAI-compatible shim requests', async () => {
let capturedHeaders: Headers | undefined
process.env.CLAUDE_CODE_USE_OPENAI = '1'
process.env.OPENAI_API_KEY = 'openai-test-key'
process.env.OPENAI_BASE_URL = 'http://example.test/v1'
process.env.OPENAI_MODEL = 'gpt-4o'
process.env.ANTHROPIC_CUSTOM_HEADERS = [
'anthropic-version: 2023-06-01',
'anthropic-beta: prompt-caching-2024-07-31',
'x-anthropic-additional-protection: true',
'x-claude-remote-session-id: remote-123',
'x-app: cli',
'x-safe-header: keep-me',
].join('\n')
globalThis.fetch = (async (_input, init) => {
capturedHeaders = new Headers(init?.headers)
return new Response(
JSON.stringify({
id: 'chatcmpl-openai',
model: 'gpt-4o',
choices: [
{
message: {
role: 'assistant',
content: 'ok',
},
finish_reason: 'stop',
},
],
usage: {
prompt_tokens: 8,
completion_tokens: 3,
total_tokens: 11,
},
}),
{
headers: {
'Content-Type': 'application/json',
},
},
)
}) as FetchType
const client = (await getAnthropicClient({
maxRetries: 0,
model: 'gpt-4o',
})) as unknown as ShimClient
await client.beta.messages.create({
model: 'gpt-4o',
system: 'test system',
messages: [{ role: 'user', content: 'hello' }],
max_tokens: 64,
stream: false,
})
expect(capturedHeaders?.get('anthropic-version')).toBeNull()
expect(capturedHeaders?.get('anthropic-beta')).toBeNull()
expect(capturedHeaders?.get('x-anthropic-additional-protection')).toBeNull()
expect(capturedHeaders?.get('x-claude-remote-session-id')).toBeNull()
expect(capturedHeaders?.get('x-app')).toBeNull()
expect(capturedHeaders?.get('x-safe-header')).toBe('keep-me')
expect(capturedHeaders?.get('authorization')).toBe('Bearer openai-test-key')
})
test('strips Anthropic-specific custom headers on providerOverride shim requests too', async () => {
let capturedHeaders: Headers | undefined
process.env.ANTHROPIC_CUSTOM_HEADERS = [
'anthropic-version: 2023-06-01',
'anthropic-beta: prompt-caching-2024-07-31',
'x-claude-remote-session-id: remote-123',
'x-safe-header: keep-me',
].join('\n')
globalThis.fetch = (async (_input, init) => {
capturedHeaders = new Headers(init?.headers)
return new Response(
JSON.stringify({
id: 'chatcmpl-provider-override',
model: 'gpt-4o',
choices: [
{
message: {
role: 'assistant',
content: 'ok',
},
finish_reason: 'stop',
},
],
usage: {
prompt_tokens: 8,
completion_tokens: 3,
total_tokens: 11,
},
}),
{
headers: {
'Content-Type': 'application/json',
},
},
)
}) as FetchType
const client = (await getAnthropicClient({
maxRetries: 0,
providerOverride: {
model: 'gpt-4o',
baseURL: 'http://example.test/v1',
apiKey: 'provider-test-key',
},
})) as unknown as ShimClient
await client.beta.messages.create({
model: 'unused',
system: 'test system',
messages: [{ role: 'user', content: 'hello' }],
max_tokens: 64,
stream: false,
})
expect(capturedHeaders?.get('anthropic-version')).toBeNull()
expect(capturedHeaders?.get('anthropic-beta')).toBeNull()
expect(capturedHeaders?.get('x-claude-remote-session-id')).toBeNull()
expect(capturedHeaders?.get('x-safe-header')).toBe('keep-me')
expect(capturedHeaders?.get('authorization')).toBe('Bearer provider-test-key')
})

View File

@@ -177,7 +177,8 @@ export async function getAnthropicClient({
if (
isEnvTruthy(process.env.CLAUDE_CODE_USE_OPENAI) ||
isEnvTruthy(process.env.CLAUDE_CODE_USE_GITHUB) ||
isEnvTruthy(process.env.CLAUDE_CODE_USE_GEMINI)
isEnvTruthy(process.env.CLAUDE_CODE_USE_GEMINI) ||
isEnvTruthy(process.env.CLAUDE_CODE_USE_MISTRAL)
) {
const { createOpenAIShimClient } = await import('./openaiShim.js')
return createOpenAIShimClient({

View File

@@ -17,16 +17,23 @@ const tempDirs: string[] = []
const originalEnv = {
OPENAI_BASE_URL: process.env.OPENAI_BASE_URL,
OPENAI_API_BASE: process.env.OPENAI_API_BASE,
CLAUDE_CODE_USE_GITHUB: process.env.CLAUDE_CODE_USE_GITHUB,
}
afterEach(() => {
if (originalEnv.OPENAI_BASE_URL === undefined) delete process.env.OPENAI_BASE_URL
else process.env.OPENAI_BASE_URL = originalEnv.OPENAI_BASE_URL
if (originalEnv.OPENAI_API_BASE === undefined) delete process.env.OPENAI_API_BASE
else process.env.OPENAI_API_BASE = originalEnv.OPENAI_API_BASE
if (originalEnv.CLAUDE_CODE_USE_GITHUB === undefined) delete process.env.CLAUDE_CODE_USE_GITHUB
else process.env.CLAUDE_CODE_USE_GITHUB = originalEnv.CLAUDE_CODE_USE_GITHUB
while (tempDirs.length > 0) {
const dir = tempDirs.pop()
if (dir) rmSync(dir, { recursive: true, force: true })
}
process.env.OPENAI_BASE_URL = originalEnv.OPENAI_BASE_URL
process.env.OPENAI_API_BASE = originalEnv.OPENAI_API_BASE
})
function createTempAuthJson(payload: Record<string, unknown>): string {
@@ -71,6 +78,7 @@ describe('Codex provider config', () => {
test('resolves codexplan alias to Codex transport with reasoning', () => {
delete process.env.OPENAI_BASE_URL
delete process.env.OPENAI_API_BASE
delete process.env.CLAUDE_CODE_USE_GITHUB
const resolved = resolveProviderRequest({ model: 'codexplan' })
expect(resolved.transport).toBe('codex_responses')
@@ -457,6 +465,37 @@ describe('Codex request translation', () => {
])
})
test('strips leaked reasoning preamble from completed Codex text responses', () => {
const message = convertCodexResponseToAnthropicMessage(
{
id: 'resp_1',
model: 'gpt-5.4',
output: [
{
type: 'message',
role: 'assistant',
content: [
{
type: 'output_text',
text:
'The user just said "hey" - a simple greeting. I should respond briefly and friendly.\n\nHey! How can I help you today?',
},
],
},
],
usage: { input_tokens: 12, output_tokens: 4 },
},
'gpt-5.4',
)
expect(message.content).toEqual([
{
type: 'text',
text: 'Hey! How can I help you today?',
},
])
})
test('translates Codex SSE text stream into Anthropic events', async () => {
const responseText = [
'event: response.output_item.added',
@@ -487,4 +526,44 @@ describe('Codex request translation', () => {
'message_stop',
])
})
test('strips leaked reasoning preamble from Codex SSE text stream', async () => {
const responseText = [
'event: response.output_item.added',
'data: {"type":"response.output_item.added","item":{"id":"msg_1","type":"message","status":"in_progress","content":[],"role":"assistant"},"output_index":0,"sequence_number":0}',
'',
'event: response.content_part.added',
'data: {"type":"response.content_part.added","content_index":0,"item_id":"msg_1","output_index":0,"part":{"type":"output_text","text":""},"sequence_number":1}',
'',
'event: response.output_text.delta',
'data: {"type":"response.output_text.delta","content_index":0,"delta":"The user just said \\"hey\\" - a simple greeting. I should respond briefly and friendly.\\n\\nHey! How can I help you today?","item_id":"msg_1","output_index":0,"sequence_number":2}',
'',
'event: response.output_item.done',
'data: {"type":"response.output_item.done","item":{"id":"msg_1","type":"message","status":"completed","content":[{"type":"output_text","text":"The user just said \\"hey\\" - a simple greeting. I should respond briefly and friendly.\\n\\nHey! How can I help you today?"}],"role":"assistant"},"output_index":0,"sequence_number":3}',
'',
'event: response.completed',
'data: {"type":"response.completed","response":{"id":"resp_1","status":"completed","model":"gpt-5.4","output":[{"type":"message","role":"assistant","content":[{"type":"output_text","text":"The user just said \\"hey\\" - a simple greeting. I should respond briefly and friendly.\\n\\nHey! How can I help you today?"}]}],"usage":{"input_tokens":2,"output_tokens":1}},"sequence_number":4}',
'',
].join('\n')
const stream = new ReadableStream({
start(controller) {
controller.enqueue(new TextEncoder().encode(responseText))
controller.close()
},
})
const textDeltas: string[] = []
for await (const event of codexStreamToAnthropic(
new Response(stream),
'gpt-5.4',
)) {
const delta = (event as { delta?: { type?: string; text?: string } }).delta
if (delta?.type === 'text_delta' && typeof delta.text === 'string') {
textDeltas.push(delta.text)
}
}
expect(textDeltas).toEqual(['Hey! How can I help you today?'])
})
})

View File

@@ -4,6 +4,11 @@ import type {
ResolvedProviderRequest,
} from './providerConfig.js'
import { sanitizeSchemaForOpenAICompat } from './openaiSchemaSanitizer.js'
import {
looksLikeLeakedReasoningPrefix,
shouldBufferPotentialReasoningPrefix,
stripLeakedReasoningPreamble,
} from './reasoningLeakSanitizer.js'
export interface AnthropicUsage {
input_tokens: number
@@ -75,12 +80,17 @@ type CodexSseEvent = {
function makeUsage(usage?: {
input_tokens?: number
output_tokens?: number
input_tokens_details?: { cached_tokens?: number }
prompt_tokens_details?: { cached_tokens?: number }
}): AnthropicUsage {
return {
input_tokens: usage?.input_tokens ?? 0,
output_tokens: usage?.output_tokens ?? 0,
cache_creation_input_tokens: 0,
cache_read_input_tokens: 0,
cache_read_input_tokens:
usage?.input_tokens_details?.cached_tokens ??
usage?.prompt_tokens_details?.cached_tokens ??
0,
}
}
@@ -678,17 +688,34 @@ export async function* codexStreamToAnthropic(
{ index: number; toolUseId: string }
>()
let activeTextBlockIndex: number | null = null
let activeTextBuffer = ''
let textBufferMode: 'none' | 'pending' | 'strip' = 'none'
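// textBufferMode drives the leak-stripping state machine over the
// accumulated text: 'none' passes deltas through, 'pending' withholds
// them while the prefix could still turn out to be a leaked reasoning
// preamble, and 'strip' suppresses output entirely; for both non-'none'
// modes, closing the block emits stripLeakedReasoningPreamble(buffer)
// as a single recovered delta.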
let nextContentBlockIndex = 0
let sawToolUse = false
let finalResponse: Record<string, any> | undefined
const closeActiveTextBlock = async function* () {
if (activeTextBlockIndex === null) return
if (textBufferMode !== 'none') {
const sanitized = stripLeakedReasoningPreamble(activeTextBuffer)
if (sanitized) {
yield {
type: 'content_block_delta',
index: activeTextBlockIndex,
delta: {
type: 'text_delta',
text: sanitized,
},
}
}
}
yield {
type: 'content_block_stop',
index: activeTextBlockIndex,
}
activeTextBlockIndex = null
activeTextBuffer = ''
textBufferMode = 'none'
}
const startTextBlockIfNeeded = async function* () {
@@ -764,7 +791,36 @@ export async function* codexStreamToAnthropic(
if (event.event === 'response.output_text.delta') {
yield* startTextBlockIfNeeded()
activeTextBuffer += payload.delta ?? ''
if (activeTextBlockIndex !== null) {
if (
textBufferMode === 'strip' ||
looksLikeLeakedReasoningPrefix(activeTextBuffer)
) {
textBufferMode = 'strip'
continue
}
if (textBufferMode === 'pending') {
if (shouldBufferPotentialReasoningPrefix(activeTextBuffer)) {
continue
}
yield {
type: 'content_block_delta',
index: activeTextBlockIndex,
delta: {
type: 'text_delta',
text: activeTextBuffer,
},
}
textBufferMode = 'none'
continue
}
if (shouldBufferPotentialReasoningPrefix(activeTextBuffer)) {
textBufferMode = 'pending'
continue
}
yield {
type: 'content_block_delta',
index: activeTextBlockIndex,
@@ -839,8 +895,16 @@ export async function* codexStreamToAnthropic(
stop_sequence: null,
},
usage: {
input_tokens: finalResponse?.usage?.input_tokens ?? 0,
// Subtract cached tokens: OpenAI includes them in input_tokens,
// but Anthropic convention treats input_tokens as non-cached only.
input_tokens: (finalResponse?.usage?.input_tokens ?? 0) -
(finalResponse?.usage?.input_tokens_details?.cached_tokens ??
finalResponse?.usage?.prompt_tokens_details?.cached_tokens ?? 0),
output_tokens: finalResponse?.usage?.output_tokens ?? 0,
cache_read_input_tokens:
finalResponse?.usage?.input_tokens_details?.cached_tokens ??
finalResponse?.usage?.prompt_tokens_details?.cached_tokens ??
0,
},
}
yield { type: 'message_stop' }
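// Worked example of the cache accounting above, with hypothetical values:
// given a final usage of { input_tokens: 100, input_tokens_details:
// { cached_tokens: 60 } }, the Anthropic-convention result is
//   input_tokens            = 100 - 60 = 40  (non-cached portion only)
//   cache_read_input_tokens = 60
// so downstream cost calculation can price cached reads separately.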
@@ -859,7 +923,7 @@ export function convertCodexResponseToAnthropicMessage(
if (part?.type === 'output_text') {
content.push({
type: 'text',
text: part.text ?? '',
text: stripLeakedReasoningPreamble(part.text ?? ''),
})
}
}

View File

@@ -7,6 +7,10 @@ const originalEnv = {
OPENAI_BASE_URL: process.env.OPENAI_BASE_URL,
OPENAI_API_KEY: process.env.OPENAI_API_KEY,
OPENAI_MODEL: process.env.OPENAI_MODEL,
CLAUDE_CODE_USE_GITHUB: process.env.CLAUDE_CODE_USE_GITHUB,
GITHUB_TOKEN: process.env.GITHUB_TOKEN,
GH_TOKEN: process.env.GH_TOKEN,
CLAUDE_CODE_USE_OPENAI: process.env.CLAUDE_CODE_USE_OPENAI,
CLAUDE_CODE_USE_GEMINI: process.env.CLAUDE_CODE_USE_GEMINI,
GEMINI_API_KEY: process.env.GEMINI_API_KEY,
GOOGLE_API_KEY: process.env.GOOGLE_API_KEY,
@@ -15,6 +19,7 @@ const originalEnv = {
GEMINI_BASE_URL: process.env.GEMINI_BASE_URL,
GEMINI_MODEL: process.env.GEMINI_MODEL,
GOOGLE_CLOUD_PROJECT: process.env.GOOGLE_CLOUD_PROJECT,
ANTHROPIC_CUSTOM_HEADERS: process.env.ANTHROPIC_CUSTOM_HEADERS,
}
const originalFetch = globalThis.fetch
@@ -70,6 +75,10 @@ beforeEach(() => {
process.env.OPENAI_BASE_URL = 'http://example.test/v1'
process.env.OPENAI_API_KEY = 'test-key'
delete process.env.OPENAI_MODEL
delete process.env.CLAUDE_CODE_USE_GITHUB
delete process.env.GITHUB_TOKEN
delete process.env.GH_TOKEN
delete process.env.CLAUDE_CODE_USE_OPENAI
delete process.env.CLAUDE_CODE_USE_GEMINI
delete process.env.GEMINI_API_KEY
delete process.env.GOOGLE_API_KEY
@@ -78,12 +87,17 @@ beforeEach(() => {
delete process.env.GEMINI_BASE_URL
delete process.env.GEMINI_MODEL
delete process.env.GOOGLE_CLOUD_PROJECT
delete process.env.ANTHROPIC_CUSTOM_HEADERS
})
afterEach(() => {
restoreEnv('OPENAI_BASE_URL', originalEnv.OPENAI_BASE_URL)
restoreEnv('OPENAI_API_KEY', originalEnv.OPENAI_API_KEY)
restoreEnv('OPENAI_MODEL', originalEnv.OPENAI_MODEL)
restoreEnv('CLAUDE_CODE_USE_GITHUB', originalEnv.CLAUDE_CODE_USE_GITHUB)
restoreEnv('GITHUB_TOKEN', originalEnv.GITHUB_TOKEN)
restoreEnv('GH_TOKEN', originalEnv.GH_TOKEN)
restoreEnv('CLAUDE_CODE_USE_OPENAI', originalEnv.CLAUDE_CODE_USE_OPENAI)
restoreEnv('CLAUDE_CODE_USE_GEMINI', originalEnv.CLAUDE_CODE_USE_GEMINI)
restoreEnv('GEMINI_API_KEY', originalEnv.GEMINI_API_KEY)
restoreEnv('GOOGLE_API_KEY', originalEnv.GOOGLE_API_KEY)
@@ -92,9 +106,227 @@ afterEach(() => {
restoreEnv('GEMINI_BASE_URL', originalEnv.GEMINI_BASE_URL)
restoreEnv('GEMINI_MODEL', originalEnv.GEMINI_MODEL)
restoreEnv('GOOGLE_CLOUD_PROJECT', originalEnv.GOOGLE_CLOUD_PROJECT)
restoreEnv('ANTHROPIC_CUSTOM_HEADERS', originalEnv.ANTHROPIC_CUSTOM_HEADERS)
globalThis.fetch = originalFetch
})
test('strips canonical Anthropic headers from direct shim defaultHeaders', async () => {
let capturedHeaders: Headers | undefined
globalThis.fetch = (async (_input, init) => {
capturedHeaders = new Headers(init?.headers)
return new Response(
JSON.stringify({
id: 'chatcmpl-1',
model: 'gpt-4o',
choices: [
{
message: {
role: 'assistant',
content: 'ok',
},
finish_reason: 'stop',
},
],
usage: {
prompt_tokens: 8,
completion_tokens: 3,
total_tokens: 11,
},
}),
{
headers: {
'Content-Type': 'application/json',
},
},
)
}) as FetchType
const client = createOpenAIShimClient({
defaultHeaders: {
'anthropic-version': '2023-06-01',
'anthropic-beta': 'prompt-caching-2024-07-31',
'x-anthropic-additional-protection': 'true',
'x-claude-remote-session-id': 'remote-123',
'x-app': 'cli',
'x-client-app': 'sdk',
'x-safe-header': 'keep-me',
},
}) as OpenAIShimClient
await client.beta.messages.create({
model: 'gpt-4o',
system: 'test system',
messages: [{ role: 'user', content: 'hello' }],
max_tokens: 64,
stream: false,
})
expect(capturedHeaders?.get('anthropic-version')).toBeNull()
expect(capturedHeaders?.get('anthropic-beta')).toBeNull()
expect(capturedHeaders?.get('x-anthropic-additional-protection')).toBeNull()
expect(capturedHeaders?.get('x-claude-remote-session-id')).toBeNull()
expect(capturedHeaders?.get('x-app')).toBeNull()
expect(capturedHeaders?.get('x-client-app')).toBeNull()
expect(capturedHeaders?.get('x-safe-header')).toBe('keep-me')
})
test('strips canonical Anthropic headers from per-request shim headers too', async () => {
let capturedHeaders: Headers | undefined
globalThis.fetch = (async (_input, init) => {
capturedHeaders = new Headers(init?.headers)
return new Response(
JSON.stringify({
id: 'chatcmpl-1',
model: 'gpt-4o',
choices: [
{
message: {
role: 'assistant',
content: 'ok',
},
finish_reason: 'stop',
},
],
usage: {
prompt_tokens: 8,
completion_tokens: 3,
total_tokens: 11,
},
}),
{
headers: {
'Content-Type': 'application/json',
},
},
)
}) as FetchType
const client = createOpenAIShimClient({}) as OpenAIShimClient
await client.beta.messages.create(
{
model: 'gpt-4o',
system: 'test system',
messages: [{ role: 'user', content: 'hello' }],
max_tokens: 64,
stream: false,
},
{
headers: {
'anthropic-version': '2023-06-01',
'anthropic-beta': 'prompt-caching-2024-07-31',
'x-safe-header': 'keep-me',
},
},
)
expect(capturedHeaders?.get('anthropic-version')).toBeNull()
expect(capturedHeaders?.get('anthropic-beta')).toBeNull()
expect(capturedHeaders?.get('x-safe-header')).toBe('keep-me')
})
test('strips Anthropic-specific headers on GitHub Codex transport requests', async () => {
let capturedHeaders: Headers | undefined
process.env.CLAUDE_CODE_USE_GITHUB = '1'
process.env.OPENAI_API_KEY = 'github-test-key'
delete process.env.OPENAI_BASE_URL
delete process.env.OPENAI_MODEL
globalThis.fetch = (async (_input, init) => {
capturedHeaders = new Headers(init?.headers)
return new Response('', {
status: 200,
headers: {
'Content-Type': 'text/event-stream',
},
})
}) as FetchType
const client = createOpenAIShimClient({}) as OpenAIShimClient
await client.beta.messages.create(
{
model: 'github:gpt-5-codex',
system: 'test system',
messages: [{ role: 'user', content: 'hello' }],
max_tokens: 64,
stream: true,
},
{
headers: {
'anthropic-version': '2023-06-01',
'anthropic-beta': 'prompt-caching-2024-07-31',
'x-anthropic-additional-protection': 'true',
'x-safe-header': 'keep-me',
},
},
)
expect(capturedHeaders?.get('anthropic-version')).toBeNull()
expect(capturedHeaders?.get('anthropic-beta')).toBeNull()
expect(capturedHeaders?.get('x-anthropic-additional-protection')).toBeNull()
expect(capturedHeaders?.get('x-safe-header')).toBe('keep-me')
expect(capturedHeaders?.get('authorization')).toBe('Bearer github-test-key')
expect(capturedHeaders?.get('editor-plugin-version')).toBe('copilot-chat/0.26.7')
})
test('strips Anthropic-specific headers on GitHub Codex transport with providerOverride API key', async () => {
let capturedHeaders: Headers | undefined
process.env.CLAUDE_CODE_USE_GITHUB = '1'
process.env.OPENAI_API_KEY = 'env-should-not-win'
delete process.env.OPENAI_BASE_URL
delete process.env.OPENAI_MODEL
globalThis.fetch = (async (_input, init) => {
capturedHeaders = new Headers(init?.headers)
return new Response('', {
status: 200,
headers: {
'Content-Type': 'text/event-stream',
},
})
}) as FetchType
const client = createOpenAIShimClient({
providerOverride: {
model: 'github:gpt-5-codex',
baseURL: 'https://api.githubcopilot.com',
apiKey: 'provider-override-key',
},
}) as OpenAIShimClient
await client.beta.messages.create(
{
model: 'ignored',
system: 'test system',
messages: [{ role: 'user', content: 'hello' }],
max_tokens: 64,
stream: true,
},
{
headers: {
'anthropic-version': '2023-06-01',
'x-claude-remote-session-id': 'remote-123',
'x-safe-header': 'keep-me',
},
},
)
expect(capturedHeaders?.get('anthropic-version')).toBeNull()
expect(capturedHeaders?.get('x-claude-remote-session-id')).toBeNull()
expect(capturedHeaders?.get('x-safe-header')).toBe('keep-me')
expect(capturedHeaders?.get('authorization')).toBe('Bearer provider-override-key')
expect(capturedHeaders?.get('editor-plugin-version')).toBe('copilot-chat/0.26.7')
})
test('preserves usage from final OpenAI stream chunk with empty choices', async () => {
globalThis.fetch = (async (_input, init) => {
const url = typeof _input === 'string' ? _input : _input.url
@@ -1806,12 +2038,70 @@ test('sanitizes malformed MCP tool schemas before sending them to OpenAI', async
| undefined
expect(parameters?.additionalProperties).toBe(false)
expect(parameters?.required).toEqual(['priority'])
// No required[] in the original schema → none added (optional properties must not be forced required)
expect(parameters?.required).toEqual([])
expect(properties?.priority?.type).toBe('integer')
expect(properties?.priority?.enum).toEqual([0, 1, 2, 3])
expect(properties?.priority).not.toHaveProperty('default')
})
test('optional tool properties are not added to required[] — fixes Groq/Azure 400 tool_use_failed', async () => {
// Regression test for: all optional properties being sent as required in strict mode,
// causing providers like Groq to reject valid tool calls where the model omits optional args.
let requestBody: Record<string, unknown> | undefined
globalThis.fetch = (async (_input, init) => {
requestBody = JSON.parse(String(init?.body))
return new Response(
JSON.stringify({
id: 'chatcmpl-4',
model: 'gpt-4o',
choices: [{ message: { role: 'assistant', content: 'ok' }, finish_reason: 'stop' }],
usage: { prompt_tokens: 5, completion_tokens: 2, total_tokens: 7 },
}),
{ headers: { 'Content-Type': 'application/json' } },
)
}) as FetchType
const client = createOpenAIShimClient({}) as OpenAIShimClient
await client.beta.messages.create({
model: 'gpt-4o',
messages: [{ role: 'user', content: 'read a file' }],
tools: [
{
name: 'Read',
description: 'Read a file',
input_schema: {
type: 'object',
properties: {
file_path: { type: 'string', description: 'Absolute path to file' },
offset: { type: 'number', description: 'Line to start from' },
limit: { type: 'number', description: 'Max lines to read' },
pages: { type: 'string', description: 'Page range for PDFs' },
},
required: ['file_path'],
},
},
],
max_tokens: 16,
stream: false,
})
const parameters = (
requestBody?.tools as Array<{ function?: { parameters?: Record<string, unknown> } }>
)?.[0]?.function?.parameters
expect(parameters?.required).toEqual(['file_path'])
const required = parameters?.required as string[] | undefined
expect(required).not.toContain('offset')
expect(required).not.toContain('limit')
expect(required).not.toContain('pages')
expect(parameters?.additionalProperties).toBe(false)
})
// ---------------------------------------------------------------------------
// Issue #202 — consecutive role coalescing (Devstral, Mistral strict templates)
// ---------------------------------------------------------------------------
@@ -1849,7 +2139,7 @@ test('coalesces consecutive user messages to avoid alternation errors (issue #20
stream: false,
})
expect(sentMessages?.length).toBe(2) // system + 1 merged user
expect(sentMessages?.length).toBe(2)
expect(sentMessages?.[0]?.role).toBe('system')
expect(sentMessages?.[1]?.role).toBe('user')
const userContent = sentMessages?.[1]?.content as string
@@ -1883,13 +2173,12 @@ test('coalesces consecutive assistant messages preserving tool_calls (issue #202
stream: false,
})
// system + user + merged assistant + tool
const assistantMsgs = sentMessages?.filter(m => m.role === 'assistant')
expect(assistantMsgs?.length).toBe(1) // two assistant turns merged into one
expect(assistantMsgs?.length).toBe(1)
expect(assistantMsgs?.[0]?.tool_calls?.length).toBeGreaterThan(0)
})
test('non-streaming: reasoning_content emitted as thinking block, used as text when content is null', async () => {
test('non-streaming: reasoning_content emitted as thinking block only when content is null', async () => {
globalThis.fetch = (async (_input, _init) => {
return new Response(
JSON.stringify({
@@ -1931,7 +2220,6 @@ test('non-streaming: reasoning_content emitted as thinking block, used as text w
expect(result.content).toEqual([
{ type: 'thinking', thinking: 'Let me think about this step by step.' },
{ type: 'text', text: 'Let me think about this step by step.' },
])
})
@@ -1975,11 +2263,8 @@ test('non-streaming: empty string content does not fall through to reasoning_con
stream: false,
})) as { content: Array<Record<string, unknown>> }
// reasoning_content should be a thinking block, and also used as text
// since content is empty string (treated as absent)
expect(result.content).toEqual([
{ type: 'thinking', thinking: 'Chain of thought here.' },
{ type: 'text', text: 'Chain of thought here.' },
])
})
@@ -2029,6 +2314,46 @@ test('non-streaming: real content takes precedence over reasoning_content', asyn
])
})
test('non-streaming: strips leaked reasoning preamble from assistant content', async () => {
globalThis.fetch = (async () => {
return new Response(
JSON.stringify({
id: 'chatcmpl-1',
model: 'gpt-5-mini',
choices: [
{
message: {
role: 'assistant',
content:
'The user just said "hey" - a simple greeting. I should respond briefly and friendly.\n\nHey! How can I help you today?',
},
finish_reason: 'stop',
},
],
usage: {
prompt_tokens: 10,
completion_tokens: 20,
total_tokens: 30,
},
}),
{ headers: { 'Content-Type': 'application/json' } },
)
}) as FetchType
const client = createOpenAIShimClient({}) as OpenAIShimClient
const result = (await client.beta.messages.create({
model: 'gpt-5-mini',
system: 'test system',
messages: [{ role: 'user', content: 'hey' }],
max_tokens: 64,
stream: false,
})) as { content: Array<Record<string, unknown>> }
expect(result.content).toEqual([
{ type: 'text', text: 'Hey! How can I help you today?' },
])
})
test('streaming: thinking block closed before tool call', async () => {
globalThis.fetch = (async (_input, _init) => {
const chunks = makeStreamChunks([
@@ -2104,7 +2429,6 @@ test('streaming: thinking block closed before tool call', async () => {
const types = events.map(e => e.type)
// Verify thinking block is started, then closed, then tool call starts
const thinkingStartIdx = types.indexOf('content_block_start')
const firstStopIdx = types.indexOf('content_block_stop')
const toolStartIdx = types.indexOf(
@@ -2116,9 +2440,139 @@ test('streaming: thinking block closed before tool call', async () => {
expect(firstStopIdx).toBeGreaterThan(thinkingStartIdx)
expect(toolStartIdx).toBeGreaterThan(firstStopIdx)
// Verify thinking block start content
const thinkingStart = events[thinkingStartIdx] as {
content_block?: Record<string, unknown>
}
expect(thinkingStart?.content_block?.type).toBe('thinking')
})
test('streaming: strips leaked reasoning preamble from assistant content deltas', async () => {
globalThis.fetch = (async () => {
const chunks = makeStreamChunks([
{
id: 'chatcmpl-1',
object: 'chat.completion.chunk',
model: 'gpt-5-mini',
choices: [
{
index: 0,
delta: {
role: 'assistant',
content:
'The user just said "hey" - a simple greeting. I should respond briefly and friendly.\n\nHey! How can I help you today?',
},
finish_reason: null,
},
],
},
{
id: 'chatcmpl-1',
object: 'chat.completion.chunk',
model: 'gpt-5-mini',
choices: [
{
index: 0,
delta: {},
finish_reason: 'stop',
},
],
},
])
return makeSseResponse(chunks)
}) as FetchType
const client = createOpenAIShimClient({}) as OpenAIShimClient
const result = await client.beta.messages
.create({
model: 'gpt-5-mini',
system: 'test system',
messages: [{ role: 'user', content: 'hey' }],
max_tokens: 64,
stream: true,
})
.withResponse()
const textDeltas: string[] = []
for await (const event of result.data) {
const delta = (event as { delta?: { type?: string; text?: string } }).delta
if (delta?.type === 'text_delta' && typeof delta.text === 'string') {
textDeltas.push(delta.text)
}
}
expect(textDeltas).toEqual(['Hey! How can I help you today?'])
})
test('streaming: strips leaked reasoning preamble when split across multiple content chunks', async () => {
globalThis.fetch = (async () => {
const chunks = makeStreamChunks([
{
id: 'chatcmpl-1',
object: 'chat.completion.chunk',
model: 'gpt-5-mini',
choices: [
{
index: 0,
delta: {
role: 'assistant',
content: 'The user said "hey" - this is a simple greeting. ',
},
finish_reason: null,
},
],
},
{
id: 'chatcmpl-1',
object: 'chat.completion.chunk',
model: 'gpt-5-mini',
choices: [
{
index: 0,
delta: {
content:
'I should respond in a friendly, concise way.\n\nHey! How can I help you today?',
},
finish_reason: null,
},
],
},
{
id: 'chatcmpl-1',
object: 'chat.completion.chunk',
model: 'gpt-5-mini',
choices: [
{
index: 0,
delta: {},
finish_reason: 'stop',
},
],
},
])
return makeSseResponse(chunks)
}) as FetchType
const client = createOpenAIShimClient({}) as OpenAIShimClient
const result = await client.beta.messages
.create({
model: 'gpt-5-mini',
system: 'test system',
messages: [{ role: 'user', content: 'hey' }],
max_tokens: 64,
stream: true,
})
.withResponse()
const textDeltas: string[] = []
for await (const event of result.data) {
const delta = (event as { delta?: { type?: string; text?: string } }).delta
if (delta?.type === 'text_delta' && typeof delta.text === 'string') {
textDeltas.push(delta.text)
}
}
expect(textDeltas).toEqual(['Hey! How can I help you today?'])
})

View File

@@ -15,9 +15,9 @@
* OPENAI_MODEL=gpt-4o — default model override
* CODEX_API_KEY / ~/.codex/auth.json — Codex auth for codexplan/codexspark
*
* GitHub Models (models.github.ai), OpenAI-compatible:
* GitHub Copilot API (api.githubcopilot.com), OpenAI-compatible:
* CLAUDE_CODE_USE_GITHUB=1 — enable GitHub inference (no need for USE_OPENAI)
* GITHUB_TOKEN or GH_TOKEN — PAT with models access (mapped to Bearer auth)
* GITHUB_TOKEN or GH_TOKEN — Copilot API token (mapped to Bearer auth)
* OPENAI_MODEL — optional; use github:copilot or openai/gpt-4.1 style IDs
*/
@@ -26,10 +26,17 @@ import { isEnvTruthy } from '../../utils/envUtils.js'
import { resolveGeminiCredential } from '../../utils/geminiAuth.js'
import { hydrateGeminiAccessTokenFromSecureStorage } from '../../utils/geminiCredentials.js'
import { hydrateGithubModelsTokenFromSecureStorage } from '../../utils/githubModelsCredentials.js'
import {
looksLikeLeakedReasoningPrefix,
shouldBufferPotentialReasoningPrefix,
stripLeakedReasoningPreamble,
} from './reasoningLeakSanitizer.js'
import {
codexStreamToAnthropic,
collectCodexCompletedResponse,
convertAnthropicMessagesToResponsesInput,
convertCodexResponseToAnthropicMessage,
convertToolsToResponsesTools,
performCodexRequest,
type AnthropicStreamEvent,
type AnthropicUsage,
@@ -39,6 +46,7 @@ import {
isLocalProviderUrl,
resolveCodexApiCredentials,
resolveProviderRequest,
getGithubEndpointType,
} from './providerConfig.js'
import { sanitizeSchemaForOpenAICompat } from '../../utils/schemaSanitizer.js'
import { redactSecretValueForDisplay } from '../../utils/providerProfile.js'
@@ -53,19 +61,56 @@ type SecretValueSource = Partial<{
GEMINI_API_KEY: string
GOOGLE_API_KEY: string
GEMINI_ACCESS_TOKEN: string
MISTRAL_API_KEY: string
}>
const GITHUB_MODELS_DEFAULT_BASE = 'https://models.github.ai/inference'
const GITHUB_API_VERSION = '2022-11-28'
const GITHUB_COPILOT_BASE = 'https://api.githubcopilot.com'
const GITHUB_429_MAX_RETRIES = 3
const GITHUB_429_BASE_DELAY_SEC = 1
const GITHUB_429_MAX_DELAY_SEC = 32
const GEMINI_API_HOST = 'generativelanguage.googleapis.com'
const COPILOT_HEADERS: Record<string, string> = {
'User-Agent': 'GitHubCopilotChat/0.26.7',
'Editor-Version': 'vscode/1.99.3',
'Editor-Plugin-Version': 'copilot-chat/0.26.7',
'Copilot-Integration-Id': 'vscode-chat',
}
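// These static headers present the shim as the VS Code Copilot Chat
// client; the GitHub transport tests above assert that
// Editor-Plugin-Version ('copilot-chat/0.26.7') survives the
// Anthropic-header filtering on outgoing requests.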
function isGithubModelsMode(): boolean {
return isEnvTruthy(process.env.CLAUDE_CODE_USE_GITHUB)
}
function isMistralMode(): boolean {
return isEnvTruthy(process.env.CLAUDE_CODE_USE_MISTRAL)
}
function filterAnthropicHeaders(
headers: Record<string, string> | undefined,
): Record<string, string> {
if (!headers) return {}
const filtered: Record<string, string> = {}
for (const [key, value] of Object.entries(headers)) {
const lower = key.toLowerCase()
if (
lower.startsWith('x-anthropic') ||
lower.startsWith('anthropic-') ||
lower.startsWith('x-claude') ||
lower === 'x-app' ||
lower === 'x-client-app' ||
lower === 'authorization' ||
lower === 'x-api-key' ||
lower === 'api-key'
) {
continue
}
filtered[key] = value
}
return filtered
}
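// Example: filterAnthropicHeaders({
//   'anthropic-beta': 'prompt-caching-2024-07-31',
//   'x-app': 'cli',
//   'x-safe-header': 'keep-me',
// }) returns { 'x-safe-header': 'keep-me' }: Anthropic-, Claude-, and
// credential-bearing keys are dropped before the OpenAI-compatible
// request headers are assembled.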
function hasGeminiApiHost(baseUrl: string | undefined): boolean {
if (!baseUrl) return false
@@ -412,11 +457,13 @@ function normalizeSchemaForOpenAI(
record.properties = normalizedProps
if (strict) {
// OpenAI strict mode requires every property to be listed in required[]
const allKeys = Object.keys(normalizedProps)
record.required = Array.from(new Set([...existingRequired, ...allKeys]))
// OpenAI strict mode requires additionalProperties: false on all object
// schemas — override unconditionally to ensure nested objects comply.
// Keep only the properties that were originally marked required in the schema.
// Adding every property to required[] (the previous behaviour) caused strict
// OpenAI-compatible providers (Groq, Azure, etc.) to reject tool calls because
// the model correctly omits optional arguments — but the provider treats them
// as missing required fields and returns a 400 / tool_use_failed error.
record.required = existingRequired.filter(k => k in normalizedProps)
// additionalProperties: false is still required by strict-mode providers.
record.additionalProperties = false
} else {
// For Gemini: keep only existing required keys that are present in properties
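// A minimal sketch of the strict-mode change (schema hypothetical):
// const input = {
//   type: 'object',
//   properties: { path: { type: 'string' }, limit: { type: 'number' } },
//   required: ['path'],
// }
// Previous behaviour: required became ['path', 'limit'], so tool calls that
// correctly omitted the optional `limit` were rejected with 400 / tool_use_failed.
// New behaviour: required stays ['path'] (only keys present in properties),
// and additionalProperties: false is still set on the object schema.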
@@ -522,11 +569,14 @@ function convertChunkUsage(
): Partial<AnthropicUsage> | undefined {
if (!usage) return undefined
const cached = usage.prompt_tokens_details?.cached_tokens ?? 0
return {
input_tokens: usage.prompt_tokens ?? 0,
// Subtract cached tokens: OpenAI includes them in prompt_tokens,
// but Anthropic convention treats input_tokens as non-cached only.
input_tokens: (usage.prompt_tokens ?? 0) - cached,
output_tokens: usage.completion_tokens ?? 0,
cache_creation_input_tokens: 0,
cache_read_input_tokens: usage.prompt_tokens_details?.cached_tokens ?? 0,
cache_read_input_tokens: cached,
}
}
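// Worked example of the corrected accounting (token counts hypothetical):
// OpenAI chunk usage: prompt_tokens = 1000, cached_tokens = 800, completion_tokens = 50
// Before: input_tokens = 1000 and cache_read_input_tokens = 800, so the cached
// portion was effectively counted twice when pricing the request.
// After:  input_tokens = 1000 - 800 = 200 (non-cached input only)
//         cache_read_input_tokens = 800
//         output_tokens = 50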
@@ -577,6 +627,8 @@ async function* openaiStreamToAnthropic(
let hasEmittedContentStart = false
let hasEmittedThinkingStart = false
let hasClosedThinking = false
let activeTextBuffer = ''
let textBufferMode: 'none' | 'pending' | 'strip' = 'none'
let lastStopReason: 'tool_use' | 'max_tokens' | 'end_turn' | null = null
let hasEmittedFinalUsage = false
let hasProcessedFinishReason = false
@@ -607,6 +659,30 @@ async function* openaiStreamToAnthropic(
const decoder = new TextDecoder()
let buffer = ''
const closeActiveContentBlock = async function* () {
if (!hasEmittedContentStart) return
if (textBufferMode !== 'none') {
const sanitized = stripLeakedReasoningPreamble(activeTextBuffer)
if (sanitized) {
yield {
type: 'content_block_delta',
index: contentBlockIndex,
delta: { type: 'text_delta', text: sanitized },
}
}
}
yield {
type: 'content_block_stop',
index: contentBlockIndex,
}
contentBlockIndex++
hasEmittedContentStart = false
activeTextBuffer = ''
textBufferMode = 'none'
}
try {
while (true) {
const { done, value } = await reader.read()
@@ -661,6 +737,7 @@ async function* openaiStreamToAnthropic(
contentBlockIndex++
hasClosedThinking = true
}
activeTextBuffer += delta.content
if (!hasEmittedContentStart) {
yield {
type: 'content_block_start',
@@ -669,6 +746,35 @@ async function* openaiStreamToAnthropic(
}
hasEmittedContentStart = true
}
if (
textBufferMode === 'strip' ||
looksLikeLeakedReasoningPrefix(activeTextBuffer)
) {
textBufferMode = 'strip'
continue
}
if (textBufferMode === 'pending') {
if (shouldBufferPotentialReasoningPrefix(activeTextBuffer)) {
continue
}
yield {
type: 'content_block_delta',
index: contentBlockIndex,
delta: {
type: 'text_delta',
text: activeTextBuffer,
},
}
textBufferMode = 'none'
continue
}
if (shouldBufferPotentialReasoningPrefix(activeTextBuffer)) {
textBufferMode = 'pending'
continue
}
yield {
type: 'content_block_delta',
index: contentBlockIndex,
@@ -687,12 +793,7 @@ async function* openaiStreamToAnthropic(
hasClosedThinking = true
}
if (hasEmittedContentStart) {
yield {
type: 'content_block_stop',
index: contentBlockIndex,
}
contentBlockIndex++
hasEmittedContentStart = false
yield* closeActiveContentBlock()
}
const toolBlockIndex = contentBlockIndex
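// Rough trace of the new text-buffer states (deltas hypothetical), showing why
// emission is deferred until a prefix can be classified:
// delta "The user"           shouldBufferPotentialReasoningPrefix: true -> mode 'pending', nothing emitted
// delta " just said \"hey\"" looksLikeLeakedReasoningPrefix now matches -> mode 'strip', keep swallowing deltas
// block closes (tool call starts or stream ends) -> closeActiveContentBlock()
//   emits stripLeakedReasoningPreamble(activeTextBuffer) as one text_delta,
//   then the content_block_stop.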
@@ -775,10 +876,7 @@ async function* openaiStreamToAnthropic(
}
// Close any open content blocks
if (hasEmittedContentStart) {
yield {
type: 'content_block_stop',
index: contentBlockIndex,
}
yield* closeActiveContentBlock()
}
// Close active tool calls
for (const [, tc] of activeToolCalls) {
@@ -925,7 +1023,7 @@ class OpenAIShimMessages {
private providerOverride?: { model: string; baseURL: string; apiKey: string }
constructor(defaultHeaders: Record<string, string>, reasoningEffort?: 'low' | 'medium' | 'high' | 'xhigh', providerOverride?: { model: string; baseURL: string; apiKey: string }) {
this.defaultHeaders = defaultHeaders
this.defaultHeaders = filterAnthropicHeaders(defaultHeaders)
this.reasoningEffort = reasoningEffort
this.providerOverride = providerOverride
}
@@ -944,8 +1042,9 @@ class OpenAIShimMessages {
httpResponse = response
if (params.stream) {
const isResponsesStream = response.url?.includes('/responses')
return new OpenAIShimStream(
request.transport === 'codex_responses'
(request.transport === 'codex_responses' || isResponsesStream)
? codexStreamToAnthropic(response, request.resolvedModel)
: openaiStreamToAnthropic(response, request.resolvedModel),
)
@@ -959,8 +1058,38 @@ class OpenAIShimMessages {
)
}
const data = await response.json()
return self._convertNonStreamingResponse(data, request.resolvedModel)
const isResponsesNonStream = response.url?.includes('/responses')
if (isResponsesNonStream || (request.transport === 'chat_completions' && isGithubModelsMode())) {
const contentType = response.headers.get('content-type') ?? ''
if (contentType.includes('application/json')) {
const parsed = await response.json() as Record<string, unknown>
if (
parsed &&
typeof parsed === 'object' &&
('output' in parsed || 'incomplete_details' in parsed)
) {
return convertCodexResponseToAnthropicMessage(
parsed,
request.resolvedModel,
)
}
return self._convertNonStreamingResponse(parsed, request.resolvedModel)
}
}
const contentType = response.headers.get('content-type') ?? ''
if (contentType.includes('application/json')) {
const data = await response.json()
return self._convertNonStreamingResponse(data, request.resolvedModel)
}
const textBody = await response.text().catch(() => '')
throw APIError.generate(
response.status,
undefined,
`OpenAI API error ${response.status}: unexpected response: ${textBody.slice(0, 500)}`,
response.headers as unknown as Headers,
)
})()
; (promise as unknown as Record<string, unknown>).withResponse =
@@ -982,7 +1111,36 @@ class OpenAIShimMessages {
params: ShimCreateParams,
options?: { signal?: AbortSignal; headers?: Record<string, string> },
): Promise<Response> {
if (request.transport === 'codex_responses') {
const githubEndpointType = getGithubEndpointType(request.baseUrl)
const isGithubMode = isGithubModelsMode()
const isGithubWithCodexTransport = isGithubMode && request.transport === 'codex_responses'
const isGithubCopilotEndpoint = isGithubMode && githubEndpointType === 'copilot'
if (isGithubWithCodexTransport) {
const apiKey = this.providerOverride?.apiKey ?? process.env.OPENAI_API_KEY ?? ''
if (!apiKey) {
throw new Error(
'GitHub Copilot auth is required. Run /onboard-github to sign in.',
)
}
return performCodexRequest({
request,
credentials: {
apiKey,
source: 'env',
},
params,
defaultHeaders: {
...this.defaultHeaders,
...filterAnthropicHeaders(options?.headers),
...COPILOT_HEADERS,
},
signal: options?.signal,
})
}
if (request.transport === 'codex_responses' && !isGithubMode) {
const credentials = resolveCodexApiCredentials()
if (!credentials.apiKey) {
const authHint = credentials.authPath
@@ -1007,7 +1165,7 @@ class OpenAIShimMessages {
params,
defaultHeaders: {
...this.defaultHeaders,
...(options?.headers ?? {}),
...filterAnthropicHeaders(options?.headers),
},
signal: options?.signal,
})
@@ -1034,6 +1192,7 @@ class OpenAIShimMessages {
model: request.resolvedModel,
messages: openaiMessages,
stream: params.stream ?? false,
store: false,
}
// Convert max_tokens to max_completion_tokens for OpenAI API compatibility.
// Azure OpenAI requires max_completion_tokens and does not accept max_tokens.
@@ -1056,11 +1215,22 @@ class OpenAIShimMessages {
}
const isGithub = isGithubModelsMode()
if (isGithub && body.max_completion_tokens !== undefined) {
const isMistral = isMistralMode()
const githubEndpointType = getGithubEndpointType(request.baseUrl)
const isGithubCopilot = isGithub && githubEndpointType === 'copilot'
const isGithubModels = isGithub && (githubEndpointType === 'models' || githubEndpointType === 'custom')
if ((isGithub || isMistral) && body.max_completion_tokens !== undefined) {
body.max_tokens = body.max_completion_tokens
delete body.max_completion_tokens
}
// mistral also doesn't recognize body.store
if (isMistral) {
delete body.store
}
if (params.temperature !== undefined) body.temperature = params.temperature
if (params.top_p !== undefined) body.top_p = params.top_p
@@ -1095,12 +1265,11 @@ class OpenAIShimMessages {
const headers: Record<string, string> = {
'Content-Type': 'application/json',
...this.defaultHeaders,
...(options?.headers ?? {}),
...filterAnthropicHeaders(options?.headers),
}
const isGemini = isGeminiMode()
const apiKey =
this.providerOverride?.apiKey ?? process.env.OPENAI_API_KEY ?? ''
const isGemini = isEnvTruthy(process.env.CLAUDE_CODE_USE_GEMINI)
const apiKey = this.providerOverride?.apiKey ?? process.env.OPENAI_API_KEY ?? ''
// Detect Azure endpoints by hostname (not raw URL) to prevent bypass via
// path segments like https://evil.com/cognitiveservices.azure.com/
let isAzure = false
@@ -1121,15 +1290,17 @@ class OpenAIShimMessages {
const geminiCredential = await resolveGeminiCredential(process.env)
if (geminiCredential.kind !== 'none') {
headers.Authorization = `Bearer ${geminiCredential.credential}`
if (geminiCredential.projectId) {
if (geminiCredential.kind !== 'api-key' && 'projectId' in geminiCredential && geminiCredential.projectId) {
headers['x-goog-user-project'] = geminiCredential.projectId
}
}
}
if (isGithub) {
headers.Accept = 'application/vnd.github.v3+json'
headers['X-GitHub-Api-Version'] = GITHUB_API_VERSION
if (isGithubCopilot) {
Object.assign(headers, COPILOT_HEADERS)
} else if (isGithubModels) {
headers['Accept'] = 'application/vnd.github+json'
headers['X-GitHub-Api-Version'] = '2022-11-28'
}
// Build the chat completions URL
@@ -1181,9 +1352,83 @@ class OpenAIShimMessages {
await sleepMs(delaySec * 1000)
continue
}
// Read body exactly once here — Response body is a stream that can only
// be consumed a single time.
const errorBody = await response.text().catch(() => 'unknown error')
const rateHint =
isGithub && response.status === 429 ? formatRetryAfterHint(response) : ''
// If GitHub Copilot rejects the request with a 400 whose body mentions
// /chat/completions or 'not accessible', retry against the /responses
// endpoint (needed for GPT-5+ models).
if (isGithub && response.status === 400) {
if (errorBody.includes('/chat/completions') || errorBody.includes('not accessible')) {
const responsesUrl = `${request.baseUrl}/responses`
const responsesBody: Record<string, unknown> = {
model: request.resolvedModel,
input: convertAnthropicMessagesToResponsesInput(
params.messages as Array<{
role?: string
message?: { role?: string; content?: unknown }
content?: unknown
}>,
),
stream: params.stream ?? false,
store: false,
}
if (!Array.isArray(responsesBody.input) || responsesBody.input.length === 0) {
responsesBody.input = [
{
type: 'message',
role: 'user',
content: [{ type: 'input_text', text: '' }],
},
]
}
const systemText = convertSystemPrompt(params.system)
if (systemText) {
responsesBody.instructions = systemText
}
if (body.max_tokens !== undefined) {
responsesBody.max_output_tokens = body.max_tokens
}
if (params.tools && params.tools.length > 0) {
const convertedTools = convertToolsToResponsesTools(
params.tools as Array<{
name?: string
description?: string
input_schema?: Record<string, unknown>
}>,
)
if (convertedTools.length > 0) {
responsesBody.tools = convertedTools
}
}
const responsesResponse = await fetch(responsesUrl, {
method: 'POST',
headers,
body: JSON.stringify(responsesBody),
signal: options?.signal,
})
if (responsesResponse.ok) {
return responsesResponse
}
const responsesErrorBody = await responsesResponse.text().catch(() => 'unknown error')
let responsesErrorResponse: object | undefined
try { responsesErrorResponse = JSON.parse(responsesErrorBody) } catch { /* raw text */ }
throw APIError.generate(
responsesResponse.status,
responsesErrorResponse,
`OpenAI API error ${responsesResponse.status}: ${responsesErrorBody}`,
responsesResponse.headers,
)
}
}
let errorResponse: object | undefined
try { errorResponse = JSON.parse(errorBody) } catch { /* raw text */ }
throw APIError.generate(
@@ -1233,9 +1478,9 @@ class OpenAIShimMessages {
const choice = data.choices?.[0]
const content: Array<Record<string, unknown>> = []
// Some reasoning models (e.g. GLM-5) put their reply in reasoning_content
// while content stays null — emit reasoning as a thinking block, then
// fall back to it for visible text if content is empty.
// Some reasoning models (e.g. GLM-5) put their chain-of-thought in
// reasoning_content while content stays null. Preserve it as a thinking
// block, but do not surface it as visible assistant text.
const reasoningText = choice?.message?.reasoning_content
if (typeof reasoningText === 'string' && reasoningText) {
content.push({ type: 'thinking', thinking: reasoningText })
@@ -1243,9 +1488,12 @@ class OpenAIShimMessages {
const rawContent =
choice?.message?.content !== '' && choice?.message?.content != null
? choice?.message?.content
: choice?.message?.reasoning_content
: null
if (typeof rawContent === 'string' && rawContent) {
content.push({ type: 'text', text: rawContent })
content.push({
type: 'text',
text: stripLeakedReasoningPreamble(rawContent),
})
} else if (Array.isArray(rawContent) && rawContent.length > 0) {
const parts: string[] = []
for (const part of rawContent) {
@@ -1260,7 +1508,10 @@ class OpenAIShimMessages {
}
const joined = parts.join('\n')
if (joined) {
content.push({ type: 'text', text: joined })
content.push({
type: 'text',
text: stripLeakedReasoningPreamble(joined),
})
}
}
@@ -1350,8 +1601,15 @@ export function createOpenAIShimClient(options: {
if (process.env.GEMINI_MODEL && !process.env.OPENAI_MODEL) {
process.env.OPENAI_MODEL = process.env.GEMINI_MODEL
}
} else if (isEnvTruthy(process.env.CLAUDE_CODE_USE_MISTRAL)) {
process.env.OPENAI_BASE_URL =
process.env.MISTRAL_BASE_URL ?? 'https://api.mistral.ai/v1'
process.env.OPENAI_API_KEY = process.env.MISTRAL_API_KEY
if (process.env.MISTRAL_MODEL) {
process.env.OPENAI_MODEL = process.env.MISTRAL_MODEL
}
} else if (isEnvTruthy(process.env.CLAUDE_CODE_USE_GITHUB)) {
process.env.OPENAI_BASE_URL ??= GITHUB_MODELS_DEFAULT_BASE
process.env.OPENAI_BASE_URL ??= GITHUB_COPILOT_BASE
process.env.OPENAI_API_KEY ??=
process.env.GITHUB_TOKEN ?? process.env.GH_TOKEN ?? ''
}

View File

@@ -23,6 +23,9 @@ test.each([
['github:gpt-4o', 'gpt-4o'],
['gpt-4o', 'gpt-4o'],
['github:copilot?reasoning=high', DEFAULT_GITHUB_MODELS_API_MODEL],
// normalizeGithubModelsApiModel preserves provider prefix for models.github.ai compatibility
['github:openai/gpt-4.1', 'openai/gpt-4.1'],
['openai/gpt-4.1', 'openai/gpt-4.1'],
] as const)('normalizeGithubModelsApiModel(%s) -> %s', (input, expected) => {
expect(normalizeGithubModelsApiModel(input)).toBe(expected)
})
@@ -34,6 +37,20 @@ test('resolveProviderRequest applies GitHub normalization when CLAUDE_CODE_USE_G
expect(r.transport).toBe('chat_completions')
})
test('resolveProviderRequest routes GitHub GPT-5 codex models to responses transport', () => {
process.env.CLAUDE_CODE_USE_GITHUB = '1'
const r = resolveProviderRequest({ model: 'gpt-5.3-codex' })
expect(r.resolvedModel).toBe('gpt-5.3-codex')
expect(r.transport).toBe('codex_responses')
})
test('resolveProviderRequest keeps gpt-5-mini on chat_completions for GitHub', () => {
process.env.CLAUDE_CODE_USE_GITHUB = '1'
const r = resolveProviderRequest({ model: 'gpt-5-mini' })
expect(r.resolvedModel).toBe('gpt-5-mini')
expect(r.transport).toBe('chat_completions')
})
test('resolveProviderRequest leaves model unchanged without GitHub flag', () => {
delete process.env.CLAUDE_CODE_USE_GITHUB
const r = resolveProviderRequest({ model: 'github:gpt-4o' })

View File

@@ -7,8 +7,9 @@ import { isEnvTruthy } from '../../utils/envUtils.js'
export const DEFAULT_OPENAI_BASE_URL = 'https://api.openai.com/v1'
export const DEFAULT_CODEX_BASE_URL = 'https://chatgpt.com/backend-api/codex'
/** Default GitHub Models API model when user selects copilot / github:copilot */
export const DEFAULT_GITHUB_MODELS_API_MODEL = 'openai/gpt-4.1'
export const DEFAULT_MISTRAL_BASE_URL = 'https://api.mistral.ai/v1'
/** Default GitHub Copilot API model when user selects copilot / github:copilot */
export const DEFAULT_GITHUB_MODELS_API_MODEL = 'gpt-4o'
const CODEX_ALIAS_MODELS: Record<
string,
@@ -227,6 +228,21 @@ export function shouldUseCodexTransport(
return isCodexBaseUrl(explicitBaseUrl) || (!explicitBaseUrl && isCodexAlias(model))
}
function shouldUseGithubResponsesApi(model: string): boolean {
const normalized = model.trim().toLowerCase()
// Codex-branded models require /responses.
if (normalized.includes('codex')) return true
// GPT-5+ models use /responses, except gpt-5-mini.
const match = /^gpt-(\d+)/.exec(normalized)
if (!match) return false
const major = Number(match[1])
if (major < 5) return false
if (normalized.startsWith('gpt-5-mini')) return false
return true
}
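// For example (model IDs illustrative, consistent with the
// resolveProviderRequest tests earlier in this diff):
// shouldUseGithubResponsesApi('gpt-5.3-codex') // true  (codex-branded)
// shouldUseGithubResponsesApi('gpt-5')         // true  (major version >= 5)
// shouldUseGithubResponsesApi('gpt-5-mini')    // false (explicit carve-out)
// shouldUseGithubResponsesApi('gpt-4o')        // false (major version < 5)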
export function isLocalProviderUrl(baseUrl: string | undefined): boolean {
if (!baseUrl) return false
try {
@@ -280,19 +296,61 @@ export function isCodexBaseUrl(baseUrl: string | undefined): boolean {
}
/**
* Normalize user model string for GitHub Models inference (models.github.ai).
 * Mirrors the runtime helper `github._normalize_model_id`.
* Normalize user model string for GitHub Copilot API inference.
* Mirrors how Copilot resolves model IDs internally.
*/
export function normalizeGithubModelsApiModel(requestedModel: string): string {
export function normalizeGithubCopilotModel(requestedModel: string): string {
const noQuery = requestedModel.split('?', 1)[0] ?? requestedModel
const segment =
noQuery.includes(':') ? noQuery.split(':', 2)[1]!.trim() : noQuery.trim()
if (!segment || segment.toLowerCase() === 'copilot') {
return DEFAULT_GITHUB_MODELS_API_MODEL
}
// Strip provider prefix if present (e.g., "openai/gpt-4o" -> "gpt-4o")
const slashIndex = segment.indexOf('/')
if (slashIndex !== -1) {
return segment.slice(slashIndex + 1)
}
return segment
}
/**
* Normalize user model string for GitHub Models API inference.
* Only normalizes the default alias, preserves provider-qualified models.
*/
export function normalizeGithubModelsApiModel(requestedModel: string): string {
const noQuery = requestedModel.split('?', 1)[0] ?? requestedModel
const segment =
noQuery.includes(':') ? noQuery.split(':', 2)[1]!.trim() : noQuery.trim()
// Only normalize the default alias for GitHub Models
if (!segment || segment.toLowerCase() === 'copilot') {
return DEFAULT_GITHUB_MODELS_API_MODEL
}
// Preserve provider prefix for GitHub Models (e.g., "openai/gpt-4.1" stays as-is)
return segment
}
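// Side by side, the two normalizers now diverge only in prefix handling
// (inputs illustrative):
// normalizeGithubCopilotModel('github:openai/gpt-4o')    // 'gpt-4o', provider prefix stripped
// normalizeGithubModelsApiModel('github:openai/gpt-4.1') // 'openai/gpt-4.1', prefix preserved
// normalizeGithubCopilotModel('github:copilot')          // 'gpt-4o' (DEFAULT_GITHUB_MODELS_API_MODEL)
// normalizeGithubModelsApiModel('copilot')               // 'gpt-4o' (same default alias)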
export const GITHUB_COPILOT_BASE_URL = 'https://api.githubcopilot.com'
export const GITHUB_MODELS_BASE_URL = 'https://models.github.ai/inference'
export function getGithubEndpointType(
baseUrl: string | undefined,
): 'copilot' | 'models' | 'custom' {
if (!baseUrl) return 'copilot'
try {
const hostname = new URL(baseUrl).hostname.toLowerCase()
if (hostname === 'api.githubcopilot.com') {
return 'copilot'
}
if (hostname === 'models.github.ai' || hostname.endsWith('.github.ai')) {
return 'models'
}
return 'custom'
} catch {
return 'copilot'
}
}
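// Expected classification (URLs illustrative):
// getGithubEndpointType('https://api.githubcopilot.com')      // 'copilot'
// getGithubEndpointType('https://models.github.ai/inference') // 'models'
// getGithubEndpointType('https://my-proxy.internal/v1')       // 'custom'
// getGithubEndpointType(undefined)                            // 'copilot' (default)
// getGithubEndpointType('not-a-url')                          // 'copilot' (URL parse fails)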
export function resolveProviderRequest(options?: {
model?: string
baseUrl?: string
@@ -300,41 +358,64 @@ export function resolveProviderRequest(options?: {
reasoningEffortOverride?: ReasoningEffort
}): ResolvedProviderRequest {
const isGithubMode = isEnvTruthy(process.env.CLAUDE_CODE_USE_GITHUB)
const isMistralMode = isEnvTruthy(process.env.CLAUDE_CODE_USE_MISTRAL)
const requestedModel =
options?.model?.trim() ||
process.env.OPENAI_MODEL?.trim() ||
(isMistralMode
? process.env.MISTRAL_MODEL?.trim()
: process.env.OPENAI_MODEL?.trim()) ||
options?.fallbackModel?.trim() ||
(isGithubMode ? 'github:copilot' : 'gpt-4o')
const descriptor = parseModelDescriptor(requestedModel)
const rawBaseUrl =
asEnvUrl(options?.baseUrl) ??
asEnvUrl(process.env.OPENAI_BASE_URL) ??
asEnvUrl(
isMistralMode ? (process.env.MISTRAL_BASE_URL ?? DEFAULT_MISTRAL_BASE_URL) : process.env.OPENAI_BASE_URL,
) ??
asEnvUrl(process.env.OPENAI_API_BASE)
const githubEndpointType = isGithubMode
? getGithubEndpointType(rawBaseUrl)
: 'custom'
const isGithubCopilot = isGithubMode && githubEndpointType === 'copilot'
const isGithubModels = isGithubMode && githubEndpointType === 'models'
const isGithubCustom = isGithubMode && githubEndpointType === 'custom'
const githubResolvedModel = isGithubMode
? normalizeGithubModelsApiModel(requestedModel)
: requestedModel
const transport: ProviderTransport =
shouldUseCodexTransport(requestedModel, rawBaseUrl)
shouldUseCodexTransport(requestedModel, rawBaseUrl) ||
(isGithubCopilot && shouldUseGithubResponsesApi(githubResolvedModel))
? 'codex_responses'
: 'chat_completions'
const resolvedModel =
transport === 'chat_completions' &&
isEnvTruthy(process.env.CLAUDE_CODE_USE_GITHUB)
? normalizeGithubModelsApiModel(requestedModel)
: descriptor.baseModel
// For GitHub Copilot API, normalize to real model ID (e.g., "github:copilot" -> "gpt-4o")
// For GitHub Models/custom endpoints:
// - Normalize default alias (github:copilot -> gpt-4o)
// - Preserve provider-qualified models (openai/gpt-4.1 stays as-is)
const resolvedModel = isGithubCopilot
? normalizeGithubCopilotModel(descriptor.baseModel)
: (isGithubModels || isGithubCustom
? normalizeGithubModelsApiModel(descriptor.baseModel)
: descriptor.baseModel)
const reasoning = options?.reasoningEffortOverride
? { effort: options.reasoningEffortOverride }
: descriptor.reasoning
return {
transport,
requestedModel,
resolvedModel,
baseUrl:
(rawBaseUrl ??
(transport === 'codex_responses'
? DEFAULT_CODEX_BASE_URL
: DEFAULT_OPENAI_BASE_URL)
(isGithubCopilot && transport === 'codex_responses'
? GITHUB_COPILOT_BASE_URL
: (isGithubMode
? GITHUB_COPILOT_BASE_URL
: DEFAULT_OPENAI_BASE_URL))
).replace(/\/+$/, ''),
reasoning,
}
@@ -343,6 +424,7 @@ export function resolveProviderRequest(options?: {
export function getAdditionalModelOptionsCacheScope(): string | null {
if (!isEnvTruthy(process.env.CLAUDE_CODE_USE_OPENAI)) {
if (!isEnvTruthy(process.env.CLAUDE_CODE_USE_GEMINI) &&
!isEnvTruthy(process.env.CLAUDE_CODE_USE_MISTRAL) &&
!isEnvTruthy(process.env.CLAUDE_CODE_USE_GITHUB) &&
!isEnvTruthy(process.env.CLAUDE_CODE_USE_BEDROCK) &&
!isEnvTruthy(process.env.CLAUDE_CODE_USE_VERTEX) &&

View File

@@ -0,0 +1,46 @@
import { describe, expect, test } from 'bun:test'
import {
looksLikeLeakedReasoningPrefix,
shouldBufferPotentialReasoningPrefix,
stripLeakedReasoningPreamble,
} from './reasoningLeakSanitizer.ts'
describe('reasoning leak sanitizer', () => {
test('strips explicit internal reasoning preambles', () => {
const text =
'The user just said "hey" - a simple greeting. I should respond briefly and friendly.\n\nHey! How can I help you today?'
expect(looksLikeLeakedReasoningPrefix(text)).toBe(true)
expect(stripLeakedReasoningPreamble(text)).toBe(
'Hey! How can I help you today?',
)
})
test('does not strip normal user-facing advice that mentions "the user should"', () => {
const text =
'The user should reset their password immediately.\n\nHere are the steps...'
expect(looksLikeLeakedReasoningPrefix(text)).toBe(false)
expect(shouldBufferPotentialReasoningPrefix(text)).toBe(false)
expect(stripLeakedReasoningPreamble(text)).toBe(text)
})
test('does not strip legitimate first-person advice about responding to an incident', () => {
const text =
'I need to respond to this security incident immediately. The system is compromised.\n\nHere are the remediation steps...'
expect(looksLikeLeakedReasoningPrefix(text)).toBe(false)
expect(shouldBufferPotentialReasoningPrefix(text)).toBe(false)
expect(stripLeakedReasoningPreamble(text)).toBe(text)
})
test('does not strip legitimate first-person advice about answering a support ticket', () => {
const text =
'I need to answer the support ticket before end of day. The customer is waiting.\n\nHere is the response I drafted...'
expect(looksLikeLeakedReasoningPrefix(text)).toBe(false)
expect(shouldBufferPotentialReasoningPrefix(text)).toBe(false)
expect(stripLeakedReasoningPreamble(text)).toBe(text)
})
})

View File

@@ -0,0 +1,54 @@
const EXPLICIT_REASONING_START_RE =
/^\s*(i should\b|i need to\b|let me think\b|the task\b|the request\b)/i
const EXPLICIT_REASONING_META_RE =
/\b(user|request|question|prompt|message|task|greeting|small talk|briefly|friendly|concise)\b/i
const USER_META_START_RE =
/^\s*the user\s+(just\s+)?(said|asked|is asking|wants|wanted|mentioned|seems|appears)\b/i
const USER_REASONING_RE =
/^\s*the user\s+(just\s+)?(said|asked|is asking|wants|wanted|mentioned|seems|appears)\b[\s\S]*\b(i should|i need to|let me think|respond|reply|answer|greeting|small talk|briefly|friendly|concise)\b/i
export function shouldBufferPotentialReasoningPrefix(text: string): boolean {
const normalized = text.trim()
if (!normalized) return false
if (looksLikeLeakedReasoningPrefix(normalized)) {
return true
}
const hasParagraphBoundary = /\n\s*\n/.test(normalized)
if (hasParagraphBoundary) {
return false
}
return (
EXPLICIT_REASONING_START_RE.test(normalized) ||
USER_META_START_RE.test(normalized)
)
}
export function looksLikeLeakedReasoningPrefix(text: string): boolean {
const normalized = text.trim()
if (!normalized) return false
return (
(EXPLICIT_REASONING_START_RE.test(normalized) &&
EXPLICIT_REASONING_META_RE.test(normalized)) ||
USER_REASONING_RE.test(normalized)
)
}
export function stripLeakedReasoningPreamble(text: string): string {
const normalized = text.replace(/\r\n/g, '\n')
const parts = normalized.split(/\n\s*\n/)
if (parts.length < 2) return text
const first = parts[0]?.trim() ?? ''
if (!looksLikeLeakedReasoningPrefix(first)) {
return text
}
const remainder = parts.slice(1).join('\n\n').trim()
return remainder || text
}
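// Two illustrative calls (strings hypothetical) showing the paragraph-boundary
// requirement:
// stripLeakedReasoningPreamble(
//   'I should respond briefly to the user.\n\nHey! How can I help?',
// ) // -> 'Hey! How can I help?' (first paragraph trips both regex gates)
// stripLeakedReasoningPreamble('I should respond briefly to the user.')
// // -> unchanged: with no blank-line boundary there is no remainder to keep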

View File

@@ -1,4 +1,4 @@
import { afterEach, describe, expect, mock, test } from 'bun:test'
import { afterEach, beforeEach, describe, expect, mock, test } from 'bun:test'
import { APIError } from '@anthropic-ai/sdk'
// Helper to build a mock APIError with specific headers
@@ -15,15 +15,27 @@ function makeError(headers: Record<string, string>): APIError {
// Save/restore env vars between tests
const originalEnv = { ...process.env }
const envKeys = [
'CLAUDE_CODE_USE_OPENAI',
'CLAUDE_CODE_USE_GEMINI',
'CLAUDE_CODE_USE_GITHUB',
'CLAUDE_CODE_USE_BEDROCK',
'CLAUDE_CODE_USE_VERTEX',
'CLAUDE_CODE_USE_FOUNDRY',
'OPENAI_MODEL',
'OPENAI_BASE_URL',
'OPENAI_API_BASE',
] as const
beforeEach(() => {
for (const key of envKeys) {
delete process.env[key]
}
})
afterEach(() => {
for (const key of [
'CLAUDE_CODE_USE_OPENAI',
'CLAUDE_CODE_USE_GEMINI',
'CLAUDE_CODE_USE_GITHUB',
'CLAUDE_CODE_USE_BEDROCK',
'CLAUDE_CODE_USE_VERTEX',
'CLAUDE_CODE_USE_FOUNDRY',
]) {
for (const key of envKeys) {
if (originalEnv[key] === undefined) delete process.env[key]
else process.env[key] = originalEnv[key]
}

View File

@@ -0,0 +1,106 @@
import { describe, expect, test } from 'bun:test'
import { AutoFixConfigSchema, getAutoFixConfig, type AutoFixConfig } from './autoFixConfig.js'
describe('AutoFixConfigSchema', () => {
test('parses valid full config', () => {
const input = {
enabled: true,
lint: 'eslint . --fix',
test: 'bun test',
maxRetries: 3,
timeout: 30000,
}
const result = AutoFixConfigSchema.safeParse(input)
expect(result.success).toBe(true)
if (result.success) {
expect(result.data.enabled).toBe(true)
expect(result.data.lint).toBe('eslint . --fix')
expect(result.data.test).toBe('bun test')
expect(result.data.maxRetries).toBe(3)
expect(result.data.timeout).toBe(30000)
}
})
test('parses minimal config with defaults', () => {
const input = { enabled: true, lint: 'eslint .' }
const result = AutoFixConfigSchema.safeParse(input)
expect(result.success).toBe(true)
if (result.success) {
expect(result.data.maxRetries).toBe(3)
expect(result.data.timeout).toBe(30000)
expect(result.data.test).toBeUndefined()
}
})
test('rejects config with enabled but no lint or test', () => {
const input = { enabled: true }
const result = AutoFixConfigSchema.safeParse(input)
expect(result.success).toBe(false)
})
test('accepts disabled config without commands', () => {
const input = { enabled: false }
const result = AutoFixConfigSchema.safeParse(input)
expect(result.success).toBe(true)
})
test('rejects negative maxRetries', () => {
const input = { enabled: true, lint: 'eslint .', maxRetries: -1 }
const result = AutoFixConfigSchema.safeParse(input)
expect(result.success).toBe(false)
})
test('rejects maxRetries above 10', () => {
const input = { enabled: true, lint: 'eslint .', maxRetries: 11 }
const result = AutoFixConfigSchema.safeParse(input)
expect(result.success).toBe(false)
})
})
describe('getAutoFixConfig', () => {
test('returns null when settings have no autoFix', () => {
const result = getAutoFixConfig(undefined)
expect(result).toBeNull()
})
test('returns null when autoFix is disabled', () => {
const result = getAutoFixConfig({ enabled: false })
expect(result).toBeNull()
})
test('returns parsed config when valid and enabled', () => {
const result = getAutoFixConfig({ enabled: true, lint: 'eslint .' })
expect(result).not.toBeNull()
expect(result!.enabled).toBe(true)
expect(result!.lint).toBe('eslint .')
})
})
describe('SettingsSchema autoFix integration', () => {
test('SettingsSchema accepts autoFix field', async () => {
const { SettingsSchema } = await import('../../utils/settings/types.js')
const settings = {
autoFix: {
enabled: true,
lint: 'eslint .',
test: 'bun test',
maxRetries: 3,
timeout: 30000,
},
}
const result = SettingsSchema().safeParse(settings)
expect(result.success).toBe(true)
})
test('SettingsSchema rejects invalid autoFix', async () => {
const { SettingsSchema } = await import('../../utils/settings/types.js')
const settings = {
autoFix: {
enabled: true,
// missing lint and test - should fail refine
},
}
const result = SettingsSchema().safeParse(settings)
expect(result.success).toBe(false)
})
})

View File

@@ -0,0 +1,52 @@
import { z } from 'zod/v4'
export const AutoFixConfigSchema = z
.object({
enabled: z.boolean().describe('Whether auto-fix is enabled'),
lint: z
.string()
.optional()
.describe('Lint command to run after file edits (e.g. "eslint . --fix")'),
test: z
.string()
.optional()
.describe('Test command to run after file edits (e.g. "bun test")'),
maxRetries: z
.number()
.int()
.min(0)
.max(10)
.default(3)
.describe('Maximum number of auto-fix retry attempts (default: 3)'),
timeout: z
.number()
.int()
.min(1000)
.max(300000)
.default(30000)
.describe('Timeout in ms for each lint/test command (default: 30000)'),
})
.refine(
data => !data.enabled || data.lint !== undefined || data.test !== undefined,
{
message: 'At least one of "lint" or "test" must be set when enabled',
},
)
export type AutoFixConfig = z.infer<typeof AutoFixConfigSchema>
export function getAutoFixConfig(
rawConfig: unknown,
): AutoFixConfig | null {
if (!rawConfig || typeof rawConfig !== 'object') {
return null
}
const parsed = AutoFixConfigSchema.safeParse(rawConfig)
if (!parsed.success) {
return null
}
if (!parsed.data.enabled) {
return null
}
return parsed.data
}
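// A minimal sketch of the resulting settings surface (command strings illustrative):
// const config = getAutoFixConfig({ enabled: true, lint: 'eslint . --fix' })
// // -> { enabled: true, lint: 'eslint . --fix', maxRetries: 3, timeout: 30000 }
// getAutoFixConfig({ enabled: false, lint: 'eslint .' }) // -> null: disabled configs are filtered
// getAutoFixConfig({ enabled: true })                    // -> null: refine() requires lint or test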

View File

@@ -0,0 +1,63 @@
import { describe, expect, test } from 'bun:test'
import {
shouldRunAutoFix,
buildAutoFixContext,
} from './autoFixHook.js'
describe('shouldRunAutoFix', () => {
test('returns true for file_edit tool when autoFix enabled', () => {
const config = { enabled: true, lint: 'eslint .', maxRetries: 3, timeout: 30000 }
expect(shouldRunAutoFix('file_edit', config)).toBe(true)
})
test('returns true for file_write tool when autoFix enabled', () => {
const config = { enabled: true, lint: 'eslint .', maxRetries: 3, timeout: 30000 }
expect(shouldRunAutoFix('file_write', config)).toBe(true)
})
test('returns false for bash tool', () => {
const config = { enabled: true, lint: 'eslint .', maxRetries: 3, timeout: 30000 }
expect(shouldRunAutoFix('bash', config)).toBe(false)
})
test('returns false for file_read tool', () => {
const config = { enabled: true, lint: 'eslint .', maxRetries: 3, timeout: 30000 }
expect(shouldRunAutoFix('file_read', config)).toBe(false)
})
test('returns false when config is null', () => {
expect(shouldRunAutoFix('file_edit', null)).toBe(false)
})
})
describe('buildAutoFixContext', () => {
test('formats lint errors as AI-readable context', () => {
const context = buildAutoFixContext({
hasErrors: true,
lintOutput: 'src/foo.ts:10:5 error no-unused-vars',
lintExitCode: 1,
errorSummary: 'Lint errors (exit code 1):\nsrc/foo.ts:10:5 error no-unused-vars',
})
expect(context).toContain('AUTO-FIX')
expect(context).toContain('no-unused-vars')
expect(context).toContain('Please fix')
})
test('returns null when no errors', () => {
const context = buildAutoFixContext({
hasErrors: false,
})
expect(context).toBeNull()
})
test('formats test failures as AI-readable context', () => {
const context = buildAutoFixContext({
hasErrors: true,
testOutput: 'FAIL src/foo.test.ts\n expected true, got false',
testExitCode: 1,
errorSummary: 'Test failures (exit code 1):\nFAIL src/foo.test.ts',
})
expect(context).toContain('AUTO-FIX')
expect(context).toContain('FAIL')
})
})

View File

@@ -0,0 +1,25 @@
import type { AutoFixConfig } from './autoFixConfig.js'
import type { AutoFixResult } from './autoFixRunner.js'
const AUTO_FIX_TOOLS = new Set(['file_edit', 'file_write'])
export function shouldRunAutoFix(
toolName: string,
config: AutoFixConfig | null,
): boolean {
if (!config) return false
return AUTO_FIX_TOOLS.has(toolName)
}
export function buildAutoFixContext(result: AutoFixResult): string | null {
if (!result.hasErrors || !result.errorSummary) return null
return (
`<auto_fix_feedback>\n` +
`AUTO-FIX: The file you just edited has errors. Please fix them:\n\n` +
`${result.errorSummary}\n\n` +
`Please fix these errors in the files you just edited. ` +
`Do not ask the user — just apply the fix.\n` +
`</auto_fix_feedback>`
)
}

View File

@@ -0,0 +1,48 @@
import { describe, expect, test } from 'bun:test'
import { getAutoFixConfig } from './autoFixConfig.js'
import { shouldRunAutoFix, buildAutoFixContext } from './autoFixHook.js'
import { runAutoFixCheck } from './autoFixRunner.js'
describe('autoFix end-to-end flow', () => {
test('full flow: config → shouldRun → check → context', async () => {
const config = getAutoFixConfig({
enabled: true,
lint: 'echo "error: unused" && exit 1',
maxRetries: 2,
timeout: 5000,
})
expect(config).not.toBeNull()
expect(shouldRunAutoFix('file_edit', config)).toBe(true)
const result = await runAutoFixCheck({
lint: config!.lint,
test: config!.test,
timeout: config!.timeout,
cwd: '/tmp',
})
expect(result.hasErrors).toBe(true)
const context = buildAutoFixContext(result)
expect(context).not.toBeNull()
expect(context).toContain('AUTO-FIX')
expect(context).toContain('unused')
})
test('full flow: no errors = no context', async () => {
const config = getAutoFixConfig({
enabled: true,
lint: 'echo "all clean"',
timeout: 5000,
})
const result = await runAutoFixCheck({
lint: config!.lint,
timeout: config!.timeout,
cwd: '/tmp',
})
expect(result.hasErrors).toBe(false)
const context = buildAutoFixContext(result)
expect(context).toBeNull()
})
})

View File

@@ -0,0 +1,103 @@
import { describe, expect, test } from 'bun:test'
import {
runAutoFixCheck,
type AutoFixResult,
type AutoFixCheckOptions,
} from './autoFixRunner.js'
describe('runAutoFixCheck', () => {
test('returns success when lint command exits 0', async () => {
const result = await runAutoFixCheck({
lint: 'echo "all clean"',
timeout: 5000,
cwd: '/tmp',
})
expect(result.hasErrors).toBe(false)
expect(result.lintOutput).toContain('all clean')
expect(result.testOutput).toBeUndefined()
})
test('returns errors when lint command exits non-zero', async () => {
const result = await runAutoFixCheck({
lint: 'echo "error: unused var" && exit 1',
timeout: 5000,
cwd: '/tmp',
})
expect(result.hasErrors).toBe(true)
expect(result.lintOutput).toContain('unused var')
expect(result.lintExitCode).toBe(1)
})
test('returns errors when test command exits non-zero', async () => {
const result = await runAutoFixCheck({
test: 'echo "FAIL test_foo" && exit 1',
timeout: 5000,
cwd: '/tmp',
})
expect(result.hasErrors).toBe(true)
expect(result.testOutput).toContain('FAIL test_foo')
expect(result.testExitCode).toBe(1)
})
test('runs both lint and test commands', async () => {
const result = await runAutoFixCheck({
lint: 'echo "lint ok"',
test: 'echo "test ok"',
timeout: 5000,
cwd: '/tmp',
})
expect(result.hasErrors).toBe(false)
expect(result.lintOutput).toContain('lint ok')
expect(result.testOutput).toContain('test ok')
})
test('skips test if lint fails', async () => {
const result = await runAutoFixCheck({
lint: 'echo "lint error" && exit 1',
test: 'echo "should not run"',
timeout: 5000,
cwd: '/tmp',
})
expect(result.hasErrors).toBe(true)
expect(result.lintOutput).toContain('lint error')
expect(result.testOutput).toBeUndefined()
})
test('handles timeout gracefully', async () => {
const result = await runAutoFixCheck({
lint: 'sleep 10',
timeout: 100,
cwd: '/tmp',
})
expect(result.hasErrors).toBe(true)
expect(result.timedOut).toBe(true)
})
test('returns success with no commands configured', async () => {
const result = await runAutoFixCheck({
timeout: 5000,
cwd: '/tmp',
})
expect(result.hasErrors).toBe(false)
})
test('formats error summary for AI consumption', async () => {
const result = await runAutoFixCheck({
lint: 'echo "src/foo.ts:10:5 error no-unused-vars" && exit 1',
timeout: 5000,
cwd: '/tmp',
})
expect(result.hasErrors).toBe(true)
const summary = result.errorSummary
expect(summary).toContain('Lint errors')
expect(summary).toContain('no-unused-vars')
})
})

View File

@@ -0,0 +1,169 @@
import { spawn } from 'child_process'
export interface AutoFixCheckOptions {
lint?: string
test?: string
timeout: number
cwd: string
signal?: AbortSignal
}
export interface AutoFixResult {
hasErrors: boolean
lintOutput?: string
lintExitCode?: number
testOutput?: string
testExitCode?: number
timedOut?: boolean
errorSummary?: string
}
async function runCommand(
command: string,
cwd: string,
timeout: number,
signal?: AbortSignal,
): Promise<{ stdout: string; stderr: string; exitCode: number; timedOut: boolean }> {
return new Promise((resolve) => {
if (signal?.aborted) {
resolve({ stdout: '', stderr: 'Aborted', exitCode: 1, timedOut: false })
return
}
let timedOut = false
let stdout = ''
let stderr = ''
const isWindows = process.platform === 'win32'
const proc = spawn(command, [], {
cwd,
env: { ...process.env },
shell: true,
windowsHide: true,
// On Unix, create a process group so we can kill child processes on timeout/abort
detached: !isWindows,
})
const killTree = () => {
try {
if (!isWindows && proc.pid) {
// Kill the entire process group
process.kill(-proc.pid, 'SIGTERM')
} else {
proc.kill('SIGTERM')
}
} catch {
// Process may have already exited
}
}
const onAbort = () => {
killTree()
}
signal?.addEventListener('abort', onAbort, { once: true })
proc.stdout?.on('data', (data: Buffer) => {
stdout += data.toString()
})
proc.stderr?.on('data', (data: Buffer) => {
stderr += data.toString()
})
const timer = setTimeout(() => {
timedOut = true
killTree()
}, timeout)
proc.on('close', (code) => {
clearTimeout(timer)
signal?.removeEventListener('abort', onAbort)
resolve({
stdout: stdout.slice(0, 10000),
stderr: stderr.slice(0, 10000),
exitCode: code ?? 1,
timedOut,
})
})
proc.on('error', () => {
clearTimeout(timer)
signal?.removeEventListener('abort', onAbort)
resolve({
stdout,
stderr: stderr || 'Command failed to start',
exitCode: 1,
timedOut: false,
})
})
})
}
function buildErrorSummary(result: AutoFixResult): string | undefined {
if (!result.hasErrors) return undefined
const parts: string[] = []
if (result.timedOut) {
parts.push('Command timed out.')
}
if (result.lintExitCode !== undefined && result.lintExitCode !== 0) {
parts.push(`Lint errors (exit code ${result.lintExitCode}):\n${result.lintOutput ?? ''}`)
}
if (result.testExitCode !== undefined && result.testExitCode !== 0) {
parts.push(`Test failures (exit code ${result.testExitCode}):\n${result.testOutput ?? ''}`)
}
return parts.join('\n\n')
}
export async function runAutoFixCheck(
options: AutoFixCheckOptions,
): Promise<AutoFixResult> {
const { lint, test, timeout, cwd, signal } = options
if (!lint && !test) {
return { hasErrors: false }
}
if (signal?.aborted) {
return { hasErrors: false }
}
const result: AutoFixResult = { hasErrors: false }
// Run lint first
if (lint) {
const lintResult = await runCommand(lint, cwd, timeout, signal)
result.lintOutput = (lintResult.stdout + '\n' + lintResult.stderr).trim()
result.lintExitCode = lintResult.exitCode
if (lintResult.timedOut) {
result.hasErrors = true
result.timedOut = true
result.errorSummary = buildErrorSummary(result)
return result
}
if (lintResult.exitCode !== 0) {
result.hasErrors = true
result.errorSummary = buildErrorSummary(result)
return result
}
}
// Run tests only if lint passed (or no lint configured)
if (test) {
const testResult = await runCommand(test, cwd, timeout, signal)
result.testOutput = (testResult.stdout + '\n' + testResult.stderr).trim()
result.testExitCode = testResult.exitCode
if (testResult.timedOut) {
result.hasErrors = true
result.timedOut = true
} else if (testResult.exitCode !== 0) {
result.hasErrors = true
}
}
result.errorSummary = buildErrorSummary(result)
return result
}

View File

@@ -1,4 +1,4 @@
import { afterEach, describe, expect, mock, test } from 'bun:test'
import { afterEach, beforeEach, describe, expect, mock, test } from 'bun:test'
import {
DEFAULT_GITHUB_DEVICE_SCOPE,
@@ -7,14 +7,26 @@ import {
requestDeviceCode,
} from './deviceFlow.js'
async function importFreshModule() {
mock.restore()
return import(`./deviceFlow.ts?ts=${Date.now()}-${Math.random()}`)
}
describe('requestDeviceCode', () => {
const originalFetch = globalThis.fetch
beforeEach(() => {
mock.restore()
globalThis.fetch = originalFetch
})
afterEach(() => {
globalThis.fetch = originalFetch
})
test('parses successful device code response', async () => {
const { requestDeviceCode } = await importFreshModule()
globalThis.fetch = mock(() =>
Promise.resolve(
new Response(
@@ -42,6 +54,9 @@ describe('requestDeviceCode', () => {
})
test('throws on HTTP error', async () => {
const { requestDeviceCode, GitHubDeviceFlowError } =
await importFreshModule()
globalThis.fetch = mock(() =>
Promise.resolve(new Response('bad', { status: 500 })),
)
@@ -134,6 +149,8 @@ describe('pollAccessToken', () => {
})
test('returns token when GitHub responds with access_token immediately', async () => {
const { pollAccessToken } = await importFreshModule()
let calls = 0
globalThis.fetch = mock(() => {
calls++
@@ -153,6 +170,8 @@ describe('pollAccessToken', () => {
})
test('throws on access_denied', async () => {
const { pollAccessToken } = await importFreshModule()
globalThis.fetch = mock(() =>
Promise.resolve(
new Response(JSON.stringify({ error: 'access_denied' }), {
@@ -168,3 +187,62 @@ describe('pollAccessToken', () => {
).rejects.toThrow(/denied/)
})
})
describe('exchangeForCopilotToken', () => {
const originalFetch = globalThis.fetch
afterEach(() => {
globalThis.fetch = originalFetch
})
test('parses successful Copilot token response', async () => {
const { exchangeForCopilotToken } = await importFreshModule()
globalThis.fetch = mock(() =>
Promise.resolve(
new Response(
JSON.stringify({
token: 'copilot-token-xyz',
expires_at: 1700000000,
refresh_in: 3600,
endpoints: {
api: 'https://api.githubcopilot.com',
},
}),
{ status: 200 },
),
),
)
const result = await exchangeForCopilotToken('oauth-token', globalThis.fetch)
expect(result.token).toBe('copilot-token-xyz')
expect(result.expires_at).toBe(1700000000)
expect(result.refresh_in).toBe(3600)
expect(result.endpoints.api).toBe('https://api.githubcopilot.com')
})
test('throws on HTTP error', async () => {
const { exchangeForCopilotToken, GitHubDeviceFlowError } =
await importFreshModule()
globalThis.fetch = mock(() =>
Promise.resolve(new Response('unauthorized', { status: 401 })),
)
await expect(
exchangeForCopilotToken('bad-token', globalThis.fetch),
).rejects.toThrow(GitHubDeviceFlowError)
})
test('throws on malformed response', async () => {
const { exchangeForCopilotToken } = await importFreshModule()
globalThis.fetch = mock(() =>
Promise.resolve(
new Response(JSON.stringify({ invalid: 'data' }), { status: 200 }),
),
)
await expect(
exchangeForCopilotToken('oauth-token', globalThis.fetch),
).rejects.toThrow(/Malformed/)
})
})

View File

@@ -1,19 +1,35 @@
/**
* GitHub OAuth device flow for CLI login (https://docs.github.com/en/apps/oauth-apps/building-oauth-apps/authorizing-oauth-apps#device-flow).
* Uses GitHub Copilot's official OAuth app for device authentication.
*/
import { execFileNoThrow } from '../../utils/execFileNoThrow.js'
export const DEFAULT_GITHUB_DEVICE_FLOW_CLIENT_ID = 'Ov23liXjWSSui6QIahPl'
export const DEFAULT_GITHUB_DEVICE_FLOW_CLIENT_ID = 'Iv1.b507a08c87ecfe98'
export const GITHUB_DEVICE_CODE_URL = 'https://github.com/login/device/code'
export const GITHUB_DEVICE_ACCESS_TOKEN_URL =
'https://github.com/login/oauth/access_token'
export const COPILOT_TOKEN_URL = 'https://api.github.com/copilot_internal/v2/token'
// OAuth app device flow does not accept the GitHub Models permission token
// scope (models:read). Use an OAuth-safe default.
const OAUTH_SAFE_GITHUB_DEVICE_SCOPE = 'read:user'
export const DEFAULT_GITHUB_DEVICE_SCOPE = OAUTH_SAFE_GITHUB_DEVICE_SCOPE
/** Only read:user scope — required for Copilot OAuth */
export const DEFAULT_GITHUB_DEVICE_SCOPE = 'read:user'
export const COPILOT_HEADERS: Record<string, string> = {
'User-Agent': 'GitHubCopilotChat/0.26.7',
'Editor-Version': 'vscode/1.99.3',
'Editor-Plugin-Version': 'copilot-chat/0.26.7',
'Copilot-Integration-Id': 'vscode-chat',
}
export type CopilotTokenResponse = {
token: string
expires_at: number
refresh_in: number
endpoints: {
api: string
}
}
export class GitHubDeviceFlowError extends Error {
constructor(message: string) {
@@ -30,6 +46,8 @@ export type DeviceCodeResult = {
interval: number
}
type FetchLike = (input: RequestInfo | URL, init?: RequestInit) => Promise<Response>
export function getGithubDeviceFlowClientId(): string {
return (
process.env.GITHUB_DEVICE_FLOW_CLIENT_ID?.trim() ||
@@ -44,21 +62,21 @@ function sleep(ms: number): Promise<void> {
export async function requestDeviceCode(options?: {
clientId?: string
scope?: string
fetchImpl?: typeof fetch
fetchImpl?: FetchLike
}): Promise<DeviceCodeResult> {
const clientId = options?.clientId ?? getGithubDeviceFlowClientId()
if (!clientId) {
throw new GitHubDeviceFlowError(
'No OAuth client ID: set GITHUB_DEVICE_FLOW_CLIENT_ID or paste a PAT instead.',
'No OAuth client ID: set GITHUB_DEVICE_FLOW_CLIENT_ID.',
)
}
const fetchFn = options?.fetchImpl ?? fetch
const requestedScope =
options?.scope?.trim() || DEFAULT_GITHUB_DEVICE_SCOPE
const scopesToTry =
requestedScope === OAUTH_SAFE_GITHUB_DEVICE_SCOPE
requestedScope === DEFAULT_GITHUB_DEVICE_SCOPE
? [requestedScope]
: [requestedScope, OAUTH_SAFE_GITHUB_DEVICE_SCOPE]
: [requestedScope, DEFAULT_GITHUB_DEVICE_SCOPE]
let lastError = 'Device code request failed.'
@@ -77,7 +95,7 @@ export async function requestDeviceCode(options?: {
lastError = `Device code request failed: ${res.status} ${text}`
const isInvalidScope = /invalid_scope/i.test(text)
const canRetryWithFallback =
scope !== OAUTH_SAFE_GITHUB_DEVICE_SCOPE && isInvalidScope
scope !== DEFAULT_GITHUB_DEVICE_SCOPE && isInvalidScope
if (canRetryWithFallback) {
continue
}
@@ -114,7 +132,7 @@ export type PollOptions = {
clientId?: string
initialInterval?: number
timeoutSeconds?: number
fetchImpl?: typeof fetch
fetchImpl?: FetchLike
}
export async function pollAccessToken(
@@ -197,3 +215,49 @@ export async function openVerificationUri(uri: string): Promise<void> {
// User can open the URL manually
}
}
/**
* Exchange an OAuth access token for a Copilot API token.
* The OAuth token alone cannot be used with the Copilot API endpoint.
*/
export async function exchangeForCopilotToken(
oauthToken: string,
fetchImpl?: FetchLike,
): Promise<CopilotTokenResponse> {
const fetchFn = fetchImpl ?? fetch
const res = await fetchFn(COPILOT_TOKEN_URL, {
method: 'GET',
headers: {
Accept: 'application/json',
Authorization: `Bearer ${oauthToken}`,
...COPILOT_HEADERS,
},
})
if (!res.ok) {
const text = await res.text().catch(() => '')
throw new GitHubDeviceFlowError(
`Copilot token exchange failed: ${res.status} ${text}`,
)
}
const data = (await res.json()) as Record<string, unknown>
const token = data.token
const expires_at = data.expires_at
const refresh_in = data.refresh_in
const endpoints = data.endpoints
if (
typeof token !== 'string' ||
typeof expires_at !== 'number' ||
typeof refresh_in !== 'number' ||
!endpoints ||
typeof endpoints !== 'object' ||
typeof (endpoints as Record<string, unknown>).api !== 'string'
) {
throw new GitHubDeviceFlowError('Malformed Copilot token response')
}
return {
token,
expires_at,
refresh_in,
endpoints: endpoints as { api: string },
}
}
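// Pieced together, the login flow reads roughly as below; pollAccessToken's
// full signature is truncated in this diff, so that call shape is an assumption:
// const device = await requestDeviceCode()
// // ...user opens the verification URI and enters the displayed code...
// const oauthToken = await pollAccessToken(/* device code result + PollOptions */)
// const copilot = await exchangeForCopilotToken(oauthToken)
// // copilot.token is the Bearer credential for copilot.endpoints.api;
// // refresh before copilot.expires_at (refresh_in gives the suggested cadence).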

View File

@@ -1,6 +1,11 @@
// Mock rate limits for testing [internal-only]
// The external build keeps this module as a stable no-op surface so imports
// remain valid without exposing internal-only rate-limit simulation behavior.
// This allows testing various rate limit scenarios without hitting actual limits
//
// WARNING: This is for internal testing/demo purposes only!
// The mock headers may not exactly match the API specification or real-world behavior.
// Always validate against actual API responses before relying on this for production features.
import { setMockBillingAccessOverride } from '../utils/billing.js'
import type { OverageDisabledReason } from './claudeAiLimits.js'

View File

@@ -645,7 +645,7 @@ const internalOnlyTips: Tip[] =
{
id: 'skillify',
content: async () =>
'[internal] Turn repeatable workflows into reusable project skills when they keep recurring',
'[internal] Use /skillify to turn recurring workflows into reusable project skills',
cooldownSessions: 15,
isRelevant: async () => true,
},

View File

@@ -29,6 +29,13 @@ import {
} from '../../utils/permissions/PermissionResult.js'
import { checkRuleBasedPermissions } from '../../utils/permissions/permissions.js'
import { formatError } from '../../utils/toolErrors.js'
import { getAutoFixConfig } from '../autoFix/autoFixConfig.js'
import { shouldRunAutoFix, buildAutoFixContext } from '../autoFix/autoFixHook.js'
import { runAutoFixCheck } from '../autoFix/autoFixRunner.js'
// Track auto-fix retry count per query chain to enforce maxRetries cap.
// Key: queryChainId (or 'default'), Value: number of auto-fix attempts used.
const autoFixRetryCount = new Map<string, number>()
import { isMcpTool } from '../mcp/utils.js'
import type { McpServerType, MessageUpdateLazy } from './toolExecution.js'
@@ -185,6 +192,65 @@ export async function* runPostToolUseHooks<Input extends AnyObject, Output>(
}
}
}
// Auto-fix: run lint/test if configured for this tool
const autoFixSettings = toolUseContext.getAppState().settings
const autoFixConfig = getAutoFixConfig(
autoFixSettings && typeof autoFixSettings === 'object' && 'autoFix' in autoFixSettings
? (autoFixSettings as Record<string, unknown>).autoFix
: undefined,
)
if (shouldRunAutoFix(tool.name, autoFixConfig) && autoFixConfig) {
// Enforce maxRetries cap to prevent unbounded auto-fix loops.
// Uses queryChainId to scope the counter to the current conversation turn.
const chainKey = (toolUseContext.queryTracking?.chainId as string) ?? 'default'
const currentRetries = autoFixRetryCount.get(chainKey) ?? 0
if (currentRetries >= autoFixConfig.maxRetries) {
// Max retries reached — skip auto-fix and let the user know
yield {
message: createAttachmentMessage({
type: 'hook_additional_context',
content: [
`<auto_fix_feedback>\nAUTO-FIX: Maximum retry limit (${autoFixConfig.maxRetries}) reached. ` +
`Skipping further auto-fix attempts. Please review the errors manually.\n</auto_fix_feedback>`,
],
hookName: `AutoFix:${tool.name}`,
toolUseID,
hookEvent: 'PostToolUse',
}),
}
} else {
try {
const cwd = toolUseContext.options?.cwd ?? process.cwd()
const autoFixResult = await runAutoFixCheck({
lint: autoFixConfig.lint,
test: autoFixConfig.test,
timeout: autoFixConfig.timeout,
cwd,
signal: toolUseContext.abortController.signal,
})
const autoFixContext = buildAutoFixContext(autoFixResult)
if (autoFixContext) {
autoFixRetryCount.set(chainKey, currentRetries + 1)
yield {
message: createAttachmentMessage({
type: 'hook_additional_context',
content: [autoFixContext],
hookName: `AutoFix:${tool.name}`,
toolUseID,
hookEvent: 'PostToolUse',
}),
}
} else {
// Lint/test passed — reset the retry counter for this chain
autoFixRetryCount.delete(chainKey)
}
} catch (autoFixError) {
logError(autoFixError)
}
}
}
} catch (error) {
logError(error)
}

View File

@@ -0,0 +1,68 @@
import { readdir, readFile, writeFile } from 'fs/promises'
import { basename, relative } from 'path'
import { getWikiPaths } from './paths.js'
async function listMarkdownFiles(dir: string): Promise<string[]> {
const entries = await readdir(dir, { withFileTypes: true })
const files: string[] = []
for (const entry of entries) {
const fullPath = `${dir}/${entry.name}`
if (entry.isDirectory()) {
files.push(...(await listMarkdownFiles(fullPath)))
} else if (entry.isFile() && entry.name.endsWith('.md')) {
files.push(fullPath)
}
}
return files.sort()
}
async function getPageTitle(path: string): Promise<string> {
const content = await readFile(path, 'utf8')
const titleLine = content
.split('\n')
.map(line => line.trim())
.find(line => line.startsWith('# '))
return titleLine ? titleLine.replace(/^#\s+/, '') : basename(path, '.md')
}
export async function rebuildWikiIndex(cwd: string): Promise<void> {
const paths = getWikiPaths(cwd)
const pageFiles = await listMarkdownFiles(paths.pagesDir)
const sourceFiles = await listMarkdownFiles(paths.sourcesDir)
const pageLinks = await Promise.all(
pageFiles.map(async file => {
const rel = relative(paths.root, file)
const title = await getPageTitle(file)
return `- [${title}](./${rel.replace(/\\/g, '/')})`
}),
)
const sourceLinks = sourceFiles.map(file => {
const rel = relative(paths.root, file).replace(/\\/g, '/')
const title = basename(file, '.md')
return `- [${title}](./${rel})`
})
const content = `# ${basename(cwd)} Wiki
This wiki is maintained by OpenClaude as a durable project knowledge layer.
## Core Pages
${pageLinks.length > 0 ? pageLinks.join('\n') : '- No pages yet'}
## Sources
${sourceLinks.length > 0 ? sourceLinks.join('\n') : '- No sources yet'}
## Recent Updates
- See [log.md](./log.md)
`
await writeFile(paths.indexFile, content, 'utf8')
}

View File

@@ -0,0 +1,48 @@
import { afterEach, expect, test } from 'bun:test'
import { mkdtemp, readFile, rm, writeFile } from 'fs/promises'
import { tmpdir } from 'os'
import { join } from 'path'
import { ingestLocalWikiSource } from './ingest.js'
import { getWikiPaths } from './paths.js'
const tempDirs: string[] = []
afterEach(async () => {
await Promise.all(
tempDirs.splice(0).map(dir => rm(dir, { recursive: true, force: true })),
)
})
async function makeProjectDir(): Promise<string> {
const dir = await mkdtemp(join(tmpdir(), 'openclaude-wiki-ingest-'))
tempDirs.push(dir)
return dir
}
test('ingestLocalWikiSource creates a source note and updates log/index', async () => {
const cwd = await makeProjectDir()
const sourcePath = join(cwd, 'notes.md')
await writeFile(
sourcePath,
'# Design Notes\n\nThis subsystem coordinates provider routing and session state.\nIt should be documented for future contributors.\n',
'utf8',
)
const result = await ingestLocalWikiSource(cwd, 'notes.md')
const paths = getWikiPaths(cwd)
expect(result.sourceFile).toBe('notes.md')
expect(result.title).toBe('Design Notes')
expect(result.sourceNote.startsWith('.openclaude/wiki/sources/')).toBe(true)
const sourceNote = await readFile(join(cwd, result.sourceNote), 'utf8')
expect(sourceNote).toContain('# Design Notes')
expect(sourceNote).toContain('Path: `notes.md`')
const log = await readFile(paths.logFile, 'utf8')
expect(log).toContain('Ingested `notes.md`')
const index = await readFile(paths.indexFile, 'utf8')
expect(index).toContain('./sources/')
expect(index).toContain(result.sourceNote.replace('.openclaude/wiki/', './'))
})

View File

@@ -0,0 +1,93 @@
import { appendFile, readFile, stat, writeFile } from 'fs/promises'
import { basename, extname, isAbsolute, relative, resolve } from 'path'
import { initializeWiki } from './init.js'
import { rebuildWikiIndex } from './indexBuilder.js'
import { getWikiPaths } from './paths.js'
import type { WikiIngestResult } from './types.js'
import {
extractTitleFromText,
sanitizeWikiSlug,
summarizeText,
} from './utils.js'
function buildSourceNote(params: {
title: string
sourcePath: string
ingestedAt: string
summary: string
excerpt: string
}): string {
const { title, sourcePath, ingestedAt, summary, excerpt } = params
return `# ${title}
## Source
- Path: \`${sourcePath}\`
- Ingested at: ${ingestedAt}
## Summary
${summary}
## Excerpt
\`\`\`
${excerpt}
\`\`\`
## Linked Pages
- [Architecture](../pages/architecture.md)
`
}
function buildLogEntry(sourcePath: string, title: string, ingestedAt: string): string {
return `- ${ingestedAt}: Ingested \`${sourcePath}\` into source note "${title}"`
}
export async function ingestLocalWikiSource(
cwd: string,
rawPath: string,
): Promise<WikiIngestResult> {
await initializeWiki(cwd)
const resolvedPath = isAbsolute(rawPath) ? rawPath : resolve(cwd, rawPath)
const fileInfo = await stat(resolvedPath)
if (!fileInfo.isFile()) {
throw new Error(`Not a file: ${resolvedPath}`)
}
const content = await readFile(resolvedPath, 'utf8')
const relSourcePath = relative(cwd, resolvedPath).replace(/\\/g, '/')
const ingestedAt = new Date().toISOString()
const baseName = basename(resolvedPath, extname(resolvedPath))
const title = extractTitleFromText(baseName, content)
const summary = summarizeText(content)
const excerpt = content.split('\n').slice(0, 20).join('\n').trim()
const slug = sanitizeWikiSlug(`${baseName}-${Date.now()}`) || `source-${Date.now()}`
const paths = getWikiPaths(cwd)
const sourceNotePath = `${paths.sourcesDir}/${slug}.md`
await writeFile(
sourceNotePath,
buildSourceNote({
title,
sourcePath: relSourcePath,
ingestedAt,
summary,
excerpt,
}),
'utf8',
)
await appendFile(paths.logFile, `${buildLogEntry(relSourcePath, title, ingestedAt)}\n`, 'utf8')
await rebuildWikiIndex(cwd)
return {
sourceFile: relSourcePath,
sourceNote: relative(cwd, sourceNotePath).replace(/\\/g, '/'),
summary,
title,
}
}
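A usage sketch for the ingest flow above; the file path and logged values are illustrative only:

// Hypothetical usage, not part of this diff. The slug embeds Date.now(),
// so the source note name varies per run.
import { ingestLocalWikiSource } from './ingest.js'

const result = await ingestLocalWikiSource(process.cwd(), 'docs/design.md')
console.log(result.title) // first heading of docs/design.md, or its basename
console.log(result.sourceNote) // e.g. .openclaude/wiki/sources/design-1765432100000.md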


@@ -0,0 +1,54 @@
import { afterEach, expect, test } from 'bun:test'
import { mkdtemp, readFile, rm } from 'fs/promises'
import { tmpdir } from 'os'
import { join } from 'path'
import { initializeWiki } from './init.js'
import { getWikiPaths } from './paths.js'
const tempDirs: string[] = []
afterEach(async () => {
await Promise.all(
tempDirs.splice(0).map(dir => rm(dir, { recursive: true, force: true })),
)
})
async function makeProjectDir(): Promise<string> {
const dir = await mkdtemp(join(tmpdir(), 'openclaude-wiki-init-'))
tempDirs.push(dir)
return dir
}
test('initializeWiki creates the expected wiki scaffold', async () => {
const cwd = await makeProjectDir()
const result = await initializeWiki(cwd)
const paths = getWikiPaths(cwd)
expect(result.alreadyExisted).toBe(false)
expect(result.createdFiles).toEqual([
'.openclaude/wiki/schema.md',
'.openclaude/wiki/index.md',
'.openclaude/wiki/log.md',
'.openclaude/wiki/pages/architecture.md',
])
expect(await readFile(paths.schemaFile, 'utf8')).toContain(
'# OpenClaude Wiki Schema',
)
expect(await readFile(paths.indexFile, 'utf8')).toContain('Wiki')
expect(await readFile(paths.logFile, 'utf8')).toContain(
'Wiki initialized by OpenClaude',
)
expect(await readFile(join(paths.pagesDir, 'architecture.md'), 'utf8')).toContain(
'# Architecture',
)
})
test('initializeWiki is idempotent and preserves existing files', async () => {
const cwd = await makeProjectDir()
await initializeWiki(cwd)
const second = await initializeWiki(cwd)
expect(second.alreadyExisted).toBe(true)
expect(second.createdFiles).toEqual([])
})

src/services/wiki/init.ts

@@ -0,0 +1,140 @@
import { mkdir, writeFile } from 'fs/promises'
import { basename, relative } from 'path'
import { getWikiPaths } from './paths.js'
import type { WikiInitResult } from './types.js'
function buildSchemaTemplate(projectName: string): string {
return `# OpenClaude Wiki Schema
This wiki stores durable, human-readable project knowledge for ${projectName}.
## Goals
- Keep useful project knowledge in markdown, not only in chat history
- Prefer synthesized facts over raw copy-paste
- Keep source attribution explicit
- Make pages easy for both humans and agents to update
## Structure
- \`index.md\`: top-level navigation and major topics
- \`log.md\`: append-only update log
- \`pages/\`: durable topic and architecture pages
- \`sources/\`: source ingestion notes and summaries
## Page Rules
- Keep pages focused on one topic
- Use stable headings such as:
- \`## Summary\`
- \`## Key Facts\`
- \`## Relationships\`
- \`## Open Questions\`
- \`## Sources\`
- Add or update facts only when they are grounded in project files or explicit source notes
- Prefer editing an existing page over creating duplicates
`
}
function buildIndexTemplate(projectName: string): string {
return `# ${projectName} Wiki
This wiki is maintained by OpenClaude as a durable project knowledge layer.
## Core Pages
- [Architecture](./pages/architecture.md)
## Sources
- Source notes live in [sources/](./sources/)
## Recent Updates
- See [log.md](./log.md)
`
}
function buildLogTemplate(timestamp: string): string {
return `# Wiki Update Log
- ${timestamp}: Wiki initialized by OpenClaude
`
}
function buildArchitectureTemplate(projectName: string): string {
return `# Architecture
## Summary
High-level architecture notes for ${projectName}.
## Key Facts
- This page is the starting point for durable architecture knowledge.
## Relationships
- Link this page to major subsystems as the wiki grows.
## Open Questions
- What are the most important runtime subsystems?
- Which files best represent the system architecture?
## Sources
- Wiki bootstrap
`
}
async function ensureFile(
filePath: string,
content: string,
createdFiles: string[],
): Promise<void> {
try {
await writeFile(filePath, content, { encoding: 'utf8', flag: 'wx' })
createdFiles.push(filePath)
} catch (error: unknown) {
if (
typeof error === 'object' &&
error !== null &&
'code' in error &&
error.code === 'EEXIST'
) {
return
}
throw error
}
}
export async function initializeWiki(cwd: string): Promise<WikiInitResult> {
const paths = getWikiPaths(cwd)
const createdDirectories: string[] = []
const createdFiles: string[] = []
for (const dir of [paths.root, paths.pagesDir, paths.sourcesDir]) {
await mkdir(dir, { recursive: true })
createdDirectories.push(dir)
}
const projectName = basename(cwd)
const timestamp = new Date().toISOString()
await ensureFile(paths.schemaFile, buildSchemaTemplate(projectName), createdFiles)
await ensureFile(paths.indexFile, buildIndexTemplate(projectName), createdFiles)
await ensureFile(paths.logFile, buildLogTemplate(timestamp), createdFiles)
await ensureFile(
`${paths.pagesDir}/architecture.md`,
buildArchitectureTemplate(projectName),
createdFiles,
)
return {
root: paths.root,
createdFiles: createdFiles.map(file => relative(cwd, file)),
createdDirectories: createdDirectories.map(dir => relative(cwd, dir)),
alreadyExisted: createdFiles.length === 0,
}
}
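A sketch of the idempotency contract, assuming the module is saved as init.ts as the file header above indicates; the 'wx' (exclusive-create) flag in ensureFile makes re-initialization a no-op without a separate existence check:

// Hypothetical sketch, not part of this diff.
import { initializeWiki } from './init.js'

const first = await initializeWiki(process.cwd())
console.log(first.alreadyExisted) // false on a fresh project
console.log(first.createdFiles) // four scaffold files, relative to cwd

const second = await initializeWiki(process.cwd())
console.log(second.alreadyExisted) // true; existing files are left untouched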


@@ -0,0 +1,18 @@
import { join } from 'path'
import type { WikiPaths } from './types.js'
export const OPENCLAUDE_DIRNAME = '.openclaude'
export const WIKI_DIRNAME = 'wiki'
export function getWikiPaths(cwd: string): WikiPaths {
const root = join(cwd, OPENCLAUDE_DIRNAME, WIKI_DIRNAME)
return {
root,
pagesDir: join(root, 'pages'),
sourcesDir: join(root, 'sources'),
schemaFile: join(root, 'schema.md'),
indexFile: join(root, 'index.md'),
logFile: join(root, 'log.md'),
}
}
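A sketch of the derived path shape, assuming a POSIX path separator:

// Hypothetical sketch, not part of this diff. Every wiki location
// derives from a single root under the project directory.
import { getWikiPaths } from './paths.js'

const paths = getWikiPaths('/repo')
// paths.root === '/repo/.openclaude/wiki'
// paths.pagesDir === '/repo/.openclaude/wiki/pages'
// paths.indexFile === '/repo/.openclaude/wiki/index.md'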


@@ -0,0 +1,55 @@
import { afterEach, expect, test } from 'bun:test'
import { mkdtemp, mkdir, rm, writeFile } from 'fs/promises'
import { tmpdir } from 'os'
import { join } from 'path'
import { initializeWiki } from './init.js'
import { getWikiPaths } from './paths.js'
import { getWikiStatus } from './status.js'
const tempDirs: string[] = []
afterEach(async () => {
await Promise.all(
tempDirs.splice(0).map(dir => rm(dir, { recursive: true, force: true })),
)
})
async function makeProjectDir(): Promise<string> {
const dir = await mkdtemp(join(tmpdir(), 'openclaude-wiki-status-'))
tempDirs.push(dir)
return dir
}
test('getWikiStatus reports uninitialized wiki state', async () => {
const cwd = await makeProjectDir()
const status = await getWikiStatus(cwd)
expect(status.initialized).toBe(false)
expect(status.pageCount).toBe(0)
expect(status.sourceCount).toBe(0)
expect(status.lastUpdatedAt).toBeNull()
})
test('getWikiStatus counts pages and sources for initialized wiki', async () => {
const cwd = await makeProjectDir()
await initializeWiki(cwd)
const paths = getWikiPaths(cwd)
await writeFile(join(paths.pagesDir, 'commands.md'), '# Commands\n', 'utf8')
await mkdir(join(paths.sourcesDir, 'external'), { recursive: true })
await writeFile(
join(paths.sourcesDir, 'external', 'spec.md'),
'# Spec\n',
'utf8',
)
const status = await getWikiStatus(cwd)
expect(status.initialized).toBe(true)
expect(status.pageCount).toBe(2)
expect(status.sourceCount).toBe(1)
expect(status.hasSchema).toBe(true)
expect(status.hasIndex).toBe(true)
expect(status.hasLog).toBe(true)
expect(status.lastUpdatedAt).not.toBeNull()
})


@@ -0,0 +1,82 @@
import { readdir, stat } from 'fs/promises'
import { getWikiPaths } from './paths.js'
import type { WikiStatus } from './types.js'
async function pathExists(path: string): Promise<boolean> {
try {
await stat(path)
return true
} catch {
return false
}
}
async function listMarkdownFiles(dir: string): Promise<string[]> {
if (!(await pathExists(dir))) {
return []
}
const entries = await readdir(dir, { withFileTypes: true })
const files: string[] = []
for (const entry of entries) {
const fullPath = `${dir}/${entry.name}`
if (entry.isDirectory()) {
files.push(...(await listMarkdownFiles(fullPath)))
} else if (entry.isFile() && entry.name.endsWith('.md')) {
files.push(fullPath)
}
}
return files
}
async function getLastUpdatedAt(pathsToCheck: string[]): Promise<string | null> {
const mtimes: number[] = []
for (const path of pathsToCheck) {
try {
const info = await stat(path)
mtimes.push(info.mtimeMs)
} catch {
continue
}
}
if (mtimes.length === 0) {
return null
}
return new Date(Math.max(...mtimes)).toISOString()
}
export async function getWikiStatus(cwd: string): Promise<WikiStatus> {
const paths = getWikiPaths(cwd)
const [hasRoot, hasSchema, hasIndex, hasLog, pages, sources] =
await Promise.all([
pathExists(paths.root),
pathExists(paths.schemaFile),
pathExists(paths.indexFile),
pathExists(paths.logFile),
listMarkdownFiles(paths.pagesDir),
listMarkdownFiles(paths.sourcesDir),
])
return {
initialized: hasRoot && hasSchema && hasIndex && hasLog,
root: paths.root,
pageCount: pages.length,
sourceCount: sources.length,
hasSchema,
hasIndex,
hasLog,
lastUpdatedAt: await getLastUpdatedAt([
paths.schemaFile,
paths.indexFile,
paths.logFile,
...pages,
...sources,
]),
}
}
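A usage sketch; since status is computed purely from stat/readdir, calling it on an uninitialized project is safe:

// Hypothetical sketch, not part of this diff.
import { getWikiStatus } from './status.js'

const status = await getWikiStatus(process.cwd())
if (!status.initialized) {
  console.log('wiki not initialized yet')
} else {
  console.log(`${status.pageCount} pages, ${status.sourceCount} sources`)
  console.log(`last updated: ${status.lastUpdatedAt}`)
}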


@@ -0,0 +1,33 @@
export type WikiPaths = {
root: string
pagesDir: string
sourcesDir: string
schemaFile: string
indexFile: string
logFile: string
}
export type WikiInitResult = {
root: string
createdFiles: string[]
createdDirectories: string[]
alreadyExisted: boolean
}
export type WikiStatus = {
initialized: boolean
root: string
pageCount: number
sourceCount: number
hasSchema: boolean
hasIndex: boolean
hasLog: boolean
lastUpdatedAt: string | null
}
export type WikiIngestResult = {
sourceFile: string
sourceNote: string
summary: string
title: string
}


@@ -0,0 +1,36 @@
export function sanitizeWikiSlug(value: string): string {
return value
.toLowerCase()
.replace(/[^a-z0-9]+/g, '-')
.replace(/^-+|-+$/g, '')
.replace(/-{2,}/g, '-')
}
export function summarizeText(input: string, maxLength = 280): string {
const normalized = input.replace(/\s+/g, ' ').trim()
if (!normalized) {
return 'No summary available.'
}
if (normalized.length <= maxLength) {
return normalized
}
return `${normalized.slice(0, maxLength - 1).trimEnd()}…` // truncate and append an ellipsis; total length stays within maxLength
}
export function extractTitleFromText(
fallbackName: string,
content: string,
): string {
const firstNonEmptyLine = content
.split('\n')
.map(line => line.trim())
.find(Boolean)
if (!firstNonEmptyLine) {
return fallbackName
}
return firstNonEmptyLine.replace(/^#+\s*/, '') || fallbackName
}
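Expected behavior of these helpers, worked through from the implementations above (hypothetical inputs, not part of this diff):

// Hypothetical examples; results follow directly from the code above.
import { extractTitleFromText, sanitizeWikiSlug, summarizeText } from './utils.js'

sanitizeWikiSlug('Design Notes (v2)!') // 'design-notes-v2'
summarizeText('  lots   of\nwhitespace  ') // 'lots of whitespace'
extractTitleFromText('notes', '# Design\nbody text') // 'Design'
extractTitleFromText('notes', '\n\n') // 'notes' (fallback to the basename)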


@@ -0,0 +1,13 @@
import type { Command } from '../commands.js'
import { createStore } from './store.js'
const pluginCommandsStore = createStore<Command[]>([])
export const getPluginCommandsState = (): Command[] =>
pluginCommandsStore.getState()
export const subscribePluginCommands = pluginCommandsStore.subscribe
export function setPluginCommandsState(commands: Command[]): void {
pluginCommandsStore.setState(() => [...commands])
}
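A usage sketch; the subscribe return type and the module filename are assumptions, since createStore's signature is not shown in this diff:

// Hypothetical sketch, not part of this diff. Assumes the conventional
// subscribe(listener) => unsubscribe shape for this kind of store.
import {
  getPluginCommandsState,
  setPluginCommandsState,
  subscribePluginCommands,
} from './pluginCommands.js' // assumed filename

const unsubscribe = subscribePluginCommands(() => {
  console.log('plugin commands changed:', getPluginCommandsState().length)
})
setPluginCommandsState([]) // setState copies the array, so callers cannot mutate store state
unsubscribe()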


@@ -27,19 +27,19 @@ function getClaudeCodeGuideBasePrompt(): string {
? `${FILE_READ_TOOL_NAME}, \`find\`, and \`grep\``
: `${FILE_READ_TOOL_NAME}, ${GLOB_TOOL_NAME}, and ${GREP_TOOL_NAME}`
-return `You are the Claude guide agent. Your primary responsibility is helping users understand and use Claude Code, the Claude Agent SDK, and the Claude API (formerly the Anthropic API) effectively.
+return `You are the OpenClaude guide agent. Your primary responsibility is helping users understand and use OpenClaude, the Claude Agent SDK, and the Claude API (formerly the Anthropic API) effectively.
**Your expertise spans three domains:**
-1. **Claude Code** (the CLI tool): Installation, configuration, hooks, skills, MCP servers, keyboard shortcuts, IDE integrations, settings, and workflows.
+1. **OpenClaude** (the CLI tool): Installation, configuration, hooks, skills, MCP servers, keyboard shortcuts, IDE integrations, settings, and workflows.
-2. **Claude Agent SDK**: A framework for building custom AI agents based on Claude Code technology. Available for Node.js/TypeScript and Python.
+2. **Claude Agent SDK**: A framework for building custom AI agents. Available for Node.js/TypeScript and Python.
3. **Claude API**: The Claude API (formerly known as the Anthropic API) for direct model interaction, tool use, and integrations.
**Documentation sources:**
-- **Claude Code docs** (${CLAUDE_CODE_DOCS_MAP_URL}): Fetch this for questions about the Claude Code CLI tool, including:
+- **Claude Code docs** (${CLAUDE_CODE_DOCS_MAP_URL}): Use these as the compatibility reference for questions about the OpenClaude CLI tool, including:
- Installation, setup, and getting started
- Hooks (pre/post command execution)
- Custom skills
@@ -97,7 +97,7 @@ function getFeedbackGuideline(): string {
export const CLAUDE_CODE_GUIDE_AGENT: BuiltInAgentDefinition = {
agentType: CLAUDE_CODE_GUIDE_AGENT_TYPE,
-whenToUse: `Use this agent when the user asks questions ("Can Claude...", "Does Claude...", "How do I...") about: (1) Claude Code (the CLI tool) - features, hooks, slash commands, MCP servers, settings, IDE integrations, keyboard shortcuts; (2) Claude Agent SDK - building custom agents; (3) Claude API (formerly Anthropic API) - API usage, tool use, Anthropic SDK usage. **IMPORTANT:** Before spawning a new agent, check if there is already a running or recently completed claude-code-guide agent that you can continue via ${SEND_MESSAGE_TOOL_NAME}.`,
+whenToUse: `Use this agent when the user asks questions ("Can OpenClaude...", "Does OpenClaude...", "How do I...") about: (1) OpenClaude (the CLI tool) - features, hooks, slash commands, MCP servers, settings, IDE integrations, keyboard shortcuts; (2) Claude Agent SDK - building custom agents; (3) Claude API (formerly Anthropic API) - API usage, tool use, Anthropic SDK usage. **IMPORTANT:** Before spawning a new agent, check if there is already a running or recently completed claude-code-guide agent that you can continue via ${SEND_MESSAGE_TOOL_NAME}.`,
// Ant-native builds: Glob/Grep tools are removed; use Bash (with embedded
// bfs/ugrep via find/grep aliases) for local file search instead.
tools: hasEmbeddedSearchTools()


@@ -21,7 +21,7 @@ function getExploreSystemPrompt(): string {
? `- Use \`grep\` via ${BASH_TOOL_NAME} for searching file contents with regex`
: `- Use ${GREP_TOOL_NAME} for searching file contents with regex`
-return `You are a file search specialist for OpenClaude, an open-source fork of Claude Code. You excel at thoroughly navigating and exploring codebases.
+return `You are a file search specialist for OpenClaude. You excel at thoroughly navigating and exploring codebases.
=== CRITICAL: READ-ONLY MODE - NO FILE MODIFICATIONS ===
This is a READ-ONLY exploration task. You are STRICTLY PROHIBITED from:


@@ -1,6 +1,6 @@
import type { BuiltInAgentDefinition } from '../loadAgentsDir.js'
-const SHARED_PREFIX = `You are an agent for OpenClaude, an open-source fork of Claude Code. Given the user's message, you should use the tools available to complete the task. Complete the task fully—don't gold-plate, but don't leave it half-done.`
+const SHARED_PREFIX = `You are an agent for OpenClaude, an open-source coding agent and CLI. Given the user's message, you should use the tools available to complete the task. Complete the task fully—don't gold-plate, but don't leave it half-done.`
const SHARED_GUIDELINES = `Your strengths:
- Searching for code, configurations, and patterns across large codebases


@@ -18,7 +18,7 @@ function getPlanV2SystemPrompt(): string {
? `\`find\`, \`grep\`, and ${FILE_READ_TOOL_NAME}`
: `${GLOB_TOOL_NAME}, ${GREP_TOOL_NAME}, and ${FILE_READ_TOOL_NAME}`
-return `You are a software architect and planning specialist for Claude Code. Your role is to explore the codebase and design implementation plans.
+return `You are a software architect and planning specialist for OpenClaude. Your role is to explore the codebase and design implementation plans.
=== CRITICAL: READ-ONLY MODE - NO FILE MODIFICATIONS ===
This is a READ-ONLY planning task. You are STRICTLY PROHIBITED from:


@@ -1,6 +1,6 @@
import type { BuiltInAgentDefinition } from '../loadAgentsDir.js'
-const STATUSLINE_SYSTEM_PROMPT = `You are a status line setup agent for Claude Code. Your job is to create or update the statusLine command in the user's Claude Code settings.
+const STATUSLINE_SYSTEM_PROMPT = `You are a status line setup agent for OpenClaude. Your job is to create or update the statusLine command in the user's OpenClaude settings.
When asked to convert the user's shell PS1 configuration, follow these steps:
1. Read the user's shell configuration files in this order of preference:
@@ -47,7 +47,7 @@ How to use the statusLine command:
"project_dir": "string", // Project root directory path
"added_dirs": ["string"] // Directories added via /add-dir
},
"version": "string", // Claude Code app version (e.g., "1.0.71")
"version": "string", // OpenClaude app version (e.g., "1.0.71")
"output_style": {
"name": "string", // Output style name (e.g., "default", "Explanatory", "Learning")
},
@@ -110,10 +110,11 @@ How to use the statusLine command:
To display both 5-hour and 7-day limits when available:
- input=$(cat); five=$(echo "$input" | jq -r '.rate_limits.five_hour.used_percentage // empty'); week=$(echo "$input" | jq -r '.rate_limits.seven_day.used_percentage // empty'); out=""; [ -n "$five" ] && out="5h:$(printf '%.0f' "$five")%"; [ -n "$week" ] && out="$out 7d:$(printf '%.0f' "$week")%"; echo "$out"
-2. For longer commands, you can save a new file in the user's ~/.claude directory, e.g.:
-- ~/.claude/statusline-command.sh and reference that file in the settings.
+2. For longer commands, save a new file alongside the user's existing settings home instead of creating a new config directory.
+- Prefer the currently used config home if one already exists.
+- In most existing setups this will be ~/.claude/statusline-command.sh.
-3. Update the user's ~/.claude/settings.json with:
+3. Update the user's resolved settings file with:
{
"statusLine": {
"type": "command",
@@ -121,20 +122,23 @@ How to use the statusLine command:
}
}
-4. If ~/.claude/settings.json is a symlink, update the target file instead.
+4. Use the existing config home already in use by the installation.
+- Prefer the settings file that already exists.
+- If the settings file is a symlink, update the target file instead.
+- Do not create a second parallel settings home just because another directory name also exists.
Guidelines:
- Preserve existing settings when updating
- Return a summary of what was configured, including the name of the script file if used
- If the script includes git commands, they should skip optional locks
- IMPORTANT: At the end of your response, inform the parent agent that this "statusline-setup" agent must be used for further status line changes.
-Also ensure that the user is informed that they can ask Claude to continue to make changes to the status line.
+Also ensure that the user is informed that they can ask OpenClaude to continue to make changes to the status line.
`
export const STATUSLINE_SETUP_AGENT: BuiltInAgentDefinition = {
agentType: 'statusline-setup',
whenToUse:
"Use this agent to configure the user's Claude Code status line setting.",
"Use this agent to configure the user's OpenClaude status line setting.",
tools: ['Read', 'Edit'],
source: 'built-in',
baseDir: 'built-in',


@@ -14,8 +14,21 @@ import {
export const inputSchema = lazySchema(() => z.object({}).passthrough())
type InputSchema = ReturnType<typeof inputSchema>
+// MCP tools can return either a plain string or an array of content blocks
+// (text, images, etc.). The outputSchema must reflect both shapes so the model
+// knows rich content is possible.
export const outputSchema = lazySchema(() =>
-z.string().describe('MCP tool execution result'),
+z.union([
+z.string().describe('MCP tool execution result as text'),
+z
+.array(
+z.object({
+type: z.string(),
+text: z.string().optional(),
+}),
+)
+.describe('MCP tool execution result as content blocks'),
+]),
)
type OutputSchema = ReturnType<typeof outputSchema>
@@ -65,7 +78,19 @@ export const MCPTool = buildTool({
renderToolUseProgressMessage,
renderToolResultMessage,
isResultTruncated(output: Output): boolean {
-return isOutputLineTruncated(output)
+if (typeof output === 'string') {
+return isOutputLineTruncated(output)
+}
+// Array of content blocks — check if any text block exceeds the display limit
+if (Array.isArray(output)) {
+return output.some(
+block =>
+block?.type === 'text' &&
+typeof block.text === 'string' &&
+isOutputLineTruncated(block.text),
+)
+}
+return false
},
mapToolResultToToolResultBlockParam(content, toolUseID) {
return {

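A standalone sketch of the widened truncation check above; isOutputLineTruncated is stubbed with an assumed per-line limit because the real helper is not shown in this diff:

// Standalone sketch, not part of this diff. The limit and the stub are
// assumptions made purely to illustrate the array-aware check.
type ContentBlock = { type: string; text?: string }

const ASSUMED_LINE_LIMIT = 2000

function isOutputLineTruncated(text: string): boolean {
  return text.split('\n').some(line => line.length > ASSUMED_LINE_LIMIT)
}

function isResultTruncated(output: string | ContentBlock[]): boolean {
  if (typeof output === 'string') return isOutputLineTruncated(output)
  return output.some(
    block =>
      block?.type === 'text' &&
      typeof block.text === 'string' &&
      isOutputLineTruncated(block.text),
  )
}

isResultTruncated('short result') // false
isResultTruncated([{ type: 'text', text: 'x'.repeat(5000) }]) // true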

@@ -1,6 +1,29 @@
import { describe, expect, test } from 'bun:test'
import type { Command } from '../../commands.js'
import { SkillTool } from './SkillTool.js'
import { renderToolUseMessage } from './UI.js'
function createPromptCommand(
name: string,
options: {
source?: 'builtin' | 'plugin' | 'mcp' | 'bundled'
loadedFrom?: Command['loadedFrom']
} = {},
): Command {
return {
type: 'prompt',
name,
description: `${name} description`,
progressMessage: `${name} progress`,
contentLength: 0,
source: options.source ?? 'builtin',
loadedFrom: options.loadedFrom,
async getPromptForCommand() {
return []
},
}
}
describe('SkillTool missing parameter handling', () => {
test('missing skill stays required at the schema level', async () => {
@@ -29,3 +52,47 @@ describe('SkillTool missing parameter handling', () => {
expect(parsed.success).toBe(true)
})
})
describe('SkillTool renderToolUseMessage', () => {
test('plugin skills render correctly without plugin command metadata', () => {
const pluginSkillName = 'plugin:review-pr'
expect(
renderToolUseMessage(
{ skill: pluginSkillName },
{
commands: [],
},
),
).toBe(pluginSkillName)
expect(
renderToolUseMessage(
{ skill: pluginSkillName },
{
commands: [
createPromptCommand(pluginSkillName, {
source: 'plugin',
loadedFrom: 'plugin',
}),
],
},
),
).toBe(pluginSkillName)
})
test('legacy commands still render with a slash prefix when metadata is present', () => {
expect(
renderToolUseMessage(
{ skill: 'legacy-command' },
{
commands: [
createPromptCommand('legacy-command', {
loadedFrom: 'commands_DEPRECATED',
}),
],
},
),
).toBe('/legacy-command')
})
})

Some files were not shown because too many files have changed in this diff.