Compare commits

..

28 Commits

Author SHA1 Message Date
gnanam1990
b5dbb71a44 fix: preserve explicit ollama startup intent 2026-04-07 19:40:47 +05:30
gnanam1990
b2cabdd950 fix: preserve explicit provider intent for active profiles 2026-04-07 18:47:35 +05:30
gnanam1990
139610950c fix: preserve explicit provider startup intent 2026-04-07 14:50:20 +05:30
gnanam1990
65dd19cf87 fix: preserve explicit startup provider selection 2026-04-07 10:08:30 +05:30
Vasanth T
e365cb4010 fix: address code scanning alerts (#434)
* fix: address code scanning alerts

Parse Gemini hostnames instead of matching raw URL substrings, redact gRPC error logs, and harden the Finder drag-drop test escape helper so the flagged paths are fixed without regressing working behavior.

* Potential fix for pull request finding 'CodeQL / Clear-text logging of sensitive information'

Co-authored-by: Copilot Autofix powered by AI <62310815+github-advanced-security[bot]@users.noreply.github.com>

* fix: restore safe grpc error summaries

A later autofix commit removed the exported gRPC error summarizer while the new regression test still imported it. Restore the safe name/code-only summary so CI stays green without reintroducing clear-text logging.

* fix: keep grpc logging generic

Remove the stale helper/test pair and keep the gRPC startup and stream logs free of error-derived data so the CodeQL clear-text logging alert stays closed while the rest of the security fixes remain intact.

---------

Co-authored-by: OpenClaude Worker 3 <worker-3@openclaude.local>
Co-authored-by: Copilot Autofix powered by AI <62310815+github-advanced-security[bot]@users.noreply.github.com>
2026-04-07 00:43:09 +08:00
CRABHIVE
52d33a87a0 fix: include MCP tool results in microcompact to reduce token waste (#348)
## Summary

- Added `isCompactableTool()` helper in `microCompact.ts` that matches
  both the existing COMPACTABLE_TOOLS set and any tool prefixed `mcp__`
- MCP tool results were never compacted because the hardcoded allowlist
  only contained 9 built-in tools — MCP tools fell through and persisted
  in full for the entire session, wasting 10-500K tokens/session
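
A minimal sketch of the matching rule described above; the built-in names here are illustrative, not the actual nine-entry set:

```ts
// Hypothetical subset; the real COMPACTABLE_TOOLS set lives in microCompact.ts.
const COMPACTABLE_TOOLS = new Set(["Read", "Grep", "Glob"]);

function isCompactableTool(toolName: string): boolean {
  // Built-in allowlist, plus any MCP tool (MCP tools are prefixed mcp__).
  return COMPACTABLE_TOOLS.has(toolName) || toolName.startsWith("mcp__");
}
```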

## Impact

- user-facing impact: long sessions using MCP servers (GitHub, Slack,
  Playwright, etc.) will compact stale MCP tool results, reducing token
  usage and delaying autocompact triggers
- developer/maintainer impact: new MCP servers are automatically covered
  via prefix match — no need to update the allowlist per-server

## Testing

- [x] `bun run build`
- [x] `bun run smoke`
- [x] focused tests: `bun test src/services/compact/microCompact.test.ts`
  - module exports load correctly
  - estimateMessageTokens counts MCP tool_use blocks
  - microcompactMessages processes MCP tools without error
  - microcompactMessages processes mixed built-in and MCP tools

## Notes

- provider/model path tested: n/a (compaction logic is model-agnostic)
- screenshots attached (if UI changed): n/a
- follow-up work or known limitations: subagent results and thinking
  blocks are still not compacted (separate RFCs)

https://claude.ai/code/session_01D7kprMn4c66a5WrZscF7rv

Co-authored-by: Claude <noreply@anthropic.com>
2026-04-06 23:13:20 +08:00
KRATOS
b4bd95b477 fix: normalize malformed Bash tool arguments from OpenAI-compatible providers (#385)
* fix: normalize malformed Bash tool arguments from OpenAI-compatible providers

* fix: keep invalid Bash tool args from becoming commands

* fix: preserve malformed Bash JSON literals

* test: stabilize rebased PR 385 checks

* test: isolate provider profile env assertions

* fix: extend tool argument normalization to all tools and harden edge cases

- Extend STRING_ARGUMENT_TOOL_FIELDS to normalize Read, Write, Edit,
  Glob, and Grep plain-string arguments (fixes "Invalid tool parameters"
  errors reported by VennDev)
- Normalize streaming Bash args regardless of finish_reason, not only
  when finish_reason is 'tool_calls'
- Broaden isLikelyStructuredObjectLiteral to catch malformed object-shaped
  strings like {command:"pwd"} and {'command':'pwd'} (fixes CR2 from
  Vasanthdev2004)
- Apply blank/object-literal guard to all tools, not just Bash
- Extract duplicated JSON repair suffix combinations into shared constant
- Add 32 isolated unit tests for toolArgumentNormalization
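
A minimal sketch of the broadened object-literal check described in the list above (helper shape assumed, not the exact implementation):

```ts
// Catch malformed object-shaped strings such as {command:"pwd"} and
// {'command':'pwd'}, not only strict JSON.
function isLikelyStructuredObjectLiteral(raw: string): boolean {
  const s = raw.trim();
  if (!s.startsWith("{") || !s.endsWith("}")) return false;
  // A bare, single-quoted, or double-quoted key followed by a colon.
  return /^\{\s*(?:[A-Za-z_$][\w$]*|'[^']*'|"[^"]*")\s*:/.test(s);
}
```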

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* fix: skip streaming normalization on finish_reason length

Truncated tool calls (finish_reason: 'length') now preserve the raw
buffer instead of normalizing into executable commands, preventing
incomplete commands from becoming runnable.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* fix: comprehensive tool argument normalization hardening

- Remove all { raw: ... } returns that caused InputValidationError with
  z.strictObject schemas — return {} instead for clean Zod errors
- Extend normalizeAtStop buffering to all mapped tools (Read, Write,
  Edit, Glob, Grep) so streaming paths also get normalized
- Make repairPossiblyTruncatedObjectJson generic — repair any valid
  JSON object, not just ones with a command field
- Export hasToolFieldMapping for streaming normalizeAtStop decision
- Skip normalization on finish_reason: length to preserve raw truncated
  buffer
- Update all test expectations to match new behavior

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

---------

Co-authored-by: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-04-06 22:08:45 +08:00
Otávio Carvalho
1e057025d6 Fix GLM-5 and other reasoning models appearing to hang via OpenAI shim (#365)
* Fix GLM-5 and other reasoning models appearing to hang via OpenAI shim

Reasoning models like GLM-5 and DeepSeek stream chain-of-thought in
`reasoning_content` while `content` stays empty (""). The OpenAI shim
only read `delta.content`, so it saw empty strings and never emitted
any Anthropic stream events — causing the UI to appear frozen.

- Add `reasoning_content` to streaming chunk and non-streaming response types
- Emit `reasoning_content` as thinking blocks (thinking_delta) in streaming mode
- Properly transition from thinking to text blocks when content phase begins
- Fall back to `reasoning_content` in non-streaming mode when content is null

Fixes #214
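
A sketch of the streaming mapping under assumed event shapes (the real shim also tracks block indexes and tool calls):

```ts
type Delta = { content?: string | null; reasoning_content?: string };

let inThinking = false;

function onDelta(delta: Delta, emit: (event: object) => void): void {
  if (delta.reasoning_content) {
    if (!inThinking) {
      emit({ type: "content_block_start", content_block: { type: "thinking" } });
      inThinking = true;
    }
    emit({
      type: "content_block_delta",
      delta: { type: "thinking_delta", thinking: delta.reasoning_content },
    });
  }
  if (delta.content) { // "" falls through, so empty chunks no longer stall the UI
    if (inThinking) {
      emit({ type: "content_block_stop" });
      inThinking = false;
    }
    emit({ type: "content_block_delta", delta: { type: "text_delta", text: delta.content } });
  }
}
```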

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

* Fix non-streaming reasoning_content fallback and add tests

- Use explicit empty-string check instead of || for content fallback
  so content: "" doesn't leak reasoning_content as visible text
- Close thinking block before tool call blocks in streaming path
- Add non-streaming and streaming reasoning_content tests

Co-Authored-By: GLM-5.1 <noreply@openclaude.dev>

* Fix flaky Ink reconciler tests caused by react-compiler memoization

Remove hard throw in createTextInstance that crashed when hostContext.isInsideText
was stale due to react-compiler element caching. Add timeout guards to prevent
test hangs when render errors prevent exit() from firing.

Co-Authored-By: Claude GLM-5.1 <noreply@openclaude.dev>

---------

Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
Co-authored-by: GLM-5.1 <noreply@openclaude.dev>
2026-04-06 22:02:29 +08:00
Agent_J
aff2bd87e4 fix: avoid sync github credential reads in provider manager (#428)
* fix: avoid sync github credential reads in provider manager

* test: stabilize provider manager async credential test

* fix: avoid first-frame github provider false negative

---------

Co-authored-by: KRATOS <84986124+gnanam1990@users.noreply.github.com>
2026-04-06 21:29:53 +08:00
hsain9357
72e6a945fe Fixed Gemini error "Function call is missing a thought_signature in functionCall parts" (#426)
* docs(docs): add agent guidance and repository instructions

- Created `AGENTS.md` and `CLAUDE.md` to provide high-signal guidance for AI agents and developers working in the repository.
- Outlined critical developer commands for building, testing, and running diagnostics using `bun`.
- Documented the repository architecture, source entrypoints, and core service logic.
- Defined framework-specific quirks, including module stubbing for internal modules and macro versioning.
- Established style and workflow guidelines regarding telemetry, environment variables, and security scan requirements.

* feat(api): support gemini thought signatures in openai shim

- Added `isGeminiMode` utility to detect Gemini backends via `CLAUDE_CODE_USE_GEMINI` or `OPENAI_BASE_URL`.
- Updated `convertMessages` to extract `thought_signature` from thinking blocks and inject them into tool calls.
- Implemented a fallback mechanism that provides a `skip_thought_signature_validator` string to avoid 400 validation errors when a signature is missing.
- Enhanced `openaiStreamToAnthropic` and `OpenAIShimMessages` to correctly preserve and pass through Gemini-specific metadata in `extra_content`.
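
A minimal sketch of the resulting behavior, with shapes assumed from the description above:

```ts
type ToolCall = { extra_content?: { google?: Record<string, unknown> } };

// Every tool call in a parallel set gets the turn's signature; when none was
// captured, the sentinel avoids Gemini's 400 on replayed function calls.
function attachThoughtSignatures(toolCalls: ToolCall[], signature?: string): void {
  for (const call of toolCalls) {
    call.extra_content = {
      ...call.extra_content,
      google: {
        ...call.extra_content?.google, // merge, don't overwrite, existing metadata
        thought_signature: signature ?? "skip_thought_signature_validator",
      },
    };
  }
}
```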

* refactor(api): improve gemini metadata handling and remove redundant docs

- Updated `src/services/api/openaiShim.ts` to merge existing `google`-specific metadata within `extra_content` instead of overwriting it.
- Simplified the `thought_signature` assignment logic to use a fallback value of `skip_thought_signature_validator` when no signature is provided.
- Deleted `AGENTS.md` and `CLAUDE.md` files to eliminate redundant agent guidance documentation.

* fix(api): propagate gemini thought signatures to all parallel tool calls

- Removed the index constraint when assigning the `signature` from a `thinkingBlock` to tool calls in `openaiShim.ts`.
- Ensured that the `thought_signature` is applied to every tool call in a parallel set, rather than just the first one.
- Aligned the shim with Gemini API requirements, which mandate that the same signature must be present on every replayed function call part within an assistant turn.
2026-04-06 21:04:49 +08:00
Kevin Codex
39f3b2babd test: isolate latest main suite regressions (#427) 2026-04-06 19:50:31 +08:00
Agent_J
ff7d49990d feat: GitHub provider lifecycle and onboarding hardening (#351)
* feat: improve GitHub provider onboarding and lifecycle

* fix: address copilot review in provider manager

* fix: address follow-up copilot review comments

* test: resolve rebase conflict in provider profiles suite

* fix: clear stale github hydrated marker

* fix: harden github onboarding auth precedence

* fix: remove merge markers from provider tests

* fix: resolve latest copilot onboarding comments

---------

Co-authored-by: KRATOS <84986124+gnanam1990@users.noreply.github.com>
2026-04-06 19:18:58 +08:00
Vasanth T
8ece290087 fix: suppress startup dialogs when input is buffered (#423)
Co-authored-by: OpenClaude Worker 3 <worker-3@openclaude.local>
2026-04-06 18:31:38 +08:00
Kevin Codex
6c61790063 test: fix leaked ink mocks in full suite (#424) 2026-04-06 18:10:02 +08:00
NikitaBabenko
26eef92fe7 feat: add headless gRPC server for external agent integration (#278)
* gRPC Server

* gRPC fix

* UpdProto

* fix: address PR review feedback for gRPC server

- Update bun.lock for new dependencies (frozen-lockfile CI fix)
- Add multi-turn session persistence via initialMessages
- Replace hardcoded done payload with real token counts
- Default bind to localhost instead of 0.0.0.0

* fix(grpc): startup parity, cancel interrupt, and cli text fallback

- Replace enableConfigs() with await init() in start-grpc.ts for full
  bootstrap parity with the main CLI (env vars, CA certs, mTLS, proxy,
  OAuth, Windows shell)
- Call engine.interrupt() before call.end() in the cancel handler so
  in-flight model/tool execution is actually stopped
- Show done.full_text in the CLI client when no text_chunk was received,
  preventing silent drops when streaming is unavailable

* fix(grpc): wire session_id end-to-end and remove dead provider field

- Move session_id from ClientMessage into ChatRequest to fix proto-loader
  oneofs encoding bug and make the field functional
- Implement in-memory session store so reconnecting with the same
  session_id resumes conversation context across streams
- Remove ChatRequest.provider — per-request provider routing requires
  global process.env mutation, unsafe for concurrent clients; provider
  is configured via env vars at server startup

* fix(grpc): mirror CLI auth bootstrap in start-grpc and fix tool_name field

scripts/start-grpc.ts now runs the same provider/auth bootstrap as the
normal CLI entrypoint: enableConfigs, safe env vars, Gemini/GitHub token
hydration, saved-profile resolution with warn-and-fallback, and provider
validation before the server binds.

ToolCallResult.tool_name was being populated with the tool_use_id UUID.
Added a toolNameById map (filled in canUseTool) so tool_name now carries
the actual tool name (e.g. "Bash"). The UUID moves to a new tool_use_id
field (proto field 4) for client-side correlation.

* fix(grpc): add tool_use_id to ToolCallStart and interrupt engine on stream close

Two blocker-level issues flagged in code review:

- ToolCallStart was missing tool_use_id, making it impossible for clients
  to correlate tool_start events with tool_result when the same tool runs
  multiple times. Added tool_use_id = 3 to the proto message and populated
  it from the toolUseID parameter in canUseTool.

- On stream close without an explicit CancelSignal the server only nulled
  the engine reference, leaving the underlying model/tool work running
  as an orphan. Added engine.interrupt() in the call.on('end') handler
  to stop work immediately when the client disconnects.

* fix(grpc): resolve pending promises on disconnect and guard post-cancel writes

Four lifecycle and contract issues identified during proactive review:

- Pending permission Promises in canUseTool would hang forever if the
  client disconnected mid-stream. On call 'end', all pending resolvers
  are now called with 'no' so the engine can unblock and terminate.

- The done message and session save could fire after call.end() when
  a CancelSignal arrived mid-generation. Added an `interrupted` flag
  set on both cancel and stream close to gate all post-loop writes.

- The session map had no eviction policy, allowing unbounded memory
  growth. Capped at MAX_SESSIONS=1000 with FIFO eviction of the
  oldest entry.

- Field 3 was silently absent from ChatRequest. Added `reserved 3`
  to document the gap and prevent accidental reuse in future.
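
A minimal sketch of the capped session store described above (a Map's insertion order gives the FIFO):

```ts
const MAX_SESSIONS = 1000;
const sessions = new Map<string, unknown[]>();

function saveSession(id: string, messages: unknown[]): void {
  if (!sessions.has(id) && sessions.size >= MAX_SESSIONS) {
    const oldest = sessions.keys().next().value; // first-inserted key
    if (oldest !== undefined) sessions.delete(oldest);
  }
  sessions.set(id, messages);
}
```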

* fix(grpc): reset previousMessages on each new request to prevent session history leak

previousMessages was declared at stream scope and only overwritten when
the incoming session_id already existed in the session store. A second
request on the same stream with a new session_id would silently inherit
the first request's conversation history in initialMessages instead of
starting fresh, violating the session contract.

Fix: reset previousMessages to [] at the start of each ChatRequest
before the session-store lookup.

* fix(grpc): reset interrupted flag between requests and guard against concurrent ChatRequest

Two stream-scoped state bugs found during proactive audit:

- The `interrupted` flag was never reset between requests on the same
  stream. If the first request was cancelled, all subsequent requests
  would silently skip the done message, causing the client to hang.

- A second ChatRequest arriving while the first was still processing
  would overwrite the engine reference, corrupting the lifecycle of
  both requests. Now returns ALREADY_EXISTS error instead. Engine is
  nulled after the for-await loop completes so subsequent requests
  can proceed normally.

---------

Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
2026-04-06 17:54:10 +08:00
Paulo Reis
112df59117 fix: convert dragged file paths to @mentions for attachment (#382)
* fix: convert dragged file paths to @mentions for attachment

When non-image files are dragged into the terminal, the file path was
inserted as plain text and never attached. Now detected absolute paths
are converted to @mentions so they get picked up by the attachment system.

* test: add tests for drag-and-drop file path detection

* fix: multi-image drag-and-drop only showing last image

insertTextAtCursor read input and cursorOffset from the React closure,
which is stale when called in a synchronous loop (e.g. onImagePaste for
multiple dragged images). Now uses refs so each insertion chains on the
previous one.

* fix: quote Windows absolute paths to avoid MCP mention collision

Paths containing ':' (e.g. Windows drive letters) are now emitted in
quoted @"..." form so they don't match the MCP resource mention regex.
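
A minimal sketch of the quoting rule (hypothetical helper name):

```ts
// Emit a dragged path as an @mention; quote when the path contains ':'
// (e.g. Windows drive letters) so it can't match the MCP resource regex.
function pathToMention(filePath: string): string {
  return filePath.includes(":") ? `@"${filePath}"` : `@${filePath}`;
}

pathToMention("/tmp/notes.txt");          // @/tmp/notes.txt
pathToMention("C:\\Users\\me\\file.txt"); // @"C:\Users\me\file.txt"
```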

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* refactor: decouple dragDropPaths from imagePaste and harden image checks

- Check image extension against the cleaned path (post quote/escape
  stripping) so quoted or backslash-escaped image drops are reliably
  routed to the image paste handler.
- Inline the image extension regex and drop the imagePaste/fsOperations
  imports so the module (and its tests) no longer pull in `bun:bundle`
  and the heavier fs wrapper chain. Use plain `fs.existsSync` for the
  on-disk check.
- Add tests covering quoted image paths, uppercase extensions,
  backslash-escaped image paths, escaped real files with spaces, mixed
  segments containing an image, quoted-nonexistent paths, and leading
  or trailing whitespace.

* test: verify dragged paths with an `@` segment are preserved

Adds a fixture under a scoped-package-style subdir (`@types/index.d.ts`)
so we exercise the realistic `node_modules/@types/...` drag case and
lock in that `extractDraggedFilePaths` returns the raw path unchanged —
the `@` inside the path must not collide with the mention prefix the
caller prepends downstream.

* test: parametrize dragDropPaths cases with test.each

Groups the 21 scenarios into four table-driven describes
(empty-result, single-path, multi-path, backslash-escaped) so that
adding a new case is a one-line row instead of a new `test()` block.
Fixture directories are now created synchronously at describe-load
time so their paths are available to the test.each tables, which are
built before any hook runs.

* test: add contract tests for @-mention extractor boundary

Pins the contract between `extractAtMentionedFiles` and
`extractMcpResourceMentions` so the MCP regex can't silently swallow
quoted file-path mentions.

These tests fail on current HEAD — 3 of 11 cases expose the regression
pointed out in the review on #382: `extractMcpResourceMentions`'s
trailing `\b` backtracks past the closing `"` of a quoted mention and
produces a ghost match for `@"C:\Users\..."`, `@C:\Users\...`, and
`@"/tmp/weird:name.txt"`. The remaining 8 cases lock in the behaviour
that must not change (legitimate `server:resource` mentions and plain
file-path mentions).

Committed failing on purpose as the first half of a test-then-fix
pair; the regex fix follows in a subsequent commit.

* fix: prevent MCP extractor from ghost-matching quoted/Windows paths

The MCP resource regex used `\b` as a trailing anchor with `[^\s]+`
character classes. On any quoted file mention containing a colon
(`@"C:\Users\me\file.txt"`, `@"/tmp/weird:name.txt"`), the engine
backtracked past the closing `"` to satisfy `\b`, producing a ghost
match that collided with `extractAtMentionedFiles`. Unquoted Windows
drive-letter paths (`@C:\Users\me\file.txt`) also matched because a
drive letter is structurally identical to an MCP `server:resource`
token.

Two guards:

1. `(?!")` right after `@` drops quoted tokens entirely, and adding
   `"` to the character classes blocks any mid-match backtracking.
2. A post-match filter discards `^[A-Za-z]:[\\/]` — a single-letter
   server followed by a path separator is always a Windows drive
   prefix, never a real MCP resource.

Legitimate MCP forms (`@server:resource/path`, plugin-scoped like
`@asana-plugin:project-status/123`, inline prose mentions) remain
matched and are pinned by the contract tests added in 04998d5.
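
A sketch of the two guards, with the regex simplified relative to the real one:

```ts
// (?!") drops quoted tokens entirely, and excluding '"' from the character
// classes blocks backtracking past a closing quote.
const MCP_RESOURCE_RE = /@(?!")([^\s":]+):([^\s"]+)/g;

function extractMcpResourceMentions(text: string): string[] {
  const out: string[] = [];
  for (const m of text.matchAll(MCP_RESOURCE_RE)) {
    // Post-match filter: a single-letter "server" followed by a path
    // separator is a Windows drive prefix, never a real MCP resource.
    if (/^[A-Za-z]$/.test(m[1]) && /^[\\/]/.test(m[2])) continue;
    out.push(`${m[1]}:${m[2]}`);
  }
  return out;
}
```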

---------

Co-authored-by: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-04-06 17:49:38 +08:00
Meetpatel006
8724d59d48 fix theme picker live preview broken by react-compiler memoization (#395)
* fix: remove react-compiler memo cache, restore classical JSX so theme preview actually previews

* added themepicker test
2026-04-06 17:46:42 +08:00
Jay Suryawanshi
af08b4f762 docs: add LiteLLM proxy setup guide (#418)
* docs: add LiteLLM proxy setup guide

Document the setup process for LiteLLM and its integration with OpenClaude, including prerequisites, configuration, and troubleshooting steps.

* Revise LiteLLM setup steps for API key and model: fix /provider walkthrough to match actual OpenAI-compatible flow

Updated setup instructions for LiteLLM provider configuration.

* docs: fix sub-bullet formatting in /provider steps

* docs: clarify key scope in troubleshooting (LiteLLM proxy process env)

Clarified instruction for upstream provider error regarding API key.
2026-04-06 17:01:56 +08:00
Sarath Babu
5012c160c9 feat: Add Gemini support with thought_signature fix (#404)
* feat: Add Gemini support with thought_signature fix and branding updates

* fix: gate thought_signature preservation strictly to Gemini provider

* fix: explicit extra_content destructuring to seal cross-provider tool search leak
2026-04-06 17:01:06 +08:00
KRATOS
c1934974aa fix: preserve unicode in Windows clipboard fallback (#388)
* fix: preserve unicode in Windows clipboard fallback

* fix: avoid Windows clipboard stdin codepage issues

* test: fix Windows clipboard temp path fixture
2026-04-06 16:12:10 +08:00
Kevin Codex
94de37d44f chore: release 0.1.8 2026-04-06 13:45:02 +08:00
Kevin Codex
3b3aca716d test: fix post-merge suite regressions (#419) 2026-04-06 13:32:05 +08:00
Juan Camilo Auriti
d5852ca73d fix: coalesce consecutive same-role messages for strict template models (#241)
Models served through Ollama/vLLM with strict Jinja templates (Devstral,
Mistral, etc.) require strict user↔assistant role alternation and reject
requests with consecutive messages of the same role.

convertMessages() could produce consecutive user or assistant messages in
three scenarios: batched user input, text-only + tool_use assistant turns,
and tool result remainders followed by another user message.

Added a coalescing pass at the end of convertMessages() that merges
consecutive same-role messages (string concat or array concat), preserving
tool_calls on assistant messages. Tool and system messages are excluded
from coalescing as they have their own alternation rules.

Includes regression tests for both user and assistant coalescing.
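
A minimal sketch of the coalescing pass, with the message shape simplified:

```ts
type Msg = { role: string; content: string | unknown[]; tool_calls?: unknown[] };

function coalesceSameRole(messages: Msg[]): Msg[] {
  const out: Msg[] = [];
  for (const msg of messages) {
    const prev = out.at(-1);
    // Tool/system messages have their own alternation rules; don't merge them.
    if (!prev || prev.role !== msg.role || (msg.role !== "user" && msg.role !== "assistant")) {
      out.push({ ...msg });
      continue;
    }
    if (typeof prev.content === "string" && typeof msg.content === "string") {
      prev.content = `${prev.content}\n${msg.content}`; // string concat
    } else {
      prev.content = ([] as unknown[]).concat(prev.content, msg.content); // array concat
    }
    if (msg.tool_calls) {
      prev.tool_calls = [...(prev.tool_calls ?? []), ...msg.tool_calls]; // preserve tool_calls
    }
  }
  return out;
}
```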

Fixes #202
2026-04-06 06:47:11 +08:00
Technomancer702
c534aa5771 Feature: Add local OpenAI-compatible model discovery to /model (#201)
* Add local OpenAI-compatible model discovery to /model

* Guard local OpenAI model discovery from Codex routing

* Preserve remote OpenAI Codex alias behavior
2026-04-06 06:46:06 +08:00
Juan Camilo Auriti
60d3d8961a fix: add missing o1-series and Ollama models to context window table (#250)
Models not in the lookup table fall through to a 200k default, causing
auto-compact to never trigger for models with smaller actual context
windows. Users hit hard context_window_exceeded errors instead.

Added to both context window and max output token tables:
- o1, o1-mini, o1-preview, o1-pro (OpenAI reasoning models)
- llama3.2:1b, qwen3:8b, codestral (common Ollama models)

Relates to #248
2026-04-06 06:39:24 +08:00
Juan Camilo Auriti
3b9893b586 security: force lodash-es 4.18.0 for transitive dependencies (#242)
* security: force lodash-es 4.18.0 for transitive dependencies

PR #225 bumped the direct lodash-es dependency to 4.18.0, but
@anthropic-ai/sandbox-runtime still pulled lodash-es@4.17.23 via its
own ^4.17.23 range. The transitive copy was vulnerable to:

- HIGH: Code Injection via _.template (GHSA-r5fr-rjxr-66jc)
- MODERATE: Prototype Pollution via _.unset/_.omit (GHSA-f23m-r3pf-42rh)

Added overrides field in package.json to force all copies to 4.18.0.
bun audit now reports zero vulnerabilities.

* fix: use lodash-es 4.18.1 instead of deprecated 4.18.0

lodash-es 4.18.0 is explicitly deprecated by the maintainer with
the message "Bad release. Please use lodash-es@4.17.23 instead."
Updated both the direct dependency and the override to 4.18.1, which
is the latest non-deprecated release that patches the CVEs.
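
The resulting package.json fields, excerpted (the `overrides` entry is what forces the transitive copy, matching the lockfile diff further down):

```json
{
  "dependencies": {
    "lodash-es": "4.18.1"
  },
  "overrides": {
    "lodash-es": "4.18.1"
  }
}
```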
2026-04-06 06:37:40 +08:00
Joe Tam
daf2c90b6d Fix duplicate marketplace plugin loading (#364)
Reproduction:
- Enable `frontend-design@claude-code-plugins`
- Enable `frontend-design@claude-plugins-official`
- Start OpenClaude with both marketplace plugins active
- Both plugins load, but downstream command and skill scopes key off the short plugin name, so both collapse to `frontend-design` and can interfere with interactive startup

Fix:
- Collapse duplicate marketplace plugins by short name during merge
- Keep the enabled copy when enabled state differs; otherwise keep the later config entry
- Add regression coverage for both cases
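
A minimal sketch of the merge rule (plugin shape assumed):

```ts
type Plugin = { shortName: string; enabled: boolean };

// Collapse duplicates by short name: prefer the enabled copy when enabled
// state differs; otherwise keep the later config entry.
function collapseByShortName(plugins: Plugin[]): Plugin[] {
  const byName = new Map<string, Plugin>();
  for (const plugin of plugins) {
    const existing = byName.get(plugin.shortName);
    if (!existing || plugin.enabled || !existing.enabled) {
      byName.set(plugin.shortName, plugin);
    }
  }
  return [...byName.values()];
}
```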
2026-04-06 06:36:45 +08:00
CRABHIVE
4ac7367733 fix: include retry timing in 429 error messages (#366)
## Summary

- Extract retry-after header from 429 API errors and include timing
  guidance in the user-facing error message
- Previously, non-quota 429 errors showed a generic message with no
  guidance on when to retry, only a link to status.anthropic.com
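
A minimal sketch of the message construction (helper name hypothetical):

```ts
// retry-after may be seconds or an HTTP date, per RFC 9110.
function format429Message(headers: Headers): string {
  const retryAfter = headers.get("retry-after");
  if (!retryAfter) return "Rate limited (429). Please retry shortly.";
  const seconds = Number(retryAfter);
  return Number.isFinite(seconds)
    ? `Rate limited (429). Retry in ${seconds} seconds.`
    : `Rate limited (429). Retry after ${new Date(retryAfter).toUTCString()}.`;
}
```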

## Impact

- user-facing impact: 429 error messages now tell users when to retry
  instead of just linking to a status page
- developer/maintainer impact: none

## Testing

- [x] `bun run build`
- [ ] `bun run smoke`
- [ ] focused tests: error formatting is pure string construction,
  verified via build + manual inspection

## Notes

- provider/model path tested: applies to all providers returning 429
- screenshots attached (if UI changed): n/a
- follow-up work or known limitations: 529 errors could get similar
  treatment in a follow-up

https://claude.ai/code/session_01D7kprMn4c66a5WrZscF7rv

Co-authored-by: Claude <noreply@anthropic.com>
2026-04-06 06:36:14 +08:00
75 changed files with 6340 additions and 650 deletions

4
.gitignore vendored

@@ -6,4 +6,8 @@ dist/
!.env.example
.openclaude-profile.json
reports/
GEMINI.md
package-lock.json
/.claude
coverage/
.worktrees/

README.md

@@ -185,6 +185,41 @@ With Firecrawl enabled:
Free tier at [firecrawl.dev](https://firecrawl.dev) includes 500 credits. The key is optional.
---
## Headless gRPC Server
OpenClaude can be run as a headless gRPC service, allowing you to integrate its agentic capabilities (tools, bash, file editing) into other applications, CI/CD pipelines, or custom user interfaces. The server uses bidirectional streaming to send real-time text chunks, tool calls, and request permissions for sensitive commands.
### 1. Start the gRPC Server
Start the core engine as a gRPC service on `localhost:50051`:
```bash
npm run dev:grpc
```
#### Configuration
| Variable | Default | Description |
|-----------|-------------|------------------------------------------------|
| `GRPC_PORT` | `50051` | Port the gRPC server listens on |
| `GRPC_HOST` | `localhost` | Bind address. Use `0.0.0.0` to expose on all interfaces (not recommended without authentication) |
### 2. Run the Test CLI Client
We provide a lightweight CLI client that communicates exclusively over gRPC. It acts just like the main interactive CLI, rendering colors, streaming tokens, and prompting you for tool permissions (y/n) via the gRPC `action_required` event.
In a separate terminal, run:
```bash
npm run dev:grpc:cli
```
*Note: The gRPC definitions are located in `src/proto/openclaude.proto`. You can use this file to generate clients in Python, Go, Rust, or any other language.*
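
For orientation, a client sketch in TypeScript; the package, service, and method names here are assumptions, so check `openclaude.proto` for the real ones:

```ts
import * as grpc from "@grpc/grpc-js";
import * as protoLoader from "@grpc/proto-loader";

const def = protoLoader.loadSync("src/proto/openclaude.proto", {
  keepCase: true,
  longs: String,
  defaults: true,
  oneofs: true,
});
const pkg = grpc.loadPackageDefinition(def) as any;

// Hypothetical names (openclaude.OpenClaude / Chat); the proto file is the
// source of truth.
const client = new pkg.openclaude.OpenClaude(
  "localhost:50051",
  grpc.credentials.createInsecure(),
);

const call = client.Chat(); // bidirectional stream
call.on("data", (msg: any) => console.log(msg));
call.write({ chat_request: { prompt: "hello", session_id: "demo" } });
```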
---
## Source Build And Local Development
```bash

128
bun.lock

@@ -13,6 +13,8 @@
"@anthropic-ai/vertex-sdk": "0.14.4",
"@commander-js/extra-typings": "12.1.0",
"@growthbook/growthbook": "1.6.5",
"@grpc/grpc-js": "^1.14.3",
"@grpc/proto-loader": "^0.8.0",
"@mendable/firecrawl-js": "4.18.1",
"@modelcontextprotocol/sdk": "1.29.0",
"@opentelemetry/api": "1.9.1",
@@ -51,7 +53,7 @@
"ignore": "7.0.5",
"indent-string": "5.0.0",
"jsonc-parser": "3.3.1",
"lodash-es": "4.18.0",
"lodash-es": "4.18.1",
"lru-cache": "11.2.7",
"marked": "15.0.12",
"p-map": "7.0.4",
@@ -84,10 +86,14 @@
"@types/bun": "1.3.11",
"@types/node": "25.5.0",
"@types/react": "19.2.14",
"tsx": "^4.21.0",
"typescript": "5.9.3",
},
},
},
"overrides": {
"lodash-es": "4.18.1",
},
"packages": {
"@alcalzone/ansi-tokenize": ["@alcalzone/ansi-tokenize@0.3.0", "", { "dependencies": { "ansi-styles": "^6.2.1", "is-fullwidth-code-point": "^5.0.0" } }, "sha512-p+CMKJ93HFmLkjXKlXiVGlMQEuRb6H0MokBSwUsX+S6BRX8eV5naFZpQJFfJHjRZY0Hmnqy1/r6UWl3x+19zYA=="],
@@ -181,6 +187,58 @@
"@emnapi/runtime": ["@emnapi/runtime@1.9.2", "", { "dependencies": { "tslib": "^2.4.0" } }, "sha512-3U4+MIWHImeyu1wnmVygh5WlgfYDtyf0k8AbLhMFxOipihf6nrWC4syIm/SwEeec0mNSafiiNnMJwbza/Is6Lw=="],
"@esbuild/aix-ppc64": ["@esbuild/aix-ppc64@0.27.7", "", { "os": "aix", "cpu": "ppc64" }, "sha512-EKX3Qwmhz1eMdEJokhALr0YiD0lhQNwDqkPYyPhiSwKrh7/4KRjQc04sZ8db+5DVVnZ1LmbNDI1uAMPEUBnQPg=="],
"@esbuild/android-arm": ["@esbuild/android-arm@0.27.7", "", { "os": "android", "cpu": "arm" }, "sha512-jbPXvB4Yj2yBV7HUfE2KHe4GJX51QplCN1pGbYjvsyCZbQmies29EoJbkEc+vYuU5o45AfQn37vZlyXy4YJ8RQ=="],
"@esbuild/android-arm64": ["@esbuild/android-arm64@0.27.7", "", { "os": "android", "cpu": "arm64" }, "sha512-62dPZHpIXzvChfvfLJow3q5dDtiNMkwiRzPylSCfriLvZeq0a1bWChrGx/BbUbPwOrsWKMn8idSllklzBy+dgQ=="],
"@esbuild/android-x64": ["@esbuild/android-x64@0.27.7", "", { "os": "android", "cpu": "x64" }, "sha512-x5VpMODneVDb70PYV2VQOmIUUiBtY3D3mPBG8NxVk5CogneYhkR7MmM3yR/uMdITLrC1ml/NV1rj4bMJuy9MCg=="],
"@esbuild/darwin-arm64": ["@esbuild/darwin-arm64@0.27.7", "", { "os": "darwin", "cpu": "arm64" }, "sha512-5lckdqeuBPlKUwvoCXIgI2D9/ABmPq3Rdp7IfL70393YgaASt7tbju3Ac+ePVi3KDH6N2RqePfHnXkaDtY9fkw=="],
"@esbuild/darwin-x64": ["@esbuild/darwin-x64@0.27.7", "", { "os": "darwin", "cpu": "x64" }, "sha512-rYnXrKcXuT7Z+WL5K980jVFdvVKhCHhUwid+dDYQpH+qu+TefcomiMAJpIiC2EM3Rjtq0sO3StMV/+3w3MyyqQ=="],
"@esbuild/freebsd-arm64": ["@esbuild/freebsd-arm64@0.27.7", "", { "os": "freebsd", "cpu": "arm64" }, "sha512-B48PqeCsEgOtzME2GbNM2roU29AMTuOIN91dsMO30t+Ydis3z/3Ngoj5hhnsOSSwNzS+6JppqWsuhTp6E82l2w=="],
"@esbuild/freebsd-x64": ["@esbuild/freebsd-x64@0.27.7", "", { "os": "freebsd", "cpu": "x64" }, "sha512-jOBDK5XEjA4m5IJK3bpAQF9/Lelu/Z9ZcdhTRLf4cajlB+8VEhFFRjWgfy3M1O4rO2GQ/b2dLwCUGpiF/eATNQ=="],
"@esbuild/linux-arm": ["@esbuild/linux-arm@0.27.7", "", { "os": "linux", "cpu": "arm" }, "sha512-RkT/YXYBTSULo3+af8Ib0ykH8u2MBh57o7q/DAs3lTJlyVQkgQvlrPTnjIzzRPQyavxtPtfg0EopvDyIt0j1rA=="],
"@esbuild/linux-arm64": ["@esbuild/linux-arm64@0.27.7", "", { "os": "linux", "cpu": "arm64" }, "sha512-RZPHBoxXuNnPQO9rvjh5jdkRmVizktkT7TCDkDmQ0W2SwHInKCAV95GRuvdSvA7w4VMwfCjUiPwDi0ZO6Nfe9A=="],
"@esbuild/linux-ia32": ["@esbuild/linux-ia32@0.27.7", "", { "os": "linux", "cpu": "ia32" }, "sha512-GA48aKNkyQDbd3KtkplYWT102C5sn/EZTY4XROkxONgruHPU72l+gW+FfF8tf2cFjeHaRbWpOYa/uRBz/Xq1Pg=="],
"@esbuild/linux-loong64": ["@esbuild/linux-loong64@0.27.7", "", { "os": "linux", "cpu": "none" }, "sha512-a4POruNM2oWsD4WKvBSEKGIiWQF8fZOAsycHOt6JBpZ+JN2n2JH9WAv56SOyu9X5IqAjqSIPTaJkqN8F7XOQ5Q=="],
"@esbuild/linux-mips64el": ["@esbuild/linux-mips64el@0.27.7", "", { "os": "linux", "cpu": "none" }, "sha512-KabT5I6StirGfIz0FMgl1I+R1H73Gp0ofL9A3nG3i/cYFJzKHhouBV5VWK1CSgKvVaG4q1RNpCTR2LuTVB3fIw=="],
"@esbuild/linux-ppc64": ["@esbuild/linux-ppc64@0.27.7", "", { "os": "linux", "cpu": "ppc64" }, "sha512-gRsL4x6wsGHGRqhtI+ifpN/vpOFTQtnbsupUF5R5YTAg+y/lKelYR1hXbnBdzDjGbMYjVJLJTd2OFmMewAgwlQ=="],
"@esbuild/linux-riscv64": ["@esbuild/linux-riscv64@0.27.7", "", { "os": "linux", "cpu": "none" }, "sha512-hL25LbxO1QOngGzu2U5xeXtxXcW+/GvMN3ejANqXkxZ/opySAZMrc+9LY/WyjAan41unrR3YrmtTsUpwT66InQ=="],
"@esbuild/linux-s390x": ["@esbuild/linux-s390x@0.27.7", "", { "os": "linux", "cpu": "s390x" }, "sha512-2k8go8Ycu1Kb46vEelhu1vqEP+UeRVj2zY1pSuPdgvbd5ykAw82Lrro28vXUrRmzEsUV0NzCf54yARIK8r0fdw=="],
"@esbuild/linux-x64": ["@esbuild/linux-x64@0.27.7", "", { "os": "linux", "cpu": "x64" }, "sha512-hzznmADPt+OmsYzw1EE33ccA+HPdIqiCRq7cQeL1Jlq2gb1+OyWBkMCrYGBJ+sxVzve2ZJEVeePbLM2iEIZSxA=="],
"@esbuild/netbsd-arm64": ["@esbuild/netbsd-arm64@0.27.7", "", { "os": "none", "cpu": "arm64" }, "sha512-b6pqtrQdigZBwZxAn1UpazEisvwaIDvdbMbmrly7cDTMFnw/+3lVxxCTGOrkPVnsYIosJJXAsILG9XcQS+Yu6w=="],
"@esbuild/netbsd-x64": ["@esbuild/netbsd-x64@0.27.7", "", { "os": "none", "cpu": "x64" }, "sha512-OfatkLojr6U+WN5EDYuoQhtM+1xco+/6FSzJJnuWiUw5eVcicbyK3dq5EeV/QHT1uy6GoDhGbFpprUiHUYggrw=="],
"@esbuild/openbsd-arm64": ["@esbuild/openbsd-arm64@0.27.7", "", { "os": "openbsd", "cpu": "arm64" }, "sha512-AFuojMQTxAz75Fo8idVcqoQWEHIXFRbOc1TrVcFSgCZtQfSdc1RXgB3tjOn/krRHENUB4j00bfGjyl2mJrU37A=="],
"@esbuild/openbsd-x64": ["@esbuild/openbsd-x64@0.27.7", "", { "os": "openbsd", "cpu": "x64" }, "sha512-+A1NJmfM8WNDv5CLVQYJ5PshuRm/4cI6WMZRg1by1GwPIQPCTs1GLEUHwiiQGT5zDdyLiRM/l1G0Pv54gvtKIg=="],
"@esbuild/openharmony-arm64": ["@esbuild/openharmony-arm64@0.27.7", "", { "os": "none", "cpu": "arm64" }, "sha512-+KrvYb/C8zA9CU/g0sR6w2RBw7IGc5J2BPnc3dYc5VJxHCSF1yNMxTV5LQ7GuKteQXZtspjFbiuW5/dOj7H4Yw=="],
"@esbuild/sunos-x64": ["@esbuild/sunos-x64@0.27.7", "", { "os": "sunos", "cpu": "x64" }, "sha512-ikktIhFBzQNt/QDyOL580ti9+5mL/YZeUPKU2ivGtGjdTYoqz6jObj6nOMfhASpS4GU4Q/Clh1QtxWAvcYKamA=="],
"@esbuild/win32-arm64": ["@esbuild/win32-arm64@0.27.7", "", { "os": "win32", "cpu": "arm64" }, "sha512-7yRhbHvPqSpRUV7Q20VuDwbjW5kIMwTHpptuUzV+AA46kiPze5Z7qgt6CLCK3pWFrHeNfDd1VKgyP4O+ng17CA=="],
"@esbuild/win32-ia32": ["@esbuild/win32-ia32@0.27.7", "", { "os": "win32", "cpu": "ia32" }, "sha512-SmwKXe6VHIyZYbBLJrhOoCJRB/Z1tckzmgTLfFYOfpMAx63BJEaL9ExI8x7v0oAO3Zh6D/Oi1gVxEYr5oUCFhw=="],
"@esbuild/win32-x64": ["@esbuild/win32-x64@0.27.7", "", { "os": "win32", "cpu": "x64" }, "sha512-56hiAJPhwQ1R4i+21FVF7V8kSD5zZTdHcVuRFMW0hn753vVfQN8xlx4uOPT4xoGH0Z/oVATuR82AiqSTDIpaHg=="],
"@growthbook/growthbook": ["@growthbook/growthbook@1.6.5", "", { "dependencies": { "dom-mutator": "^0.6.0" } }, "sha512-mUaMsgeUTpRIUOTn33EUXHRK6j7pxBjwqH4WpQyq+pukjd1AIzWlEa6w7i6bInJUcweGgP2beXZmaP6b6UPn7A=="],
"@grpc/grpc-js": ["@grpc/grpc-js@1.14.3", "", { "dependencies": { "@grpc/proto-loader": "^0.8.0", "@js-sdsl/ordered-map": "^4.4.2" } }, "sha512-Iq8QQQ/7X3Sac15oB6p0FmUg/klxQvXLeileoqrTRGJYLV+/9tubbr9ipz0GKHjmXVsgFPo/+W+2cA8eNcR+XA=="],
@@ -453,7 +511,7 @@
"cli-highlight": ["cli-highlight@2.1.11", "", { "dependencies": { "chalk": "^4.0.0", "highlight.js": "^10.7.1", "mz": "^2.4.0", "parse5": "^5.1.1", "parse5-htmlparser2-tree-adapter": "^6.0.0", "yargs": "^16.0.0" }, "bin": { "highlight": "bin/highlight" } }, "sha512-9KDcoEVwyUXrjcJNvHD0NFc/hiwe/WPVYIleQh2O1N2Zro5gWJZ/K+3DGn8w8P/F6FxOgzyC5bxDyHIgCSPhGg=="],
"cliui": ["cliui@7.0.4", "", { "dependencies": { "string-width": "^4.2.0", "strip-ansi": "^6.0.0", "wrap-ansi": "^7.0.0" } }, "sha512-OcRE68cOsVMXp1Yvonl/fzkQOyjLSu/8bhPDfQt0e0/Eb283TKP20Fs2MqoPsr9SwA595rRCA+QMzYc9nBP+JQ=="],
"cliui": ["cliui@8.0.1", "", { "dependencies": { "string-width": "^4.2.0", "strip-ansi": "^6.0.1", "wrap-ansi": "^7.0.0" } }, "sha512-BSeNnyus75C4//NQ9gQt1/csTXyo/8Sb+afLAkzAptFuMsod9HFokGNudZpi/oQV73hnVK+sR+5PVRMd+Dr7YQ=="],
"code-excerpt": ["code-excerpt@4.0.0", "", { "dependencies": { "convert-to-spaces": "^2.0.1" } }, "sha512-xxodCmBen3iy2i0WtAK8FlFNrRzjUqjRsMfho58xT/wvZU1YTM3fCnRjcy1gJPMepaRlgm/0e6w8SpWHpn3/cA=="],
@@ -521,6 +579,8 @@
"es-set-tostringtag": ["es-set-tostringtag@2.1.0", "", { "dependencies": { "es-errors": "^1.3.0", "get-intrinsic": "^1.2.6", "has-tostringtag": "^1.0.2", "hasown": "^2.0.2" } }, "sha512-j6vWzfrGVfyXxge+O0x5sh6cvxAog0a/4Rdd2K36zCMV5eJ+/+tOAngRO8cODMNWbVRdVlmGZQL2YS3yR8bIUA=="],
"esbuild": ["esbuild@0.27.7", "", { "optionalDependencies": { "@esbuild/aix-ppc64": "0.27.7", "@esbuild/android-arm": "0.27.7", "@esbuild/android-arm64": "0.27.7", "@esbuild/android-x64": "0.27.7", "@esbuild/darwin-arm64": "0.27.7", "@esbuild/darwin-x64": "0.27.7", "@esbuild/freebsd-arm64": "0.27.7", "@esbuild/freebsd-x64": "0.27.7", "@esbuild/linux-arm": "0.27.7", "@esbuild/linux-arm64": "0.27.7", "@esbuild/linux-ia32": "0.27.7", "@esbuild/linux-loong64": "0.27.7", "@esbuild/linux-mips64el": "0.27.7", "@esbuild/linux-ppc64": "0.27.7", "@esbuild/linux-riscv64": "0.27.7", "@esbuild/linux-s390x": "0.27.7", "@esbuild/linux-x64": "0.27.7", "@esbuild/netbsd-arm64": "0.27.7", "@esbuild/netbsd-x64": "0.27.7", "@esbuild/openbsd-arm64": "0.27.7", "@esbuild/openbsd-x64": "0.27.7", "@esbuild/openharmony-arm64": "0.27.7", "@esbuild/sunos-x64": "0.27.7", "@esbuild/win32-arm64": "0.27.7", "@esbuild/win32-ia32": "0.27.7", "@esbuild/win32-x64": "0.27.7" }, "bin": { "esbuild": "bin/esbuild" } }, "sha512-IxpibTjyVnmrIQo5aqNpCgoACA/dTKLTlhMHihVHhdkxKyPO1uBBthumT0rdHmcsk9uMonIWS0m4FljWzILh3w=="],
"escalade": ["escalade@3.2.0", "", {}, "sha512-WUj2qlxaQtO4g6Pq5c29GTcWGDyd8itL8zTlipgECz3JesAiiOKotd8JU6otB3PACgG6xkJUyVhboMS+bje/jA=="],
"escape-html": ["escape-html@1.0.3", "", {}, "sha512-NiSupZ4OeuGwr68lGIeym/ksIZMJodUGOSCZ/FSnTxcrekbvqrgdUxlJOMpijaKZVjAJrWrGs/6Jy8OMuyj9ow=="],
@@ -567,6 +627,8 @@
"fresh": ["fresh@2.0.0", "", {}, "sha512-Rx/WycZ60HOaqLKAi6cHRKKI7zxWbJ31MhntmtwMoaTeF7XFH9hhBp8vITaMidfljRQ6eYWCKkaTK+ykVJHP2A=="],
"fsevents": ["fsevents@2.3.3", "", { "os": "darwin" }, "sha512-5xoDfX+fL7faATnagmWPpbFtwh/R77WmMMqqHGS65C3vvB0YHrgF+B1YmZ3441tMj5n63k0212XNoJwzlhffQw=="],
"function-bind": ["function-bind@1.1.2", "", {}, "sha512-7XHNxH7qX9xG5mIwxkhumTox/MIRNcOgDrxWsMt2pAr23WHp6MrRlN7FBSFpCpr+oVO0F744iUgR82nJMfG2SA=="],
"fuse.js": ["fuse.js@7.1.0", "", {}, "sha512-trLf4SzuuUxfusZADLINj+dE8clK1frKdmqiJNb1Es75fmI5oY6X2mxLVUciLLjxqw/xr72Dhy+lER6dGd02FQ=="],
@@ -585,6 +647,8 @@
"get-stream": ["get-stream@9.0.1", "", { "dependencies": { "@sec-ant/readable-stream": "^0.4.1", "is-stream": "^4.0.1" } }, "sha512-kVCxPF3vQM/N0B1PmoqVUqgHP+EeVjmZSQn+1oCRPxd2P21P2F19lIgbR3HBosbB1PUhOAoctJnfEn2GbN2eZA=="],
"get-tsconfig": ["get-tsconfig@4.13.7", "", { "dependencies": { "resolve-pkg-maps": "^1.0.0" } }, "sha512-7tN6rFgBlMgpBML5j8typ92BKFi2sFQvIdpAqLA2beia5avZDrMs0FLZiM5etShWq5irVyGcGMEA1jcDaK7A/Q=="],
"google-auth-library": ["google-auth-library@9.15.1", "", { "dependencies": { "base64-js": "^1.3.0", "ecdsa-sig-formatter": "^1.0.11", "gaxios": "^6.1.1", "gcp-metadata": "^6.1.0", "gtoken": "^7.0.0", "jws": "^4.0.0" } }, "sha512-Jb6Z0+nvECVz+2lzSMt9u98UsoakXxA2HGHMCxh+so3n90XgYWkq5dur19JAJV7ONiJY22yBTyJB1TSkvPq9Ng=="],
"google-logging-utils": ["google-logging-utils@0.0.2", "", {}, "sha512-NEgUnEcBiP5HrPzufUkBzJOD/Sxsco3rLNo1F1TNf7ieU8ryUzBhqba8r756CjLX7rn3fHl6iLEwPYuqpoKgQQ=="],
@@ -657,7 +721,7 @@
"locate-path": ["locate-path@5.0.0", "", { "dependencies": { "p-locate": "^4.1.0" } }, "sha512-t7hw9pI+WvuwNJXwk5zVHpyhIqzg2qTlklJOf0mVxGSbe3Fp2VieZcduNYjaLDoy6p9uGpQEGWG87WpMKlNq8g=="],
"lodash-es": ["lodash-es@4.18.0", "", {}, "sha512-koAgswPPA+UTaPN64Etp+PGP+WT6oqOS2NMi5yDkMaiGw9qY4VxQbQF0mtKMyr4BlTznWyzePV5UpECTJQmSUA=="],
"lodash-es": ["lodash-es@4.18.1", "", {}, "sha512-J8xewKD/Gk22OZbhpOVSwcs60zhd95ESDwezOFuA3/099925PdHJ7OFHNTGtajL3AlZkykD32HykiMo+BIBI8A=="],
"lodash.camelcase": ["lodash.camelcase@4.3.0", "", {}, "sha512-TwuEnCnxbc3rAvhf/LbG7tJUDzhqXyFnv3dtzLOPgCG/hODL7WFnsbwktkD7yUV0RrreP/l1PALq/YSg6VvjlA=="],
@@ -761,6 +825,8 @@
"require-main-filename": ["require-main-filename@2.0.0", "", {}, "sha512-NKN5kMDylKuldxYLSUfrbo5Tuzh4hd+2E8NPPX02mZtn1VuREQToYe/ZdlJy+J3uCpfaiGF05e7B8W0iXbQHmg=="],
"resolve-pkg-maps": ["resolve-pkg-maps@1.0.0", "", {}, "sha512-seS2Tj26TBVOC2NIc2rOe2y2ZO7efxITtLZcGSOnHHNOQ7CkiUBfw0Iw2ck6xkIhPwLhKNLS8BO+hEpngQlqzw=="],
"retry": ["retry@0.12.0", "", {}, "sha512-9LkiTwjUh6rT555DtE9rTX+BKByPfrMzEAtnlEtdEwr3Nkffwiihqe2bWADg+OQRjt9gl6ICdmB/ZFDCGAtSow=="],
"router": ["router@2.2.0", "", { "dependencies": { "debug": "^4.4.0", "depd": "^2.0.0", "is-promise": "^4.0.0", "parseurl": "^1.3.3", "path-to-regexp": "^8.0.0" } }, "sha512-nLTrUKm2UyiL7rlhapu/Zl45FwNgkZGaCpZbIHajDYgwlJCOzLSk+cIPAnsEqV955GjILJnKbdQC1nVPz+gAYQ=="],
@@ -831,6 +897,8 @@
"tslib": ["tslib@1.14.1", "", {}, "sha512-Xni35NKzjgMrwevysHTCArtLDpPvye8zV/0E4EyYn43P7/7qvQwPh9BGkHewbMulVntbigmcT7rdX3BNo9wRJg=="],
"tsx": ["tsx@4.21.0", "", { "dependencies": { "esbuild": "~0.27.0", "get-tsconfig": "^4.7.5" }, "optionalDependencies": { "fsevents": "~2.3.3" }, "bin": { "tsx": "dist/cli.mjs" } }, "sha512-5C1sg4USs1lfG0GFb2RLXsdpXqBSEhAaA/0kPL01wxzpMqLILNxIxIOKiILz+cdg/pLnOUxFYOR5yhHU666wbw=="],
"turndown": ["turndown@7.2.2", "", { "dependencies": { "@mixmark-io/domino": "^2.2.0" } }, "sha512-1F7db8BiExOKxjSMU2b7if62D/XOyQyZbPKq/nUwopfgnHlqXHqQ0lvfUTeUIr1lZJzOPFn43dODyMSIfvWRKQ=="],
"type-fest": ["type-fest@4.41.0", "", {}, "sha512-TeTSQ6H5YHvpqVwBRcnLDCBnDOHWYu7IvGbHT6N8AOymcr9PJGjc1GTtiWZTYg0NCgYwvnYWEkVChQAr9bjfwA=="],
@@ -881,9 +949,9 @@
"yaml": ["yaml@2.8.3", "", { "bin": { "yaml": "bin.mjs" } }, "sha512-AvbaCLOO2Otw/lW5bmh9d/WEdcDFdQp2Z2ZUH3pX9U2ihyUY0nvLv7J6TrWowklRGPYbB/IuIMfYgxaCPg5Bpg=="],
"yargs": ["yargs@16.2.0", "", { "dependencies": { "cliui": "^7.0.2", "escalade": "^3.1.1", "get-caller-file": "^2.0.5", "require-directory": "^2.1.1", "string-width": "^4.2.0", "y18n": "^5.0.5", "yargs-parser": "^20.2.2" } }, "sha512-D1mvvtDG0L5ft/jGWkLpG1+m0eQxOfaBvTNELraWj22wSVUMWxZUvYgJYcKh6jGGIkJFhH4IZPQhR4TKpc8mBw=="],
"yargs": ["yargs@17.7.2", "", { "dependencies": { "cliui": "^8.0.1", "escalade": "^3.1.1", "get-caller-file": "^2.0.5", "require-directory": "^2.1.1", "string-width": "^4.2.3", "y18n": "^5.0.5", "yargs-parser": "^21.1.1" } }, "sha512-7dSzzRQ++CKnNI/krKnYRV7JKKPUXMEh61soaHKg9mrWEhzFWhFnxPxGl+69cD1Ou63C13NUPCnmIcrvqCuM6w=="],
"yargs-parser": ["yargs-parser@20.2.9", "", {}, "sha512-y11nGElTIV+CT3Zv9t7VKl+Q3hTQoT9a1Qzezhhl6Rp21gJ/IVTW7Z3y9EWXhuUBC2Shnf+DX0antecpAwSP8w=="],
"yargs-parser": ["yargs-parser@21.1.1", "", {}, "sha512-tVpsJW7DdjecAiFpbIB1e3qxIQsE6NoPc5/eTdrbbIC4h0LVsWhnoa3g+m2HclBIujHzsxZ4VJVA+GUuc2/LBw=="],
"yoctocolors": ["yoctocolors@2.1.2", "", {}, "sha512-CzhO+pFNo8ajLM2d2IW/R93ipy99LWjtwblvC1RsoSUMZgyLbYFr221TnSNT7GjGdYui6P459mw9JH/g/zW2ug=="],
@@ -891,8 +959,6 @@
"zod-to-json-schema": ["zod-to-json-schema@3.25.2", "", { "peerDependencies": { "zod": "^3.25.28 || ^4" } }, "sha512-O/PgfnpT1xKSDeQYSCfRI5Gy3hPf91mKVDuYLUHZJMiDFptvP41MSnWofm8dnCm0256ZNfZIM7DSzuSMAFnjHA=="],
"@anthropic-ai/sandbox-runtime/lodash-es": ["lodash-es@4.17.23", "", {}, "sha512-kVI48u3PZr38HdYz98UmfPnXl2DXrpdctLrFLCd3kOx1xUkOmpFPx7gCWWM5MPkL/fD8zb+Ph0QzjGFs4+hHWg=="],
"@aws-crypto/crc32/@aws-crypto/util": ["@aws-crypto/util@5.2.0", "", { "dependencies": { "@aws-sdk/types": "^3.222.0", "@smithy/util-utf8": "^2.0.0", "tslib": "^2.6.2" } }, "sha512-4RkU9EsI6ZpBve5fseQlGNUWKMa1RLPQ1dnjnQoe07ldfIzcsGb5hC5W0Dm7u423KWzawlrpbjXBrXCEv9zazQ=="],
"@aws-crypto/crc32/tslib": ["tslib@2.8.1", "", {}, "sha512-oJFu94HQb+KVduSUQL7wnpmqnfmLsOA/nAh6b6EH0wCEoK0/mPeXU6c3wKDV83MkOuHPRHtSXKKU99IBazS/2w=="],
@@ -1085,8 +1151,6 @@
"@emnapi/runtime/tslib": ["tslib@2.8.1", "", {}, "sha512-oJFu94HQb+KVduSUQL7wnpmqnfmLsOA/nAh6b6EH0wCEoK0/mPeXU6c3wKDV83MkOuHPRHtSXKKU99IBazS/2w=="],
"@grpc/proto-loader/yargs": ["yargs@17.7.2", "", { "dependencies": { "cliui": "^8.0.1", "escalade": "^3.1.1", "get-caller-file": "^2.0.5", "require-directory": "^2.1.1", "string-width": "^4.2.3", "y18n": "^5.0.5", "yargs-parser": "^21.1.1" } }, "sha512-7dSzzRQ++CKnNI/krKnYRV7JKKPUXMEh61soaHKg9mrWEhzFWhFnxPxGl+69cD1Ou63C13NUPCnmIcrvqCuM6w=="],
"@opentelemetry/exporter-trace-otlp-grpc/@opentelemetry/core": ["@opentelemetry/core@1.30.1", "", { "dependencies": { "@opentelemetry/semantic-conventions": "1.28.0" }, "peerDependencies": { "@opentelemetry/api": ">=1.0.0 <1.10.0" } }, "sha512-OOCM2C/QIURhJMuKaekP3TRBxBKxG/TWWA0TL2J6nXUtDnuCtccy49LUJF8xPFXMX+0LMcxFpCo8M9cGY1W6rQ=="],
"@opentelemetry/exporter-trace-otlp-grpc/@opentelemetry/otlp-exporter-base": ["@opentelemetry/otlp-exporter-base@0.57.2", "", { "dependencies": { "@opentelemetry/core": "1.30.1", "@opentelemetry/otlp-transformer": "0.57.2" }, "peerDependencies": { "@opentelemetry/api": "^1.3.0" } }, "sha512-XdxEzL23Urhidyebg5E6jZoaiW5ygP/mRjxLHixogbqwDy2Faduzb5N0o/Oi+XTIJu+iyxXdVORjXax+Qgfxag=="],
@@ -1305,6 +1369,8 @@
"cli-highlight/chalk": ["chalk@4.1.2", "", { "dependencies": { "ansi-styles": "^4.1.0", "supports-color": "^7.1.0" } }, "sha512-oKnbhFyRIXpUuez8iBMmyEa4nbj4IOQyuhc/wy9kY7/WVPcwIO9VA668Pu8RkO7+0G76SLROeyw9CpQ061i4mA=="],
"cli-highlight/yargs": ["yargs@16.2.0", "", { "dependencies": { "cliui": "^7.0.2", "escalade": "^3.1.1", "get-caller-file": "^2.0.5", "require-directory": "^2.1.1", "string-width": "^4.2.0", "y18n": "^5.0.5", "yargs-parser": "^20.2.2" } }, "sha512-D1mvvtDG0L5ft/jGWkLpG1+m0eQxOfaBvTNELraWj22wSVUMWxZUvYgJYcKh6jGGIkJFhH4IZPQhR4TKpc8mBw=="],
"cliui/string-width": ["string-width@4.2.3", "", { "dependencies": { "emoji-regex": "^8.0.0", "is-fullwidth-code-point": "^3.0.0", "strip-ansi": "^6.0.1" } }, "sha512-wKyQRQpjJ0sIp62ErSZdGsjMJWsap5oRNihHhu6G7JVO/9jIB6UyevL+tXuOqrng8j/cxKTWyWUwvSTriiZz/g=="],
"cliui/strip-ansi": ["strip-ansi@6.0.1", "", { "dependencies": { "ansi-regex": "^5.0.1" } }, "sha512-Y38VPSHcqkFrCpFnQ9vuSXmquuv5oXOKpGeT6aGrr3o3Gc9AlVa6JBfUSOCnbxGGZF+/0ooI7KrPuUSztUdU5A=="],
@@ -1359,12 +1425,6 @@
"@aws-sdk/nested-clients/@smithy/util-base64/@smithy/util-buffer-from": ["@smithy/util-buffer-from@4.2.2", "", { "dependencies": { "@smithy/is-array-buffer": "^4.2.2", "tslib": "^2.6.2" } }, "sha512-FDXD7cvUoFWwN6vtQfEta540Y/YBe5JneK3SoZg9bThSoOAC/eGeYEua6RkBgKjGa/sz6Y+DuBZj3+YEY21y4Q=="],
"@grpc/proto-loader/yargs/cliui": ["cliui@8.0.1", "", { "dependencies": { "string-width": "^4.2.0", "strip-ansi": "^6.0.1", "wrap-ansi": "^7.0.0" } }, "sha512-BSeNnyus75C4//NQ9gQt1/csTXyo/8Sb+afLAkzAptFuMsod9HFokGNudZpi/oQV73hnVK+sR+5PVRMd+Dr7YQ=="],
"@grpc/proto-loader/yargs/string-width": ["string-width@4.2.3", "", { "dependencies": { "emoji-regex": "^8.0.0", "is-fullwidth-code-point": "^3.0.0", "strip-ansi": "^6.0.1" } }, "sha512-wKyQRQpjJ0sIp62ErSZdGsjMJWsap5oRNihHhu6G7JVO/9jIB6UyevL+tXuOqrng8j/cxKTWyWUwvSTriiZz/g=="],
"@grpc/proto-loader/yargs/yargs-parser": ["yargs-parser@21.1.1", "", {}, "sha512-tVpsJW7DdjecAiFpbIB1e3qxIQsE6NoPc5/eTdrbbIC4h0LVsWhnoa3g+m2HclBIujHzsxZ4VJVA+GUuc2/LBw=="],
"@opentelemetry/exporter-trace-otlp-grpc/@opentelemetry/core/@opentelemetry/semantic-conventions": ["@opentelemetry/semantic-conventions@1.28.0", "", {}, "sha512-lp4qAiMTD4sNWW4DbKLBkfiMZ4jbAboJIGOQr5DvciMRI494OapieI9qiODpOt0XBr1LjIDy1xAGAnVs5supTA=="],
"@opentelemetry/exporter-trace-otlp-grpc/@opentelemetry/otlp-transformer/@opentelemetry/api-logs": ["@opentelemetry/api-logs@0.57.2", "", { "dependencies": { "@opentelemetry/api": "^1.3.0" } }, "sha512-uIX52NnTM0iBh84MShlpouI7UKqkZ7MrUszTmaypHBu4r7NofznSnQRfJ+uUeDtQDj6w8eFGg5KBLDAwAPz1+A=="],
@@ -1431,6 +1491,12 @@
"cli-highlight/chalk/ansi-styles": ["ansi-styles@4.3.0", "", { "dependencies": { "color-convert": "^2.0.1" } }, "sha512-zbB9rCJAT1rbjiVDb2hqKFHNYLxgtk8NURxZ3IZwD3F6NtxbXZQCnnSi1Lkx+IDohdPlFp222wVALIheZJQSEg=="],
"cli-highlight/yargs/cliui": ["cliui@7.0.4", "", { "dependencies": { "string-width": "^4.2.0", "strip-ansi": "^6.0.0", "wrap-ansi": "^7.0.0" } }, "sha512-OcRE68cOsVMXp1Yvonl/fzkQOyjLSu/8bhPDfQt0e0/Eb283TKP20Fs2MqoPsr9SwA595rRCA+QMzYc9nBP+JQ=="],
"cli-highlight/yargs/string-width": ["string-width@4.2.3", "", { "dependencies": { "emoji-regex": "^8.0.0", "is-fullwidth-code-point": "^3.0.0", "strip-ansi": "^6.0.1" } }, "sha512-wKyQRQpjJ0sIp62ErSZdGsjMJWsap5oRNihHhu6G7JVO/9jIB6UyevL+tXuOqrng8j/cxKTWyWUwvSTriiZz/g=="],
"cli-highlight/yargs/yargs-parser": ["yargs-parser@20.2.9", "", {}, "sha512-y11nGElTIV+CT3Zv9t7VKl+Q3hTQoT9a1Qzezhhl6Rp21gJ/IVTW7Z3y9EWXhuUBC2Shnf+DX0antecpAwSP8w=="],
"cliui/string-width/emoji-regex": ["emoji-regex@8.0.0", "", {}, "sha512-MSjYzcWNOA0ewAHpz0MxpYFvwg6yjy1NG3xteoqz644VCo/RPgnr1/GGt+ic3iJTzQ8Eu3TdM14SawnVUmGE6A=="],
"cliui/string-width/is-fullwidth-code-point": ["is-fullwidth-code-point@3.0.0", "", {}, "sha512-zymm5+u+sCsSWyD9qNaejV3DFvhCKclKdizYaJUuHA83RLjb7nSuGnddCHGv0hk+KY7BMAlsWeK4Ueg6EV6XQg=="],
@@ -1471,16 +1537,6 @@
"@aws-sdk/nested-clients/@smithy/util-base64/@smithy/util-buffer-from/@smithy/is-array-buffer": ["@smithy/is-array-buffer@4.2.2", "", { "dependencies": { "tslib": "^2.6.2" } }, "sha512-n6rQ4N8Jj4YTQO3YFrlgZuwKodf4zUFs7EJIWH86pSCWBaAtAGBFfCM7Wx6D2bBJ2xqFNxGBSrUWswT3M0VJow=="],
"@grpc/proto-loader/yargs/cliui/strip-ansi": ["strip-ansi@6.0.1", "", { "dependencies": { "ansi-regex": "^5.0.1" } }, "sha512-Y38VPSHcqkFrCpFnQ9vuSXmquuv5oXOKpGeT6aGrr3o3Gc9AlVa6JBfUSOCnbxGGZF+/0ooI7KrPuUSztUdU5A=="],
"@grpc/proto-loader/yargs/cliui/wrap-ansi": ["wrap-ansi@7.0.0", "", { "dependencies": { "ansi-styles": "^4.0.0", "string-width": "^4.1.0", "strip-ansi": "^6.0.0" } }, "sha512-YVGIj2kamLSTxw6NsZjoBxfSwsn0ycdesmc4p+Q21c5zPuZ1pl+NfxVdxPtdHvmNVOQ6XSYG4AUtyt/Fi7D16Q=="],
"@grpc/proto-loader/yargs/string-width/emoji-regex": ["emoji-regex@8.0.0", "", {}, "sha512-MSjYzcWNOA0ewAHpz0MxpYFvwg6yjy1NG3xteoqz644VCo/RPgnr1/GGt+ic3iJTzQ8Eu3TdM14SawnVUmGE6A=="],
"@grpc/proto-loader/yargs/string-width/is-fullwidth-code-point": ["is-fullwidth-code-point@3.0.0", "", {}, "sha512-zymm5+u+sCsSWyD9qNaejV3DFvhCKclKdizYaJUuHA83RLjb7nSuGnddCHGv0hk+KY7BMAlsWeK4Ueg6EV6XQg=="],
"@grpc/proto-loader/yargs/string-width/strip-ansi": ["strip-ansi@6.0.1", "", { "dependencies": { "ansi-regex": "^5.0.1" } }, "sha512-Y38VPSHcqkFrCpFnQ9vuSXmquuv5oXOKpGeT6aGrr3o3Gc9AlVa6JBfUSOCnbxGGZF+/0ooI7KrPuUSztUdU5A=="],
"@opentelemetry/otlp-grpc-exporter-base/@opentelemetry/otlp-transformer/@opentelemetry/resources/@opentelemetry/semantic-conventions": ["@opentelemetry/semantic-conventions@1.28.0", "", {}, "sha512-lp4qAiMTD4sNWW4DbKLBkfiMZ4jbAboJIGOQr5DvciMRI494OapieI9qiODpOt0XBr1LjIDy1xAGAnVs5supTA=="],
"@opentelemetry/otlp-grpc-exporter-base/@opentelemetry/otlp-transformer/@opentelemetry/sdk-trace-base/@opentelemetry/semantic-conventions": ["@opentelemetry/semantic-conventions@1.28.0", "", {}, "sha512-lp4qAiMTD4sNWW4DbKLBkfiMZ4jbAboJIGOQr5DvciMRI494OapieI9qiODpOt0XBr1LjIDy1xAGAnVs5supTA=="],
@@ -1501,6 +1557,16 @@
"@smithy/smithy-client/@smithy/util-stream/@smithy/node-http-handler/@smithy/querystring-builder": ["@smithy/querystring-builder@2.2.0", "", { "dependencies": { "@smithy/types": "^2.12.0", "@smithy/util-uri-escape": "^2.2.0", "tslib": "^2.6.2" } }, "sha512-L1kSeviUWL+emq3CUVSgdogoM/D9QMFaqxL/dd0X7PCNWmPXqt+ExtrBjqT0V7HLN03Vs9SuiLrG3zy3JGnE5A=="],
"cli-highlight/yargs/cliui/strip-ansi": ["strip-ansi@6.0.1", "", { "dependencies": { "ansi-regex": "^5.0.1" } }, "sha512-Y38VPSHcqkFrCpFnQ9vuSXmquuv5oXOKpGeT6aGrr3o3Gc9AlVa6JBfUSOCnbxGGZF+/0ooI7KrPuUSztUdU5A=="],
"cli-highlight/yargs/cliui/wrap-ansi": ["wrap-ansi@7.0.0", "", { "dependencies": { "ansi-styles": "^4.0.0", "string-width": "^4.1.0", "strip-ansi": "^6.0.0" } }, "sha512-YVGIj2kamLSTxw6NsZjoBxfSwsn0ycdesmc4p+Q21c5zPuZ1pl+NfxVdxPtdHvmNVOQ6XSYG4AUtyt/Fi7D16Q=="],
"cli-highlight/yargs/string-width/emoji-regex": ["emoji-regex@8.0.0", "", {}, "sha512-MSjYzcWNOA0ewAHpz0MxpYFvwg6yjy1NG3xteoqz644VCo/RPgnr1/GGt+ic3iJTzQ8Eu3TdM14SawnVUmGE6A=="],
"cli-highlight/yargs/string-width/is-fullwidth-code-point": ["is-fullwidth-code-point@3.0.0", "", {}, "sha512-zymm5+u+sCsSWyD9qNaejV3DFvhCKclKdizYaJUuHA83RLjb7nSuGnddCHGv0hk+KY7BMAlsWeK4Ueg6EV6XQg=="],
"cli-highlight/yargs/string-width/strip-ansi": ["strip-ansi@6.0.1", "", { "dependencies": { "ansi-regex": "^5.0.1" } }, "sha512-Y38VPSHcqkFrCpFnQ9vuSXmquuv5oXOKpGeT6aGrr3o3Gc9AlVa6JBfUSOCnbxGGZF+/0ooI7KrPuUSztUdU5A=="],
"qrcode/yargs/cliui/strip-ansi": ["strip-ansi@6.0.1", "", { "dependencies": { "ansi-regex": "^5.0.1" } }, "sha512-Y38VPSHcqkFrCpFnQ9vuSXmquuv5oXOKpGeT6aGrr3o3Gc9AlVa6JBfUSOCnbxGGZF+/0ooI7KrPuUSztUdU5A=="],
"qrcode/yargs/cliui/wrap-ansi": ["wrap-ansi@6.2.0", "", { "dependencies": { "ansi-styles": "^4.0.0", "string-width": "^4.1.0", "strip-ansi": "^6.0.0" } }, "sha512-r6lPcBGxZXlIcymEu7InxDMhdW0KDxpLgoFLcguasxCaJ/SOIZwINatK9KY/tf+ZrlywOKU0UDj3ATXUBfxJXA=="],
@@ -1513,16 +1579,16 @@
"yargs/string-width/strip-ansi/ansi-regex": ["ansi-regex@5.0.1", "", {}, "sha512-quJQXlTSUGL2LH9SUXo8VwsY4soanhgo6LNSm84E1LBcE8s3O0wpdiRzyR9z/ZZJMlMWv37qOOb9pdJlMUEKFQ=="],
"@grpc/proto-loader/yargs/cliui/strip-ansi/ansi-regex": ["ansi-regex@5.0.1", "", {}, "sha512-quJQXlTSUGL2LH9SUXo8VwsY4soanhgo6LNSm84E1LBcE8s3O0wpdiRzyR9z/ZZJMlMWv37qOOb9pdJlMUEKFQ=="],
"@grpc/proto-loader/yargs/cliui/wrap-ansi/ansi-styles": ["ansi-styles@4.3.0", "", { "dependencies": { "color-convert": "^2.0.1" } }, "sha512-zbB9rCJAT1rbjiVDb2hqKFHNYLxgtk8NURxZ3IZwD3F6NtxbXZQCnnSi1Lkx+IDohdPlFp222wVALIheZJQSEg=="],
"@grpc/proto-loader/yargs/string-width/strip-ansi/ansi-regex": ["ansi-regex@5.0.1", "", {}, "sha512-quJQXlTSUGL2LH9SUXo8VwsY4soanhgo6LNSm84E1LBcE8s3O0wpdiRzyR9z/ZZJMlMWv37qOOb9pdJlMUEKFQ=="],
"@smithy/smithy-client/@smithy/util-stream/@smithy/fetch-http-handler/@smithy/querystring-builder/@smithy/util-uri-escape": ["@smithy/util-uri-escape@2.2.0", "", { "dependencies": { "tslib": "^2.6.2" } }, "sha512-jtmJMyt1xMD/d8OtbVJ2gFZOSKc+ueYJZPW20ULW1GOp/q/YIM0wNh+u8ZFao9UaIGz4WoPW8hC64qlWLIfoDA=="],
"@smithy/smithy-client/@smithy/util-stream/@smithy/node-http-handler/@smithy/querystring-builder/@smithy/util-uri-escape": ["@smithy/util-uri-escape@2.2.0", "", { "dependencies": { "tslib": "^2.6.2" } }, "sha512-jtmJMyt1xMD/d8OtbVJ2gFZOSKc+ueYJZPW20ULW1GOp/q/YIM0wNh+u8ZFao9UaIGz4WoPW8hC64qlWLIfoDA=="],
"cli-highlight/yargs/cliui/strip-ansi/ansi-regex": ["ansi-regex@5.0.1", "", {}, "sha512-quJQXlTSUGL2LH9SUXo8VwsY4soanhgo6LNSm84E1LBcE8s3O0wpdiRzyR9z/ZZJMlMWv37qOOb9pdJlMUEKFQ=="],
"cli-highlight/yargs/cliui/wrap-ansi/ansi-styles": ["ansi-styles@4.3.0", "", { "dependencies": { "color-convert": "^2.0.1" } }, "sha512-zbB9rCJAT1rbjiVDb2hqKFHNYLxgtk8NURxZ3IZwD3F6NtxbXZQCnnSi1Lkx+IDohdPlFp222wVALIheZJQSEg=="],
"cli-highlight/yargs/string-width/strip-ansi/ansi-regex": ["ansi-regex@5.0.1", "", {}, "sha512-quJQXlTSUGL2LH9SUXo8VwsY4soanhgo6LNSm84E1LBcE8s3O0wpdiRzyR9z/ZZJMlMWv37qOOb9pdJlMUEKFQ=="],
"qrcode/yargs/cliui/strip-ansi/ansi-regex": ["ansi-regex@5.0.1", "", {}, "sha512-quJQXlTSUGL2LH9SUXo8VwsY4soanhgo6LNSm84E1LBcE8s3O0wpdiRzyR9z/ZZJMlMWv37qOOb9pdJlMUEKFQ=="],
"qrcode/yargs/cliui/wrap-ansi/ansi-styles": ["ansi-styles@4.3.0", "", { "dependencies": { "color-convert": "^2.0.1" } }, "sha512-zbB9rCJAT1rbjiVDb2hqKFHNYLxgtk8NURxZ3IZwD3F6NtxbXZQCnnSi1Lkx+IDohdPlFp222wVALIheZJQSEg=="],

144
docs/litellm-setup.md

@@ -0,0 +1,144 @@
# LiteLLM Setup
OpenClaude can connect to LiteLLM through LiteLLM's OpenAI-compatible proxy.
## Overview
LiteLLM is an open-source LLM gateway that provides a unified API to 100+ model providers. By running the LiteLLM Proxy, you can route OpenClaude requests through LiteLLM to access any of its supported providers — all while using OpenClaude's existing OpenAI-compatible provider path.
## Prerequisites
- LiteLLM installed (`pip install litellm[proxy]`)
- A `litellm_config.yaml` or equivalent LiteLLM configuration
- LiteLLM Proxy running on a local or remote port
## 1. Start the LiteLLM Proxy
### Basic installation
```bash
pip install litellm[proxy]
```
### Configure LiteLLM
Create a `litellm_config.yaml` with your desired model aliases:
```yaml
model_list:
  - model_name: gpt-4o
    litellm_params:
      model: openai/gpt-4o
      api_key: os.environ/OPENAI_API_KEY
  - model_name: claude-sonnet-4
    litellm_params:
      model: anthropic/claude-sonnet-4-5-20250929
      api_key: os.environ/ANTHROPIC_API_KEY
  - model_name: gemini-2.5-flash
    litellm_params:
      model: gemini/gemini-2.5-flash
      api_key: os.environ/GEMINI_API_KEY
  - model_name: llama-3.3-70b
    litellm_params:
      model: together_ai/meta-llama/Llama-3.3-70B-Instruct-Turbo
      api_key: os.environ/TOGETHER_API_KEY
```
### Run the proxy
```bash
litellm --config litellm_config.yaml --port 4000
```
The proxy will start at `http://localhost:4000` by default.
## 2. Point OpenClaude to LiteLLM
### Option A: Environment Variables
```bash
export CLAUDE_CODE_USE_OPENAI=1
export OPENAI_BASE_URL=http://localhost:4000
export OPENAI_API_KEY=<your-master-key-or-placeholder>
export OPENAI_MODEL=<your-litellm-model-alias>
openclaude
```
Replace `<your-litellm-model-alias>` with a model name from your `litellm_config.yaml` (e.g., `gpt-4o`, `claude-sonnet-4`, `gemini-2.5-flash`).
### Option B: Using /provider
1. Run `openclaude`
2. Type `/provider` to open the provider setup flow
3. Choose the **OpenAI-compatible** option
4. When prompted for the API key, enter the key required by your LiteLLM proxy
   - If your local LiteLLM setup does not enforce auth, you may still need to enter a placeholder value
5. When prompted for the base URL, enter `http://localhost:4000`
6. When prompted for the model, enter the LiteLLM model name or alias you configured
7. Save the provider configuration
## 3. Example LiteLLM Configs
### Multi-provider routing with spend tracking
```yaml
model_list:
  - model_name: gpt-4o
    litellm_params:
      model: openai/gpt-4o
      api_key: os.environ/OPENAI_API_KEY
  - model_name: claude-sonnet-4
    litellm_params:
      model: anthropic/claude-sonnet-4-5-20250929
      api_key: os.environ/ANTHROPIC_API_KEY
  - model_name: deepseek-chat
    litellm_params:
      model: deepseek/deepseek-chat
      api_key: os.environ/DEEPSEEK_API_KEY

litellm_settings:
  set_verbose: false
  num_retries: 3
```
### With a master key for auth
```bash
# Start proxy with a master key
litellm --config litellm_config.yaml --port 4000 --master_key sk-my-master-key
# Connect OpenClaude
export CLAUDE_CODE_USE_OPENAI=1
export OPENAI_BASE_URL=http://localhost:4000
export OPENAI_API_KEY=sk-my-master-key
export OPENAI_MODEL=gpt-4o
openclaude
```
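To verify the key is actually enforced, query the proxy with and without the header; the unauthenticated request should be rejected (a quick check, assuming the default routes):
```bash
# Succeeds with the master key
curl http://localhost:4000/v1/models \
  -H "Authorization: Bearer sk-my-master-key"

# Should now be rejected without it
curl http://localhost:4000/v1/models
```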
## 4. Notes
- `OPENAI_MODEL` must match the **LiteLLM model alias** defined in your config, not the upstream raw provider model name.
- If your proxy requires authentication, use the proxy key (or `master_key`) in `OPENAI_API_KEY`.
- LiteLLM's OpenAI-compatible endpoint accepts the same request format as OpenAI, so OpenClaude works without any code changes.
- You can switch between any provider configured in LiteLLM by simply changing the `OPENAI_MODEL` value — no need to reconfigure OpenClaude.
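For example, moving the same setup from GPT-4o to the Claude alias defined earlier takes a single change (assuming both aliases exist in your `litellm_config.yaml`):
```bash
# Same proxy, different upstream provider: only the alias changes
export OPENAI_MODEL=claude-sonnet-4
openclaude
```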
## 5. Troubleshooting
| Issue | Likely Cause | Fix |
|-------|--------------|-----|
| 404 or Model Not Found | Model alias doesn't exist in LiteLLM config | Verify the `model_name` in `litellm_config.yaml` matches `OPENAI_MODEL` |
| Connection Refused | LiteLLM proxy isn't running | Start the proxy with `litellm --config litellm_config.yaml --port 4000` |
| Auth Failed | Missing or wrong `master_key` | Set the correct key in `OPENAI_API_KEY` |
| Upstream provider error | The backend provider key is missing or invalid | Ensure the upstream API key (e.g., `OPENAI_API_KEY`) is set in your LiteLLM proxy process environment |
| Tools fail but chat works | The selected model has weak function/tool calling support | Switch to a model with strong tool support (e.g., GPT-4o, Claude Sonnet) |
## 6. Resources
- [LiteLLM Proxy Docs](https://docs.litellm.ai/docs/proxy/quick_start)
- [LiteLLM Provider List](https://docs.litellm.ai/docs/providers)
- [LiteLLM OpenAI-Compatible Endpoints](https://docs.litellm.ai/docs/proxy/openai_compatible_proxy)

View File

@@ -1,6 +1,6 @@
{
"name": "@gitlawb/openclaude",
"version": "0.1.7",
"version": "0.1.8",
"description": "Claude Code opened to any LLM — OpenAI, Gemini, DeepSeek, Ollama, and 200+ models",
"type": "module",
"bin": {
@@ -30,6 +30,8 @@
"profile:code": "bun run profile:init -- --provider ollama --model qwen2.5-coder:7b",
"dev:fast": "bun run profile:fast && bun run dev:ollama:fast",
"dev:code": "bun run profile:code && bun run dev:profile",
"dev:grpc": "bun run scripts/start-grpc.ts",
"dev:grpc:cli": "bun run scripts/grpc-cli.ts",
"start": "node dist/cli.mjs",
"test": "bun test",
"test:coverage": "bun test --coverage --coverage-reporter=lcov --coverage-dir=coverage --max-concurrency=1 && bun run scripts/render-coverage-heatmap.ts",
@@ -57,6 +59,8 @@
"@anthropic-ai/vertex-sdk": "0.14.4",
"@commander-js/extra-typings": "12.1.0",
"@growthbook/growthbook": "1.6.5",
"@grpc/grpc-js": "^1.14.3",
"@grpc/proto-loader": "^0.8.0",
"@mendable/firecrawl-js": "4.18.1",
"@modelcontextprotocol/sdk": "1.29.0",
"@opentelemetry/api": "1.9.1",
@@ -95,7 +99,7 @@
"ignore": "7.0.5",
"indent-string": "5.0.0",
"jsonc-parser": "3.3.1",
"lodash-es": "4.18.0",
"lodash-es": "4.18.1",
"lru-cache": "11.2.7",
"marked": "15.0.12",
"p-map": "7.0.4",
@@ -128,6 +132,7 @@
"@types/bun": "1.3.11",
"@types/node": "25.5.0",
"@types/react": "19.2.14",
"tsx": "^4.21.0",
"typescript": "5.9.3"
},
"engines": {
@@ -150,5 +155,8 @@
"license": "SEE LICENSE FILE",
"publishConfig": {
"access": "public"
},
"overrides": {
"lodash-es": "4.18.1"
}
}

121
scripts/grpc-cli.ts Normal file
View File

@@ -0,0 +1,121 @@
import * as grpc from '@grpc/grpc-js'
import * as protoLoader from '@grpc/proto-loader'
import path from 'path'
import * as readline from 'readline'
const PROTO_PATH = path.resolve(import.meta.dirname, '../src/proto/openclaude.proto')
const packageDefinition = protoLoader.loadSync(PROTO_PATH, {
keepCase: true,
longs: String,
enums: String,
defaults: true,
oneofs: true,
})
const protoDescriptor = grpc.loadPackageDefinition(packageDefinition) as any
const openclaudeProto = protoDescriptor.openclaude.v1
const rl = readline.createInterface({
input: process.stdin,
output: process.stdout
})
function askQuestion(query: string): Promise<string> {
return new Promise(resolve => {
rl.question(query, resolve)
})
}
async function main() {
const host = process.env.GRPC_HOST || 'localhost'
const port = process.env.GRPC_PORT || '50051'
const client = new openclaudeProto.AgentService(
`${host}:${port}`,
grpc.credentials.createInsecure()
)
let call: grpc.ClientDuplexStream<any, any> | null = null
const startStream = () => {
call = client.Chat()
let textStreamed = false
call.on('data', async (serverMessage: any) => {
if (serverMessage.text_chunk) {
process.stdout.write(serverMessage.text_chunk.text)
textStreamed = true
} else if (serverMessage.tool_start) {
console.log(`\n\x1b[36m[Tool Call]\x1b[0m \x1b[1m${serverMessage.tool_start.tool_name}\x1b[0m`)
console.log(`\x1b[90m${serverMessage.tool_start.arguments_json}\x1b[0m\n`)
} else if (serverMessage.tool_result) {
console.log(`\n\x1b[32m[Tool Result]\x1b[0m \x1b[1m${serverMessage.tool_result.tool_name}\x1b[0m`)
const out = serverMessage.tool_result.output
if (out.length > 500) {
console.log(`\x1b[90m${out.substring(0, 500)}...\n(Output truncated, total length: ${out.length})\x1b[0m`)
} else {
console.log(`\x1b[90m${out}\x1b[0m`)
}
} else if (serverMessage.action_required) {
const action = serverMessage.action_required
console.log(`\n\x1b[33m[Action Required]\x1b[0m`)
const reply = await askQuestion(`\x1b[1m${action.question}\x1b[0m (y/n) > `)
call?.write({
input: {
prompt_id: action.prompt_id,
reply: reply.trim()
}
})
} else if (serverMessage.done) {
if (!textStreamed && serverMessage.done.full_text) {
process.stdout.write(serverMessage.done.full_text)
}
textStreamed = false
console.log('\n\x1b[32m[Generation Complete]\x1b[0m')
promptUser()
} else if (serverMessage.error) {
console.error(`\n\x1b[31m[Server Error]\x1b[0m ${serverMessage.error.message}`)
promptUser()
}
})
call.on('end', () => {
console.log('\n\x1b[90m[Stream closed by server]\x1b[0m')
// Don't prompt user here, let 'done' or 'error' handlers do it
})
call.on('error', (err: Error) => {
console.error('\n\x1b[31m[Stream Error]\x1b[0m', err.message)
promptUser()
})
}
const promptUser = async () => {
const message = await askQuestion('\n\x1b[35m> \x1b[0m')
if (message.trim().toLowerCase() === '/exit' || message.trim().toLowerCase() === '/quit') {
console.log('Bye!')
rl.close()
process.exit(0)
}
if (!call || call.destroyed) {
startStream()
}
call!.write({
request: {
session_id: 'cli-session-1',
message: message,
working_directory: process.cwd()
}
})
}
console.log('\x1b[32mOpenClaude gRPC CLI\x1b[0m')
console.log('\x1b[90mType /exit to quit.\x1b[0m')
promptUser()
}
main()

50
scripts/start-grpc.ts Normal file
View File

@@ -0,0 +1,50 @@
import { GrpcServer } from '../src/grpc/server.ts'
import { init } from '../src/entrypoints/init.ts'
// Polyfill MACRO which is normally injected by the bundler
Object.assign(globalThis, {
MACRO: {
VERSION: '0.1.7',
DISPLAY_VERSION: '0.1.7',
PACKAGE_URL: '@gitlawb/openclaude',
}
})
async function main() {
console.log('Starting OpenClaude gRPC Server...')
await init()
// Mirror CLI bootstrap: hydrate secure tokens and resolve provider profile
const { enableConfigs } = await import('../src/utils/config.js')
enableConfigs()
const { applySafeConfigEnvironmentVariables } = await import('../src/utils/managedEnv.js')
applySafeConfigEnvironmentVariables()
const { hydrateGeminiAccessTokenFromSecureStorage } = await import('../src/utils/geminiCredentials.js')
hydrateGeminiAccessTokenFromSecureStorage()
const { hydrateGithubModelsTokenFromSecureStorage } = await import('../src/utils/githubModelsCredentials.js')
hydrateGithubModelsTokenFromSecureStorage()
const { buildStartupEnvFromProfile, applyProfileEnvToProcessEnv } = await import('../src/utils/providerProfile.js')
const { getProviderValidationError, validateProviderEnvOrExit } = await import('../src/utils/providerValidation.js')
const startupEnv = await buildStartupEnvFromProfile({ processEnv: process.env })
if (startupEnv !== process.env) {
const startupProfileError = await getProviderValidationError(startupEnv)
if (startupProfileError) {
console.warn(`Warning: ignoring saved provider profile. ${startupProfileError}`)
} else {
applyProfileEnvToProcessEnv(process.env, startupEnv)
}
}
await validateProviderEnvOrExit()
const port = process.env.GRPC_PORT ? parseInt(process.env.GRPC_PORT, 10) : 50051
const host = process.env.GRPC_HOST || 'localhost'
const server = new GrpcServer()
server.start(port, host)
}
main().catch((err) => {
console.error('Fatal error starting gRPC server:', err)
process.exit(1)
})
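For local testing, these two scripts pair with the `dev:grpc` / `dev:grpc:cli` entries added to package.json above. A minimal sketch of the workflow, using the host/port defaults baked into the scripts:
```bash
# Terminal 1: boot the gRPC server (defaults to localhost:50051)
bun run dev:grpc

# Terminal 2: attach the interactive client
GRPC_HOST=localhost GRPC_PORT=50051 bun run dev:grpc:cli
```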

View File

@@ -0,0 +1,42 @@
import { afterEach, expect, mock, test } from 'bun:test'
const originalEnv = {
CLAUDE_CODE_USE_OPENAI: process.env.CLAUDE_CODE_USE_OPENAI,
OPENAI_BASE_URL: process.env.OPENAI_BASE_URL,
OPENAI_MODEL: process.env.OPENAI_MODEL,
}
afterEach(() => {
mock.restore()
process.env.CLAUDE_CODE_USE_OPENAI = originalEnv.CLAUDE_CODE_USE_OPENAI
process.env.OPENAI_BASE_URL = originalEnv.OPENAI_BASE_URL
process.env.OPENAI_MODEL = originalEnv.OPENAI_MODEL
})
test('opens the model picker without awaiting local model discovery refresh', async () => {
process.env.CLAUDE_CODE_USE_OPENAI = '1'
process.env.OPENAI_BASE_URL = 'http://127.0.0.1:8080/v1'
process.env.OPENAI_MODEL = 'qwen2.5-coder-7b-instruct'
let resolveDiscovery: (() => void) | undefined
const discoverOpenAICompatibleModelOptions = mock(
() =>
new Promise<void>(resolve => {
resolveDiscovery = resolve
}),
)
mock.module('../../utils/model/openaiModelDiscovery.js', () => ({
discoverOpenAICompatibleModelOptions,
}))
const { call } = await import(`./model.js?ts=${Date.now()}-${Math.random()}`)
const result = await Promise.race([
call(() => {}, {} as never, ''),
new Promise(resolve => setTimeout(() => resolve('timeout'), 50)),
])
resolveDiscovery?.()
expect(result).not.toBe('timeout')
})

View File

@@ -4,6 +4,7 @@ import * as React from 'react';
import type { CommandResultDisplay } from '../../commands.js';
import { ModelPicker } from '../../components/ModelPicker.js';
import { COMMON_HELP_ARGS, COMMON_INFO_ARGS } from '../../constants/xml.js';
import { fetchBootstrapData } from '../../services/api/bootstrap.js';
import { type AnalyticsMetadata_I_VERIFIED_THIS_IS_NOT_CODE_OR_FILEPATHS, logEvent } from '../../services/analytics/index.js';
import { useAppState, useSetAppState } from '../../state/AppState.js';
import type { LocalJSXCommandCall } from '../../types/command.js';
@@ -19,6 +20,7 @@ import { getActiveOpenAIModelOptionsCache, setActiveOpenAIModelOptionsCache } fr
import { getDefaultMainLoopModelSetting, isOpus1mMergeEnabled, renderDefaultModelSetting } from '../../utils/model/model.js';
import { isModelAllowed } from '../../utils/model/modelAllowlist.js';
import { validateModel } from '../../utils/model/validateModel.js';
import { getAdditionalModelOptionsCacheScope } from '../../services/api/providerConfig.js';
function ModelPickerWrapper(t0) {
const $ = _c(17);
const {
@@ -319,7 +321,9 @@ export const call: LocalJSXCommandCall = async (onDone, _context, args) => {
});
return <SetModelAndClose args={args} onDone={onDone} />;
}
await refreshOpenAIModelOptionsCache();
if (getAdditionalModelOptionsCacheScope()?.startsWith('openai:')) {
void refreshOpenAIModelOptionsCache();
}
return <ModelPickerWrapper onDone={onDone} />;
};
function renderModelLabel(model: string | null): string {

View File

@@ -2,6 +2,7 @@ import type { Command } from '../../commands.js'
const onboardGithub: Command = {
name: 'onboard-github',
aliases: ['onboarding-github', 'onboardgithub', 'onboardinggithub'],
description:
'Interactive setup for GitHub Models: device login or PAT, saved to secure storage',
type: 'local-jsx',

View File

@@ -0,0 +1,148 @@
import { describe, expect, test } from 'bun:test'
import {
activateGithubOnboardingMode,
applyGithubOnboardingProcessEnv,
buildGithubOnboardingSettingsEnv,
hasExistingGithubModelsLoginToken,
shouldForceGithubRelogin,
} from './onboard-github.js'
describe('shouldForceGithubRelogin', () => {
test.each(['force', '--force', 'relogin', '--relogin', 'reauth', '--reauth'])(
'treats %s as force re-login',
arg => {
expect(shouldForceGithubRelogin(arg)).toBe(true)
},
)
test('returns false for empty or unknown args', () => {
expect(shouldForceGithubRelogin('')).toBe(false)
expect(shouldForceGithubRelogin(undefined)).toBe(false)
expect(shouldForceGithubRelogin('something-else')).toBe(false)
})
test('treats force flags as present in multi-word args', () => {
expect(shouldForceGithubRelogin('--force extra')).toBe(true)
expect(shouldForceGithubRelogin('foo --relogin bar')).toBe(true)
expect(shouldForceGithubRelogin('abc reauth xyz')).toBe(true)
})
})
describe('hasExistingGithubModelsLoginToken', () => {
test('returns true when GITHUB_TOKEN is present', () => {
expect(
hasExistingGithubModelsLoginToken({ GITHUB_TOKEN: 'token' }, ''),
).toBe(true)
})
test('returns true when GH_TOKEN is present', () => {
expect(
hasExistingGithubModelsLoginToken({ GH_TOKEN: 'token' }, ''),
).toBe(true)
})
test('returns true when stored token exists', () => {
expect(hasExistingGithubModelsLoginToken({}, 'stored-token')).toBe(true)
})
test('returns false when both env and stored token are missing', () => {
expect(hasExistingGithubModelsLoginToken({}, '')).toBe(false)
})
})
describe('onboarding auth precedence cleanup', () => {
test('clears preexisting OpenAI auth when switching to GitHub', () => {
const env: NodeJS.ProcessEnv = {
CLAUDE_CODE_USE_OPENAI: '1',
OPENAI_MODEL: 'gpt-4o',
OPENAI_API_KEY: 'sk-stale-openai-key',
OPENAI_ORG: 'org-old',
OPENAI_PROJECT: 'project-old',
OPENAI_ORGANIZATION: 'org-legacy',
OPENAI_BASE_URL: 'https://api.openai.com/v1',
OPENAI_API_BASE: 'https://api.openai.com/v1',
CLAUDE_CODE_PROVIDER_PROFILE_ENV_APPLIED: '1',
CLAUDE_CODE_PROVIDER_PROFILE_ENV_APPLIED_ID: 'profile_old',
}
applyGithubOnboardingProcessEnv('github:copilot', env)
expect(env.CLAUDE_CODE_USE_GITHUB).toBe('1')
expect(env.OPENAI_MODEL).toBe('github:copilot')
expect(env.OPENAI_API_KEY).toBeUndefined()
expect(env.OPENAI_ORG).toBeUndefined()
expect(env.OPENAI_PROJECT).toBeUndefined()
expect(env.OPENAI_ORGANIZATION).toBeUndefined()
expect(env.OPENAI_BASE_URL).toBeUndefined()
expect(env.OPENAI_API_BASE).toBeUndefined()
expect(env.CLAUDE_CODE_USE_OPENAI).toBeUndefined()
expect(env.CLAUDE_CODE_PROVIDER_PROFILE_ENV_APPLIED).toBeUndefined()
expect(env.CLAUDE_CODE_PROVIDER_PROFILE_ENV_APPLIED_ID).toBeUndefined()
const settingsEnv = buildGithubOnboardingSettingsEnv('github:copilot')
expect(settingsEnv.CLAUDE_CODE_USE_GITHUB).toBe('1')
expect(settingsEnv.OPENAI_MODEL).toBe('github:copilot')
expect(settingsEnv.OPENAI_API_KEY).toBeUndefined()
expect(settingsEnv.OPENAI_ORG).toBeUndefined()
expect(settingsEnv.OPENAI_PROJECT).toBeUndefined()
expect(settingsEnv.OPENAI_ORGANIZATION).toBeUndefined()
})
})
describe('activateGithubOnboardingMode', () => {
test('activates settings/env/hydration in order when merge succeeds', () => {
const calls: string[] = []
const result = activateGithubOnboardingMode(' github:copilot ', {
mergeSettingsEnv: model => {
calls.push(`merge:${model}`)
return { ok: true }
},
applyProcessEnv: model => {
calls.push(`apply:${model}`)
},
hydrateToken: () => {
calls.push('hydrate')
},
onChangeAPIKey: () => {
calls.push('onChangeAPIKey')
},
})
expect(result).toEqual({ ok: true })
expect(calls).toEqual([
'merge:github:copilot',
'apply:github:copilot',
'hydrate',
'onChangeAPIKey',
])
})
test('stops activation when settings merge fails', () => {
const calls: string[] = []
const result = activateGithubOnboardingMode(DEFAULT_MODEL_FOR_TESTS, {
mergeSettingsEnv: () => {
calls.push('merge')
return { ok: false, detail: 'settings write failed' }
},
applyProcessEnv: () => {
calls.push('apply')
},
hydrateToken: () => {
calls.push('hydrate')
},
onChangeAPIKey: () => {
calls.push('onChangeAPIKey')
},
})
expect(result).toEqual({ ok: false, detail: 'settings write failed' })
expect(calls).toEqual(['merge'])
})
})
const DEFAULT_MODEL_FOR_TESTS = 'github:copilot'

View File

@@ -12,11 +12,20 @@ import {
import type { LocalJSXCommandCall } from '../../types/command.js'
import {
hydrateGithubModelsTokenFromSecureStorage,
readGithubModelsToken,
saveGithubModelsToken,
} from '../../utils/githubModelsCredentials.js'
import { updateSettingsForSource } from '../../utils/settings/settings.js'
const DEFAULT_MODEL = 'github:copilot'
const FORCE_RELOGIN_ARGS = new Set([
'force',
'--force',
'relogin',
'--relogin',
'reauth',
'--reauth',
])
type Step =
| 'menu'
@@ -24,17 +33,72 @@ type Step =
| 'pat'
| 'error'
export function shouldForceGithubRelogin(args?: string): boolean {
const normalized = (args ?? '').trim().toLowerCase()
if (!normalized) {
return false
}
return normalized.split(/\s+/).some(arg => FORCE_RELOGIN_ARGS.has(arg))
}
export function hasExistingGithubModelsLoginToken(
env: NodeJS.ProcessEnv = process.env,
storedToken?: string,
): boolean {
const envToken = env.GITHUB_TOKEN?.trim() || env.GH_TOKEN?.trim()
if (envToken) {
return true
}
const persisted = (storedToken ?? readGithubModelsToken())?.trim()
return Boolean(persisted)
}
export function buildGithubOnboardingSettingsEnv(
model: string,
): Record<string, string | undefined> {
return {
CLAUDE_CODE_USE_GITHUB: '1',
OPENAI_MODEL: model,
OPENAI_API_KEY: undefined,
OPENAI_ORG: undefined,
OPENAI_PROJECT: undefined,
OPENAI_ORGANIZATION: undefined,
OPENAI_BASE_URL: undefined,
OPENAI_API_BASE: undefined,
CLAUDE_CODE_USE_OPENAI: undefined,
CLAUDE_CODE_USE_GEMINI: undefined,
CLAUDE_CODE_USE_BEDROCK: undefined,
CLAUDE_CODE_USE_VERTEX: undefined,
CLAUDE_CODE_USE_FOUNDRY: undefined,
}
}
export function applyGithubOnboardingProcessEnv(
model: string,
env: NodeJS.ProcessEnv = process.env,
): void {
env.CLAUDE_CODE_USE_GITHUB = '1'
env.OPENAI_MODEL = model
delete env.OPENAI_API_KEY
delete env.OPENAI_ORG
delete env.OPENAI_PROJECT
delete env.OPENAI_ORGANIZATION
delete env.OPENAI_BASE_URL
delete env.OPENAI_API_BASE
delete env.CLAUDE_CODE_USE_OPENAI
delete env.CLAUDE_CODE_USE_GEMINI
delete env.CLAUDE_CODE_USE_BEDROCK
delete env.CLAUDE_CODE_USE_VERTEX
delete env.CLAUDE_CODE_USE_FOUNDRY
delete env.CLAUDE_CODE_PROVIDER_PROFILE_ENV_APPLIED
delete env.CLAUDE_CODE_PROVIDER_PROFILE_ENV_APPLIED_ID
}
function mergeUserSettingsEnv(model: string): { ok: boolean; detail?: string } {
const { error } = updateSettingsForSource('userSettings', {
env: {
CLAUDE_CODE_USE_GITHUB: '1',
OPENAI_MODEL: model,
CLAUDE_CODE_USE_OPENAI: undefined as any,
CLAUDE_CODE_USE_GEMINI: undefined as any,
CLAUDE_CODE_USE_BEDROCK: undefined as any,
CLAUDE_CODE_USE_VERTEX: undefined as any,
CLAUDE_CODE_USE_FOUNDRY: undefined as any,
},
env: buildGithubOnboardingSettingsEnv(model) as any,
})
if (error) {
return { ok: false, detail: error.message }
@@ -42,6 +106,32 @@ function mergeUserSettingsEnv(model: string): { ok: boolean; detail?: string } {
return { ok: true }
}
export function activateGithubOnboardingMode(
model: string = DEFAULT_MODEL,
options?: {
mergeSettingsEnv?: (model: string) => { ok: boolean; detail?: string }
applyProcessEnv?: (model: string) => void
hydrateToken?: () => void
onChangeAPIKey?: () => void
},
): { ok: boolean; detail?: string } {
const normalizedModel = model.trim() || DEFAULT_MODEL
const mergeSettingsEnv = options?.mergeSettingsEnv ?? mergeUserSettingsEnv
const applyProcessEnv = options?.applyProcessEnv ?? applyGithubOnboardingProcessEnv
const hydrateToken =
options?.hydrateToken ?? hydrateGithubModelsTokenFromSecureStorage
const merged = mergeSettingsEnv(normalizedModel)
if (!merged.ok) {
return merged
}
applyProcessEnv(normalizedModel)
hydrateToken()
options?.onChangeAPIKey?.()
return { ok: true }
}
function OnboardGithub(props: {
onDone: Parameters<LocalJSXCommandCall>[0]
onChangeAPIKey: () => void
@@ -64,19 +154,17 @@ function OnboardGithub(props: {
setStep('error')
return
}
const merged = mergeUserSettingsEnv(model.trim() || DEFAULT_MODEL)
if (!merged.ok) {
const activated = activateGithubOnboardingMode(model, {
onChangeAPIKey,
})
if (!activated.ok) {
setErrorMsg(
`Token saved, but settings were not updated: ${merged.detail ?? 'unknown error'}. ` +
`Token saved, but settings were not updated: ${activated.detail ?? 'unknown error'}. ` +
`Add env CLAUDE_CODE_USE_GITHUB=1 and OPENAI_MODEL to ~/.claude/settings.json manually.`,
)
setStep('error')
return
}
process.env.CLAUDE_CODE_USE_GITHUB = '1'
process.env.OPENAI_MODEL = model.trim() || DEFAULT_MODEL
hydrateGithubModelsTokenFromSecureStorage()
onChangeAPIKey()
onDone(
'GitHub Models onboard complete. Token stored in secure storage; user settings updated. Restart if the model does not switch.',
{ display: 'user' },
@@ -147,11 +235,11 @@ function OnboardGithub(props: {
{deviceHint.verification_uri}
</Text>
<Text dimColor>
A browser window may have opened. Waiting for authorization
A browser window may have opened. Waiting for authorization...
</Text>
</>
) : (
<Text dimColor>Requesting device code from GitHub</Text>
<Text dimColor>Requesting device code from GitHub...</Text>
)}
<Spinner />
</Box>
@@ -206,7 +294,7 @@ function OnboardGithub(props: {
<Text bold>GitHub Models setup</Text>
<Text dimColor>
Stores your token in the OS credential store (macOS Keychain when available)
and enables CLAUDE_CODE_USE_GITHUB in your user settings no export
and enables CLAUDE_CODE_USE_GITHUB in your user settings - no export
GITHUB_TOKEN needed for future runs.
</Text>
<Select
@@ -227,7 +315,28 @@ function OnboardGithub(props: {
)
}
export const call: LocalJSXCommandCall = async (onDone, context) => {
export const call: LocalJSXCommandCall = async (onDone, context, args) => {
const forceRelogin = shouldForceGithubRelogin(args)
if (hasExistingGithubModelsLoginToken() && !forceRelogin) {
const activated = activateGithubOnboardingMode(DEFAULT_MODEL, {
onChangeAPIKey: context.onChangeAPIKey,
})
if (!activated.ok) {
onDone(
`GitHub token detected, but settings activation failed: ${activated.detail ?? 'unknown error'}. ` +
'Set CLAUDE_CODE_USE_GITHUB=1 and OPENAI_MODEL=github:copilot in user settings manually.',
{ display: 'system' },
)
return null
}
onDone(
'GitHub Models already authorized. Activated GitHub Models mode using your existing token. Use /onboard-github --force to re-authenticate.',
{ display: 'user' },
)
return null
}
return (
<OnboardGithub
onDone={onDone}

View File

@@ -52,7 +52,11 @@ async function renderFinalFrame(node: React.ReactNode): Promise<string> {
patchConsole: false,
})
await instance.waitUntilExit()
// Timeout guard: if render throws before exit effect fires, don't hang
await Promise.race([
instance.waitUntilExit(),
new Promise<void>(resolve => setTimeout(resolve, 3000)),
])
return stripAnsi(extractLastFrame(getOutput()))
}
@@ -197,6 +201,21 @@ test('buildProfileSaveMessage maps provider fields without echoing secrets', ()
expect(message).not.toContain('sk-secret-12345678')
})
test('buildProfileSaveMessage labels local openai-compatible profiles consistently', () => {
const message = buildProfileSaveMessage(
'openai',
{
OPENAI_MODEL: 'gpt-5.4',
OPENAI_BASE_URL: 'http://127.0.0.1:8080/v1',
},
'D:/codings/Opensource/openclaude/.openclaude-profile.json',
)
expect(message).toContain('Saved Local OpenAI-compatible profile.')
expect(message).toContain('Model: gpt-5.4')
expect(message).toContain('Endpoint: http://127.0.0.1:8080/v1')
})
test('buildProfileSaveMessage describes Gemini access token / ADC mode clearly', () => {
const message = buildProfileSaveMessage(
'gemini',
@@ -230,6 +249,51 @@ test('buildCurrentProviderSummary redacts poisoned model and endpoint values', (
expect(summary.endpointLabel).toBe('sk-...5678')
})
test('buildCurrentProviderSummary labels generic local openai-compatible providers', () => {
const summary = buildCurrentProviderSummary({
processEnv: {
CLAUDE_CODE_USE_OPENAI: '1',
OPENAI_MODEL: 'qwen2.5-coder-7b-instruct',
OPENAI_BASE_URL: 'http://127.0.0.1:8080/v1',
},
persisted: null,
})
expect(summary.providerLabel).toBe('Local OpenAI-compatible')
expect(summary.modelLabel).toBe('qwen2.5-coder-7b-instruct')
expect(summary.endpointLabel).toBe('http://127.0.0.1:8080/v1')
})
test('buildCurrentProviderSummary does not relabel local gpt-5.4 providers as Codex', () => {
const summary = buildCurrentProviderSummary({
processEnv: {
CLAUDE_CODE_USE_OPENAI: '1',
OPENAI_MODEL: 'gpt-5.4',
OPENAI_BASE_URL: 'http://127.0.0.1:8080/v1',
},
persisted: null,
})
expect(summary.providerLabel).toBe('Local OpenAI-compatible')
expect(summary.modelLabel).toBe('gpt-5.4')
expect(summary.endpointLabel).toBe('http://127.0.0.1:8080/v1')
})
test('buildCurrentProviderSummary recognizes GitHub Models mode', () => {
const summary = buildCurrentProviderSummary({
processEnv: {
CLAUDE_CODE_USE_GITHUB: '1',
OPENAI_MODEL: 'github:copilot',
OPENAI_BASE_URL: 'https://models.github.ai/inference',
},
persisted: null,
})
expect(summary.providerLabel).toBe('GitHub Models')
expect(summary.modelLabel).toBe('github:copilot')
expect(summary.endpointLabel).toBe('https://models.github.ai/inference')
})
test('getProviderWizardDefaults ignores poisoned current provider values', () => {
const defaults = getProviderWizardDefaults({
OPENAI_API_KEY: 'sk-secret-12345678',

View File

@@ -15,6 +15,7 @@ import { Box, Text } from '../../ink.js'
import {
DEFAULT_CODEX_BASE_URL,
DEFAULT_OPENAI_BASE_URL,
isLocalProviderUrl,
resolveCodexApiCredentials,
resolveProviderRequest,
} from '../../services/api/providerConfig.js'
@@ -52,7 +53,11 @@ import {
recommendOllamaModel,
type RecommendationGoal,
} from '../../utils/providerRecommendation.js'
import { hasLocalOllama, listOllamaModels } from '../../utils/providerDiscovery.js'
import {
getLocalOpenAICompatibleProviderLabel,
hasLocalOllama,
listOllamaModels,
} from '../../utils/providerDiscovery.js'
type ProviderChoice = 'auto' | ProviderProfile | 'clear'
@@ -173,6 +178,23 @@ export function buildCurrentProviderSummary(options?: {
}
}
if (isEnvTruthy(processEnv.CLAUDE_CODE_USE_GITHUB)) {
return {
providerLabel: 'GitHub Models',
modelLabel: getSafeDisplayValue(
processEnv.OPENAI_MODEL ?? 'github:copilot',
processEnv,
),
endpointLabel: getSafeDisplayValue(
processEnv.OPENAI_BASE_URL ??
processEnv.OPENAI_API_BASE ??
'https://models.github.ai/inference',
processEnv,
),
savedProfileLabel,
}
}
if (isEnvTruthy(processEnv.CLAUDE_CODE_USE_OPENAI)) {
const request = resolveProviderRequest({
model: processEnv.OPENAI_MODEL,
@@ -182,10 +204,8 @@ export function buildCurrentProviderSummary(options?: {
let providerLabel = 'OpenAI-compatible'
if (request.transport === 'codex_responses') {
providerLabel = 'Codex'
} else if (request.baseUrl.includes('localhost:11434')) {
providerLabel = 'Ollama'
} else if (request.baseUrl.includes('localhost:1234')) {
providerLabel = 'LM Studio'
} else if (isLocalProviderUrl(request.baseUrl)) {
providerLabel = getLocalOpenAICompatibleProviderLabel(request.baseUrl)
}
return {
@@ -272,16 +292,20 @@ function buildSavedProfileSummary(
),
}
case 'openai':
default:
default: {
const baseUrl = env.OPENAI_BASE_URL ?? DEFAULT_OPENAI_BASE_URL
return {
providerLabel: 'OpenAI-compatible',
providerLabel: isLocalProviderUrl(baseUrl)
? getLocalOpenAICompatibleProviderLabel(baseUrl)
: 'OpenAI-compatible',
modelLabel: getSafeDisplayValue(
env.OPENAI_MODEL ?? 'gpt-4o',
process.env,
env,
),
endpointLabel: getSafeDisplayValue(
env.OPENAI_BASE_URL ?? DEFAULT_OPENAI_BASE_URL,
baseUrl,
process.env,
env,
),
@@ -290,6 +314,7 @@ function buildSavedProfileSummary(
? 'configured'
: undefined,
}
}
}
}

View File

@@ -67,6 +67,7 @@ import { isBilledAsExtraUsage } from '../../utils/extraUsage.js';
import { getFastModeUnavailableReason, isFastModeAvailable, isFastModeCooldown, isFastModeEnabled, isFastModeSupportedByModel } from '../../utils/fastMode.js';
import { isFullscreenEnvEnabled } from '../../utils/fullscreen.js';
import type { PromptInputHelpers } from '../../utils/handlePromptSubmit.js';
import { extractDraggedFilePaths } from '../../utils/dragDropPaths.js';
import { getImageFromClipboard, PASTE_THRESHOLD } from '../../utils/imagePaste.js';
import type { ImageDimensions } from '../../utils/imageResizer.js';
import { cacheImagePath, storeImage } from '../../utils/imageStore.js';
@@ -1204,6 +1205,22 @@ function PromptInput({
// Clean up pasted text - strip ANSI escape codes and normalize line endings and tabs
let text = stripAnsi(rawText).replace(/\r/g, '\n').replaceAll('\t', ' ');
// Detect file paths from drag-and-drop and convert to @mentions.
// When files are dragged into the terminal, the terminal sends their
// absolute paths via bracketed paste. Image files are handled by the
// image paste handler upstream; here we handle non-image files by
// converting them to @mentions so they get attached on submit.
const draggedPaths = extractDraggedFilePaths(text);
if (draggedPaths.length > 0) {
const mentions = draggedPaths
.map(p => (p.includes(' ') || p.includes(':') ? `@"${p}"` : `@${p}`))
.join(' ');
// Ensure spacing around the mention(s) relative to existing input
const charBefore = input[cursorOffset - 1];
const prefix = charBefore && !/\s/.test(charBefore) ? ' ' : '';
text = prefix + mentions + ' ';
}
// Match typed/auto-suggest: `!cmd` pasted into empty input enters bash mode.
if (input.length === 0) {
const pastedMode = getModeFromInput(text);
@@ -1245,12 +1262,23 @@ function PromptInput({
if (isNonSpacePrintable(input, key)) return ' ' + input;
return input;
}, []);
// Ref mirrors cursorOffset for use in synchronous loops (e.g. multi-image
// paste) where React batches state updates and the closure value is stale.
const cursorOffsetRef = useRef(cursorOffset);
cursorOffsetRef.current = cursorOffset;
function insertTextAtCursor(text: string) {
// Push current state to buffer before inserting
pushToBuffer(input, cursorOffset, pastedContents);
const newInput = input.slice(0, cursorOffset) + text + input.slice(cursorOffset);
// Use refs for input/cursor so back-to-back calls in the same event
// (e.g. onImagePaste loop for multiple dragged images) chain correctly
// instead of each reading the same stale closure values.
const currentInput = lastInternalInputRef.current;
const currentOffset = cursorOffsetRef.current;
pushToBuffer(currentInput, currentOffset, pastedContents);
const newInput = currentInput.slice(0, currentOffset) + text + currentInput.slice(currentOffset);
trackAndSetInput(newInput);
setCursorOffset(cursorOffset + text.length);
const newOffset = currentOffset + text.length;
cursorOffsetRef.current = newOffset;
setCursorOffset(newOffset);
}
const doublePressEscFromEmpty = useDoublePress(() => {}, () => onShowMessageSelector());

View File

@@ -123,8 +123,6 @@ const SuggestionItemRow = memo(function SuggestionItemRow({
maxColumnWidth ?? stringWidth(item.displayText) + 5,
maxNameWidth,
)
const displayTextColor = isSelected ? 'inverseText' : item.color
const shouldDim = !isSelected
let displayText = item.displayText
if (stringWidth(displayText) > displayTextWidth - 2) {
@@ -144,21 +142,17 @@ const SuggestionItemRow = memo(function SuggestionItemRow({
const truncatedDescription = item.description
? truncateToWidth(item.description.replace(/\s+/g, ' '), descriptionWidth)
: ''
const lineContent = `${paddedDisplayText}${tagText}${truncatedDescription}`
return (
<Box width="100%" opaque={true} backgroundColor={rowBackgroundColor}>
<Text wrap="truncate">
<Text color={displayTextColor} dimColor={shouldDim} bold={isSelected}>
{paddedDisplayText}
</Text>
{tagText ? (
<Text color={textColor} dimColor={!isSelected}>
{tagText}
</Text>
) : null}
<Text color={textColor} dimColor={!isSelected}>
{truncatedDescription}
</Text>
<Text
color={textColor}
dimColor={!isSelected}
bold={isSelected}
wrap="truncate"
>
{lineContent}
</Text>
</Box>
)

View File

@@ -0,0 +1,305 @@
import { PassThrough } from 'node:stream'
import { afterEach, expect, mock, test } from 'bun:test'
import React from 'react'
import stripAnsi from 'strip-ansi'
import { createRoot } from '../ink.js'
import { AppStateProvider } from '../state/AppState.js'
const SYNC_START = '\x1B[?2026h'
const SYNC_END = '\x1B[?2026l'
const ORIGINAL_ENV = {
CLAUDE_CODE_USE_GITHUB: process.env.CLAUDE_CODE_USE_GITHUB,
GITHUB_TOKEN: process.env.GITHUB_TOKEN,
GH_TOKEN: process.env.GH_TOKEN,
}
function extractLastFrame(output: string): string {
let lastFrame: string | null = null
let cursor = 0
while (cursor < output.length) {
const start = output.indexOf(SYNC_START, cursor)
if (start === -1) {
break
}
const contentStart = start + SYNC_START.length
const end = output.indexOf(SYNC_END, contentStart)
if (end === -1) {
break
}
const frame = output.slice(contentStart, end)
if (frame.trim().length > 0) {
lastFrame = frame
}
cursor = end + SYNC_END.length
}
return lastFrame ?? output
}
function createTestStreams(): {
stdout: PassThrough
stdin: PassThrough & {
isTTY: boolean
setRawMode: (mode: boolean) => void
ref: () => void
unref: () => void
}
getOutput: () => string
} {
let output = ''
const stdout = new PassThrough()
const stdin = new PassThrough() as PassThrough & {
isTTY: boolean
setRawMode: (mode: boolean) => void
ref: () => void
unref: () => void
}
stdin.isTTY = true
stdin.setRawMode = () => {}
stdin.ref = () => {}
stdin.unref = () => {}
;(stdout as unknown as { columns: number }).columns = 120
stdout.on('data', chunk => {
output += chunk.toString()
})
return {
stdout,
stdin,
getOutput: () => output,
}
}
async function waitForCondition(
predicate: () => boolean,
options?: { timeoutMs?: number; intervalMs?: number },
): Promise<void> {
const timeoutMs = options?.timeoutMs ?? 2000
const intervalMs = options?.intervalMs ?? 10
const startedAt = Date.now()
while (Date.now() - startedAt < timeoutMs) {
if (predicate()) {
return
}
await Bun.sleep(intervalMs)
}
throw new Error('Timed out waiting for ProviderManager test condition')
}
function createDeferred<T>(): {
promise: Promise<T>
resolve: (value: T) => void
} {
let resolve!: (value: T) => void
const promise = new Promise<T>(r => {
resolve = r
})
return { promise, resolve }
}
function mockProviderProfilesModule(): void {
mock.module('../utils/providerProfiles.js', () => ({
addProviderProfile: () => null,
applyActiveProviderProfileFromConfig: () => {},
deleteProviderProfile: () => ({ removed: false, activeProfileId: null }),
getActiveProviderProfile: () => null,
getProviderPresetDefaults: () => ({
provider: 'openai',
name: 'Mock provider',
baseUrl: 'http://localhost:11434/v1',
model: 'mock-model',
apiKey: '',
}),
getProviderProfiles: () => [],
setActiveProviderProfile: () => null,
updateProviderProfile: () => null,
}))
}
function mockProviderManagerDependencies(
syncRead: () => string | undefined,
asyncRead: () => Promise<string | undefined>,
): void {
mockProviderProfilesModule()
mock.module('../utils/githubModelsCredentials.js', () => ({
clearGithubModelsToken: () => ({ success: true }),
GITHUB_MODELS_HYDRATED_ENV_MARKER: 'CLAUDE_CODE_GITHUB_TOKEN_HYDRATED',
hydrateGithubModelsTokenFromSecureStorage: () => {},
readGithubModelsToken: syncRead,
readGithubModelsTokenAsync: asyncRead,
}))
mock.module('../utils/settings/settings.js', () => ({
updateSettingsForSource: () => ({ error: null }),
}))
}
async function waitForFrameOutput(
getOutput: () => string,
predicate: (output: string) => boolean,
timeoutMs = 2500,
): Promise<string> {
let output = ''
await waitForCondition(() => {
output = stripAnsi(extractLastFrame(getOutput()))
return predicate(output)
}, { timeoutMs })
return output
}
async function mountProviderManager(
ProviderManager: React.ComponentType<{
mode: 'first-run' | 'manage'
onDone: () => void
}>,
): Promise<{
getOutput: () => string
dispose: () => Promise<void>
}> {
const { stdout, stdin, getOutput } = createTestStreams()
const root = await createRoot({
stdout: stdout as unknown as NodeJS.WriteStream,
stdin: stdin as unknown as NodeJS.ReadStream,
patchConsole: false,
})
root.render(
<AppStateProvider>
<ProviderManager
mode="manage"
onDone={() => {}}
/>
</AppStateProvider>,
)
return {
getOutput,
dispose: async () => {
root.unmount()
stdin.end()
stdout.end()
await Bun.sleep(0)
},
}
}
async function renderProviderManagerFrame(
ProviderManager: React.ComponentType<{
mode: 'first-run' | 'manage'
onDone: () => void
}>,
options?: {
waitForOutput?: (output: string) => boolean
timeoutMs?: number
},
): Promise<string> {
const mounted = await mountProviderManager(ProviderManager)
const output = await waitForFrameOutput(
mounted.getOutput,
frame => {
if (!options?.waitForOutput) {
return frame.includes('Provider manager')
}
return options.waitForOutput(frame)
},
options?.timeoutMs ?? 2500,
)
await mounted.dispose()
return output
}
afterEach(() => {
mock.restore()
for (const [key, value] of Object.entries(ORIGINAL_ENV)) {
if (value === undefined) {
delete process.env[key as keyof typeof ORIGINAL_ENV]
} else {
process.env[key as keyof typeof ORIGINAL_ENV] = value
}
}
})
test('ProviderManager resolves GitHub virtual provider from async storage without sync reads in render flow', async () => {
delete process.env.CLAUDE_CODE_USE_GITHUB
delete process.env.GITHUB_TOKEN
delete process.env.GH_TOKEN
const syncRead = mock(() => {
throw new Error('sync credential read should not run in ProviderManager render flow')
})
const asyncRead = mock(async () => 'stored-token')
mockProviderManagerDependencies(syncRead, asyncRead)
const nonce = `${Date.now()}-${Math.random()}`
const { ProviderManager } = await import(`./ProviderManager.js?ts=${nonce}`)
const output = await renderProviderManagerFrame(ProviderManager, {
waitForOutput: frame =>
frame.includes('Provider manager') &&
frame.includes('GitHub Models') &&
frame.includes('token stored'),
})
expect(output).toContain('Provider manager')
expect(output).toContain('GitHub Models')
expect(output).toContain('token stored')
expect(output).not.toContain('No provider profiles configured yet.')
expect(syncRead).not.toHaveBeenCalled()
expect(asyncRead).toHaveBeenCalled()
})
test('ProviderManager avoids first-frame false negative while stored-token lookup is pending', async () => {
delete process.env.CLAUDE_CODE_USE_GITHUB
delete process.env.GITHUB_TOKEN
delete process.env.GH_TOKEN
const syncRead = mock(() => {
throw new Error('sync credential read should not run in ProviderManager render flow')
})
const deferredStoredToken = createDeferred<string | undefined>()
const asyncRead = mock(async () => deferredStoredToken.promise)
mockProviderManagerDependencies(syncRead, asyncRead)
const nonce = `${Date.now()}-${Math.random()}`
const { ProviderManager } = await import(`./ProviderManager.js?ts=${nonce}`)
const mounted = await mountProviderManager(ProviderManager)
const firstFrame = await waitForFrameOutput(
mounted.getOutput,
frame => frame.includes('Provider manager'),
)
expect(firstFrame).toContain('Checking GitHub Models credentials...')
expect(firstFrame).not.toContain('No provider profiles configured yet.')
deferredStoredToken.resolve('stored-token')
const resolvedFrame = await waitForFrameOutput(
mounted.getOutput,
frame => frame.includes('GitHub Models') && frame.includes('token stored'),
)
expect(resolvedFrame).toContain('GitHub Models')
expect(resolvedFrame).toContain('token stored')
await mounted.dispose()
expect(syncRead).not.toHaveBeenCalled()
expect(asyncRead).toHaveBeenCalled()
})

View File

@@ -5,6 +5,7 @@ import { useKeybinding } from '../keybindings/useKeybinding.js'
import type { ProviderProfile } from '../utils/config.js'
import {
addProviderProfile,
applyActiveProviderProfileFromConfig,
deleteProviderProfile,
getActiveProviderProfile,
getProviderPresetDefaults,
@@ -14,6 +15,15 @@ import {
type ProviderProfileInput,
updateProviderProfile,
} from '../utils/providerProfiles.js'
import {
clearGithubModelsToken,
GITHUB_MODELS_HYDRATED_ENV_MARKER,
hydrateGithubModelsTokenFromSecureStorage,
readGithubModelsToken,
readGithubModelsTokenAsync,
} from '../utils/githubModelsCredentials.js'
import { isEnvTruthy } from '../utils/envUtils.js'
import { updateSettingsForSource } from '../utils/settings/settings.js'
import { Select } from './CustomSelect/index.js'
import { Pane } from './design-system/Pane.js'
import TextInput from './TextInput.js'
@@ -75,6 +85,13 @@ const FORM_STEPS: Array<{
},
]
const GITHUB_PROVIDER_ID = '__github_models__'
const GITHUB_PROVIDER_LABEL = 'GitHub Models'
const GITHUB_PROVIDER_DEFAULT_MODEL = 'github:copilot'
const GITHUB_PROVIDER_DEFAULT_BASE_URL = 'https://models.github.ai/inference'
type GithubCredentialSource = 'stored' | 'env' | 'none'
function toDraft(profile: ProviderProfile): ProviderDraft {
return {
name: profile.name,
@@ -102,11 +119,83 @@ function profileSummary(profile: ProviderProfile, isActive: boolean): string {
return `${providerKind} · ${profile.baseUrl} · ${profile.model} · ${keyInfo}${activeSuffix}`
}
function getGithubCredentialSourceFromEnv(
processEnv: NodeJS.ProcessEnv = process.env,
): GithubCredentialSource {
if (processEnv.GITHUB_TOKEN?.trim() || processEnv.GH_TOKEN?.trim()) {
return 'env'
}
return 'none'
}
async function resolveGithubCredentialSource(
processEnv: NodeJS.ProcessEnv = process.env,
): Promise<GithubCredentialSource> {
const envSource = getGithubCredentialSourceFromEnv(processEnv)
if (envSource !== 'none') {
return envSource
}
if (await readGithubModelsTokenAsync()) {
return 'stored'
}
return 'none'
}
function isGithubProviderAvailable(
credentialSource: GithubCredentialSource,
processEnv: NodeJS.ProcessEnv = process.env,
): boolean {
if (isEnvTruthy(processEnv.CLAUDE_CODE_USE_GITHUB)) {
return true
}
return credentialSource !== 'none'
}
function getGithubProviderModel(
processEnv: NodeJS.ProcessEnv = process.env,
): string {
if (isEnvTruthy(processEnv.CLAUDE_CODE_USE_GITHUB)) {
return processEnv.OPENAI_MODEL?.trim() || GITHUB_PROVIDER_DEFAULT_MODEL
}
return GITHUB_PROVIDER_DEFAULT_MODEL
}
function getGithubProviderSummary(
isActive: boolean,
credentialSource: GithubCredentialSource,
processEnv: NodeJS.ProcessEnv = process.env,
): string {
const credentialSummary =
credentialSource === 'stored'
? 'token stored'
: credentialSource === 'env'
? 'token via env'
: 'no token found'
const activeSuffix = isActive ? ' (active)' : ''
return `github-models · ${GITHUB_PROVIDER_DEFAULT_BASE_URL} · ${getGithubProviderModel(processEnv)} · ${credentialSummary}${activeSuffix}`
}
export function ProviderManager({ mode, onDone }: Props): React.ReactNode {
const initialGithubCredentialSource = getGithubCredentialSourceFromEnv()
const initialIsGithubActive = isEnvTruthy(process.env.CLAUDE_CODE_USE_GITHUB)
const initialHasGithubCredential = initialGithubCredentialSource !== 'none'
const [profiles, setProfiles] = React.useState(() => getProviderProfiles())
const [activeProfileId, setActiveProfileId] = React.useState(
() => getActiveProviderProfile()?.id,
)
const [githubProviderAvailable, setGithubProviderAvailable] = React.useState(
() => isGithubProviderAvailable(initialGithubCredentialSource),
)
const [githubCredentialSource, setGithubCredentialSource] = React.useState<GithubCredentialSource>(
() => initialGithubCredentialSource,
)
const [isGithubActive, setIsGithubActive] = React.useState(() => initialIsGithubActive)
const [isGithubCredentialSourceResolved, setIsGithubCredentialSourceResolved] =
React.useState(() => initialHasGithubCredential || initialIsGithubActive)
const githubRefreshEpochRef = React.useRef(0)
const [screen, setScreen] = React.useState<Screen>(
mode === 'first-run' ? 'select-preset' : 'menu',
)
@@ -126,16 +215,155 @@ export function ProviderManager({ mode, onDone }: Props): React.ReactNode {
const currentStepKey = currentStep.key
const currentValue = draft[currentStepKey]
const refreshGithubProviderState = React.useCallback((): void => {
const envCredentialSource = getGithubCredentialSourceFromEnv()
const githubActive = isEnvTruthy(process.env.CLAUDE_CODE_USE_GITHUB)
const canResolveFromEnv = githubActive || envCredentialSource !== 'none'
if (canResolveFromEnv) {
githubRefreshEpochRef.current += 1
setGithubCredentialSource(envCredentialSource)
setGithubProviderAvailable(isGithubProviderAvailable(envCredentialSource))
setIsGithubActive(githubActive)
setIsGithubCredentialSourceResolved(true)
return
}
setIsGithubCredentialSourceResolved(false)
const refreshEpoch = ++githubRefreshEpochRef.current
void (async () => {
const credentialSource = await resolveGithubCredentialSource()
if (refreshEpoch !== githubRefreshEpochRef.current) {
return
}
setGithubCredentialSource(credentialSource)
setGithubProviderAvailable(isGithubProviderAvailable(credentialSource))
setIsGithubActive(isEnvTruthy(process.env.CLAUDE_CODE_USE_GITHUB))
setIsGithubCredentialSourceResolved(true)
})()
}, [])
React.useEffect(() => {
refreshGithubProviderState()
return () => {
githubRefreshEpochRef.current += 1
}
}, [refreshGithubProviderState])
function refreshProfiles(): void {
const nextProfiles = getProviderProfiles()
setProfiles(nextProfiles)
setActiveProfileId(getActiveProviderProfile()?.id)
refreshGithubProviderState()
}
function clearStartupProviderOverrideFromUserSettings(): string | null {
const { error } = updateSettingsForSource('userSettings', {
env: {
CLAUDE_CODE_USE_OPENAI: undefined as any,
CLAUDE_CODE_USE_GEMINI: undefined as any,
CLAUDE_CODE_USE_GITHUB: undefined as any,
CLAUDE_CODE_USE_BEDROCK: undefined as any,
CLAUDE_CODE_USE_VERTEX: undefined as any,
CLAUDE_CODE_USE_FOUNDRY: undefined as any,
},
})
return error ? error.message : null
}
function closeWithCancelled(message: string): void {
onDone({ action: 'cancelled', message })
}
function activateGithubProvider(): string | null {
const { error } = updateSettingsForSource('userSettings', {
env: {
CLAUDE_CODE_USE_GITHUB: '1',
OPENAI_MODEL: GITHUB_PROVIDER_DEFAULT_MODEL,
OPENAI_API_KEY: undefined as any,
OPENAI_ORG: undefined as any,
OPENAI_PROJECT: undefined as any,
OPENAI_ORGANIZATION: undefined as any,
OPENAI_BASE_URL: undefined as any,
OPENAI_API_BASE: undefined as any,
CLAUDE_CODE_USE_OPENAI: undefined as any,
CLAUDE_CODE_USE_GEMINI: undefined as any,
CLAUDE_CODE_USE_BEDROCK: undefined as any,
CLAUDE_CODE_USE_VERTEX: undefined as any,
CLAUDE_CODE_USE_FOUNDRY: undefined as any,
},
})
if (error) {
return error.message
}
process.env.CLAUDE_CODE_USE_GITHUB = '1'
process.env.OPENAI_MODEL = GITHUB_PROVIDER_DEFAULT_MODEL
delete process.env.OPENAI_API_KEY
delete process.env.OPENAI_ORG
delete process.env.OPENAI_PROJECT
delete process.env.OPENAI_ORGANIZATION
delete process.env.OPENAI_BASE_URL
delete process.env.OPENAI_API_BASE
delete process.env.CLAUDE_CODE_USE_OPENAI
delete process.env.CLAUDE_CODE_USE_GEMINI
delete process.env.CLAUDE_CODE_USE_BEDROCK
delete process.env.CLAUDE_CODE_USE_VERTEX
delete process.env.CLAUDE_CODE_USE_FOUNDRY
delete process.env.CLAUDE_CODE_PROVIDER_PROFILE_ENV_APPLIED
delete process.env.CLAUDE_CODE_PROVIDER_PROFILE_ENV_APPLIED_ID
delete process.env[GITHUB_MODELS_HYDRATED_ENV_MARKER]
hydrateGithubModelsTokenFromSecureStorage()
return null
}
function deleteGithubProvider(): string | null {
const storedTokenBeforeClear = readGithubModelsToken()?.trim()
const cleared = clearGithubModelsToken()
if (!cleared.success) {
return cleared.warning ?? 'Could not clear GitHub credentials.'
}
const { error } = updateSettingsForSource('userSettings', {
env: {
CLAUDE_CODE_USE_GITHUB: undefined as any,
OPENAI_MODEL: undefined as any,
OPENAI_BASE_URL: undefined as any,
OPENAI_API_BASE: undefined as any,
},
})
if (error) {
return error.message
}
const hydratedTokenInSession = process.env.GITHUB_TOKEN?.trim()
if (
process.env[GITHUB_MODELS_HYDRATED_ENV_MARKER] === '1' &&
hydratedTokenInSession &&
(!storedTokenBeforeClear || hydratedTokenInSession === storedTokenBeforeClear)
) {
delete process.env.GITHUB_TOKEN
}
delete process.env.CLAUDE_CODE_USE_GITHUB
delete process.env[GITHUB_MODELS_HYDRATED_ENV_MARKER]
delete process.env.OPENAI_MODEL
delete process.env.OPENAI_API_KEY
delete process.env.OPENAI_ORG
delete process.env.OPENAI_PROJECT
delete process.env.OPENAI_ORGANIZATION
delete process.env.OPENAI_BASE_URL
delete process.env.OPENAI_API_BASE
// Restore active provider profile immediately when one exists.
applyActiveProviderProfileFromConfig()
return null
}
function startCreateFromPreset(preset: ProviderPreset): void {
const defaults = getProviderPresetDefaults(preset)
const nextDraft = {
@@ -187,11 +415,20 @@ export function ProviderManager({ mode, onDone }: Props): React.ReactNode {
return
}
const isActiveSavedProfile = getActiveProviderProfile()?.id === saved.id
const settingsOverrideError = isActiveSavedProfile
? clearStartupProviderOverrideFromUserSettings()
: null
refreshProfiles()
setStatusMessage(
const successMessage =
editingProfileId
? `Updated provider: ${saved.name}`
: `Added provider: ${saved.name} (now active)`,
: `Added provider: ${saved.name} (now active)`
setStatusMessage(
settingsOverrideError
? `${successMessage}. Warning: could not clear startup provider override (${settingsOverrideError}).`
: successMessage,
)
if (mode === 'first-run') {
@@ -413,6 +650,7 @@ export function ProviderManager({ mode, onDone }: Props): React.ReactNode {
function renderMenu(): React.ReactNode {
const hasProfiles = profiles.length > 0
const hasSelectableProviders = hasProfiles || githubProviderAvailable
const options = [
{
@@ -424,7 +662,7 @@ export function ProviderManager({ mode, onDone }: Props): React.ReactNode {
value: 'activate',
label: 'Set active provider',
description: 'Switch the active provider profile',
disabled: !hasProfiles,
disabled: !hasSelectableProviders,
},
{
value: 'edit',
@@ -436,7 +674,7 @@ export function ProviderManager({ mode, onDone }: Props): React.ReactNode {
value: 'delete',
label: 'Delete provider',
description: 'Remove a provider profile',
disabled: !hasProfiles,
disabled: !hasSelectableProviders,
},
{
value: 'done',
@@ -455,14 +693,29 @@ export function ProviderManager({ mode, onDone }: Props): React.ReactNode {
</Text>
{statusMessage && <Text>{statusMessage}</Text>}
<Box flexDirection="column">
{profiles.length === 0 ? (
<Text dimColor>No provider profiles configured yet.</Text>
{profiles.length === 0 && !githubProviderAvailable ? (
isGithubCredentialSourceResolved ? (
<Text dimColor>No provider profiles configured yet.</Text>
) : (
<Text dimColor>Checking GitHub Models credentials...</Text>
)
) : (
profiles.map(profile => (
<Text key={profile.id} dimColor>
- {profile.name}: {profileSummary(profile, profile.id === activeProfileId)}
</Text>
))
<>
{profiles.map(profile => (
<Text key={profile.id} dimColor>
- {profile.name}: {profileSummary(profile, profile.id === activeProfileId)}
</Text>
))}
{githubProviderAvailable ? (
<Text dimColor>
- {GITHUB_PROVIDER_LABEL}:{' '}
{getGithubProviderSummary(
isGithubActive,
githubCredentialSource,
)}
</Text>
) : null}
</>
)}
</Box>
<Select
@@ -474,7 +727,7 @@ export function ProviderManager({ mode, onDone }: Props): React.ReactNode {
setScreen('select-preset')
break
case 'activate':
if (profiles.length > 0) {
if (hasSelectableProviders) {
setScreen('select-active')
}
break
@@ -484,7 +737,7 @@ export function ProviderManager({ mode, onDone }: Props): React.ReactNode {
}
break
case 'delete':
if (profiles.length > 0) {
if (hasSelectableProviders) {
setScreen('select-delete')
}
break
@@ -504,8 +757,29 @@ export function ProviderManager({ mode, onDone }: Props): React.ReactNode {
title: string,
emptyMessage: string,
onSelect: (profileId: string) => void,
options?: { includeGithub?: boolean },
): React.ReactNode {
if (profiles.length === 0) {
const includeGithub = options?.includeGithub ?? false
const selectOptions = profiles.map(profile => ({
value: profile.id,
label:
profile.id === activeProfileId
? `${profile.name} (active)`
: profile.name,
description: `${profile.provider === 'anthropic' ? 'anthropic' : 'openai-compatible'} · ${profile.baseUrl} · ${profile.model}`,
}))
if (includeGithub && githubProviderAvailable) {
selectOptions.push({
value: GITHUB_PROVIDER_ID,
label: isGithubActive
? `${GITHUB_PROVIDER_LABEL} (active)`
: GITHUB_PROVIDER_LABEL,
description: `github-models · ${GITHUB_PROVIDER_DEFAULT_BASE_URL} · ${getGithubProviderModel()}`,
})
}
if (selectOptions.length === 0) {
return (
<Box flexDirection="column" gap={1}>
<Text color="remember" bold>
@@ -528,25 +802,16 @@ export function ProviderManager({ mode, onDone }: Props): React.ReactNode {
)
}
const options = profiles.map(profile => ({
value: profile.id,
label:
profile.id === activeProfileId
? `${profile.name} (active)`
: profile.name,
description: `${profile.provider === 'anthropic' ? 'anthropic' : 'openai-compatible'} · ${profile.baseUrl} · ${profile.model}`,
}))
return (
<Box flexDirection="column" gap={1}>
<Text color="remember" bold>
{title}
</Text>
<Select
options={options}
options={selectOptions}
onChange={onSelect}
onCancel={() => setScreen('menu')}
visibleOptionCount={Math.min(10, Math.max(2, options.length))}
visibleOptionCount={Math.min(10, Math.max(2, selectOptions.length))}
/>
</Box>
)
@@ -566,16 +831,36 @@ export function ProviderManager({ mode, onDone }: Props): React.ReactNode {
'Set active provider',
'No providers available. Add one first.',
profileId => {
if (profileId === GITHUB_PROVIDER_ID) {
const githubError = activateGithubProvider()
if (githubError) {
setErrorMessage(`Could not activate GitHub provider: ${githubError}`)
setScreen('menu')
return
}
refreshProfiles()
setStatusMessage(`Active provider: ${GITHUB_PROVIDER_LABEL}`)
setScreen('menu')
return
}
const active = setActiveProviderProfile(profileId)
if (!active) {
setErrorMessage('Could not change active provider.')
setScreen('menu')
return
}
const settingsOverrideError =
clearStartupProviderOverrideFromUserSettings()
refreshProfiles()
setStatusMessage(`Active provider: ${active.name}`)
setStatusMessage(
settingsOverrideError
? `Active provider: ${active.name}. Warning: could not clear startup provider override (${settingsOverrideError}).`
: `Active provider: ${active.name}`,
)
setScreen('menu')
},
{ includeGithub: true },
)
break
case 'select-edit':
@@ -592,15 +877,35 @@ export function ProviderManager({ mode, onDone }: Props): React.ReactNode {
'Delete provider',
'No providers available. Add one first.',
profileId => {
if (profileId === GITHUB_PROVIDER_ID) {
const githubDeleteError = deleteGithubProvider()
if (githubDeleteError) {
setErrorMessage(`Could not delete GitHub provider: ${githubDeleteError}`)
} else {
refreshProfiles()
setStatusMessage('GitHub provider deleted')
}
setScreen('menu')
return
}
const result = deleteProviderProfile(profileId)
if (!result.removed) {
setErrorMessage('Could not delete provider.')
} else {
const settingsOverrideError = result.activeProfileId
? clearStartupProviderOverrideFromUserSettings()
: null
refreshProfiles()
setStatusMessage('Provider deleted')
setStatusMessage(
settingsOverrideError
? `Provider deleted. Warning: could not clear startup provider override (${settingsOverrideError}).`
: 'Provider deleted',
)
}
setScreen('menu')
},
{ includeGithub: true },
)
break
case 'menu':

View File

@@ -5,6 +5,9 @@
* Addresses: https://github.com/Gitlawb/openclaude/issues/55
*/
import { isLocalProviderUrl } from '../services/api/providerConfig.js'
import { getLocalOpenAICompatibleProviderLabel } from '../utils/providerDiscovery.js'
declare const MACRO: { VERSION: string; DISPLAY_VERSION?: string }
const ESC = '\x1b['
@@ -99,7 +102,7 @@ function detectProvider(): { name: string; model: string; baseUrl: string; isLoc
if (useOpenAI) {
const rawModel = process.env.OPENAI_MODEL || 'gpt-4o'
const baseUrl = process.env.OPENAI_BASE_URL || 'https://api.openai.com/v1'
const isLocal = /localhost|127\.0\.0\.1|0\.0\.0\.0/.test(baseUrl)
const isLocal = isLocalProviderUrl(baseUrl)
let name = 'OpenAI'
if (/deepseek/i.test(baseUrl) || /deepseek/i.test(rawModel)) name = 'DeepSeek'
else if (/openrouter/i.test(baseUrl)) name = 'OpenRouter'
@@ -107,10 +110,8 @@ function detectProvider(): { name: string; model: string; baseUrl: string; isLoc
else if (/groq/i.test(baseUrl)) name = 'Groq'
else if (/mistral/i.test(baseUrl) || /mistral/i.test(rawModel)) name = 'Mistral'
else if (/azure/i.test(baseUrl)) name = 'Azure OpenAI'
else if (/localhost:11434/i.test(baseUrl)) name = 'Ollama'
else if (/localhost:1234/i.test(baseUrl)) name = 'LM Studio'
else if (/llama/i.test(rawModel)) name = 'Meta Llama'
else if (isLocal) name = 'Local'
else if (isLocal) name = getLocalOpenAICompatibleProviderLabel(baseUrl)
// Resolve model alias to actual model name + reasoning effort
let displayModel = rawModel

View File

@@ -0,0 +1,113 @@
import { describe, expect, it, mock } from 'bun:test'
// We can't fully render ThemePicker due to complex dependencies
// But we can test the theme options generation logic
describe('ThemePicker', () => {
describe('theme options', () => {
it('generates correct theme options without AUTO_THEME feature flag', () => {
// Since we can't easily mock bun:bundle, test the options structure
// The real test would require integration testing
const expectedOptions = [
{ label: "Dark mode", value: "dark" },
{ label: "Light mode", value: "light" },
{ label: "Dark mode (colorblind-friendly)", value: "dark-daltonized" },
{ label: "Light mode (colorblind-friendly)", value: "light-daltonized" },
{ label: "Dark mode (ANSI colors only)", value: "dark-ansi" },
{ label: "Light mode (ANSI colors only)", value: "light-ansi" },
]
expect(expectedOptions.length).toBe(6)
})
it('includes auto theme when AUTO_THEME feature is enabled', () => {
// Test the structure when auto is present
const optionsWithAuto = [
{ label: "Auto (match terminal)", value: "auto" },
{ label: "Dark mode", value: "dark" },
]
expect(optionsWithAuto[0].value).toBe('auto')
})
})
describe('handleRowFocus callback', () => {
it('setPreviewTheme is called with theme setting', () => {
const setPreviewTheme = mock()
const handleRowFocus = (setting: string) => setPreviewTheme(setting)
handleRowFocus('dark')
expect(setPreviewTheme).toHaveBeenCalledWith('dark')
})
})
describe('handleSelect callback', () => {
it('calls savePreview and onThemeSelect', () => {
const savePreview = mock()
const onThemeSelect = mock()
const handleSelect = (setting: string) => {
savePreview()
onThemeSelect(setting)
}
handleSelect('light')
expect(savePreview).toHaveBeenCalled()
expect(onThemeSelect).toHaveBeenCalledWith('light')
})
})
describe('handleCancel callback', () => {
it('calls cancelPreview and gracefulShutdown when not skipExitHandling', () => {
const cancelPreview = mock()
const gracefulShutdown = mock()
const handleCancel = (skipExitHandling: boolean, onCancelProp?: () => void) => {
cancelPreview()
if (skipExitHandling) {
onCancelProp?.()
} else {
gracefulShutdown(0)
}
}
handleCancel(false)
expect(cancelPreview).toHaveBeenCalled()
expect(gracefulShutdown).toHaveBeenCalledWith(0)
})
it('calls onCancelProp when skipExitHandling is true', () => {
const cancelPreview = mock()
const onCancelProp = mock()
const handleCancel = (skipExitHandling: boolean, onCancelProp?: () => void) => {
cancelPreview()
if (skipExitHandling) {
onCancelProp?.()
}
}
handleCancel(true, onCancelProp)
expect(cancelPreview).toHaveBeenCalled()
expect(onCancelProp).toHaveBeenCalled()
})
})
describe('syntax hint logic', () => {
it('shows disabled hint when syntax highlighting is disabled', () => {
const syntaxHighlightingDisabled = true
const syntaxToggleShortcut = 'Ctrl+T'
const hint = syntaxHighlightingDisabled
? `Syntax highlighting disabled (${syntaxToggleShortcut} to enable)`
: `Syntax highlighting enabled (${syntaxToggleShortcut} to disable)`
expect(hint).toContain('disabled')
})
it('shows enabled hint when syntax highlighting is active', () => {
const syntaxHighlightingDisabled = false
const syntaxToggleShortcut = 'Ctrl+T'
const hint = !syntaxHighlightingDisabled
? `Syntax highlighting enabled (${syntaxToggleShortcut} to disable)`
: `Syntax highlighting disabled (${syntaxToggleShortcut} to enable)`
expect(hint).toContain('enabled')
})
})
})

View File

@@ -1,13 +1,14 @@
import { c as _c } from "react-compiler-runtime";
import { feature } from 'bun:bundle';
import type { StructuredPatchHunk } from 'diff';
import * as React from 'react';
import { useExitOnCtrlCDWithKeybindings } from '../hooks/useExitOnCtrlCDWithKeybindings.js';
import { useExitOnCtrlCDWithKeybindings } from '../hooks/useExitOnCtrlCDWithKeybindings.js'
import { useTerminalSize } from '../hooks/useTerminalSize.js';
import { Box, Text, usePreviewTheme, useTheme, useThemeSetting } from '../ink.js';
import { useRegisterKeybindingContext } from '../keybindings/KeybindingContext.js';
import { useKeybinding } from '../keybindings/useKeybinding.js';
import { useShortcutDisplay } from '../keybindings/useShortcutDisplay.js';
import { useAppState, useSetAppState } from '../state/AppState.js';
import type { AppState } from '../state/AppStateStore.js';
import { gracefulShutdown } from '../utils/gracefulShutdown.js';
import { updateSettingsForSource } from '../utils/settings/settings.js';
import type { ThemeSetting } from '../utils/theme.js';
@@ -16,6 +17,17 @@ import { Byline } from './design-system/Byline.js';
import { KeyboardShortcutHint } from './design-system/KeyboardShortcutHint.js';
import { getColorModuleUnavailableReason, getSyntaxTheme } from './StructuredDiff/colorDiff.js';
import { StructuredDiff } from './StructuredDiff.js';
type StructuredDiffComponent = React.ComponentType<{
patch: StructuredPatchHunk
dim: boolean
filePath: string
firstLine: string | null
width: number
skipHighlighting?: boolean
}>
const StructuredDiffView = StructuredDiff as StructuredDiffComponent
export type ThemePickerProps = {
onThemeSelect: (setting: ThemeSetting) => void;
showIntroText?: boolean;
@@ -26,307 +38,224 @@ export type ThemePickerProps = {
skipExitHandling?: boolean;
/** Called when the user cancels (presses Escape). If skipExitHandling is true and this is provided, it will be called instead of just saving the preview. */
onCancel?: () => void;
};
export function ThemePicker(t0) {
const $ = _c(59);
const {
onThemeSelect,
showIntroText: t1,
helpText: t2,
showHelpTextBelow: t3,
hideEscToCancel: t4,
skipExitHandling: t5,
onCancel: onCancelProp
} = t0;
const showIntroText = t1 === undefined ? false : t1;
const helpText = t2 === undefined ? "" : t2;
const showHelpTextBelow = t3 === undefined ? false : t3;
const hideEscToCancel = t4 === undefined ? false : t4;
const skipExitHandling = t5 === undefined ? false : t5;
}
const DEMO_PATCH: StructuredPatchHunk = {
oldStart: 1,
newStart: 1,
oldLines: 3,
newLines: 3,
lines: [
' function greet() {',
'- console.log("Hello, World!");',
'+ console.log("Hello, Claude!");',
' }',
],
}
/**
* Theme chooser with live preview. Implemented without react-compiler `_c` memo
* caches so preview/subtree reconciliation cannot stick on stale element refs when
* `setPreviewTheme` updates the resolved palette.
*/
export function ThemePicker({
onThemeSelect,
showIntroText = false,
helpText = '',
showHelpTextBelow = false,
hideEscToCancel = false,
skipExitHandling = false,
onCancel: onCancelProp,
}: ThemePickerProps) {
const [theme] = useTheme();
const themeSetting = useThemeSetting();
const {
columns
} = useTerminalSize();
let t6;
if ($[0] === Symbol.for("react.memo_cache_sentinel")) {
t6 = getColorModuleUnavailableReason();
$[0] = t6;
} else {
t6 = $[0];
}
const colorModuleUnavailableReason = t6;
let t7;
if ($[1] !== theme) {
t7 = colorModuleUnavailableReason === null ? getSyntaxTheme(theme) : null;
$[1] = theme;
$[2] = t7;
} else {
t7 = $[2];
}
const syntaxTheme = t7;
const {
setPreviewTheme,
savePreview,
cancelPreview
} = usePreviewTheme();
const syntaxHighlightingDisabled = useAppState(_temp) ?? false;
const { columns } = useTerminalSize();
const colorModuleUnavailableReason = React.useMemo(
() => getColorModuleUnavailableReason(),
[],
)
const syntaxTheme =
colorModuleUnavailableReason === null ? getSyntaxTheme(theme) : null
const { setPreviewTheme, savePreview, cancelPreview } = usePreviewTheme()
const syntaxHighlightingDisabled = useAppState(
(s: AppState) => s.settings.syntaxHighlightingDisabled ?? false
);
const setAppState = useSetAppState();
useRegisterKeybindingContext("ThemePicker");
useRegisterKeybindingContext("ThemePicker", true);
const syntaxToggleShortcut = useShortcutDisplay("theme:toggleSyntaxHighlighting", "ThemePicker", "ctrl+t");
let t8;
if ($[3] !== setAppState || $[4] !== syntaxHighlightingDisabled) {
t8 = () => {
if (colorModuleUnavailableReason === null) {
const newValue = !syntaxHighlightingDisabled;
updateSettingsForSource("userSettings", {
const toggleSyntax = React.useCallback(() => {
if (colorModuleUnavailableReason === null) {
const newValue = !syntaxHighlightingDisabled
updateSettingsForSource("userSettings", {
syntaxHighlightingDisabled: newValue
});
setAppState(prev => ({
...prev,
settings: {
...prev.settings,
syntaxHighlightingDisabled: newValue
});
setAppState(prev => ({
...prev,
settings: {
...prev.settings,
syntaxHighlightingDisabled: newValue
}
}));
}
};
$[3] = setAppState;
$[4] = syntaxHighlightingDisabled;
$[5] = t8;
} else {
t8 = $[5];
}
let t9;
if ($[6] === Symbol.for("react.memo_cache_sentinel")) {
t9 = {
context: "ThemePicker"
};
$[6] = t9;
} else {
t9 = $[6];
}
useKeybinding("theme:toggleSyntaxHighlighting", t8, t9);
const exitState = useExitOnCtrlCDWithKeybindings(skipExitHandling ? _temp2 : undefined);
let t10;
if ($[7] === Symbol.for("react.memo_cache_sentinel")) {
t10 = [...(feature("AUTO_THEME") ? [{
label: "Auto (match terminal)",
value: "auto" as const
}] : []), {
label: "Dark mode",
value: "dark"
}, {
label: "Light mode",
value: "light"
}, {
label: "Dark mode (colorblind-friendly)",
value: "dark-daltonized"
}, {
label: "Light mode (colorblind-friendly)",
value: "light-daltonized"
}, {
label: "Dark mode (ANSI colors only)",
value: "dark-ansi"
}, {
label: "Light mode (ANSI colors only)",
value: "light-ansi"
}];
$[7] = t10;
} else {
t10 = $[7];
}
const themeOptions = t10;
let t11;
if ($[8] !== showIntroText) {
t11 = showIntroText ? <Text>Let's get started.</Text> : <Text bold={true} color="permission">Theme</Text>;
$[8] = showIntroText;
$[9] = t11;
} else {
t11 = $[9];
}
let t12;
if ($[10] === Symbol.for("react.memo_cache_sentinel")) {
t12 = <Text bold={true}>Choose the text style that looks best with your terminal</Text>;
$[10] = t12;
} else {
t12 = $[10];
}
let t13;
if ($[11] !== helpText || $[12] !== showHelpTextBelow) {
t13 = helpText && !showHelpTextBelow && <Text dimColor={true}>{helpText}</Text>;
$[11] = helpText;
$[12] = showHelpTextBelow;
$[13] = t13;
} else {
t13 = $[13];
}
let t14;
if ($[14] !== t13) {
t14 = <Box flexDirection="column">{t12}{t13}</Box>;
$[14] = t13;
$[15] = t14;
} else {
t14 = $[15];
}
let t15;
if ($[16] !== setPreviewTheme) {
t15 = setting => {
setPreviewTheme(setting as ThemeSetting);
};
$[16] = setPreviewTheme;
$[17] = t15;
} else {
t15 = $[17];
}
let t16;
if ($[18] !== onThemeSelect || $[19] !== savePreview) {
t16 = setting_0 => {
savePreview();
onThemeSelect(setting_0 as ThemeSetting);
};
$[18] = onThemeSelect;
$[19] = savePreview;
$[20] = t16;
} else {
t16 = $[20];
}
let t17;
if ($[21] !== cancelPreview || $[22] !== onCancelProp || $[23] !== skipExitHandling) {
t17 = skipExitHandling ? () => {
cancelPreview();
onCancelProp?.();
} : async () => {
cancelPreview();
await gracefulShutdown(0);
};
$[21] = cancelPreview;
$[22] = onCancelProp;
$[23] = skipExitHandling;
$[24] = t17;
} else {
t17 = $[24];
}
let t18;
if ($[25] !== t15 || $[26] !== t16 || $[27] !== t17 || $[28] !== themeSetting) {
t18 = <Select options={themeOptions} onFocus={t15} onChange={t16} onCancel={t17} visibleOptionCount={themeOptions.length} defaultValue={themeSetting} defaultFocusValue={themeSetting} />;
$[25] = t15;
$[26] = t16;
$[27] = t17;
$[28] = themeSetting;
$[29] = t18;
} else {
t18 = $[29];
}
let t19;
if ($[30] !== t11 || $[31] !== t14 || $[32] !== t18) {
t19 = <Box flexDirection="column" gap={1}>{t11}{t14}{t18}</Box>;
$[30] = t11;
$[31] = t14;
$[32] = t18;
$[33] = t19;
} else {
t19 = $[33];
}
let t20;
if ($[34] === Symbol.for("react.memo_cache_sentinel")) {
t20 = {
oldStart: 1,
newStart: 1,
oldLines: 3,
newLines: 3,
lines: [" function greet() {", "- console.log(\"Hello, World!\");", "+ console.log(\"Hello, Claude!\");", " }"]
};
$[34] = t20;
} else {
t20 = $[34];
}
let t21;
if ($[35] !== columns) {
t21 = <Box flexDirection="column" borderTop={true} borderBottom={true} borderLeft={false} borderRight={false} borderStyle="dashed" borderColor="subtle"><StructuredDiff patch={t20} dim={false} filePath="demo.js" firstLine={null} width={columns} /></Box>;
$[35] = columns;
$[36] = t21;
} else {
t21 = $[36];
}
const t22 = colorModuleUnavailableReason === "env" ? `Syntax highlighting disabled (via CLAUDE_CODE_SYNTAX_HIGHLIGHT=${process.env.CLAUDE_CODE_SYNTAX_HIGHLIGHT})` : syntaxHighlightingDisabled ? `Syntax highlighting disabled (${syntaxToggleShortcut} to enable)` : syntaxTheme ? `Syntax theme: ${syntaxTheme.theme}${syntaxTheme.source ? ` (from ${syntaxTheme.source})` : ""} (${syntaxToggleShortcut} to disable)` : `Syntax highlighting enabled (${syntaxToggleShortcut} to disable)`;
let t23;
if ($[37] !== t22) {
t23 = <Text dimColor={true}>{" "}{t22}</Text>;
$[37] = t22;
$[38] = t23;
} else {
t23 = $[38];
}
let t24;
if ($[39] !== t21 || $[40] !== t23) {
t24 = <Box flexDirection="column" width="100%">{t21}{t23}</Box>;
$[39] = t21;
$[40] = t23;
$[41] = t24;
} else {
t24 = $[41];
}
let t25;
if ($[42] !== t19 || $[43] !== t24) {
t25 = <Box flexDirection="column" gap={1}>{t19}{t24}</Box>;
$[42] = t19;
$[43] = t24;
$[44] = t25;
} else {
t25 = $[44];
}
const content = t25;
}
}));
}
}, [
colorModuleUnavailableReason,
syntaxHighlightingDisabled,
setAppState,
])
useKeybinding("theme:toggleSyntaxHighlighting", toggleSyntax, {
context: "ThemePicker",
})
const exitState = useExitOnCtrlCDWithKeybindings(
skipExitHandling ? () => {} : undefined,
)
const themeOptions = React.useMemo(
() => [
...(feature("AUTO_THEME")
? [{ label: "Auto (match terminal)", value: "auto" as const }]
: []), {
label: "Dark mode",
value: "dark" as const
}, {
label: "Light mode",
value: "light" as const
}, {
label: "Dark mode (colorblind-friendly)",
value: "dark-daltonized" as const,
}, {
label: "Light mode (colorblind-friendly)",
value: "light-daltonized" as const,
}, {
label: "Dark mode (ANSI colors only)",
value: "dark-ansi" as const
}, {
label: "Light mode (ANSI colors only)",
value: "light-ansi" as const
},],
[],
)
const handleRowFocus = React.useCallback(
(setting: ThemeSetting) => {
setPreviewTheme(setting)
},
[setPreviewTheme],
)
const handleSelect = React.useCallback(
(setting: ThemeSetting) => {
savePreview()
onThemeSelect(setting)
},
[savePreview, onThemeSelect],
)
const handleCancel = React.useCallback(() => {
cancelPreview()
if (skipExitHandling) {
onCancelProp?.()
} else {
void gracefulShutdown(0)
}
}, [cancelPreview, onCancelProp, skipExitHandling])
const syntaxHint =
colorModuleUnavailableReason === 'env'
? `Syntax highlighting disabled (via CLAUDE_CODE_SYNTAX_HIGHLIGHT=${process.env.CLAUDE_CODE_SYNTAX_HIGHLIGHT})`
: syntaxHighlightingDisabled
? `Syntax highlighting disabled (${syntaxToggleShortcut} to enable)`
: syntaxTheme
? `Syntax theme: ${syntaxTheme.theme}${syntaxTheme.source ? ` (from ${syntaxTheme.source})` : ''} (${syntaxToggleShortcut} to disable)`
: `Syntax highlighting enabled (${syntaxToggleShortcut} to disable)`
const header = showIntroText ? (
<Text>{"Let's get started."}</Text>
) : (
<Text bold color="permission">
Theme
</Text>
)
const introBlock = (
<Box flexDirection="column">
<Text bold>Choose the text style that looks best with your terminal</Text>
{helpText && !showHelpTextBelow ? (
<Text dimColor>{helpText}</Text>
) : null}
</Box>
)
const content = (
<Box flexDirection="column" gap={1}>
<Box flexDirection="column" gap={1}>
{header}
{introBlock}
<Select
options={themeOptions}
onFocus={handleRowFocus}
onChange={handleSelect}
onCancel={handleCancel}
visibleOptionCount={themeOptions.length}
defaultValue={themeSetting}
defaultFocusValue={themeSetting}
/>
</Box>
<Box flexDirection="column" width="100%">
<Box
key={theme}
flexDirection="column"
borderTop
borderBottom
borderLeft={false}
borderRight={false}
borderStyle="dashed"
borderColor="subtle"
>
<StructuredDiffView
patch={DEMO_PATCH}
dim={false}
filePath="demo.js"
firstLine={null}
width={columns}
/>
</Box>
<Text dimColor>
{' '}
{syntaxHint}
</Text>
</Box>
</Box>
)
if (!showIntroText) {
let t26;
if ($[45] !== content) {
t26 = <Box flexDirection="column">{content}</Box>;
$[45] = content;
$[46] = t26;
} else {
t26 = $[46];
}
let t27;
if ($[47] !== helpText || $[48] !== showHelpTextBelow) {
t27 = showHelpTextBelow && helpText && <Box marginLeft={3}><Text dimColor={true}>{helpText}</Text></Box>;
$[47] = helpText;
$[48] = showHelpTextBelow;
$[49] = t27;
} else {
t27 = $[49];
}
let t28;
if ($[50] !== exitState || $[51] !== hideEscToCancel) {
t28 = !hideEscToCancel && <Box><Text dimColor={true} italic={true}>{exitState.pending ? <>Press {exitState.keyName} again to exit</> : <Byline><KeyboardShortcutHint shortcut="Enter" action="select" /><KeyboardShortcutHint shortcut="Esc" action="cancel" /></Byline>}</Text></Box>;
$[50] = exitState;
$[51] = hideEscToCancel;
$[52] = t28;
} else {
t28 = $[52];
}
let t29;
if ($[53] !== t27 || $[54] !== t28) {
t29 = <Box marginTop={1}>{t27}{t28}</Box>;
$[53] = t27;
$[54] = t28;
$[55] = t29;
} else {
t29 = $[55];
}
let t30;
if ($[56] !== t26 || $[57] !== t29) {
t30 = <>{t26}{t29}</>;
$[56] = t26;
$[57] = t29;
$[58] = t30;
} else {
t30 = $[58];
}
return t30;
return (
<>
<Box flexDirection="column">{content}</Box>
{showHelpTextBelow && helpText ? (
<Box marginLeft={3}>
<Text dimColor>{helpText}</Text>
</Box>
) : null}
{!hideEscToCancel ? (
<Box marginTop={1}>
<Text dimColor italic>
{exitState.pending ? (
<>Press {exitState.keyName} again to exit</>
) : (
<Byline>
<KeyboardShortcutHint shortcut="Enter" action="select" />
<KeyboardShortcutHint shortcut="Esc" action="cancel" />
</Byline>
)}
</Text>
</Box>
) : null}
</>
)
}
return content;
}
function _temp2() {}
function _temp(s) {
return s.settings.syntaxHighlightingDisabled;
return content
}

252
src/grpc/server.ts Normal file
View File

@@ -0,0 +1,252 @@
import * as grpc from '@grpc/grpc-js'
import * as protoLoader from '@grpc/proto-loader'
import path from 'path'
import { randomUUID } from 'crypto'
import { QueryEngine } from '../QueryEngine.js'
import { getTools } from '../tools.js'
import { getDefaultAppState } from '../state/AppStateStore.js'
import { AppState } from '../state/AppState.js'
import { FileStateCache, READ_FILE_STATE_CACHE_SIZE } from '../utils/fileStateCache.js'
const PROTO_PATH = path.resolve(import.meta.dirname, '../proto/openclaude.proto')
const packageDefinition = protoLoader.loadSync(PROTO_PATH, {
keepCase: true,
longs: String,
enums: String,
defaults: true,
oneofs: true,
})
const protoDescriptor = grpc.loadPackageDefinition(packageDefinition) as any
const openclaudeProto = protoDescriptor.openclaude.v1
const MAX_SESSIONS = 1000
export class GrpcServer {
private server: grpc.Server
private sessions: Map<string, any[]> = new Map()
constructor() {
this.server = new grpc.Server()
this.server.addService(openclaudeProto.AgentService.service, {
Chat: this.handleChat.bind(this),
})
}
start(port: number = 50051, host: string = 'localhost') {
this.server.bindAsync(
`${host}:${port}`,
grpc.ServerCredentials.createInsecure(),
(error, boundPort) => {
if (error) {
console.error('Failed to start gRPC server')
return
}
console.log(`gRPC Server running at ${host}:${boundPort}`)
}
)
}
private handleChat(call: grpc.ServerDuplexStream<any, any>) {
let engine: QueryEngine | null = null
let appState: AppState = getDefaultAppState()
const fileCache: FileStateCache = new FileStateCache(READ_FILE_STATE_CACHE_SIZE, 25 * 1024 * 1024)
// To handle ActionRequired (ask user for permission)
const pendingRequests = new Map<string, (reply: string) => void>()
// Accumulated messages from previous turns for multi-turn context
let previousMessages: any[] = []
let sessionId = ''
let interrupted = false
call.on('data', async (clientMessage) => {
try {
if (clientMessage.request) {
if (engine) {
call.write({
error: {
message: 'A request is already in progress on this stream',
code: 'ALREADY_EXISTS'
}
})
return
}
interrupted = false
const req = clientMessage.request
sessionId = req.session_id || ''
previousMessages = []
// Load previous messages from session store (cross-stream persistence)
if (sessionId && this.sessions.has(sessionId)) {
previousMessages = [...this.sessions.get(sessionId)!]
}
const toolNameById = new Map<string, string>()
engine = new QueryEngine({
cwd: req.working_directory || process.cwd(),
tools: getTools(appState.toolPermissionContext), // Gets all available tools
commands: [], // Slash commands
mcpClients: [],
agents: [],
...(previousMessages.length > 0 ? { initialMessages: previousMessages } : {}),
includePartialMessages: true,
canUseTool: async (tool, input, context, assistantMsg, toolUseID) => {
if (toolUseID) {
toolNameById.set(toolUseID, tool.name)
}
// Notify client of the tool call first
call.write({
tool_start: {
tool_name: tool.name,
arguments_json: JSON.stringify(input),
tool_use_id: toolUseID
}
})
// Ask user for permission
const promptId = randomUUID()
const question = `Approve ${tool.name}?`
call.write({
action_required: {
prompt_id: promptId,
question,
type: 'CONFIRM_COMMAND'
}
})
return new Promise((resolve) => {
pendingRequests.set(promptId, (reply) => {
if (reply.toLowerCase() === 'yes' || reply.toLowerCase() === 'y') {
resolve({ behavior: 'allow' })
} else {
resolve({ behavior: 'deny', reason: 'User denied via gRPC' })
}
})
})
},
getAppState: () => appState,
setAppState: (updater) => { appState = updater(appState) },
readFileCache: fileCache,
userSpecifiedModel: req.model,
fallbackModel: req.model,
})
// Track accumulated response data for FinalResponse
let fullText = ''
let promptTokens = 0
let completionTokens = 0
const generator = engine.submitMessage(req.message)
for await (const msg of generator) {
if (msg.type === 'stream_event') {
if (msg.event.type === 'content_block_delta' && msg.event.delta.type === 'text_delta') {
call.write({
text_chunk: {
text: msg.event.delta.text
}
})
fullText += msg.event.delta.text
}
} else if (msg.type === 'user') {
// Extract tool results
const content = msg.message.content
if (Array.isArray(content)) {
for (const block of content) {
if (block.type === 'tool_result') {
let outputStr = ''
if (typeof block.content === 'string') {
outputStr = block.content
} else if (Array.isArray(block.content)) {
outputStr = block.content.map(c => c.type === 'text' ? c.text : '').join('\n')
}
call.write({
tool_result: {
tool_name: toolNameById.get(block.tool_use_id) ?? block.tool_use_id,
tool_use_id: block.tool_use_id,
output: outputStr,
is_error: block.is_error || false
}
})
}
}
}
} else if (msg.type === 'result') {
// Extract real token counts and final text from the result
if (msg.subtype === 'success') {
if (msg.result) {
fullText = msg.result
}
promptTokens = msg.usage?.input_tokens ?? 0
completionTokens = msg.usage?.output_tokens ?? 0
}
}
}
if (!interrupted) {
// Save messages for multi-turn context in subsequent requests
previousMessages = [...engine.getMessages()]
// Persist to session store for cross-stream resumption
if (sessionId) {
if (!this.sessions.has(sessionId) && this.sessions.size >= MAX_SESSIONS) {
// Evict oldest session (Map preserves insertion order)
const oldestSessionId = this.sessions.keys().next().value
if (oldestSessionId !== undefined) this.sessions.delete(oldestSessionId)
}
this.sessions.set(sessionId, previousMessages)
}
call.write({
done: {
full_text: fullText,
prompt_tokens: promptTokens,
completion_tokens: completionTokens
}
})
}
engine = null
} else if (clientMessage.input) {
const promptId = clientMessage.input.prompt_id
const reply = clientMessage.input.reply
if (pendingRequests.has(promptId)) {
pendingRequests.get(promptId)!(reply)
pendingRequests.delete(promptId)
}
} else if (clientMessage.cancel) {
interrupted = true
if (engine) {
engine.interrupt()
}
call.end()
}
} catch (err: any) {
console.error('Error processing stream')
call.write({
error: {
message: err.message || "Internal server error",
code: "INTERNAL"
}
})
call.end()
}
})
call.on('end', () => {
interrupted = true
// Unblock any pending permission prompts so canUseTool can return
for (const resolve of pendingRequests.values()) {
resolve('no')
}
if (engine) {
engine.interrupt()
}
engine = null
pendingRequests.clear()
})
}
}
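// A minimal client sketch for the Chat stream above. This is a hypothetical
// illustration, not project code: the proto path, target address, and the
// auto-approve reply are assumptions.
import * as grpc from '@grpc/grpc-js'
import * as protoLoader from '@grpc/proto-loader'

const def = protoLoader.loadSync('src/proto/openclaude.proto', { keepCase: true })
const loaded = grpc.loadPackageDefinition(def) as any
const client = new loaded.openclaude.v1.AgentService(
  'localhost:50051',
  grpc.credentials.createInsecure(),
)
const stream = client.Chat()
stream.on('data', (msg: any) => {
  if (msg.text_chunk) process.stdout.write(msg.text_chunk.text)
  if (msg.action_required) {
    // A real client would ask the user; this sketch approves everything.
    stream.write({ input: { prompt_id: msg.action_required.prompt_id, reply: 'y' } })
  }
  if (msg.done || msg.error) stream.end()
})
stream.write({
  request: { message: 'list files', working_directory: process.cwd(), session_id: 'demo' },
})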

View File

@@ -366,14 +366,12 @@ const reconciler = createReconciler<
createTextInstance(
text: string,
_root: DOMElement,
hostContext: HostContext,
_hostContext: HostContext,
): TextNode {
if (!hostContext.isInsideText) {
throw new Error(
`Text string "${text}" must be rendered inside <Text> component`,
)
}
// react-compiler memoization can reuse cached <Text> elements without
// re-traversing getChildHostContext, so hostContext.isInsideText may be
// stale. Always create the text node — Ink will render it correctly
// regardless of the context tracking state.
return createTextNode(text)
},
resetTextContent() {},

View File

@@ -27,6 +27,21 @@ async function flushClipboardCopy(): Promise<void> {
await new Promise(resolve => setTimeout(resolve, 0))
}
async function waitForExecCall(
command: string,
attempts = 20,
): Promise<(typeof execFileNoThrowMock.mock.calls)[number] | undefined> {
for (let attempt = 0; attempt < attempts; attempt++) {
const call = execFileNoThrowMock.mock.calls.find(([cmd]) => cmd === command)
if (call) {
return call
}
await flushClipboardCopy()
}
return undefined
}
describe('Windows clipboard fallback', () => {
beforeEach(() => {
execFileNoThrowMock.mockClear()
@@ -62,9 +77,7 @@ describe('Windows clipboard fallback', () => {
await setClipboard('Привет мир')
await flushClipboardCopy()
const windowsCall = execFileNoThrowMock.mock.calls.find(
([cmd]) => cmd === 'powershell',
)
const windowsCall = await waitForExecCall('powershell')
expect(windowsCall?.[2]).toMatchObject({
stdin: 'ignore',

101
src/proto/openclaude.proto Normal file
View File

@@ -0,0 +1,101 @@
syntax = "proto3";
package openclaude.v1;
// Main Agent Service
service AgentService {
// Bidirectional stream: client sends tasks and answers to agent prompts,
// server streams text tokens, tool states, and requests permissions.
rpc Chat(stream ClientMessage) returns (stream ServerMessage);
}
// ---------------------------------------------------------
// MESSAGES FROM CLIENT (Input)
// ---------------------------------------------------------
message ClientMessage {
oneof payload {
// 1. Initial request (first message in the stream)
ChatRequest request = 2;
// 2. User response to an agent prompt (e.g., command confirmation)
UserInput input = 3;
// 3. Interrupt signal (if the user clicks "Stop generation")
CancelSignal cancel = 4;
}
}
message ChatRequest {
string message = 1;
string working_directory = 2; // Where the agent should execute commands
reserved 3; // Reserved to prevent accidental reuse
optional string model = 4;
string session_id = 5; // Non-empty = cross-stream session persistence
}
message UserInput {
string reply = 1; // Text response (e.g., "y", "no", or clarification)
string prompt_id = 2; // ID of the prompt we are responding to
}
message CancelSignal {
string reason = 1;
}
// ---------------------------------------------------------
// MESSAGES FROM SERVER (Output / Events)
// ---------------------------------------------------------
message ServerMessage {
// Using oneof guarantees that only one type of event arrives at a time
oneof event {
TextChunk text_chunk = 1; // Chunk of text from LLM
ToolCallStart tool_start = 2; // Agent started using a tool
ToolCallResult tool_result = 3; // Tool returned a result
ActionRequired action_required = 4; // Agent requires human intervention
FinalResponse done = 5; // Generation successfully completed
ErrorResponse error = 6; // A critical error occurred
}
}
// Stream text chunk
message TextChunk {
string text = 1;
}
// Agent decided to use a tool (bash, read_file, etc.)
message ToolCallStart {
string tool_name = 1;
string arguments_json = 2; // Arguments in JSON format
string tool_use_id = 3; // Correlation ID matching ToolCallResult
}
// Result of tool execution
message ToolCallResult {
string tool_name = 1;
string output = 2; // stdout/stderr or file contents
bool is_error = 3; // Did the command itself fail
string tool_use_id = 4; // Correlation ID matching ToolCallStart
}
// Agent paused work and is waiting for a user decision
message ActionRequired {
string prompt_id = 1; // Client must return this ID in UserInput
string question = 2; // Question text (e.g., "Execute 'rm -rf /'?")
enum ActionType {
CONFIRM_COMMAND = 0; // Yes/No
REQUEST_INFORMATION = 1; // Text input
}
ActionType type = 3;
}
// Final statistics
message FinalResponse {
string full_text = 1; // The entire generated text
int32 prompt_tokens = 2;
int32 completion_tokens = 3;
}
message ErrorResponse {
string message = 1;
string code = 2;
}
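// Illustrative message flow for one approved tool call (hand-traced from the
// definitions above, not part of the schema):
//   client -> ChatRequest { message, working_directory, session_id }
//   server -> TextChunk* ... ToolCallStart { tool_use_id, arguments_json }
//   server -> ActionRequired { prompt_id, type: CONFIRM_COMMAND }
//   client -> UserInput { prompt_id, reply: "y" }
//   server -> ToolCallResult { tool_use_id, output } ... FinalResponse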

View File

@@ -237,6 +237,8 @@ import { useOfficialMarketplaceNotification } from 'src/hooks/useOfficialMarketp
import { usePromptsFromClaudeInChrome } from 'src/hooks/usePromptsFromClaudeInChrome.js';
import { getTipToShowOnSpinner, recordShownTip } from 'src/services/tips/tipScheduler.js';
import type { Theme } from 'src/utils/theme.js';
import { isPromptTypingSuppressionActive } from './replInputSuppression.js';
import { shouldStartStartupChecks } from './replStartupGates.js';
import { checkAndDisableBypassPermissionsIfNeeded, checkAndDisableAutoModeIfNeeded, useKickOffCheckAndDisableBypassPermissionsIfNeeded, useKickOffCheckAndDisableAutoModeIfNeeded } from 'src/utils/permissions/bypassPermissionsKillswitch.js';
import { SandboxManager } from 'src/utils/sandbox/sandbox-adapter.js';
import { SANDBOX_NETWORK_ACCESS_TOOL_NAME } from 'src/cli/structuredIO.js';
@@ -783,19 +785,6 @@ export function REPL({
});
const tasksV2 = useTasksV2WithCollapseEffect();
// Start background plugin installations
// SECURITY: This code is guaranteed to run ONLY after the "trust this folder" dialog
// has been confirmed by the user. The trust dialog is shown in cli.tsx (line ~387)
// before the REPL component is rendered. The dialog blocks execution until the user
// accepts, and only then is the REPL component mounted and this effect runs.
// This ensures that plugin installations from repository and user settings only
// happen after explicit user consent to trust the current working directory.
useEffect(() => {
if (isRemoteSession) return;
void performStartupChecks(setAppState);
}, [setAppState, isRemoteSession]);
// Allow Claude in Chrome MCP to send prompts through MCP notifications
// and sync permission mode changes to the Chrome extension
usePromptsFromClaudeInChrome(isRemoteSession ? EMPTY_MCP_CLIENTS : mcpClients, toolPermissionContext.mode);
@@ -1336,12 +1325,32 @@ export function REPL({
const [inputValue, setInputValueRaw] = useState(() => consumeEarlyInput());
const inputValueRef = useRef(inputValue);
inputValueRef.current = inputValue;
const startupChecksStartedRef = useRef(false);
const promptTypingSuppressionActive = isPromptTypingSuppressionActive(isPromptInputActive, inputValue);
const insertTextRef = useRef<{
insert: (text: string) => void;
setInputWithCursor: (value: string, cursor: number) => void;
cursorOffset: number;
} | null>(null);
// Start background plugin installations after the initial input window is idle.
// SECURITY: This still runs only after the "trust this folder" dialog has been
// confirmed because the REPL is not mounted until that dialog completes.
useEffect(() => {
if (
!shouldStartStartupChecks({
isRemoteSession,
promptTypingSuppressionActive,
startupChecksStarted: startupChecksStartedRef.current,
})
) {
return;
}
startupChecksStartedRef.current = true;
void performStartupChecks(setAppState);
}, [isRemoteSession, promptTypingSuppressionActive, setAppState]);
// Wrap setInputValue to co-locate suppression state updates.
// Both setState calls happen in the same synchronous context so React
// batches them into a single render, eliminating the extra render that
@@ -2028,7 +2037,7 @@ export function REPL({
if (isMessageSelectorVisible) return 'message-selector';
// Suppress interrupt dialogs while user is actively typing
if (isPromptInputActive) return undefined;
if (promptTypingSuppressionActive) return undefined;
if (sandboxPermissionRequestQueue[0]) return 'sandbox-permission';
// Permission/interactive dialogs (show unless blocked by toolJSX)
@@ -2071,7 +2080,7 @@ export function REPL({
const focusedInputDialog = getFocusedInputDialog();
// True when permission prompts exist but are hidden because the user is typing
const hasSuppressedDialogs = isPromptInputActive && (sandboxPermissionRequestQueue[0] || toolUseConfirmQueue[0] || promptQueue[0] || workerSandboxPermissions.queue[0] || elicitation.queue[0] || showingCostDialog);
const hasSuppressedDialogs = promptTypingSuppressionActive && (sandboxPermissionRequestQueue[0] || toolUseConfirmQueue[0] || promptQueue[0] || workerSandboxPermissions.queue[0] || elicitation.queue[0] || showingCostDialog);
// Keep ref in sync so timer callbacks can read the current value
focusedInputDialogRef.current = focusedInputDialog;

View File

@@ -0,0 +1,18 @@
import { describe, expect, it } from 'bun:test'
import { isPromptTypingSuppressionActive } from './replInputSuppression.js'
describe('isPromptTypingSuppressionActive', () => {
it('suppresses dialogs when early input already exists', () => {
expect(isPromptTypingSuppressionActive(false, 'hello')).toBe(true)
})
it('does not suppress dialogs for empty or whitespace-only input', () => {
expect(isPromptTypingSuppressionActive(false, '')).toBe(false)
expect(isPromptTypingSuppressionActive(false, ' ')).toBe(false)
})
it('keeps suppression active while the typing flag is set', () => {
expect(isPromptTypingSuppressionActive(true, '')).toBe(true)
})
})

View File

@@ -0,0 +1,6 @@
export function isPromptTypingSuppressionActive(
isPromptInputActive: boolean,
inputValue: string,
): boolean {
return isPromptInputActive || inputValue.trim().length > 0
}

View File

@@ -0,0 +1,44 @@
import { describe, expect, test } from 'bun:test'
import { shouldStartStartupChecks } from './replStartupGates.js'
describe('shouldStartStartupChecks', () => {
test('returns false for remote sessions', () => {
expect(
shouldStartStartupChecks({
isRemoteSession: true,
promptTypingSuppressionActive: false,
startupChecksStarted: false,
}),
).toBe(false)
})
test('returns false while prompt typing suppression is active', () => {
expect(
shouldStartStartupChecks({
isRemoteSession: false,
promptTypingSuppressionActive: true,
startupChecksStarted: false,
}),
).toBe(false)
})
test('returns true once local startup is idle and checks have not started', () => {
expect(
shouldStartStartupChecks({
isRemoteSession: false,
promptTypingSuppressionActive: false,
startupChecksStarted: false,
}),
).toBe(true)
})
test('returns false after startup checks have already started', () => {
expect(
shouldStartStartupChecks({
isRemoteSession: false,
promptTypingSuppressionActive: false,
startupChecksStarted: true,
}),
).toBe(false)
})
})

View File

@@ -0,0 +1,11 @@
export function shouldStartStartupChecks(options: {
isRemoteSession: boolean
promptTypingSuppressionActive: boolean
startupChecksStarted: boolean
}): boolean {
return (
!options.isRemoteSession &&
!options.promptTypingSuppressionActive &&
!options.startupChecksStarted
)
}

View File

@@ -14,7 +14,16 @@ import { lazySchema } from '../../utils/lazySchema.js'
import { logError } from '../../utils/log.js'
import { getAPIProvider } from '../../utils/model/providers.js'
import { isEssentialTrafficOnly } from '../../utils/privacyLevel.js'
import type { ModelOption } from '../../utils/model/modelOptions.js'
import {
getLocalOpenAICompatibleProviderLabel,
listOpenAICompatibleModels,
} from '../../utils/providerDiscovery.js'
import { getClaudeCodeUserAgent } from '../../utils/userAgent.js'
import {
getAdditionalModelOptionsCacheScope,
resolveProviderRequest,
} from './providerConfig.js'
const bootstrapResponseSchema = lazySchema(() =>
z.object({
@@ -39,6 +48,12 @@ const bootstrapResponseSchema = lazySchema(() =>
type BootstrapResponse = z.infer<ReturnType<typeof bootstrapResponseSchema>>
type BootstrapCachePayload = {
clientData: Record<string, unknown> | null
additionalModelOptions: ModelOption[]
additionalModelOptionsScope: string
}
async function fetchBootstrapAPI(): Promise<BootstrapResponse | null> {
if (isEssentialTrafficOnly()) {
logForDebugging('[Bootstrap] Skipped: Nonessential traffic disabled')
@@ -108,22 +123,70 @@ async function fetchBootstrapAPI(): Promise<BootstrapResponse | null> {
}
}
async function fetchLocalOpenAIModelOptions(): Promise<BootstrapCachePayload | null> {
const scope = getAdditionalModelOptionsCacheScope()
if (!scope?.startsWith('openai:')) {
return null
}
const { baseUrl } = resolveProviderRequest()
const models = await listOpenAICompatibleModels({
baseUrl,
apiKey: process.env.OPENAI_API_KEY,
})
if (models === null) {
logForDebugging('[Bootstrap] Local OpenAI model discovery failed')
return null
}
const providerLabel = getLocalOpenAICompatibleProviderLabel(baseUrl)
return {
clientData: getGlobalConfig().clientDataCache ?? null,
additionalModelOptionsScope: scope,
additionalModelOptions: models.map(model => ({
value: model,
label: model,
description: `Detected from ${providerLabel}`,
})),
}
}
/**
* Fetch bootstrap data from the API and persist to disk cache.
*/
export async function fetchBootstrapData(): Promise<void> {
try {
const response = await fetchBootstrapAPI()
if (!response) return
const scope = getAdditionalModelOptionsCacheScope()
let payload: BootstrapCachePayload | null = null
const clientData = response.client_data ?? null
const additionalModelOptions = response.additional_model_options ?? []
if (scope === 'firstParty') {
const response = await fetchBootstrapAPI()
if (!response) return
payload = {
clientData: response.client_data ?? null,
additionalModelOptions: response.additional_model_options ?? [],
additionalModelOptionsScope: scope,
}
} else if (scope?.startsWith('openai:')) {
payload = await fetchLocalOpenAIModelOptions()
if (!payload) return
} else {
logForDebugging('[Bootstrap] Skipped: no additional model source')
return
}
const { clientData, additionalModelOptions, additionalModelOptionsScope } =
payload
// Only persist if data actually changed — avoids a config write on every startup.
const config = getGlobalConfig()
if (
isEqual(config.clientDataCache, clientData) &&
isEqual(config.additionalModelOptionsCache, additionalModelOptions)
isEqual(config.additionalModelOptionsCache, additionalModelOptions) &&
config.additionalModelOptionsCacheScope === additionalModelOptionsScope
) {
logForDebugging('[Bootstrap] Cache unchanged, skipping write')
return
@@ -134,6 +197,7 @@ export async function fetchBootstrapData(): Promise<void> {
...current,
clientDataCache: clientData,
additionalModelOptionsCache: additionalModelOptions,
additionalModelOptionsCacheScope: additionalModelOptionsScope,
}))
} catch (error) {
logError(error)

View File

@@ -14,12 +14,19 @@ import {
} from './providerConfig.js'
const tempDirs: string[] = []
const originalEnv = {
OPENAI_BASE_URL: process.env.OPENAI_BASE_URL,
OPENAI_API_BASE: process.env.OPENAI_API_BASE,
}
afterEach(() => {
while (tempDirs.length > 0) {
const dir = tempDirs.pop()
if (dir) rmSync(dir, { recursive: true, force: true })
}
process.env.OPENAI_BASE_URL = originalEnv.OPENAI_BASE_URL
process.env.OPENAI_API_BASE = originalEnv.OPENAI_API_BASE
})
function createTempAuthJson(payload: Record<string, unknown>): string {
@@ -62,12 +69,26 @@ describe('Codex provider config', () => {
})
test('resolves codexplan alias to Codex transport with reasoning', () => {
delete process.env.OPENAI_BASE_URL
delete process.env.OPENAI_API_BASE
const resolved = resolveProviderRequest({ model: 'codexplan' })
expect(resolved.transport).toBe('codex_responses')
expect(resolved.resolvedModel).toBe('gpt-5.4')
expect(resolved.reasoning).toEqual({ effort: 'high' })
})
test('does not force Codex transport when a local non-Codex base URL is explicit', () => {
const resolved = resolveProviderRequest({
model: 'codexplan',
baseUrl: 'http://127.0.0.1:8080/v1',
})
expect(resolved.transport).toBe('chat_completions')
expect(resolved.baseUrl).toBe('http://127.0.0.1:8080/v1')
expect(resolved.resolvedModel).toBe('gpt-5.4')
})
test('resolves codexplan to Codex transport even when OPENAI_BASE_URL is the string "undefined"', () => {
// On Windows, env vars can leak as the literal string "undefined" instead of
// the JS value undefined when not properly unset (issue #336).

View File

@@ -557,8 +557,12 @@ export function getAssistantMessageFromError(
const stripped = error.message.replace(/^429\s+/, '')
const innerMessage = stripped.match(/"message"\s*:\s*"([^"]*)"/)?.[1]
const detail = innerMessage || stripped
const retryAfter = (error as APIError).headers?.get?.('retry-after')
const retryHint = retryAfter && !isNaN(Number(retryAfter))
? `Try again in ${retryAfter} seconds.`
: 'Try again in a few seconds.'
return createAssistantAPIErrorMessage({
content: `${API_ERROR_MESSAGE_PREFIX}: Request rejected (429) · ${detail || `this may be a temporary capacity issue${getAPIProvider() === 'firstParty' ? ' — check status.anthropic.com' : ''}`}`,
content: `${API_ERROR_MESSAGE_PREFIX}: Request rejected (429) · ${detail || 'this may be a temporary capacity issue'} ${retryHint}`,
error: 'rate_limit',
})
}
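// Hand-traced examples of the retry hint above (hypothetical header values):
//   retry-after: '12'                            -> 'Try again in 12 seconds.'
//   retry-after: 'Wed, 21 Oct 2026 07:28:00 GMT' -> 'Try again in a few seconds.' (non-numeric)
//   header absent                                -> 'Try again in a few seconds.'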

File diff suppressed because it is too large

View File

@@ -42,6 +42,10 @@ import {
} from './providerConfig.js'
import { sanitizeSchemaForOpenAICompat } from '../../utils/schemaSanitizer.js'
import { redactSecretValueForDisplay } from '../../utils/providerProfile.js'
import {
normalizeToolArguments,
hasToolFieldMapping,
} from './toolArgumentNormalization.js'
type SecretValueSource = Partial<{
OPENAI_API_KEY: string
@@ -56,11 +60,22 @@ const GITHUB_API_VERSION = '2022-11-28'
const GITHUB_429_MAX_RETRIES = 3
const GITHUB_429_BASE_DELAY_SEC = 1
const GITHUB_429_MAX_DELAY_SEC = 32
const GEMINI_API_HOST = 'generativelanguage.googleapis.com'
function isGithubModelsMode(): boolean {
return isEnvTruthy(process.env.CLAUDE_CODE_USE_GITHUB)
}
function hasGeminiApiHost(baseUrl: string | undefined): boolean {
if (!baseUrl) return false
try {
return new URL(baseUrl).hostname.toLowerCase() === GEMINI_API_HOST
} catch {
return false
}
}
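// Hand-checked against the helper above: hostname parsing rejects lookalike
// hosts that a raw substring test would accept.
//   hasGeminiApiHost('https://generativelanguage.googleapis.com/v1beta')        -> true
//   hasGeminiApiHost('https://generativelanguage.googleapis.com.evil.example/') -> false
//   hasGeminiApiHost('not-a-url')                                               -> false (URL parse throws)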
function formatRetryAfterHint(response: Response): string {
const ra = response.headers.get('retry-after')
return ra ? ` (Retry-After: ${ra})` : ''
@@ -197,6 +212,13 @@ function convertContentBlocks(
return parts
}
function isGeminiMode(): boolean {
return (
isEnvTruthy(process.env.CLAUDE_CODE_USE_GEMINI) ||
hasGeminiApiHost(process.env.OPENAI_BASE_URL)
)
}
function convertMessages(
messages: Array<{ role: string; message?: { role?: string; content?: unknown }; content?: unknown }>,
system: unknown,
@@ -248,6 +270,7 @@ function convertMessages(
// Check for tool_use blocks
if (Array.isArray(content)) {
const toolUses = content.filter((b: { type?: string }) => b.type === 'tool_use')
const thinkingBlock = content.find((b: { type?: string }) => b.type === 'thinking')
const textContent = content.filter(
(b: { type?: string }) => b.type !== 'tool_use' && b.type !== 'thinking',
)
@@ -267,18 +290,46 @@ function convertMessages(
name?: string
input?: unknown
extra_content?: Record<string, unknown>
}) => ({
id: tu.id ?? `call_${crypto.randomUUID().replace(/-/g, '')}`,
type: 'function' as const,
function: {
name: tu.name ?? 'unknown',
arguments:
typeof tu.input === 'string'
? tu.input
: JSON.stringify(tu.input ?? {}),
},
...(tu.extra_content ? { extra_content: tu.extra_content } : {}),
}),
signature?: string
}, index) => {
const toolCall: NonNullable<OpenAIMessage['tool_calls']>[number] = {
id: tu.id ?? `call_${crypto.randomUUID().replace(/-/g, '')}`,
type: 'function' as const,
function: {
name: tu.name ?? 'unknown',
arguments:
typeof tu.input === 'string'
? tu.input
: JSON.stringify(tu.input ?? {}),
},
}
// Preserve existing extra_content if present
if (tu.extra_content) {
toolCall.extra_content = { ...tu.extra_content }
}
// Handle Gemini thought_signature
if (isGeminiMode()) {
// Prefer a signature provided on the tool_use block itself (e.g. from a
// previous turn/step); otherwise use thinkingBlock.signature for ALL tool
// calls in the same assistant turn. The API requires the same signature on
// every replayed function call part in a parallel set.
const signature = tu.signature ?? (thinkingBlock as any)?.signature
// Merge into existing google-specific metadata if present
const existingGoogle = (toolCall.extra_content?.google as Record<string, unknown>) ?? {}
toolCall.extra_content = {
...toolCall.extra_content,
google: {
...existingGoogle,
thought_signature: signature ?? "skip_thought_signature_validator"
}
}
}
return toolCall
},
)
}
@@ -295,7 +346,41 @@ function convertMessages(
}
}
return result
// Coalescing pass: merge consecutive messages of the same role.
// OpenAI/vLLM/Ollama require strict user↔assistant alternation.
// Multiple consecutive tool messages are allowed (assistant → tool* → user).
// Consecutive user or assistant messages must be merged to avoid Jinja
// template errors like "roles must alternate" (Devstral, Mistral models).
const coalesced: OpenAIMessage[] = []
for (const msg of result) {
const prev = coalesced[coalesced.length - 1]
if (prev && prev.role === msg.role && msg.role !== 'tool' && msg.role !== 'system') {
const prevContent = prev.content
const curContent = msg.content
if (typeof prevContent === 'string' && typeof curContent === 'string') {
prev.content = prevContent + (prevContent && curContent ? '\n' : '') + curContent
} else {
const toArray = (
c: string | Array<{ type: string; text?: string; image_url?: { url: string } }> | undefined,
): Array<{ type: string; text?: string; image_url?: { url: string } }> => {
if (!c) return []
if (typeof c === 'string') return c ? [{ type: 'text', text: c }] : []
return c
}
prev.content = [...toArray(prevContent), ...toArray(curContent)]
}
if (msg.tool_calls?.length) {
prev.tool_calls = [...(prev.tool_calls ?? []), ...msg.tool_calls]
}
} else {
coalesced.push(msg)
}
}
return coalesced
}
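// Hand-traced example of the coalescing pass (message shapes simplified):
//   in:  [{ role: 'user', content: 'context dump' },
//         { role: 'user', content: 'now run the tests' },
//         { role: 'assistant', content: '...' }]
//   out: [{ role: 'user', content: 'context dump\nnow run the tests' },
//         { role: 'assistant', content: '...' }]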
/**
@@ -363,7 +448,7 @@ function normalizeSchemaForOpenAI(
function convertTools(
tools: Array<{ name: string; description?: string; input_schema?: Record<string, unknown> }>,
): OpenAITool[] {
const isGemini = isEnvTruthy(process.env.CLAUDE_CODE_USE_GEMINI)
const isGemini = isGeminiMode()
return tools
.filter(t => t.name !== 'ToolSearchTool') // Not relevant for OpenAI
@@ -405,6 +490,7 @@ interface OpenAIStreamChunk {
delta: {
role?: string
content?: string | null
reasoning_content?: string | null
tool_calls?: Array<{
index: number
id?: string
@@ -442,6 +528,30 @@ function convertChunkUsage(
}
}
const JSON_REPAIR_SUFFIXES = [
'}', '"}', ']}', '"]}', '}}', '"}}', ']}}', '"]}}', '"]}]}', '}]}'
]
function repairPossiblyTruncatedObjectJson(raw: string): string | null {
try {
const parsed = JSON.parse(raw)
return parsed && typeof parsed === 'object' && !Array.isArray(parsed)
? raw
: null
} catch {
for (const combo of JSON_REPAIR_SUFFIXES) {
try {
const repaired = raw + combo
const parsed = JSON.parse(repaired)
if (parsed && typeof parsed === 'object' && !Array.isArray(parsed)) {
return repaired
}
} catch {}
}
return null
}
}
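// Hand-traced examples of the repair helper above:
//   repairPossiblyTruncatedObjectJson('{"command":"ls -la')  -> '{"command":"ls -la"}' (appends '"}')
//   repairPossiblyTruncatedObjectJson('{"command":"pwd"}')   -> unchanged (already a valid object)
//   repairPossiblyTruncatedObjectJson('["pwd"]')             -> null (non-object JSON is rejected)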
/**
* Async generator that transforms an OpenAI SSE stream into
* Anthropic-format BetaRawMessageStreamEvent objects.
@@ -452,8 +562,19 @@ async function* openaiStreamToAnthropic(
): AsyncGenerator<AnthropicStreamEvent> {
const messageId = makeMessageId()
let contentBlockIndex = 0
const activeToolCalls = new Map<number, { id: string; name: string; index: number; jsonBuffer: string }>()
const activeToolCalls = new Map<
number,
{
id: string
name: string
index: number
jsonBuffer: string
normalizeAtStop: boolean
}
>()
let hasEmittedContentStart = false
let hasEmittedThinkingStart = false
let hasClosedThinking = false
let lastStopReason: 'tool_use' | 'max_tokens' | 'end_turn' | null = null
let hasEmittedFinalUsage = false
let hasProcessedFinishReason = false
@@ -510,9 +631,34 @@ async function* openaiStreamToAnthropic(
for (const choice of chunk.choices ?? []) {
const delta = choice.delta
// Reasoning models (e.g. GLM-5, DeepSeek) may stream chain-of-thought
// in `reasoning_content` before the actual reply appears in `content`.
// Emit reasoning as a thinking block and content as a text block.
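// Example event order for a turn that reasons first and then replies:
//   content_block_start(thinking) -> thinking_delta* -> content_block_stop
//   -> content_block_start(text) -> text_delta* -> content_block_stop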
if (delta.reasoning_content != null && delta.reasoning_content !== '') {
if (!hasEmittedThinkingStart) {
yield {
type: 'content_block_start',
index: contentBlockIndex,
content_block: { type: 'thinking', thinking: '' },
}
hasEmittedThinkingStart = true
}
yield {
type: 'content_block_delta',
index: contentBlockIndex,
delta: { type: 'thinking_delta', thinking: delta.reasoning_content },
}
}
// Text content — skip absent fields and empty strings; some providers send
// "" as the first delta to signal streaming start, and that must not close an
// open thinking block or open an empty text block.
if (delta.content != null) {
if (delta.content != null && delta.content !== '') {
// Close thinking block if transitioning from reasoning to content
if (hasEmittedThinkingStart && !hasClosedThinking) {
yield { type: 'content_block_stop', index: contentBlockIndex }
contentBlockIndex++
hasClosedThinking = true
}
if (!hasEmittedContentStart) {
yield {
type: 'content_block_start',
@@ -532,7 +678,12 @@ async function* openaiStreamToAnthropic(
if (delta.tool_calls) {
for (const tc of delta.tool_calls) {
if (tc.id && tc.function?.name) {
// New tool call starting
// New tool call starting — close any open thinking block first
if (hasEmittedThinkingStart && !hasClosedThinking) {
yield { type: 'content_block_stop', index: contentBlockIndex }
contentBlockIndex++
hasClosedThinking = true
}
if (hasEmittedContentStart) {
yield {
type: 'content_block_stop',
@@ -543,11 +694,14 @@ async function* openaiStreamToAnthropic(
}
const toolBlockIndex = contentBlockIndex
const initialArguments = tc.function.arguments ?? ''
const normalizeAtStop = hasToolFieldMapping(tc.function.name)
activeToolCalls.set(tc.index, {
id: tc.id,
name: tc.function.name,
index: toolBlockIndex,
jsonBuffer: tc.function.arguments ?? '',
jsonBuffer: initialArguments,
normalizeAtStop,
})
yield {
@@ -559,12 +713,19 @@ async function* openaiStreamToAnthropic(
name: tc.function.name,
input: {},
...(tc.extra_content ? { extra_content: tc.extra_content } : {}),
// Extract Gemini signature from extra_content
...((tc.extra_content?.google as any)?.thought_signature
? {
signature: (tc.extra_content.google as any)
.thought_signature,
}
: {}),
},
}
contentBlockIndex++
// Emit any initial arguments
if (tc.function.arguments) {
if (tc.function.arguments && !normalizeAtStop) {
yield {
type: 'content_block_delta',
index: toolBlockIndex,
@@ -581,6 +742,11 @@ async function* openaiStreamToAnthropic(
if (tc.function.arguments) {
active.jsonBuffer += tc.function.arguments
}
if (active.normalizeAtStop) {
continue
}
yield {
type: 'content_block_delta',
index: active.index,
@@ -599,6 +765,12 @@ async function* openaiStreamToAnthropic(
if (choice.finish_reason && !hasProcessedFinishReason) {
hasProcessedFinishReason = true
// Close any open thinking block that wasn't closed by content transition
if (hasEmittedThinkingStart && !hasClosedThinking) {
yield { type: 'content_block_stop', index: contentBlockIndex }
contentBlockIndex++
hasClosedThinking = true
}
// Close any open content blocks
if (hasEmittedContentStart) {
yield {
@@ -608,16 +780,44 @@ async function* openaiStreamToAnthropic(
}
// Close active tool calls
for (const [, tc] of activeToolCalls) {
if (tc.normalizeAtStop) {
let partialJson: string
if (choice.finish_reason === 'length') {
// Truncated by max tokens — preserve raw buffer to avoid
// turning an incomplete tool call into an executable command
partialJson = tc.jsonBuffer
} else {
const repairedStructuredJson = repairPossiblyTruncatedObjectJson(
tc.jsonBuffer,
)
if (repairedStructuredJson) {
partialJson = repairedStructuredJson
} else {
partialJson = JSON.stringify(
normalizeToolArguments(tc.name, tc.jsonBuffer),
)
}
}
yield {
type: 'content_block_delta',
index: tc.index,
delta: {
type: 'input_json_delta',
partial_json: partialJson,
},
}
yield { type: 'content_block_stop', index: tc.index }
continue
}
let suffixToAdd = ''
if (tc.jsonBuffer) {
try {
JSON.parse(tc.jsonBuffer)
} catch {
const str = tc.jsonBuffer.trimEnd()
const combinations = [
'}', '"}', ']}', '"]}', '}}', '"}}', ']}}', '"]}}', '"]}]}', '}]}'
]
for (const combo of combinations) {
for (const combo of JSON_REPAIR_SUFFIXES) {
try {
JSON.parse(str + combo)
suffixToAdd = combo
@@ -896,7 +1096,7 @@ class OpenAIShimMessages {
...(options?.headers ?? {}),
}
const isGemini = isEnvTruthy(process.env.CLAUDE_CODE_USE_GEMINI)
const isGemini = isGeminiMode()
const apiKey =
this.providerOverride?.apiKey ?? process.env.OPENAI_API_KEY ?? ''
// Detect Azure endpoints by hostname (not raw URL) to prevent bypass via
@@ -1009,6 +1209,7 @@ class OpenAIShimMessages {
| string
| null
| Array<{ type?: string; text?: string }>
reasoning_content?: string | null
tool_calls?: Array<{
id: string
function: { name: string; arguments: string }
@@ -1030,7 +1231,17 @@ class OpenAIShimMessages {
const choice = data.choices?.[0]
const content: Array<Record<string, unknown>> = []
const rawContent = choice?.message?.content
// Some reasoning models (e.g. GLM-5) put their reply in reasoning_content
// while content stays null — emit reasoning as a thinking block, then
// fall back to it for visible text if content is empty.
const reasoningText = choice?.message?.reasoning_content
if (typeof reasoningText === 'string' && reasoningText) {
content.push({ type: 'thinking', thinking: reasoningText })
}
const rawContent =
choice?.message?.content !== '' && choice?.message?.content != null
? choice?.message?.content
: choice?.message?.reasoning_content
if (typeof rawContent === 'string' && rawContent) {
content.push({ type: 'text', text: rawContent })
} else if (Array.isArray(rawContent) && rawContent.length > 0) {
@@ -1053,18 +1264,20 @@ class OpenAIShimMessages {
if (choice?.message?.tool_calls) {
for (const tc of choice.message.tool_calls) {
let input: unknown
try {
input = JSON.parse(tc.function.arguments)
} catch {
input = { raw: tc.function.arguments }
}
const input = normalizeToolArguments(
tc.function.name,
tc.function.arguments,
)
content.push({
type: 'tool_use',
id: tc.id,
name: tc.function.name,
input,
...(tc.extra_content ? { extra_content: tc.extra_content } : {}),
// Extract Gemini signature from extra_content
...((tc.extra_content?.google as any)?.thought_signature
? { signature: (tc.extra_content.google as any).thought_signature }
: {}),
})
}
}

View File

@@ -1,6 +1,22 @@
import { expect, test } from 'bun:test'
import { afterEach, expect, test } from 'bun:test'
import { isLocalProviderUrl } from './providerConfig.js'
import {
getAdditionalModelOptionsCacheScope,
isLocalProviderUrl,
resolveProviderRequest,
} from './providerConfig.js'
const originalEnv = {
CLAUDE_CODE_USE_OPENAI: process.env.CLAUDE_CODE_USE_OPENAI,
OPENAI_BASE_URL: process.env.OPENAI_BASE_URL,
OPENAI_MODEL: process.env.OPENAI_MODEL,
}
afterEach(() => {
process.env.CLAUDE_CODE_USE_OPENAI = originalEnv.CLAUDE_CODE_USE_OPENAI
process.env.OPENAI_BASE_URL = originalEnv.OPENAI_BASE_URL
process.env.OPENAI_MODEL = originalEnv.OPENAI_MODEL
})
test('treats localhost endpoints as local', () => {
expect(isLocalProviderUrl('http://localhost:11434/v1')).toBe(true)
@@ -33,3 +49,37 @@ test('treats public hosts as remote', () => {
expect(isLocalProviderUrl('https://example.com/v1')).toBe(false)
expect(isLocalProviderUrl('http://[2001:4860:4860::8888]:11434/v1')).toBe(false)
})
test('creates a cache scope for local openai-compatible providers', () => {
process.env.CLAUDE_CODE_USE_OPENAI = '1'
process.env.OPENAI_BASE_URL = 'http://localhost:1234/v1'
process.env.OPENAI_MODEL = 'llama-3.2-3b-instruct'
expect(getAdditionalModelOptionsCacheScope()).toBe(
'openai:http://localhost:1234/v1',
)
})
test('keeps codex alias models on chat completions for local openai-compatible providers', () => {
process.env.CLAUDE_CODE_USE_OPENAI = '1'
process.env.OPENAI_BASE_URL = 'http://127.0.0.1:8080/v1'
process.env.OPENAI_MODEL = 'gpt-5.4'
expect(resolveProviderRequest()).toMatchObject({
transport: 'chat_completions',
requestedModel: 'gpt-5.4',
resolvedModel: 'gpt-5.4',
baseUrl: 'http://127.0.0.1:8080/v1',
})
expect(getAdditionalModelOptionsCacheScope()).toBe(
'openai:http://127.0.0.1:8080/v1',
)
})
test('skips local model cache scope for remote openai-compatible providers', () => {
process.env.CLAUDE_CODE_USE_OPENAI = '1'
process.env.OPENAI_BASE_URL = 'https://api.openai.com/v1'
process.env.OPENAI_MODEL = 'gpt-4o'
expect(getAdditionalModelOptionsCacheScope()).toBeNull()
})

View File

@@ -219,6 +219,14 @@ export function isCodexAlias(model: string): boolean {
return base in CODEX_ALIAS_MODELS
}
export function shouldUseCodexTransport(
model: string,
baseUrl: string | undefined,
): boolean {
const explicitBaseUrl = asEnvUrl(baseUrl)
return isCodexBaseUrl(explicitBaseUrl) || (!explicitBaseUrl && isCodexAlias(model))
}
export function isLocalProviderUrl(baseUrl: string | undefined): boolean {
if (!baseUrl) return false
try {
@@ -302,13 +310,8 @@ export function resolveProviderRequest(options?: {
asEnvUrl(options?.baseUrl) ??
asEnvUrl(process.env.OPENAI_BASE_URL) ??
asEnvUrl(process.env.OPENAI_API_BASE)
// Use Codex transport only when:
// - the base URL is explicitly the Codex endpoint, OR
// - the model is a Codex alias AND no custom base URL has been set
// A custom OPENAI_BASE_URL (e.g. Azure, OpenRouter) always wins over
// model-name-based Codex detection to prevent auth failures (#200, #203).
const transport: ProviderTransport =
isCodexBaseUrl(rawBaseUrl) || (!rawBaseUrl && isCodexAlias(requestedModel))
shouldUseCodexTransport(requestedModel, rawBaseUrl)
? 'codex_responses'
: 'chat_completions'
@@ -337,6 +340,30 @@ export function resolveProviderRequest(options?: {
}
}
export function getAdditionalModelOptionsCacheScope(): string | null {
if (!isEnvTruthy(process.env.CLAUDE_CODE_USE_OPENAI)) {
if (!isEnvTruthy(process.env.CLAUDE_CODE_USE_GEMINI) &&
!isEnvTruthy(process.env.CLAUDE_CODE_USE_GITHUB) &&
!isEnvTruthy(process.env.CLAUDE_CODE_USE_BEDROCK) &&
!isEnvTruthy(process.env.CLAUDE_CODE_USE_VERTEX) &&
!isEnvTruthy(process.env.CLAUDE_CODE_USE_FOUNDRY)) {
return 'firstParty'
}
return null
}
const request = resolveProviderRequest()
if (request.transport !== 'chat_completions') {
return null
}
if (!isLocalProviderUrl(request.baseUrl)) {
return null
}
return `openai:${request.baseUrl.toLowerCase()}`
}
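// Scope sketch (assumed env): with CLAUDE_CODE_USE_OPENAI=1 and
// OPENAI_BASE_URL=http://localhost:1234/v1 on the chat_completions transport,
// the scope is 'openai:http://localhost:1234/v1'; a remote base URL such as
// https://api.openai.com/v1 yields null, so no endpoint-specific cache applies.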
export function resolveCodexAuthPath(
env: NodeJS.ProcessEnv = process.env,
): string {

View File

@@ -0,0 +1,180 @@
import { describe, expect, test } from 'bun:test'
import { normalizeToolArguments } from './toolArgumentNormalization'
describe('normalizeToolArguments', () => {
describe('Bash tool', () => {
test('wraps plain string into { command }', () => {
expect(normalizeToolArguments('Bash', 'pwd')).toEqual({ command: 'pwd' })
})
test('wraps multi-word command', () => {
expect(normalizeToolArguments('Bash', 'ls -la /tmp')).toEqual({
command: 'ls -la /tmp',
})
})
test('passes through structured JSON object', () => {
expect(
normalizeToolArguments('Bash', '{"command":"echo hi"}'),
).toEqual({ command: 'echo hi' })
})
test('returns empty object for blank string', () => {
expect(normalizeToolArguments('Bash', '')).toEqual({})
expect(normalizeToolArguments('Bash', ' ')).toEqual({})
})
test('returns parsed blank for JSON-encoded blank string', () => {
expect(normalizeToolArguments('Bash', '""')).toEqual('')
expect(normalizeToolArguments('Bash', '" "')).toEqual(' ')
})
test('returns empty object for malformed structured object literal', () => {
expect(normalizeToolArguments('Bash', '{ "command": "pwd"')).toEqual({})
})
test.each([
['{command:"pwd"}'],
["{'command':'pwd'}"],
['{command: pwd}'],
])(
'returns empty object for malformed object-shaped string %s (does not wrap into command)',
(input) => {
expect(normalizeToolArguments('Bash', input)).toEqual({})
},
)
test.each([
['false', false],
['null', null],
['[]', [] as unknown[]],
['0', 0],
['true', true],
['123', 123],
])(
'preserves JSON literal %s as-is (does not wrap into command)',
(input, expected) => {
expect(normalizeToolArguments('Bash', input)).toEqual(expected)
},
)
test('wraps JSON-encoded string into { command }', () => {
expect(normalizeToolArguments('Bash', '"pwd"')).toEqual({
command: 'pwd',
})
})
})
describe('undefined arguments', () => {
test('returns empty object for undefined', () => {
expect(normalizeToolArguments('Bash', undefined)).toEqual({})
expect(normalizeToolArguments('UnknownTool', undefined)).toEqual({})
})
})
describe('Read tool', () => {
test('wraps plain string into { file_path }', () => {
expect(normalizeToolArguments('Read', '/home/user/file.txt')).toEqual({
file_path: '/home/user/file.txt',
})
})
test('wraps JSON-encoded string into { file_path }', () => {
expect(normalizeToolArguments('Read', '"/home/user/file.txt"')).toEqual({
file_path: '/home/user/file.txt',
})
})
test('passes through structured JSON object', () => {
expect(
normalizeToolArguments('Read', '{"file_path":"/tmp/f.txt","limit":10}'),
).toEqual({ file_path: '/tmp/f.txt', limit: 10 })
})
})
describe('Write tool', () => {
test('wraps plain string into { file_path }', () => {
expect(normalizeToolArguments('Write', '/tmp/out.txt')).toEqual({
file_path: '/tmp/out.txt',
})
})
test('passes through structured JSON object', () => {
expect(
normalizeToolArguments(
'Write',
'{"file_path":"/tmp/out.txt","content":"hello"}',
),
).toEqual({ file_path: '/tmp/out.txt', content: 'hello' })
})
})
describe('Edit tool', () => {
test('wraps plain string into { file_path }', () => {
expect(normalizeToolArguments('Edit', '/tmp/edit.ts')).toEqual({
file_path: '/tmp/edit.ts',
})
})
test('passes through structured JSON object', () => {
expect(
normalizeToolArguments(
'Edit',
'{"file_path":"/tmp/f.ts","old_string":"a","new_string":"b"}',
),
).toEqual({ file_path: '/tmp/f.ts', old_string: 'a', new_string: 'b' })
})
})
describe('Glob tool', () => {
test('wraps plain string into { pattern }', () => {
expect(normalizeToolArguments('Glob', '**/*.ts')).toEqual({
pattern: '**/*.ts',
})
})
test('passes through structured JSON object', () => {
expect(
normalizeToolArguments('Glob', '{"pattern":"*.js","path":"/src"}'),
).toEqual({ pattern: '*.js', path: '/src' })
})
})
describe('Grep tool', () => {
test('wraps plain string into { pattern }', () => {
expect(normalizeToolArguments('Grep', 'TODO')).toEqual({
pattern: 'TODO',
})
})
test('passes through structured JSON object', () => {
expect(
normalizeToolArguments('Grep', '{"pattern":"fixme","path":"/src"}'),
).toEqual({ pattern: 'fixme', path: '/src' })
})
})
describe('unknown tools', () => {
test('returns empty object for plain string (no known field mapping)', () => {
expect(normalizeToolArguments('UnknownTool', 'some value')).toEqual({})
})
test('passes through structured JSON object', () => {
expect(
normalizeToolArguments('UnknownTool', '{"key":"val"}'),
).toEqual({ key: 'val' })
})
test('preserves JSON literals as-is', () => {
expect(normalizeToolArguments('UnknownTool', 'false')).toEqual(false)
expect(normalizeToolArguments('UnknownTool', 'null')).toEqual(null)
expect(normalizeToolArguments('UnknownTool', '[]')).toEqual([])
})
test('returns parsed string for JSON-encoded string on unknown tools', () => {
expect(normalizeToolArguments('UnknownTool', '"hello"')).toEqual(
'hello',
)
})
})
})

View File

@@ -0,0 +1,69 @@
const STRING_ARGUMENT_TOOL_FIELDS: Record<string, string> = {
Bash: 'command',
Read: 'file_path',
Write: 'file_path',
Edit: 'file_path',
Glob: 'pattern',
Grep: 'pattern',
}
function isBlankString(value: string): boolean {
return value.trim().length === 0
}
function isLikelyStructuredObjectLiteral(value: string): boolean {
// Match object-like patterns with key-value syntax:
// {"key":, {key:, {'key':, { "key" :, etc.
// But NOT bash compound commands like { pwd; } or { echo hi; }
return /^\s*\{\s*['"]?\w+['"]?\s*:/.test(value)
}
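// Boundary examples for the heuristic:
//   isLikelyStructuredObjectLiteral('{command: "pwd"}') // true: object-shaped, never wrapped
//   isLikelyStructuredObjectLiteral('{ pwd; }')         // false: bash compound command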
function isRecord(value: unknown): value is Record<string, unknown> {
return typeof value === 'object' && value !== null && !Array.isArray(value)
}
function getPlainStringToolArgumentField(toolName: string): string | null {
return STRING_ARGUMENT_TOOL_FIELDS[toolName] ?? null
}
export function hasToolFieldMapping(toolName: string): boolean {
return toolName in STRING_ARGUMENT_TOOL_FIELDS
}
function wrapPlainStringToolArguments(
toolName: string,
value: string,
): Record<string, string> | null {
const field = getPlainStringToolArgumentField(toolName)
if (!field) return null
return { [field]: value }
}
export function normalizeToolArguments(
toolName: string,
rawArguments: string | undefined,
): unknown {
if (rawArguments === undefined) return {}
try {
const parsed = JSON.parse(rawArguments)
if (isRecord(parsed)) {
return parsed
}
// Parsed as a non-object JSON value (string, number, boolean, null, array)
if (typeof parsed === 'string' && !isBlankString(parsed)) {
return wrapPlainStringToolArguments(toolName, parsed) ?? parsed
}
// For blank strings, booleans, null, arrays — pass through as-is
// and let Zod schema validation produce a meaningful error
return parsed
} catch {
// rawArguments is not valid JSON — treat as a plain string
if (isBlankString(rawArguments) || isLikelyStructuredObjectLiteral(rawArguments)) {
// Blank or looks like a malformed object literal — don't wrap into
// a tool field to avoid turning garbage into executable input
return {}
}
return wrapPlainStringToolArguments(toolName, rawArguments) ?? {}
}
}
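// End-to-end sketch (values mirror toolArgumentNormalization.test.ts):
//   normalizeToolArguments('Bash', 'pwd')                // { command: 'pwd' }
//   normalizeToolArguments('Bash', '{"command":"echo"}') // { command: 'echo' }
//   normalizeToolArguments('Bash', '{ "command": "pwd"') // {} (truncated JSON is dropped)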

View File

@@ -0,0 +1,127 @@
import { describe, expect, test } from 'bun:test'
import type { Message } from '../../types/message.js'
import { createAssistantMessage, createUserMessage } from '../../utils/messages.js'
// collectCompactableToolIds is private, so we exercise it indirectly through
// the public microcompactMessages path. The core predicate under test: MCP
// tools (prefixed 'mcp__') must be compactable alongside the built-in tool set.
// Import the internals that are exported for testing
import { evaluateTimeBasedTrigger } from './microCompact.js'
/**
* Helper: build a minimal assistant message with a tool_use block.
*/
function assistantWithToolUse(toolName: string, toolId: string): Message {
return createAssistantMessage({
content: [
{
type: 'tool_use' as const,
id: toolId,
name: toolName,
input: {},
},
],
})
}
/**
* Helper: build a user message with a tool_result block.
*/
function userWithToolResult(toolId: string, output: string): Message {
return createUserMessage({
content: [
{
type: 'tool_result' as const,
tool_use_id: toolId,
content: output,
},
],
})
}
describe('microCompact MCP tool compaction', () => {
// The private isCompactableTool predicate sits behind the time-based
// microcompact path, which content-clears old tool results once the gap since
// the last assistant message exceeds the threshold and calls
// collectCompactableToolIds → isCompactableTool under the hood. Because
// evaluateTimeBasedTrigger depends on config (GrowthBook), these tests assert
// the observable behavior instead: the module loads without error and
// microcompactMessages treats built-in and MCP tools consistently.
test('module exports load correctly', async () => {
const mod = await import('./microCompact.js')
expect(mod.microcompactMessages).toBeFunction()
expect(mod.estimateMessageTokens).toBeFunction()
expect(mod.evaluateTimeBasedTrigger).toBeFunction()
})
test('estimateMessageTokens counts MCP tool_use blocks', async () => {
const { estimateMessageTokens } = await import('./microCompact.js')
const builtinMessages: Message[] = [
assistantWithToolUse('Read', 'tool-builtin-1'),
userWithToolResult('tool-builtin-1', 'file contents here'),
]
const mcpMessages: Message[] = [
assistantWithToolUse('mcp__github__get_file_contents', 'tool-mcp-1'),
userWithToolResult('tool-mcp-1', 'file contents here'),
]
const builtinTokens = estimateMessageTokens(builtinMessages)
const mcpTokens = estimateMessageTokens(mcpMessages)
// Both should produce non-zero estimates
expect(builtinTokens).toBeGreaterThan(0)
expect(mcpTokens).toBeGreaterThan(0)
// The tool_result content is identical, so token estimates should be
// similar (tool_use name differs slightly, so not exactly equal)
expect(Math.abs(builtinTokens - mcpTokens)).toBeLessThan(50)
})
test('microcompactMessages processes MCP tools without error', async () => {
const { microcompactMessages } = await import('./microCompact.js')
const messages: Message[] = [
assistantWithToolUse('mcp__slack__send_message', 'tool-mcp-2'),
userWithToolResult('tool-mcp-2', 'Message sent successfully'),
assistantWithToolUse('mcp__github__create_pull_request', 'tool-mcp-3'),
userWithToolResult('tool-mcp-3', JSON.stringify({ number: 42, url: 'https://github.com/org/repo/pull/42' })),
]
// Should not throw — MCP tools should be handled gracefully
const result = await microcompactMessages(messages)
expect(result).toBeDefined()
expect(result.messages).toBeDefined()
expect(result.messages.length).toBe(messages.length)
})
test('microcompactMessages processes mixed built-in and MCP tools', async () => {
const { microcompactMessages } = await import('./microCompact.js')
const messages: Message[] = [
assistantWithToolUse('Read', 'tool-read-1'),
userWithToolResult('tool-read-1', 'some file content'),
assistantWithToolUse('mcp__playwright__screenshot', 'tool-mcp-4'),
userWithToolResult('tool-mcp-4', 'base64-encoded-screenshot-data'.repeat(100)),
assistantWithToolUse('Bash', 'tool-bash-1'),
userWithToolResult('tool-bash-1', 'command output'),
]
const result = await microcompactMessages(messages)
expect(result).toBeDefined()
expect(result.messages.length).toBe(messages.length)
})
})

View File

@@ -37,7 +37,7 @@ export const TIME_BASED_MC_CLEARED_MESSAGE = '[Old tool result content cleared]'
const IMAGE_MAX_TOKEN_SIZE = 2000
// Only compact these tools
// Only compact these built-in tools (MCP tools are also compactable via prefix match)
const COMPACTABLE_TOOLS = new Set<string>([
FILE_READ_TOOL_NAME,
...SHELL_TOOL_NAMES,
@@ -49,7 +49,13 @@ const COMPACTABLE_TOOLS = new Set<string>([
FILE_WRITE_TOOL_NAME,
])
// --- Cached microcompact state (internal-only, gated by feature('CACHED_MICROCOMPACT')) ---
const MCP_TOOL_PREFIX = 'mcp__'
function isCompactableTool(name: string): boolean {
return COMPACTABLE_TOOLS.has(name) || name.startsWith(MCP_TOOL_PREFIX)
}
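// e.g. isCompactableTool('mcp__github__get_file_contents') is true via the
// prefix match, alongside the built-in set; names outside both stay uncompacted.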
// --- Cached microcompact state (gated by feature('CACHED_MICROCOMPACT')) ---
// Lazy-initialized cached MC module and state to avoid importing in external builds.
// The imports and state live inside feature() checks for dead code elimination.
@@ -231,7 +237,7 @@ function collectCompactableToolIds(messages: Message[]): string[] {
Array.isArray(message.message.content)
) {
for (const block of message.message.content) {
if (block.type === 'tool_use' && COMPACTABLE_TOOLS.has(block.name)) {
if (block.type === 'tool_use' && isCompactableTool(block.name)) {
ids.push(block.id)
}
}

View File

@@ -1,6 +1,7 @@
import { afterEach, describe, expect, mock, test } from 'bun:test'
import {
DEFAULT_GITHUB_DEVICE_SCOPE,
GitHubDeviceFlowError,
pollAccessToken,
requestDeviceCode,
@@ -48,6 +49,81 @@ describe('requestDeviceCode', () => {
requestDeviceCode({ clientId: 'x', fetchImpl: globalThis.fetch }),
).rejects.toThrow(GitHubDeviceFlowError)
})
test('uses OAuth-safe default scope', async () => {
let capturedScope = ''
globalThis.fetch = mock((_url: RequestInfo | URL, init?: RequestInit) => {
const body = init?.body
if (body instanceof URLSearchParams) {
capturedScope = body.get('scope') ?? ''
} else {
capturedScope = new URLSearchParams(String(body ?? '')).get('scope') ?? ''
}
return Promise.resolve(
new Response(
JSON.stringify({
device_code: 'abc',
user_code: 'ABCD-1234',
verification_uri: 'https://github.com/login/device',
}),
{ status: 200 },
),
)
})
await requestDeviceCode({ clientId: 'test-client', fetchImpl: globalThis.fetch })
expect(capturedScope).toBe(DEFAULT_GITHUB_DEVICE_SCOPE)
expect(capturedScope).toBe('read:user')
})
test('retries with OAuth-safe scope on invalid_scope', async () => {
const scopesSeen: string[] = []
let callCount = 0
globalThis.fetch = mock((_url: RequestInfo | URL, init?: RequestInit) => {
const body = init?.body
const scope =
body instanceof URLSearchParams
? body.get('scope') ?? ''
: new URLSearchParams(String(body ?? '')).get('scope') ?? ''
scopesSeen.push(scope)
callCount++
if (callCount === 1) {
return Promise.resolve(
new Response(
JSON.stringify({
error: 'invalid_scope',
error_description: 'invalid models scope',
}),
{ status: 400 },
),
)
}
return Promise.resolve(
new Response(
JSON.stringify({
device_code: 'abc',
user_code: 'ABCD-1234',
verification_uri: 'https://github.com/login/device',
}),
{ status: 200 },
),
)
})
const result = await requestDeviceCode({
clientId: 'test-client',
scope: 'read:user,models:read',
fetchImpl: globalThis.fetch,
})
expect(result.device_code).toBe('abc')
expect(callCount).toBe(2)
expect(scopesSeen).toEqual(['read:user,models:read', 'read:user'])
})
})
describe('pollAccessToken', () => {

View File

@@ -10,8 +10,10 @@ export const GITHUB_DEVICE_CODE_URL = 'https://github.com/login/device/code'
export const GITHUB_DEVICE_ACCESS_TOKEN_URL =
'https://github.com/login/oauth/access_token'
/** Match the runtime github_oauth DEFAULT_SCOPE */
export const DEFAULT_GITHUB_DEVICE_SCOPE = 'read:user,models:read'
// OAuth app device flow does not accept the GitHub Models permission token
// scope (models:read). Use an OAuth-safe default.
const OAUTH_SAFE_GITHUB_DEVICE_SCOPE = 'read:user'
export const DEFAULT_GITHUB_DEVICE_SCOPE = OAUTH_SAFE_GITHUB_DEVICE_SCOPE
export class GitHubDeviceFlowError extends Error {
constructor(message: string) {
@@ -51,38 +53,61 @@ export async function requestDeviceCode(options?: {
)
}
const fetchFn = options?.fetchImpl ?? fetch
const res = await fetchFn(GITHUB_DEVICE_CODE_URL, {
method: 'POST',
headers: { Accept: 'application/json' },
body: new URLSearchParams({
client_id: clientId,
scope: options?.scope ?? DEFAULT_GITHUB_DEVICE_SCOPE,
}),
})
if (!res.ok) {
const text = await res.text().catch(() => '')
throw new GitHubDeviceFlowError(
`Device code request failed: ${res.status} ${text}`,
)
}
const data = (await res.json()) as Record<string, unknown>
const device_code = data.device_code
const user_code = data.user_code
const verification_uri = data.verification_uri
if (
typeof device_code !== 'string' ||
typeof user_code !== 'string' ||
typeof verification_uri !== 'string'
) {
throw new GitHubDeviceFlowError('Malformed device code response from GitHub')
}
return {
device_code,
user_code,
verification_uri,
expires_in: typeof data.expires_in === 'number' ? data.expires_in : 900,
interval: typeof data.interval === 'number' ? data.interval : 5,
const requestedScope =
options?.scope?.trim() || DEFAULT_GITHUB_DEVICE_SCOPE
const scopesToTry =
requestedScope === OAUTH_SAFE_GITHUB_DEVICE_SCOPE
? [requestedScope]
: [requestedScope, OAUTH_SAFE_GITHUB_DEVICE_SCOPE]
let lastError = 'Device code request failed.'
for (const scope of scopesToTry) {
const res = await fetchFn(GITHUB_DEVICE_CODE_URL, {
method: 'POST',
headers: { Accept: 'application/json' },
body: new URLSearchParams({
client_id: clientId,
scope,
}),
})
if (!res.ok) {
const text = await res.text().catch(() => '')
lastError = `Device code request failed: ${res.status} ${text}`
const isInvalidScope = /invalid_scope/i.test(text)
const canRetryWithFallback =
scope !== OAUTH_SAFE_GITHUB_DEVICE_SCOPE && isInvalidScope
if (canRetryWithFallback) {
continue
}
throw new GitHubDeviceFlowError(lastError)
}
const data = (await res.json()) as Record<string, unknown>
const device_code = data.device_code
const user_code = data.user_code
const verification_uri = data.verification_uri
if (
typeof device_code !== 'string' ||
typeof user_code !== 'string' ||
typeof verification_uri !== 'string'
) {
throw new GitHubDeviceFlowError(
'Malformed device code response from GitHub',
)
}
return {
device_code,
user_code,
verification_uri,
expires_in: typeof data.expires_in === 'number' ? data.expires_in : 900,
interval: typeof data.interval === 'number' ? data.interval : 5,
}
}
throw new GitHubDeviceFlowError(lastError)
}
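// Flow sketch (mirrors the invalid_scope retry test): a request with scope
// 'read:user,models:read' that fails with invalid_scope is retried once with
// the OAuth-safe 'read:user'; any other failure, or a failure on the fallback
// scope itself, throws GitHubDeviceFlowError immediately.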
export type PollOptions = {

View File

@@ -9,6 +9,7 @@ import { getGlobalConfig, saveGlobalConfig } from '../utils/config.js'
import { toError } from '../utils/errors.js'
import { logError } from '../utils/log.js'
import { applyConfigEnvironmentVariables } from '../utils/managedEnv.js'
import { persistActiveProviderProfileModel } from '../utils/providerProfiles.js'
import {
permissionModeFromString,
toExternalPermissionMode,
@@ -110,6 +111,12 @@ export function onChangeAppState({
// Save to settings
updateSettingsForSource('userSettings', { model: newState.mainLoopModel })
setMainLoopModelOverride(newState.mainLoopModel)
// Keep active provider profiles in sync with /model choices so restarts
// keep using the last selected model instead of the profile's old default.
if (process.env.CLAUDE_CODE_PROVIDER_PROFILE_ENV_APPLIED === '1') {
persistActiveProviderProfileModel(newState.mainLoopModel)
}
}
// expandedView → persist as showExpandedTodos + showSpinnerTree for backwards compat

View File

@@ -0,0 +1,85 @@
import { describe, expect, test } from 'bun:test'
import {
extractAtMentionedFiles,
extractMcpResourceMentions,
} from './attachments.js'
// Contract tests for the two @-mention extractors.
//
// Scope: the narrow contract between `extractAtMentionedFiles` and
// `extractMcpResourceMentions` where both are called on the same input
// and must not both claim the same token. The motivating bug is that
// `extractMcpResourceMentions`'s `\b` anchor lets it backtrack over the
// closing quote of a quoted file mention, producing a ghost match for
// `@"C:\Users\..."`. These tests pin the boundary so any regression in
// the MCP regex is caught immediately.
describe('extractor contract', () => {
describe('extractMcpResourceMentions must return empty for', () => {
const cases: Array<[string, string]> = [
// Primary bug: the quoted form that PromptInput emits for Windows
// paths today. `\b` backtracks past the trailing `"` and produces
// a ghost MCP match on current HEAD.
['a quoted Windows drive-letter path', '@"C:\\Users\\me\\file.txt"'],
// Even if the quote layer were stripped, a bare drive letter
// followed by a path separator is never an MCP resource.
['an unquoted Windows drive-letter path', '@C:\\Users\\me\\file.txt'],
// Sanity: quoted POSIX paths with no `:` at all never matched the
// MCP regex and must keep not matching after the fix.
['a quoted POSIX path with a space', '@"/Users/foo/my file.ts"'],
['an unquoted POSIX path', '@/Users/foo/bar.ts'],
// Quoted POSIX path that embeds a `:` in the filename — the quote
// layer must shield it from MCP matching, same as the Windows case.
['a quoted POSIX path with a colon in the name', '@"/tmp/weird:name.txt"'],
]
test.each(cases)('%s', (_label, input) => {
expect(extractMcpResourceMentions(input)).toEqual([])
})
})
describe('extractMcpResourceMentions still matches legitimate MCP mentions', () => {
// Regression guard for the fix. If someone tightens the MCP regex
// too aggressively, these break and the intent is clear.
const cases: Array<[string, string, string[]]> = [
[
'a simple server:resource token',
'@server:resource/path',
['server:resource/path'],
],
[
'a plugin-scoped server name with a dash',
'@asana-plugin:project-status/123',
['asana-plugin:project-status/123'],
],
[
'an MCP mention inline in prose',
'please check @server:res here',
['server:res'],
],
]
test.each(cases)('%s', (_label, input, expected) => {
expect(extractMcpResourceMentions(input)).toEqual(expected)
})
})
describe('extractAtMentionedFiles extracts the file paths it should', () => {
// Asserted separately from the MCP side: the bug is purely in the
// MCP extractor over-matching, so these assertions are the
// "baseline still works" half of the contract.
const cases: Array<[string, string, string[]]> = [
[
'a quoted Windows drive-letter path',
'@"C:\\Users\\me\\file.txt"',
['C:\\Users\\me\\file.txt'],
],
[
'a quoted POSIX path with a space',
'@"/Users/foo/my file.ts"',
['/Users/foo/my file.ts'],
],
['an unquoted POSIX path', '@/Users/foo/bar.ts', ['/Users/foo/bar.ts']],
]
test.each(cases)('%s', (_label, input, expected) => {
expect(extractAtMentionedFiles(input)).toEqual(expected)
})
})
})

View File

@@ -2793,11 +2793,30 @@ export function extractAtMentionedFiles(content: string): string[] {
export function extractMcpResourceMentions(content: string): string[] {
// Extract MCP resources mentioned with @ symbol in format @server:uri
// Example: "@server1:resource/path" would extract "server1:resource/path"
const atMentionRegex = /(^|\s)@([^\s]+:[^\s]+)\b/g
//
// Two guards against Windows-path / quoted-file collisions (see
// `attachments.extractors.test.ts`):
//
// 1. `(?!")` right after `@` drops quoted tokens entirely. The earlier
// form (without the lookahead and with `[^\s]` character classes)
// backtracked past the closing `"` at the `\b` anchor and produced
// ghost matches like `"C:\Users\...\file.txt` for any quoted file
// mention containing a colon.
// 2. The `"` added to the character classes is belt-and-braces: even
// if the lookahead were later removed or bypassed, the engine can
// no longer consume a quote character mid-match.
const atMentionRegex = /(^|\s)@(?!")([^\s"]+:[^\s"]+)\b/g
const matches = content.match(atMentionRegex) || []
// Remove the prefix (everything before @) from each match
return uniq(matches.map(match => match.slice(match.indexOf('@') + 1)))
return uniq(
matches
.map(match => match.slice(match.indexOf('@') + 1))
// Post-match filter: a single-letter "server" followed by `:\` or
// `:/` is always a Windows drive-letter prefix, never a real MCP
// resource. This covers the unquoted `@C:\Users\...` case that
// the regex alone cannot disambiguate from `@server:resource`.
.filter(m => !/^[A-Za-z]:[\\/]/.test(m)),
)
}
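// Worked examples of the two guards (inputs from attachments.extractors.test.ts):
//   extractMcpResourceMentions('@"C:\\Users\\me\\f.txt"') // []: (?!") drops quoted tokens
//   extractMcpResourceMentions('@C:\\Users\\me\\f.txt')   // []: drive-letter post-filter
//   extractMcpResourceMentions('@server:resource/path')   // ['server:resource/path']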
export function extractAgentMentions(content: string): string[] {

View File

@@ -576,6 +576,7 @@ export type GlobalConfig = {
// Additional model options for the model picker (fetched during bootstrap).
additionalModelOptionsCache?: ModelOption[]
additionalModelOptionsCacheScope?: string
// Additional model options discovered from OpenAI-compatible endpoints.
openaiAdditionalModelOptionsCache?: ModelOption[]

View File

@@ -0,0 +1,110 @@
import { afterAll, describe, expect, test } from 'bun:test'
import { mkdirSync, mkdtempSync, rmSync, writeFileSync } from 'fs'
import { tmpdir } from 'os'
import { join } from 'path'
import { extractDraggedFilePaths } from './dragDropPaths.js'
function escapeFinderDraggedPath(filePath: string): string {
return filePath.replace(/([\\ ])/g, '\\$1')
}
describe('extractDraggedFilePaths', () => {
// Paths that exist on any system.
const thisFile = import.meta.path
const packageJson = `${process.cwd()}/package.json`
// Fixtures created synchronously at describe-load time (not in
// `beforeAll`) so their paths are available to `test.each` tables,
// which are built before any hook runs.
const tmpDir = mkdtempSync(join(tmpdir(), 'dragdrop-test-'))
const spacedFile = join(tmpDir, 'my file.txt')
writeFileSync(spacedFile, 'test')
const scopedDir = join(tmpDir, '@types')
mkdirSync(scopedDir)
const atSignFile = join(scopedDir, 'index.d.ts')
writeFileSync(atSignFile, 'test')
afterAll(() => {
rmSync(tmpDir, { recursive: true, force: true })
})
describe('returns an empty array', () => {
const emptyCases: Array<[string, string]> = [
['a non-absolute path', 'relative/path/file.ts'],
['a plain image path', '/Users/foo/image.png'],
['an uppercase image extension', '/Users/foo/SHOT.PNG'],
['a double-quoted image path', '"/Users/foo/shot.png"'],
['a single-quoted image path', "'/Users/foo/shot.jpg'"],
['regular prose text', 'hello world this is text'],
['a nonexistent absolute path', '/definitely/nonexistent/file.ts'],
['a single-quoted nonexistent path', "'/definitely/nonexistent.ts'"],
['an empty string', ''],
['whitespace only', ' \n '],
// Mixed-segment cases: all-or-nothing policy means a single bad
// entry disqualifies the whole paste.
['a mix where one path does not exist', `${thisFile}\n/nonexistent/file.ts`],
['a mix where one segment is an image', `${thisFile}\n/Users/foo/shot.png`],
]
test.each(emptyCases)('for %s', (_label, input) => {
expect(extractDraggedFilePaths(input)).toEqual([])
})
})
describe('resolves a single path', () => {
const singleCases: Array<[string, string, string]> = [
['a plain absolute path', thisFile, thisFile],
['a double-quoted path', `"${thisFile}"`, thisFile],
['a single-quoted path', `'${thisFile}'`, thisFile],
['a path with leading/trailing whitespace', ` ${thisFile} `, thisFile],
// Realistic: dragging something under `node_modules/@types/...`.
// `@` inside the path must not collide with the mention prefix
// that the caller prepends downstream.
['a path containing an `@` segment', atSignFile, atSignFile],
]
test.each(singleCases)('from %s', (_label, input, expected) => {
expect(extractDraggedFilePaths(input)).toEqual([expected])
})
})
describe('resolves multiple paths', () => {
const multiCases: Array<[string, string, string[]]> = [
[
'newline-separated',
`${thisFile}\n${packageJson}`,
[thisFile, packageJson],
],
[
'space-separated (Finder drag)',
`${thisFile} ${packageJson}`,
[thisFile, packageJson],
],
]
test.each(multiCases)('when input is %s', (_label, input, expected) => {
expect(extractDraggedFilePaths(input)).toEqual(expected)
})
})
test('escapeFinderDraggedPath escapes spaces and backslashes', () => {
expect(escapeFinderDraggedPath('/tmp/my\\notes file.txt')).toBe(
'/tmp/my\\\\notes\\ file.txt',
)
})
// Backslash-escaped paths are a Finder/macOS + Linux convention — on
// Windows the shell-escape step is skipped, so these cases do not apply.
if (process.platform !== 'win32') {
describe('handles backslash-escaped paths', () => {
test('returns empty for an escaped image path', () => {
// The image check must apply after escape stripping so Finder
// image drags still route to the image paste handler.
expect(extractDraggedFilePaths('/Users/foo/my\\ shot.png')).toEqual([])
})
test('resolves an escaped real file with a space in its name', () => {
// Raw form matches what a terminal delivers on Finder drag.
const escaped = escapeFinderDraggedPath(spacedFile)
expect(extractDraggedFilePaths(escaped)).toEqual([spacedFile])
})
})
}
})

View File

@@ -0,0 +1,55 @@
import { existsSync } from 'fs'
import { isAbsolute } from 'path'
// Inlined to avoid pulling the full `imagePaste.ts` module (which imports
// `bun:bundle`) into this file's dependency graph. Must stay in sync with
// `IMAGE_EXTENSION_REGEX` in `./imagePaste.ts`.
const IMAGE_EXTENSION_REGEX = /\.(png|jpe?g|gif|webp)$/i
/**
* Detect absolute file paths in pasted text (typically from drag-and-drop).
* Returns the cleaned paths if ALL segments are existing non-image files,
* or an empty array otherwise.
*
* Splitting logic mirrors usePasteHandler: space preceding `/` or a Windows
* drive letter, plus newline separators.
*/
export function extractDraggedFilePaths(text: string): string[] {
const segments = text
.split(/ (?=\/|[A-Za-z]:\\)/)
.flatMap(part => part.split('\n'))
.map(s => s.trim())
.filter(Boolean)
if (segments.length === 0) return []
const cleaned: string[] = []
for (const raw of segments) {
// Strip outer quotes and shell-escape backslashes
let p = raw
if (
(p.startsWith('"') && p.endsWith('"')) ||
(p.startsWith("'") && p.endsWith("'"))
) {
p = p.slice(1, -1)
}
if (process.platform !== 'win32') {
p = p.replace(/\\(.)/g, '$1')
}
// Image files are handled by the upstream image paste handler.
// Check against the cleaned path so quoted/escaped image paths like
// `"/foo/shot.png"` or `/foo/my\ shot.png` are reliably excluded.
if (IMAGE_EXTENSION_REGEX.test(p)) return []
if (!isAbsolute(p)) return []
// Verify the path actually exists on disk. Plain `fs.existsSync` is
// used intentionally here instead of the wrapped `getFsImplementation`
// to keep this module free of the heavy `fsOperations` dependency
// chain — this is a pure existence check with no permission semantics.
if (!existsSync(p)) return []
cleaned.push(p)
}
return cleaned
}
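// Usage sketch (assuming both paths exist on disk and are not images):
//   extractDraggedFilePaths('/Users/me/a.ts /Users/me/b.ts')
//     // ['/Users/me/a.ts', '/Users/me/b.ts']
//   extractDraggedFilePaths('"/Users/me/shot.png"') // []: the image paste handler owns it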

View File

@@ -10,6 +10,8 @@ describe('hydrateGithubModelsTokenFromSecureStorage', () => {
CLAUDE_CODE_USE_GITHUB: process.env.CLAUDE_CODE_USE_GITHUB,
GITHUB_TOKEN: process.env.GITHUB_TOKEN,
GH_TOKEN: process.env.GH_TOKEN,
CLAUDE_CODE_GITHUB_TOKEN_HYDRATED:
process.env.CLAUDE_CODE_GITHUB_TOKEN_HYDRATED,
CLAUDE_CODE_SIMPLE: process.env.CLAUDE_CODE_SIMPLE,
}
@@ -39,15 +41,17 @@ describe('hydrateGithubModelsTokenFromSecureStorage', () => {
}))
const { hydrateGithubModelsTokenFromSecureStorage } = await import(
'./githubModelsCredentials.js'
'./githubModelsCredentials.js?hydrate=sets-token'
)
hydrateGithubModelsTokenFromSecureStorage()
expect(process.env.GITHUB_TOKEN).toBe('stored-secret')
expect(process.env.CLAUDE_CODE_GITHUB_TOKEN_HYDRATED).toBe('1')
})
test('does not override existing GITHUB_TOKEN', async () => {
process.env.CLAUDE_CODE_USE_GITHUB = '1'
process.env.GITHUB_TOKEN = 'already'
delete process.env.CLAUDE_CODE_GITHUB_TOKEN_HYDRATED
mock.module('./secureStorage/index.js', () => ({
getSecureStorage: () => ({
@@ -58,9 +62,10 @@ describe('hydrateGithubModelsTokenFromSecureStorage', () => {
}))
const { hydrateGithubModelsTokenFromSecureStorage } = await import(
'./githubModelsCredentials.js'
'./githubModelsCredentials.js?hydrate=preserve-existing'
)
hydrateGithubModelsTokenFromSecureStorage()
expect(process.env.GITHUB_TOKEN).toBe('already')
expect(process.env.CLAUDE_CODE_GITHUB_TOKEN_HYDRATED).toBeUndefined()
})
})

View File

@@ -1,13 +1,11 @@
import { describe, expect, test } from 'bun:test'
import {
clearGithubModelsToken,
readGithubModelsToken,
saveGithubModelsToken,
} from './githubModelsCredentials.js'
describe('readGithubModelsToken', () => {
test('returns undefined in bare mode', () => {
test('returns undefined in bare mode', async () => {
const { readGithubModelsToken } = await import(
'./githubModelsCredentials.js?read-bare-mode'
)
const prev = process.env.CLAUDE_CODE_SIMPLE
process.env.CLAUDE_CODE_SIMPLE = '1'
expect(readGithubModelsToken()).toBeUndefined()
@@ -20,7 +18,11 @@ describe('readGithubModelsToken', () => {
})
describe('saveGithubModelsToken / clearGithubModelsToken', () => {
test('save returns failure in bare mode', () => {
test('save returns failure in bare mode', async () => {
const { saveGithubModelsToken } = await import(
'./githubModelsCredentials.js?save-bare-mode'
)
const prev = process.env.CLAUDE_CODE_SIMPLE
process.env.CLAUDE_CODE_SIMPLE = '1'
const r = saveGithubModelsToken('abc')
@@ -33,7 +35,11 @@ describe('saveGithubModelsToken / clearGithubModelsToken', () => {
}
})
test('clear succeeds in bare mode', () => {
test('clear succeeds in bare mode', async () => {
const { clearGithubModelsToken } = await import(
'./githubModelsCredentials.js?clear-bare-mode'
)
const prev = process.env.CLAUDE_CODE_SIMPLE
process.env.CLAUDE_CODE_SIMPLE = '1'
expect(clearGithubModelsToken().success).toBe(true)

View File

@@ -3,6 +3,8 @@ import { getSecureStorage } from './secureStorage/index.js'
/** JSON key in the shared OpenClaude secure storage blob. */
export const GITHUB_MODELS_STORAGE_KEY = 'githubModels' as const
export const GITHUB_MODELS_HYDRATED_ENV_MARKER =
'CLAUDE_CODE_GITHUB_TOKEN_HYDRATED' as const
export type GithubModelsCredentialBlob = {
accessToken: string
@@ -21,24 +23,47 @@ export function readGithubModelsToken(): string | undefined {
}
}
export async function readGithubModelsTokenAsync(): Promise<string | undefined> {
if (isBareMode()) return undefined
try {
const data = (await getSecureStorage().readAsync()) as
| ({ githubModels?: GithubModelsCredentialBlob } & Record<string, unknown>)
| null
const t = data?.githubModels?.accessToken?.trim()
return t || undefined
} catch {
return undefined
}
}
/**
* If GitHub Models mode is on and no token is in the environment, copy the
* stored token into process.env so the OpenAI shim and validation see it.
*/
export function hydrateGithubModelsTokenFromSecureStorage(): void {
if (!isEnvTruthy(process.env.CLAUDE_CODE_USE_GITHUB)) {
delete process.env[GITHUB_MODELS_HYDRATED_ENV_MARKER]
return
}
if (process.env.GITHUB_TOKEN?.trim() || process.env.GH_TOKEN?.trim()) {
if (process.env.GH_TOKEN?.trim()) {
delete process.env[GITHUB_MODELS_HYDRATED_ENV_MARKER]
return
}
if (process.env.GITHUB_TOKEN?.trim()) {
delete process.env[GITHUB_MODELS_HYDRATED_ENV_MARKER]
return
}
if (isBareMode()) {
delete process.env[GITHUB_MODELS_HYDRATED_ENV_MARKER]
return
}
const t = readGithubModelsToken()
if (t) {
process.env.GITHUB_TOKEN = t
process.env[GITHUB_MODELS_HYDRATED_ENV_MARKER] = '1'
return
}
delete process.env[GITHUB_MODELS_HYDRATED_ENV_MARKER]
}
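// Consumers can then distinguish a hydrated token from a user-supplied one,
// e.g. (editor's sketch):
//   if (process.env[GITHUB_MODELS_HYDRATED_ENV_MARKER] === '1') {
//     // token came from secure storage rather than the user's shell
//   }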
export function saveGithubModelsToken(token: string): {

View File

@@ -8,6 +8,7 @@ import {
} from './managedEnvConstants.js'
import { clearMTLSCache } from './mtls.js'
import { clearProxyCache, configureGlobalAgents } from './proxy.js'
import { filterSettingsEnvForExplicitProvider } from './providerEnvSelection.js'
import { applyActiveProviderProfileFromConfig } from './providerProfiles.js'
import { isSettingSourceEnabled } from './settings/constants.js'
import {
@@ -87,7 +88,9 @@ function filterSettingsEnv(
env: Record<string, string> | undefined,
): Record<string, string> {
return withoutCcdSpawnEnvKeys(
withoutHostManagedProviderVars(withoutSSHTunnelVars(env)),
filterSettingsEnvForExplicitProvider(
withoutHostManagedProviderVars(withoutSSHTunnelVars(env)),
),
)
}
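// Filter order sketch: SSH-tunnel vars and host-managed provider vars are
// stripped first, then the new filterSettingsEnvForExplicitProvider pass runs,
// and finally CCD spawn keys are removed. Per the commit series, the new pass
// keeps settings-file env from silently flipping an explicitly chosen provider.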

View File

@@ -1,4 +1,5 @@
import { feature } from 'bun:bundle'
import { getAPIProvider } from './model/providers.js'
import type { BetaUsage as Usage } from '@anthropic-ai/sdk/resources/beta/messages/messages.mjs'
import type {
ContentBlock,
@@ -1765,6 +1766,7 @@ export function stripCallerFieldFromAssistantMessage(
id: block.id,
name: block.name,
input: block.input,
...(getAPIProvider() === 'gemini' && (block as any).extra_content ? { extra_content: (block as any).extra_content } : {})
}
}),
},
@@ -2221,21 +2223,24 @@ export function normalizeMessagesForAPI(
// When tool search is enabled, preserve all fields including 'caller'
if (toolSearchEnabled) {
const { extra_content, ...restBlock } = block as any
return {
...block,
...restBlock,
name: canonicalName,
input: normalizedInput,
...(getAPIProvider() === 'gemini' && extra_content ? { extra_content } : {})
}
}
// When tool search is NOT enabled, explicitly construct tool_use
// block with only standard API fields to avoid sending fields like
// 'caller' that may be stored in sessions from tool search runs
return {
return {
type: 'tool_use' as const,
id: block.id,
name: canonicalName,
input: normalizedInput,
...(getAPIProvider() === 'gemini' && (block as any).extra_content ? { extra_content: (block as any).extra_content } : {})
}
}
return block

View File

@@ -80,7 +80,9 @@ export function getUserSpecifiedModelSetting(): ModelSetting | undefined {
const provider = getAPIProvider()
specifiedModel =
(provider === 'gemini' ? process.env.GEMINI_MODEL : undefined) ||
(provider === 'openai' || provider === 'gemini' ? process.env.OPENAI_MODEL : undefined) ||
(provider === 'openai' || provider === 'gemini' || provider === 'github'
? process.env.OPENAI_MODEL
: undefined) ||
(provider === 'firstParty' ? process.env.ANTHROPIC_MODEL : undefined) ||
settings.model ||
undefined
@@ -237,6 +239,10 @@ export function getDefaultMainLoopModelSetting(): ModelName | ModelAlias {
if (getAPIProvider() === 'openai') {
return process.env.OPENAI_MODEL || 'gpt-4o'
}
// GitHub provider: always use the configured GitHub model
if (getAPIProvider() === 'github') {
return process.env.OPENAI_MODEL || 'github:copilot'
}
// Codex provider: always use the configured Codex model (default gpt-5.4)
if (getAPIProvider() === 'codex') {
return process.env.OPENAI_MODEL || 'gpt-5.4'

View File

@@ -0,0 +1,83 @@
import { afterEach, beforeEach, expect, mock, test } from 'bun:test'
import { resetModelStringsForTestingOnly } from '../../bootstrap/state.js'
import { saveGlobalConfig } from '../config.js'
async function importFreshModelOptionsModule() {
mock.restore()
mock.module('./providers.js', () => ({
getAPIProvider: () => 'github',
}))
const nonce = `${Date.now()}-${Math.random()}`
return import(`./modelOptions.js?ts=${nonce}`)
}
const originalEnv = {
CLAUDE_CODE_USE_GITHUB: process.env.CLAUDE_CODE_USE_GITHUB,
CLAUDE_CODE_USE_OPENAI: process.env.CLAUDE_CODE_USE_OPENAI,
CLAUDE_CODE_USE_GEMINI: process.env.CLAUDE_CODE_USE_GEMINI,
CLAUDE_CODE_USE_BEDROCK: process.env.CLAUDE_CODE_USE_BEDROCK,
CLAUDE_CODE_USE_VERTEX: process.env.CLAUDE_CODE_USE_VERTEX,
CLAUDE_CODE_USE_FOUNDRY: process.env.CLAUDE_CODE_USE_FOUNDRY,
OPENAI_MODEL: process.env.OPENAI_MODEL,
OPENAI_BASE_URL: process.env.OPENAI_BASE_URL,
ANTHROPIC_CUSTOM_MODEL_OPTION: process.env.ANTHROPIC_CUSTOM_MODEL_OPTION,
}
beforeEach(() => {
mock.restore()
delete process.env.CLAUDE_CODE_USE_GITHUB
delete process.env.CLAUDE_CODE_USE_OPENAI
delete process.env.CLAUDE_CODE_USE_GEMINI
delete process.env.CLAUDE_CODE_USE_BEDROCK
delete process.env.CLAUDE_CODE_USE_VERTEX
delete process.env.CLAUDE_CODE_USE_FOUNDRY
delete process.env.OPENAI_MODEL
delete process.env.OPENAI_BASE_URL
delete process.env.ANTHROPIC_CUSTOM_MODEL_OPTION
resetModelStringsForTestingOnly()
})
afterEach(() => {
process.env.CLAUDE_CODE_USE_GITHUB = originalEnv.CLAUDE_CODE_USE_GITHUB
process.env.CLAUDE_CODE_USE_OPENAI = originalEnv.CLAUDE_CODE_USE_OPENAI
process.env.CLAUDE_CODE_USE_GEMINI = originalEnv.CLAUDE_CODE_USE_GEMINI
process.env.CLAUDE_CODE_USE_BEDROCK = originalEnv.CLAUDE_CODE_USE_BEDROCK
process.env.CLAUDE_CODE_USE_VERTEX = originalEnv.CLAUDE_CODE_USE_VERTEX
process.env.CLAUDE_CODE_USE_FOUNDRY = originalEnv.CLAUDE_CODE_USE_FOUNDRY
process.env.OPENAI_MODEL = originalEnv.OPENAI_MODEL
process.env.OPENAI_BASE_URL = originalEnv.OPENAI_BASE_URL
process.env.ANTHROPIC_CUSTOM_MODEL_OPTION =
originalEnv.ANTHROPIC_CUSTOM_MODEL_OPTION
saveGlobalConfig(current => ({
...current,
additionalModelOptionsCache: [],
additionalModelOptionsCacheScope: undefined,
openaiAdditionalModelOptionsCache: [],
openaiAdditionalModelOptionsCacheByProfile: {},
providerProfiles: [],
activeProviderProfileId: undefined,
}))
resetModelStringsForTestingOnly()
})
test('GitHub provider exposes only default + GitHub model in /model options', async () => {
process.env.CLAUDE_CODE_USE_GITHUB = '1'
delete process.env.CLAUDE_CODE_USE_OPENAI
delete process.env.CLAUDE_CODE_USE_GEMINI
delete process.env.CLAUDE_CODE_USE_BEDROCK
delete process.env.CLAUDE_CODE_USE_VERTEX
delete process.env.CLAUDE_CODE_USE_FOUNDRY
process.env.OPENAI_MODEL = 'github:copilot'
delete process.env.ANTHROPIC_CUSTOM_MODEL_OPTION
const { getModelOptions } = await importFreshModelOptionsModule()
const options = getModelOptions(false)
const nonDefault = options.filter(
(option: { value: unknown }) => option.value !== null,
)
expect(nonDefault.length).toBe(1)
expect(nonDefault[0]?.value).toBe('github:copilot')
})

View File

@@ -1,5 +1,6 @@
// biome-ignore-all assist/source/organizeImports: internal-only import markers must not be reordered
import { getInitialMainLoopModel } from '../../bootstrap/state.js'
import { getAdditionalModelOptionsCacheScope } from '../../services/api/providerConfig.js'
import {
isClaudeAISubscriber,
isMaxSubscriber,
@@ -44,6 +45,25 @@ export type ModelOption = {
descriptionForModel?: string
}
function getScopedAdditionalModelOptions(): ModelOption[] {
const config = getGlobalConfig()
const activeScope = getAdditionalModelOptionsCacheScope()
if (!activeScope) {
return []
}
if (config.additionalModelOptionsCacheScope !== undefined) {
return config.additionalModelOptionsCacheScope === activeScope
? (config.additionalModelOptionsCache ?? [])
: []
}
return activeScope === 'firstParty'
? (config.additionalModelOptionsCache ?? [])
: []
}
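// Scope-gating sketch: options cached under 'openai:http://localhost:1234/v1'
// surface only while that exact scope is active; a legacy cache with no
// recorded scope is treated as first-party-only, so endpoint-specific models
// never leak across providers.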
export function getDefaultOptionForUser(fastMode = false): ModelOption {
if (process.env.USER_TYPE === 'ant') {
const currentModel = renderDefaultModelSetting(
@@ -332,6 +352,18 @@ function getCodexModelOptions(): ModelOption[] {
// @[MODEL LAUNCH]: Update the model picker lists below to include/reorder options for the new model.
// Each user tier (ant, Max/Team Premium, Pro/Team Standard/Enterprise, PAYG 1P, PAYG 3P) has its own list.
function getModelOptionsBase(fastMode = false): ModelOption[] {
if (getAPIProvider() === 'github') {
const githubModel = process.env.OPENAI_MODEL?.trim() || 'github:copilot'
return [
getDefaultOptionForUser(fastMode),
{
value: githubModel,
label: githubModel,
description: 'GitHub Models default',
},
]
}
// When using Ollama, show models from the Ollama server instead of Claude models
if (getAPIProvider() === 'openai' && isOllamaProvider()) {
const defaultOption = getDefaultOptionForUser(fastMode)
@@ -408,6 +440,16 @@ function getModelOptionsBase(fastMode = false): ModelOption[] {
return standardOptions
}
if (getAdditionalModelOptionsCacheScope()?.startsWith('openai:')) {
const activeOpenAIOptions = getActiveOpenAIModelOptionsCache()
return [
getDefaultOptionForUser(fastMode),
...(activeOpenAIOptions.length > 0
? activeOpenAIOptions
: getScopedAdditionalModelOptions()),
]
}
// PAYG 1P API: Default (Sonnet) + Sonnet 1M + Opus 4.6 + Opus 1M + Haiku
if (getAPIProvider() === 'firstParty') {
const payg1POptions = [getDefaultOptionForUser(fastMode)]
@@ -549,6 +591,10 @@ function getKnownModelOption(model: string): ModelOption | null {
}
export function getModelOptions(fastMode = false): ModelOption[] {
if (getAPIProvider() === 'github') {
return filterModelOptionsByAllowlist(getModelOptionsBase(fastMode))
}
const options = getModelOptionsBase(fastMode)
// Add the custom model from the ANTHROPIC_CUSTOM_MODEL_OPTION env var
@@ -566,13 +612,8 @@ export function getModelOptions(fastMode = false): ModelOption[] {
})
}
const additionalOptions =
getAPIProvider() === 'openai'
? getActiveOpenAIModelOptionsCache()
: getGlobalConfig().additionalModelOptionsCache ?? []
// Append additional model options fetched during bootstrap/endpoints.
for (const opt of additionalOptions) {
// Append additional model options fetched during bootstrap
for (const opt of getScopedAdditionalModelOptions()) {
if (!options.some(existing => existing.value === opt.value)) {
options.push(opt)
}

View File

@@ -0,0 +1,54 @@
import { afterEach, expect, test } from 'bun:test'
import { resetModelStringsForTestingOnly } from '../../bootstrap/state.js'
import { parseUserSpecifiedModel } from './model.js'
import { getModelStrings } from './modelStrings.js'
const originalEnv = {
CLAUDE_CODE_USE_GITHUB: process.env.CLAUDE_CODE_USE_GITHUB,
CLAUDE_CODE_USE_OPENAI: process.env.CLAUDE_CODE_USE_OPENAI,
CLAUDE_CODE_USE_GEMINI: process.env.CLAUDE_CODE_USE_GEMINI,
CLAUDE_CODE_USE_BEDROCK: process.env.CLAUDE_CODE_USE_BEDROCK,
CLAUDE_CODE_USE_VERTEX: process.env.CLAUDE_CODE_USE_VERTEX,
CLAUDE_CODE_USE_FOUNDRY: process.env.CLAUDE_CODE_USE_FOUNDRY,
}
function clearProviderFlags(): void {
delete process.env.CLAUDE_CODE_USE_GITHUB
delete process.env.CLAUDE_CODE_USE_OPENAI
delete process.env.CLAUDE_CODE_USE_GEMINI
delete process.env.CLAUDE_CODE_USE_BEDROCK
delete process.env.CLAUDE_CODE_USE_VERTEX
delete process.env.CLAUDE_CODE_USE_FOUNDRY
}
afterEach(() => {
process.env.CLAUDE_CODE_USE_GITHUB = originalEnv.CLAUDE_CODE_USE_GITHUB
process.env.CLAUDE_CODE_USE_OPENAI = originalEnv.CLAUDE_CODE_USE_OPENAI
process.env.CLAUDE_CODE_USE_GEMINI = originalEnv.CLAUDE_CODE_USE_GEMINI
process.env.CLAUDE_CODE_USE_BEDROCK = originalEnv.CLAUDE_CODE_USE_BEDROCK
process.env.CLAUDE_CODE_USE_VERTEX = originalEnv.CLAUDE_CODE_USE_VERTEX
process.env.CLAUDE_CODE_USE_FOUNDRY = originalEnv.CLAUDE_CODE_USE_FOUNDRY
resetModelStringsForTestingOnly()
})
test('GitHub provider model strings are concrete IDs', () => {
clearProviderFlags()
process.env.CLAUDE_CODE_USE_GITHUB = '1'
const modelStrings = getModelStrings()
for (const value of Object.values(modelStrings)) {
expect(typeof value).toBe('string')
expect(value.trim().length).toBeGreaterThan(0)
}
})
test('GitHub provider model strings are safe to parse', () => {
clearProviderFlags()
process.env.CLAUDE_CODE_USE_GITHUB = '1'
const modelStrings = getModelStrings()
expect(() => parseUserSpecifiedModel(modelStrings.sonnet46 as any)).not.toThrow()
})

View File

@@ -25,7 +25,7 @@ const MODEL_KEYS = Object.keys(ALL_MODEL_CONFIGS) as ModelKey[]
function getBuiltinModelStrings(provider: APIProvider): ModelStrings {
// Codex and GitHub piggyback on the OpenAI provider transport for Anthropic
// tier aliases. Reuse OpenAI mappings so model string lookups never return undefined.
const providerKey = provider === 'codex' ? 'openai' : provider
const providerKey = provider === 'codex' || provider === 'github' ? 'openai' : provider
const out = {} as ModelStrings
for (const key of MODEL_KEYS) {
out[key] = ALL_MODEL_CONFIGS[key][providerKey]

View File

@@ -23,9 +23,13 @@ const OPENAI_CONTEXT_WINDOWS: Record<string, number> = {
'gpt-4.1-nano': 1_047_576,
'gpt-4-turbo': 128_000,
'gpt-4': 8_192,
'o1': 200_000,
'o1-mini': 128_000,
'o1-preview': 128_000,
'o1-pro': 200_000,
'o3': 200_000,
'o3-mini': 200_000,
'o4-mini': 200_000,
'o3': 200_000,
// DeepSeek (V3: 128k context per official docs)
'deepseek-chat': 128_000,
@@ -63,6 +67,9 @@ const OPENAI_CONTEXT_WINDOWS: Record<string, number> = {
'phi4:14b': 16_384,
'gemma2:27b': 8_192,
'codellama:13b': 16_384,
'llama3.2:1b': 128_000,
'qwen3:8b': 128_000,
'codestral': 32_768,
}
/**
@@ -82,9 +89,13 @@ const OPENAI_MAX_OUTPUT_TOKENS: Record<string, number> = {
'gpt-4.1-nano': 32_768,
'gpt-4-turbo': 4_096,
'gpt-4': 4_096,
'o1': 100_000,
'o1-mini': 65_536,
'o1-preview': 32_768,
'o1-pro': 100_000,
'o3': 100_000,
'o3-mini': 100_000,
'o4-mini': 100_000,
'o3': 100_000,
// DeepSeek
'deepseek-chat': 8_192,
@@ -120,6 +131,9 @@ const OPENAI_MAX_OUTPUT_TOKENS: Record<string, number> = {
'phi4:14b': 4_096,
'gemma2:27b': 4_096,
'codellama:13b': 4_096,
'llama3.2:1b': 4_096,
'qwen3:8b': 8_192,
'codestral': 8_192,
}
function lookupByModel<T>(table: Record<string, T>, model: string): T | undefined {

View File

@@ -7,6 +7,9 @@ const originalEnv = {
CLAUDE_CODE_USE_BEDROCK: process.env.CLAUDE_CODE_USE_BEDROCK,
CLAUDE_CODE_USE_VERTEX: process.env.CLAUDE_CODE_USE_VERTEX,
CLAUDE_CODE_USE_FOUNDRY: process.env.CLAUDE_CODE_USE_FOUNDRY,
OPENAI_BASE_URL: process.env.OPENAI_BASE_URL,
OPENAI_API_BASE: process.env.OPENAI_API_BASE,
OPENAI_MODEL: process.env.OPENAI_MODEL,
}
afterEach(() => {
@@ -16,6 +19,9 @@ afterEach(() => {
process.env.CLAUDE_CODE_USE_BEDROCK = originalEnv.CLAUDE_CODE_USE_BEDROCK
process.env.CLAUDE_CODE_USE_VERTEX = originalEnv.CLAUDE_CODE_USE_VERTEX
process.env.CLAUDE_CODE_USE_FOUNDRY = originalEnv.CLAUDE_CODE_USE_FOUNDRY
process.env.OPENAI_BASE_URL = originalEnv.OPENAI_BASE_URL
process.env.OPENAI_API_BASE = originalEnv.OPENAI_API_BASE
process.env.OPENAI_MODEL = originalEnv.OPENAI_MODEL
})
async function importFreshProvidersModule() {
@@ -29,6 +35,9 @@ function clearProviderEnv(): void {
delete process.env.CLAUDE_CODE_USE_BEDROCK
delete process.env.CLAUDE_CODE_USE_VERTEX
delete process.env.CLAUDE_CODE_USE_FOUNDRY
delete process.env.OPENAI_BASE_URL
delete process.env.OPENAI_API_BASE
delete process.env.OPENAI_MODEL
}
test('first-party provider keeps Anthropic account setup flow enabled', () => {
@@ -69,3 +78,32 @@ test('GEMINI takes precedence over GitHub when both are set', async () => {
expect(getAPIProvider()).toBe('gemini')
})
test('explicit local openai-compatible base URLs stay on the openai provider', async () => {
clearProviderEnv()
process.env.CLAUDE_CODE_USE_OPENAI = '1'
process.env.OPENAI_BASE_URL = 'http://127.0.0.1:8080/v1'
process.env.OPENAI_MODEL = 'gpt-5.4'
const { getAPIProvider } = await importFreshProvidersModule()
expect(getAPIProvider()).toBe('openai')
})
test('codex aliases still resolve to the codex provider without a non-codex base URL', async () => {
clearProviderEnv()
process.env.CLAUDE_CODE_USE_OPENAI = '1'
process.env.OPENAI_MODEL = 'codexplan'
const { getAPIProvider } = await importFreshProvidersModule()
expect(getAPIProvider()).toBe('codex')
})
test('official OpenAI base URLs now keep provider detection on openai for aliases', async () => {
clearProviderEnv()
process.env.CLAUDE_CODE_USE_OPENAI = '1'
process.env.OPENAI_BASE_URL = 'https://api.openai.com/v1'
process.env.OPENAI_MODEL = 'gpt-5.4'
const { getAPIProvider } = await importFreshProvidersModule()
expect(getAPIProvider()).toBe('openai')
})

View File

@@ -1,5 +1,5 @@
import type { AnalyticsMetadata_I_VERIFIED_THIS_IS_NOT_CODE_OR_FILEPATHS } from '../../services/analytics/index.js'
import { isCodexAlias } from '../../services/api/providerConfig.js'
import { shouldUseCodexTransport } from '../../services/api/providerConfig.js'
import { isEnvTruthy } from '../envUtils.js'
export type APIProvider =
@@ -34,11 +34,10 @@ export function usesAnthropicAccountFlow(): boolean {
return getAPIProvider() === 'firstParty'
}
function isCodexModel(): boolean {
const model = (process.env.OPENAI_MODEL || '').trim()
if (!model) return false
// Delegate to the canonical alias table in providerConfig to keep
// the two Codex detection systems (provider type + transport) in sync.
return isCodexAlias(model)
return shouldUseCodexTransport(
process.env.OPENAI_MODEL || '',
process.env.OPENAI_BASE_URL ?? process.env.OPENAI_API_BASE,
)
}
export function getAPIProviderForStatsig(): AnalyticsMetadata_I_VERIFIED_THIS_IS_NOT_CODE_OR_FILEPATHS {

View File

@@ -0,0 +1,71 @@
import { describe, expect, test } from 'bun:test'
import type { LoadedPlugin } from '../../types/plugin.js'
import { mergePluginSources } from './pluginLoader.js'
function marketplacePlugin(
name: string,
marketplace: string,
enabled: boolean,
): LoadedPlugin {
const pluginId = `${name}@${marketplace}`
return {
name,
manifest: { name } as LoadedPlugin['manifest'],
path: `/tmp/${pluginId}`,
source: pluginId,
repository: pluginId,
enabled,
}
}
describe('mergePluginSources', () => {
test('keeps the enabled copy when duplicate marketplace plugins disagree on enabled state', () => {
const enabledOfficial = marketplacePlugin(
'frontend-design',
'claude-plugins-official',
true,
)
const disabledLegacy = marketplacePlugin(
'frontend-design',
'claude-code-plugins',
false,
)
const result = mergePluginSources({
session: [],
marketplace: [disabledLegacy, enabledOfficial],
builtin: [],
})
expect(result.plugins).toEqual([enabledOfficial])
expect(result.errors).toEqual([])
})
test('keeps the later copy when duplicate marketplace plugins are both enabled', () => {
const legacy = marketplacePlugin(
'frontend-design',
'claude-code-plugins',
true,
)
const official = marketplacePlugin(
'frontend-design',
'claude-plugins-official',
true,
)
const result = mergePluginSources({
session: [],
marketplace: [legacy, official],
builtin: [],
})
expect(result.plugins).toEqual([official])
expect(result.errors).toHaveLength(1)
expect(result.errors[0]).toMatchObject({
type: 'generic-error',
source: legacy.source,
plugin: legacy.name,
})
})
})

View File

@@ -3045,24 +3045,63 @@ export function mergePluginSources(sources: {
})
const sessionNames = new Set(sessionPlugins.map(p => p.name))
const marketplacePlugins = sources.marketplace.filter(p => {
if (sessionNames.has(p.name)) {
// Different marketplaces can enable the same short plugin name, but
// downstream command/skill loading scopes by plugin.name.
const marketplacePluginsByName = new Map<string, LoadedPlugin>()
for (const plugin of sources.marketplace) {
if (sessionNames.has(plugin.name)) {
logForDebugging(
`Plugin "${p.name}" from --plugin-dir overrides installed version`,
`Plugin "${plugin.name}" from --plugin-dir overrides installed version`,
)
return false
continue
}
return true
})
const existing = marketplacePluginsByName.get(plugin.name)
if (!existing) {
marketplacePluginsByName.set(plugin.name, plugin)
continue
}
const winner = selectMarketplacePlugin(existing, plugin)
const dropped = winner === existing ? plugin : existing
marketplacePluginsByName.set(plugin.name, winner)
logForDebugging(
`Ignoring duplicate marketplace plugin "${plugin.name}" from ${dropped.source}; using ${winner.source}`,
{ level: 'warn' },
)
if (existing.enabled && plugin.enabled) {
errors.push({
type: 'generic-error',
source: dropped.source,
plugin: plugin.name,
error: `Duplicate marketplace plugin "${plugin.name}" ignored: using "${winner.source}" and skipping "${dropped.source}" to avoid short-name collisions`,
})
}
}
// Session first, then non-overridden marketplace, then builtin.
// Downstream first-match consumers see session plugins before
// installed ones for any that slipped past the name filter.
return {
plugins: [...sessionPlugins, ...marketplacePlugins, ...sources.builtin],
plugins: [
...sessionPlugins,
...marketplacePluginsByName.values(),
...sources.builtin,
],
errors,
}
}
function selectMarketplacePlugin(
current: LoadedPlugin,
candidate: LoadedPlugin,
): LoadedPlugin {
if (current.enabled !== candidate.enabled) {
return candidate.enabled ? candidate : current
}
return candidate
}
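// Resolution sketch (mirrors pluginLoader.test.ts): a disabled legacy copy
// loses to an enabled official copy regardless of order; when both copies are
// enabled the later candidate wins and the dropped copy is surfaced as a
// generic-error so the collision stays visible.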
/**
* Main plugin loading function that discovers and loads all plugins.
*

View File

@@ -0,0 +1,78 @@
import { afterEach, expect, mock, test } from 'bun:test'
import {
getLocalOpenAICompatibleProviderLabel,
listOpenAICompatibleModels,
} from './providerDiscovery.js'
const originalFetch = globalThis.fetch
const originalEnv = {
OPENAI_BASE_URL: process.env.OPENAI_BASE_URL,
}
afterEach(() => {
globalThis.fetch = originalFetch
process.env.OPENAI_BASE_URL = originalEnv.OPENAI_BASE_URL
})
test('lists models from a local openai-compatible /models endpoint', async () => {
globalThis.fetch = mock((input, init) => {
const url = typeof input === 'string' ? input : input.url
expect(url).toBe('http://localhost:1234/v1/models')
expect(init?.headers).toEqual({ Authorization: 'Bearer local-key' })
return Promise.resolve(
new Response(
JSON.stringify({
data: [
{ id: 'qwen2.5-coder-7b-instruct' },
{ id: 'llama-3.2-3b-instruct' },
{ id: 'qwen2.5-coder-7b-instruct' },
],
}),
{ status: 200 },
),
)
}) as typeof globalThis.fetch
await expect(
listOpenAICompatibleModels({
baseUrl: 'http://localhost:1234/v1',
apiKey: 'local-key',
}),
).resolves.toEqual([
'qwen2.5-coder-7b-instruct',
'llama-3.2-3b-instruct',
])
})
test('returns null when a local openai-compatible /models request fails', async () => {
globalThis.fetch = mock(() =>
Promise.resolve(new Response('not available', { status: 503 })),
) as typeof globalThis.fetch
await expect(
listOpenAICompatibleModels({ baseUrl: 'http://localhost:1234/v1' }),
).resolves.toBeNull()
})
test('detects LM Studio from the default localhost port', () => {
expect(getLocalOpenAICompatibleProviderLabel('http://localhost:1234/v1')).toBe(
'LM Studio',
)
})
test('detects common local openai-compatible providers by hostname', () => {
expect(
getLocalOpenAICompatibleProviderLabel('http://localai.local:8080/v1'),
).toBe('LocalAI')
expect(
getLocalOpenAICompatibleProviderLabel('http://vllm.local:8000/v1'),
).toBe('vLLM')
})
test('falls back to a generic local openai-compatible label', () => {
expect(
getLocalOpenAICompatibleProviderLabel('http://127.0.0.1:8080/v1'),
).toBe('Local OpenAI-compatible')
})

View File

@@ -1,4 +1,5 @@
import type { OllamaModelDescriptor } from './providerRecommendation.ts'
import { DEFAULT_OPENAI_BASE_URL } from '../services/api/providerConfig.js'
export const DEFAULT_OLLAMA_BASE_URL = 'http://localhost:11434'
export const DEFAULT_ATOMIC_CHAT_BASE_URL = 'http://127.0.0.1:1337'
@@ -53,6 +54,64 @@ export function getAtomicChatChatBaseUrl(baseUrl?: string): string {
return `${getAtomicChatApiBaseUrl(baseUrl)}/v1`
}
export function getOpenAICompatibleModelsBaseUrl(baseUrl?: string): string {
return (
baseUrl || process.env.OPENAI_BASE_URL || DEFAULT_OPENAI_BASE_URL
).replace(/\/+$/, '')
}
export function getLocalOpenAICompatibleProviderLabel(baseUrl?: string): string {
try {
const parsed = new URL(getOpenAICompatibleModelsBaseUrl(baseUrl))
const host = parsed.host.toLowerCase()
const hostname = parsed.hostname.toLowerCase()
const path = parsed.pathname.toLowerCase()
const haystack = `${hostname} ${path}`
if (
host.endsWith(':1234') ||
haystack.includes('lmstudio') ||
haystack.includes('lm-studio')
) {
return 'LM Studio'
}
if (host.endsWith(':11434') || haystack.includes('ollama')) {
return 'Ollama'
}
if (haystack.includes('localai')) {
return 'LocalAI'
}
if (haystack.includes('jan')) {
return 'Jan'
}
if (haystack.includes('kobold')) {
return 'KoboldCpp'
}
if (haystack.includes('llama.cpp') || haystack.includes('llamacpp')) {
return 'llama.cpp'
}
if (haystack.includes('vllm')) {
return 'vLLM'
}
if (
haystack.includes('open-webui') ||
haystack.includes('openwebui')
) {
return 'Open WebUI'
}
if (
haystack.includes('text-generation-webui') ||
haystack.includes('oobabooga')
) {
return 'text-generation-webui'
}
} catch {
// Fall back to the generic label when the base URL is malformed.
}
return 'Local OpenAI-compatible'
}
export async function hasLocalOllama(baseUrl?: string): Promise<boolean> {
const { signal, clear } = withTimeoutSignal(1200)
try {
@@ -111,6 +170,46 @@ export async function listOllamaModels(
}
}
export async function listOpenAICompatibleModels(options?: {
baseUrl?: string
apiKey?: string
}): Promise<string[] | null> {
const { signal, clear } = withTimeoutSignal(5000)
try {
const response = await fetch(
`${getOpenAICompatibleModelsBaseUrl(options?.baseUrl)}/models`,
{
method: 'GET',
headers: options?.apiKey
? {
Authorization: `Bearer ${options.apiKey}`,
}
: undefined,
signal,
},
)
if (!response.ok) {
return null
}
const data = (await response.json()) as {
data?: Array<{ id?: string }>
}
return Array.from(
new Set(
(data.data ?? [])
.filter(model => Boolean(model.id))
.map(model => model.id!),
),
)
} catch {
return null
} finally {
clear()
}
}
export async function hasLocalAtomicChat(baseUrl?: string): Promise<boolean> {
const { signal, clear } = withTimeoutSignal(1200)
try {

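A short usage sketch combining the two new helpers, assuming LM Studio's usual default port; any OpenAI-compatible /v1 endpoint would behave the same way:

import {
  getLocalOpenAICompatibleProviderLabel,
  listOpenAICompatibleModels,
} from './providerDiscovery.js'

const baseUrl = 'http://localhost:1234/v1'
const label = getLocalOpenAICompatibleProviderLabel(baseUrl) // 'LM Studio'
const models = await listOpenAICompatibleModels({ baseUrl })
// models is a deduplicated string[] on success, or null when the endpoint is
// unreachable, times out (5s), or returns a non-2xx status.
console.log(models ? `${label}: ${models.join(', ')}` : `${label} not reachable`)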
View File

@@ -0,0 +1,116 @@
import { afterEach, beforeEach, describe, expect, test } from 'bun:test'
import { filterSettingsEnvForExplicitProvider } from './providerEnvSelection.js'
const originalEnv = { ...process.env }
const RESET_KEYS = [
'CLAUDE_CODE_EXPLICIT_PROVIDER',
'CLAUDE_CODE_USE_OPENAI',
'CLAUDE_CODE_USE_GEMINI',
'CLAUDE_CODE_USE_GITHUB',
'CLAUDE_CODE_USE_BEDROCK',
'CLAUDE_CODE_USE_VERTEX',
'CLAUDE_CODE_USE_FOUNDRY',
] as const
beforeEach(() => {
for (const key of RESET_KEYS) {
delete process.env[key]
}
})
afterEach(() => {
for (const key of RESET_KEYS) {
if (originalEnv[key] === undefined) delete process.env[key]
else process.env[key] = originalEnv[key]
}
})
describe('filterSettingsEnvForExplicitProvider', () => {
test('does not treat plain provider flags as an explicit CLI override', () => {
process.env.CLAUDE_CODE_USE_GITHUB = '1'
expect(
filterSettingsEnvForExplicitProvider({
CLAUDE_CODE_USE_OPENAI: '1',
OPENAI_MODEL: 'gpt-4o',
OTHER: 'keep-me',
}),
).toEqual({
CLAUDE_CODE_USE_OPENAI: '1',
OPENAI_MODEL: 'gpt-4o',
OTHER: 'keep-me',
})
})
test('strips settings-sourced provider flags when CLI provider is explicit', () => {
process.env.CLAUDE_CODE_EXPLICIT_PROVIDER = 'openai'
expect(
filterSettingsEnvForExplicitProvider({
CLAUDE_CODE_USE_GITHUB: '1',
CLAUDE_CODE_USE_OPENAI: '1',
OTHER: 'keep-me',
}),
).toEqual({ OTHER: 'keep-me' })
})
test('strips a stale GitHub model when explicit provider is not github', () => {
process.env.CLAUDE_CODE_EXPLICIT_PROVIDER = 'openai'
expect(
filterSettingsEnvForExplicitProvider({
OPENAI_MODEL: 'github:copilot',
OTHER: 'keep-me',
}),
).toEqual({ OTHER: 'keep-me' })
})
test('keeps a normal OpenAI model when explicit provider is openai', () => {
process.env.CLAUDE_CODE_EXPLICIT_PROVIDER = 'openai'
expect(
filterSettingsEnvForExplicitProvider({
OPENAI_MODEL: 'gpt-4o',
OTHER: 'keep-me',
}),
).toEqual({ OPENAI_MODEL: 'gpt-4o', OTHER: 'keep-me' })
})
test('strips a non-GitHub OpenAI model when explicit provider is github', () => {
process.env.CLAUDE_CODE_EXPLICIT_PROVIDER = 'github'
expect(
filterSettingsEnvForExplicitProvider({
OPENAI_MODEL: 'gpt-4o',
OTHER: 'keep-me',
}),
).toEqual({ OTHER: 'keep-me' })
})
test('preserves anthropic startup intent by stripping stale GitHub/OpenAI settings', () => {
process.env.CLAUDE_CODE_EXPLICIT_PROVIDER = 'anthropic'
expect(
filterSettingsEnvForExplicitProvider({
CLAUDE_CODE_USE_GITHUB: '1',
CLAUDE_CODE_USE_OPENAI: '1',
OPENAI_MODEL: 'github:copilot',
OTHER: 'keep-me',
}),
).toEqual({ OTHER: 'keep-me' })
})
test('preserves explicit ollama startup intent by stripping OpenAI routing settings', () => {
process.env.CLAUDE_CODE_EXPLICIT_PROVIDER = 'ollama'
expect(
filterSettingsEnvForExplicitProvider({
OPENAI_BASE_URL: 'https://api.openai.com/v1',
OPENAI_MODEL: 'gpt-4o',
OPENAI_API_KEY: 'sk-test',
OTHER: 'keep-me',
}),
).toEqual({ OTHER: 'keep-me' })
})
})

View File

@@ -0,0 +1,63 @@
export const EXPLICIT_PROVIDER_ENV_VAR = 'CLAUDE_CODE_EXPLICIT_PROVIDER'
const PROVIDER_FLAG_KEYS = [
'CLAUDE_CODE_USE_OPENAI',
'CLAUDE_CODE_USE_GEMINI',
'CLAUDE_CODE_USE_GITHUB',
'CLAUDE_CODE_USE_BEDROCK',
'CLAUDE_CODE_USE_VERTEX',
'CLAUDE_CODE_USE_FOUNDRY',
] as const
export function clearProviderSelectionFlags(
env: NodeJS.ProcessEnv = process.env,
): void {
for (const key of PROVIDER_FLAG_KEYS) {
delete env[key]
}
}
function getExplicitProvider(processEnv: NodeJS.ProcessEnv): string | undefined {
return processEnv[EXPLICIT_PROVIDER_ENV_VAR]?.trim() || undefined
}
function isGithubModel(model: string | undefined): boolean {
return (model ?? '').trim().toLowerCase().startsWith('github:')
}
export function filterSettingsEnvForExplicitProvider(
env: Record<string, string> | undefined,
processEnv: NodeJS.ProcessEnv = process.env,
): Record<string, string> {
if (!env) return {}
const explicitProvider = getExplicitProvider(processEnv)
if (!explicitProvider) {
return env
}
const filtered = { ...env }
for (const key of PROVIDER_FLAG_KEYS) {
delete filtered[key]
}
if (explicitProvider === 'ollama') {
delete filtered.OPENAI_BASE_URL
delete filtered.OPENAI_MODEL
delete filtered.OPENAI_API_KEY
return filtered
}
if (explicitProvider === 'github') {
if (!isGithubModel(filtered.OPENAI_MODEL)) {
delete filtered.OPENAI_MODEL
}
return filtered
}
if (isGithubModel(filtered.OPENAI_MODEL)) {
delete filtered.OPENAI_MODEL
}
return filtered
}

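Concretely, with an explicit github selection the filter keeps only keys that cannot re-route the session; a sketch mirroring the tests above (values invented for illustration):

import { filterSettingsEnvForExplicitProvider } from './providerEnvSelection.js'

process.env.CLAUDE_CODE_EXPLICIT_PROVIDER = 'github'
const filtered = filterSettingsEnvForExplicitProvider({
  CLAUDE_CODE_USE_OPENAI: '1', // provider flags from settings are always dropped
  OPENAI_MODEL: 'gpt-4o',      // dropped: not a github:-prefixed model
  OTHER: 'keep-me',            // unrelated keys pass through untouched
})
// filtered => { OTHER: 'keep-me' }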
View File

@@ -1,4 +1,4 @@
-import { describe, expect, test, afterEach } from 'bun:test'
+import { afterEach, beforeEach, describe, expect, test } from 'bun:test'
import {
parseProviderFlag,
applyProviderFlag,
@@ -8,18 +8,28 @@ import {
const originalEnv = { ...process.env }
const RESET_KEYS = [
'CLAUDE_CODE_EXPLICIT_PROVIDER',
'CLAUDE_CODE_USE_OPENAI',
'CLAUDE_CODE_USE_GEMINI',
'CLAUDE_CODE_USE_GITHUB',
'CLAUDE_CODE_USE_BEDROCK',
'CLAUDE_CODE_USE_VERTEX',
'CLAUDE_CODE_USE_FOUNDRY',
'OPENAI_BASE_URL',
'OPENAI_API_KEY',
'OPENAI_MODEL',
'GEMINI_MODEL',
] as const
beforeEach(() => {
for (const key of RESET_KEYS) {
delete process.env[key]
}
})
afterEach(() => {
-  for (const key of [
-    'CLAUDE_CODE_USE_OPENAI',
-    'CLAUDE_CODE_USE_GEMINI',
-    'CLAUDE_CODE_USE_GITHUB',
-    'CLAUDE_CODE_USE_BEDROCK',
-    'CLAUDE_CODE_USE_VERTEX',
-    'OPENAI_BASE_URL',
-    'OPENAI_API_KEY',
-    'OPENAI_MODEL',
-    'GEMINI_MODEL',
-  ]) {
+  for (const key of RESET_KEYS) {
if (originalEnv[key] === undefined) delete process.env[key]
else process.env[key] = originalEnv[key]
}
@@ -75,6 +85,16 @@ describe('applyProviderFlag - openai', () => {
applyProviderFlag('openai', ['--model', 'gpt-4o'])
expect(process.env.OPENAI_MODEL).toBe('gpt-4o')
})
test('clears a previously persisted GitHub flag', () => {
process.env.CLAUDE_CODE_USE_GITHUB = '1'
const result = applyProviderFlag('openai', [])
expect(result.error).toBeUndefined()
expect(process.env.CLAUDE_CODE_USE_GITHUB).toBeUndefined()
expect(process.env.CLAUDE_CODE_USE_OPENAI).toBe('1')
})
})
describe('applyProviderFlag - gemini', () => {
@@ -96,6 +116,16 @@ describe('applyProviderFlag - github', () => {
expect(result.error).toBeUndefined()
expect(process.env.CLAUDE_CODE_USE_GITHUB).toBe('1')
})
test('clears a previously set OpenAI flag', () => {
process.env.CLAUDE_CODE_USE_OPENAI = '1'
const result = applyProviderFlag('github', [])
expect(result.error).toBeUndefined()
expect(process.env.CLAUDE_CODE_USE_OPENAI).toBeUndefined()
expect(process.env.CLAUDE_CODE_USE_GITHUB).toBe('1')
})
})
describe('applyProviderFlag - bedrock', () => {
@@ -143,6 +173,19 @@ describe('applyProviderFlag - invalid provider', () => {
})
})
describe('applyProviderFlag - anthropic', () => {
test('clears third-party provider flags', () => {
process.env.CLAUDE_CODE_USE_GITHUB = '1'
process.env.CLAUDE_CODE_USE_OPENAI = '1'
const result = applyProviderFlag('anthropic', [])
expect(result.error).toBeUndefined()
expect(process.env.CLAUDE_CODE_USE_GITHUB).toBeUndefined()
expect(process.env.CLAUDE_CODE_USE_OPENAI).toBeUndefined()
})
})
describe('applyProviderFlagFromArgs', () => {
test('applies ollama provider and model from argv in one step', () => {
const result = applyProviderFlagFromArgs([

View File

@@ -1,3 +1,8 @@
import {
clearProviderSelectionFlags,
EXPLICIT_PROVIDER_ENV_VAR,
} from './providerEnvSelection.js'
/**
* --provider CLI flag support.
*
@@ -77,6 +82,9 @@ export function applyProviderFlag(
}
}
clearProviderSelectionFlags()
process.env[EXPLICIT_PROVIDER_ENV_VAR] = provider
const model = parseModelFlag(args)
switch (provider as ProviderFlagName) {

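After this change an explicit --provider flag both wipes stale selection flags and records the choice, so the later settings merge can honor it; roughly (mirroring the tests above, not new behavior):

process.env.CLAUDE_CODE_USE_GITHUB = '1' // stale flag from a previous run
applyProviderFlag('openai', ['--model', 'gpt-4o'])
// clearProviderSelectionFlags() removed the stale GitHub flag, and the explicit
// choice is recorded for providerEnvSelection to consult at startup:
process.env.CLAUDE_CODE_EXPLICIT_PROVIDER // 'openai'
process.env.CLAUDE_CODE_USE_OPENAI        // '1'
process.env.CLAUDE_CODE_USE_GITHUB        // undefined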
View File

@@ -485,6 +485,46 @@ test('buildStartupEnvFromProfile leaves explicit provider selections untouched',
assert.equal(env.OPENAI_API_KEY, undefined)
})
test('buildStartupEnvFromProfile preserves explicit anthropic startup selection', async () => {
const processEnv = {
CLAUDE_CODE_EXPLICIT_PROVIDER: 'anthropic',
}
const env = await buildStartupEnvFromProfile({
persisted: profile('openai', {
CLAUDE_CODE_USE_GITHUB: '1',
OPENAI_MODEL: 'github:copilot',
}),
processEnv,
})
assert.equal(env, processEnv)
assert.equal(env.CLAUDE_CODE_EXPLICIT_PROVIDER, 'anthropic')
assert.equal(env.CLAUDE_CODE_USE_OPENAI, undefined)
assert.equal(env.CLAUDE_CODE_USE_GITHUB, undefined)
assert.equal(env.OPENAI_MODEL, undefined)
})
test('buildStartupEnvFromProfile leaves profile-managed env untouched', async () => {
const processEnv = {
CLAUDE_CODE_PROVIDER_PROFILE_ENV_APPLIED: '1',
ANTHROPIC_BASE_URL: 'https://api.anthropic.com',
ANTHROPIC_MODEL: 'claude-sonnet-4-6',
}
const env = await buildStartupEnvFromProfile({
persisted: profile('openai', {
OPENAI_API_KEY: 'sk-persisted',
OPENAI_MODEL: 'gpt-4o',
}),
processEnv,
})
assert.equal(env, processEnv)
assert.equal(env.ANTHROPIC_MODEL, 'claude-sonnet-4-6')
assert.equal(env.OPENAI_MODEL, undefined)
})
test('buildStartupEnvFromProfile treats explicit falsey provider flags as user intent', async () => {
const processEnv = {
CLAUDE_CODE_USE_OPENAI: '0',

View File

@@ -407,6 +407,15 @@ export function deleteProfileFile(options?: ProfileFileLocation): string {
export function hasExplicitProviderSelection(
processEnv: NodeJS.ProcessEnv = process.env,
): boolean {
// If env was already applied from a provider profile, preserve it.
if (processEnv.CLAUDE_CODE_PROVIDER_PROFILE_ENV_APPLIED === '1') {
return true
}
if (processEnv.CLAUDE_CODE_EXPLICIT_PROVIDER?.trim()) {
return true
}
return (
processEnv.CLAUDE_CODE_USE_OPENAI !== undefined ||
processEnv.CLAUDE_CODE_USE_GITHUB !== undefined ||

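The check now honors three signals in order: the profile-applied marker, then the explicit provider variable (trimmed, non-empty), then any individual CLAUDE_CODE_USE_* flag. One nuance worth a sketch — a whitespace-only explicit value does not count:

delete process.env.CLAUDE_CODE_USE_OPENAI
process.env.CLAUDE_CODE_EXPLICIT_PROVIDER = '   ' // whitespace-only, trims to ''
hasExplicitProviderSelection(process.env) // false unless some USE_* flag is set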
View File

@@ -2,10 +2,16 @@ import { afterEach, describe, expect, mock, test } from 'bun:test'
import type { ProviderProfile } from './config.js'
async function importFreshProvidersModule() {
return import(`./model/providers.ts?ts=${Date.now()}-${Math.random()}`)
}
const originalEnv = { ...process.env }
const RESTORED_KEYS = [
'CLAUDE_CODE_EXPLICIT_PROVIDER',
'CLAUDE_CODE_PROVIDER_PROFILE_ENV_APPLIED',
'CLAUDE_CODE_PROVIDER_PROFILE_ENV_APPLIED_ID',
'CLAUDE_CODE_USE_OPENAI',
'CLAUDE_CODE_USE_GEMINI',
'CLAUDE_CODE_USE_GITHUB',
@@ -21,8 +27,35 @@ const RESTORED_KEYS = [
'ANTHROPIC_API_KEY',
] as const
type MockConfigState = {
providerProfiles: ProviderProfile[]
activeProviderProfileId?: string
openaiAdditionalModelOptionsCache: unknown[]
openaiAdditionalModelOptionsCacheByProfile: Record<string, unknown[]>
additionalModelOptionsCache?: unknown[]
additionalModelOptionsCacheScope?: string
}
function createMockConfigState(): MockConfigState {
return {
providerProfiles: [],
activeProviderProfileId: undefined,
openaiAdditionalModelOptionsCache: [],
openaiAdditionalModelOptionsCacheByProfile: {},
additionalModelOptionsCache: [],
additionalModelOptionsCacheScope: undefined,
}
}
let mockConfigState: MockConfigState = createMockConfigState()
function saveMockGlobalConfig(
updater: (current: MockConfigState) => MockConfigState,
): void {
mockConfigState = updater(mockConfigState)
}
afterEach(() => {
mock.restore()
for (const key of RESTORED_KEYS) {
if (originalEnv[key] === undefined) {
delete process.env[key]
@@ -30,8 +63,31 @@ afterEach(() => {
    } else {
      process.env[key] = originalEnv[key]
    }
  }
-  mock.restore()
+  mockConfigState = createMockConfigState()
})
async function importFreshProviderProfileModules() {
mock.restore()
mock.module('./config.js', () => ({
getGlobalConfig: () => mockConfigState,
saveGlobalConfig: (
updater: (current: MockConfigState) => MockConfigState,
) => {
mockConfigState = updater(mockConfigState)
},
}))
const nonce = `${Date.now()}-${Math.random()}`
const providers = await import(`./model/providers.js?ts=${nonce}`)
const providerProfiles = await import(`./providerProfiles.js?ts=${nonce}`)
return {
...providers,
...providerProfiles,
}
}
function buildProfile(overrides: Partial<ProviderProfile> = {}): ProviderProfile {
return {
id: 'provider_test',
@@ -43,57 +99,31 @@ function buildProfile(overrides: Partial<ProviderProfile> = {}): ProviderProfile
}
}
-async function importFreshProviderModules() {
-  mock.restore()
-  let configState = {
-    providerProfiles: [] as ProviderProfile[],
-    activeProviderProfileId: undefined as string | undefined,
-    openaiAdditionalModelOptionsCache: [] as any[],
-    openaiAdditionalModelOptionsCacheByProfile: {} as Record<string, any[]>,
-  }
-  mock.module('./config.js', () => ({
-    getGlobalConfig: () => configState,
-    saveGlobalConfig: (
-      updater: (current: typeof configState) => typeof configState,
-    ) => {
-      configState = updater(configState)
-    },
-  }))
-  const providerProfiles = await import(
-    `./providerProfiles.js?ts=${Date.now()}-${Math.random()}`
-  )
-  const providers = await import(
-    `./model/providers.js?ts=${Date.now()}-${Math.random()}`
-  )
-  return {
-    ...providerProfiles,
-    ...providers,
-  }
-}
describe('applyProviderProfileToProcessEnv', () => {
test('openai profile clears competing gemini/github flags', async () => {
+    const { applyProviderProfileToProcessEnv } =
+      await importFreshProviderProfileModules()
     process.env.CLAUDE_CODE_USE_GEMINI = '1'
     process.env.CLAUDE_CODE_USE_GITHUB = '1'
-    const { applyProviderProfileToProcessEnv, getAPIProvider } =
-      await importFreshProviderModules()
     applyProviderProfileToProcessEnv(buildProfile())
+    const { getAPIProvider: getFreshAPIProvider } =
+      await importFreshProvidersModule()
     expect(process.env.CLAUDE_CODE_USE_GEMINI).toBeUndefined()
     expect(process.env.CLAUDE_CODE_USE_GITHUB).toBeUndefined()
-    expect(process.env.CLAUDE_CODE_USE_OPENAI).toBe('1')
-    expect(getAPIProvider()).toBe('openai')
+    expect(String(process.env.CLAUDE_CODE_USE_OPENAI)).toBe('1')
+    expect(process.env.CLAUDE_CODE_PROVIDER_PROFILE_ENV_APPLIED_ID).toBe(
+      'provider_test',
+    )
+    expect(getFreshAPIProvider()).toBe('openai')
})
test('anthropic profile clears competing gemini/github flags', async () => {
+    const { applyProviderProfileToProcessEnv } =
+      await importFreshProviderProfileModules()
     process.env.CLAUDE_CODE_USE_GEMINI = '1'
     process.env.CLAUDE_CODE_USE_GITHUB = '1'
-    const { applyProviderProfileToProcessEnv, getAPIProvider } =
-      await importFreshProviderModules()
applyProviderProfileToProcessEnv(
buildProfile({
@@ -102,21 +132,46 @@ describe('applyProviderProfileToProcessEnv', () => {
model: 'claude-sonnet-4-6',
}),
)
+    const { getAPIProvider: getFreshAPIProvider } =
+      await importFreshProvidersModule()
     expect(process.env.CLAUDE_CODE_USE_GEMINI).toBeUndefined()
     expect(process.env.CLAUDE_CODE_USE_GITHUB).toBeUndefined()
     expect(process.env.CLAUDE_CODE_USE_OPENAI).toBeUndefined()
-    expect(getAPIProvider()).toBe('firstParty')
+    expect(getFreshAPIProvider()).toBe('firstParty')
})
})
describe('applyActiveProviderProfileFromConfig', () => {
test('does not override explicit anthropic startup selection', async () => {
const { applyActiveProviderProfileFromConfig } =
await importFreshProviderProfileModules()
process.env.CLAUDE_CODE_EXPLICIT_PROVIDER = 'anthropic'
const applied = applyActiveProviderProfileFromConfig({
providerProfiles: [
buildProfile({
id: 'saved_github',
baseUrl: 'https://api.githubcopilot.com',
model: 'github:copilot',
}),
],
activeProviderProfileId: 'saved_github',
} as any)
expect(applied).toBeUndefined()
expect(process.env.CLAUDE_CODE_EXPLICIT_PROVIDER).toBe('anthropic')
expect(process.env.CLAUDE_CODE_USE_OPENAI).toBeUndefined()
expect(process.env.CLAUDE_CODE_USE_GITHUB).toBeUndefined()
expect(process.env.OPENAI_MODEL).toBeUndefined()
})
test('does not override explicit startup provider selection', async () => {
+    const { applyActiveProviderProfileFromConfig } =
+      await importFreshProviderProfileModules()
     process.env.CLAUDE_CODE_USE_OPENAI = '1'
     process.env.OPENAI_BASE_URL = 'http://localhost:11434/v1'
     process.env.OPENAI_MODEL = 'qwen2.5:3b'
-    const { applyActiveProviderProfileFromConfig } =
-      await importFreshProviderModules()
const applied = applyActiveProviderProfileFromConfig({
providerProfiles: [
@@ -135,12 +190,12 @@ describe('applyActiveProviderProfileFromConfig', () => {
})
test('does not override explicit startup selection when profile marker is stale', async () => {
+    const { applyActiveProviderProfileFromConfig } =
+      await importFreshProviderProfileModules()
     process.env.CLAUDE_CODE_PROVIDER_PROFILE_ENV_APPLIED = '1'
     process.env.CLAUDE_CODE_USE_OPENAI = '1'
     process.env.OPENAI_BASE_URL = 'http://localhost:11434/v1'
     process.env.OPENAI_MODEL = 'qwen2.5:3b'
-    const { applyActiveProviderProfileFromConfig } =
-      await importFreshProviderModules()
const applied = applyActiveProviderProfileFromConfig({
providerProfiles: [
@@ -154,12 +209,74 @@ describe('applyActiveProviderProfileFromConfig', () => {
} as any)
expect(applied).toBeUndefined()
-    expect(process.env.CLAUDE_CODE_USE_OPENAI).toBe('1')
+    expect(String(process.env.CLAUDE_CODE_USE_OPENAI)).toBe('1')
expect(process.env.OPENAI_BASE_URL).toBe('http://localhost:11434/v1')
expect(process.env.OPENAI_MODEL).toBe('qwen2.5:3b')
})
test('re-applies active profile when profile-managed env drifts', async () => {
const { applyActiveProviderProfileFromConfig, applyProviderProfileToProcessEnv } =
await importFreshProviderProfileModules()
applyProviderProfileToProcessEnv(
buildProfile({
id: 'saved_openai',
baseUrl: 'http://192.168.33.108:11434/v1',
model: 'kimi-k2.5:cloud',
}),
)
// Simulate settings/env merge clobbering the model while profile flags remain.
process.env.OPENAI_MODEL = 'github:copilot'
const applied = applyActiveProviderProfileFromConfig({
providerProfiles: [
buildProfile({
id: 'saved_openai',
baseUrl: 'http://192.168.33.108:11434/v1',
model: 'kimi-k2.5:cloud',
}),
],
activeProviderProfileId: 'saved_openai',
} as any)
expect(applied?.id).toBe('saved_openai')
expect(process.env.OPENAI_MODEL).toBe('kimi-k2.5:cloud')
expect(process.env.OPENAI_BASE_URL).toBe('http://192.168.33.108:11434/v1')
})
test('does not re-apply active profile when flags conflict with current provider', async () => {
const { applyActiveProviderProfileFromConfig, applyProviderProfileToProcessEnv } =
await importFreshProviderProfileModules()
applyProviderProfileToProcessEnv(
buildProfile({
id: 'saved_openai',
baseUrl: 'http://192.168.33.108:11434/v1',
model: 'kimi-k2.5:cloud',
}),
)
process.env.CLAUDE_CODE_USE_GITHUB = '1'
process.env.OPENAI_MODEL = 'github:copilot'
const applied = applyActiveProviderProfileFromConfig({
providerProfiles: [
buildProfile({
id: 'saved_openai',
baseUrl: 'http://192.168.33.108:11434/v1',
model: 'kimi-k2.5:cloud',
}),
],
activeProviderProfileId: 'saved_openai',
} as any)
expect(applied).toBeUndefined()
expect(process.env.CLAUDE_CODE_USE_GITHUB).toBe('1')
expect(process.env.OPENAI_MODEL).toBe('github:copilot')
})
test('applies active profile when no explicit provider is selected', async () => {
+    const { applyActiveProviderProfileFromConfig } =
+      await importFreshProviderProfileModules()
delete process.env.CLAUDE_CODE_USE_OPENAI
delete process.env.CLAUDE_CODE_USE_GEMINI
delete process.env.CLAUDE_CODE_USE_GITHUB
@@ -169,8 +286,6 @@ describe('applyActiveProviderProfileFromConfig', () => {
process.env.OPENAI_BASE_URL = 'http://localhost:11434/v1'
process.env.OPENAI_MODEL = 'qwen2.5:3b'
-    const { applyActiveProviderProfileFromConfig } =
-      await importFreshProviderModules()
const applied = applyActiveProviderProfileFromConfig({
providerProfiles: [
@@ -184,16 +299,82 @@ describe('applyActiveProviderProfileFromConfig', () => {
} as any)
expect(applied?.id).toBe('saved_openai')
-    expect(process.env.CLAUDE_CODE_USE_OPENAI).toBe('1')
+    expect(String(process.env.CLAUDE_CODE_USE_OPENAI)).toBe('1')
expect(process.env.OPENAI_BASE_URL).toBe('https://api.openai.com/v1')
expect(process.env.OPENAI_MODEL).toBe('gpt-4o')
})
})
describe('persistActiveProviderProfileModel', () => {
test('updates active profile model and current env for profile-managed sessions', async () => {
const {
applyProviderProfileToProcessEnv,
getProviderProfiles,
persistActiveProviderProfileModel,
} = await importFreshProviderProfileModules()
const activeProfile = buildProfile({
id: 'saved_openai',
baseUrl: 'http://192.168.33.108:11434/v1',
model: 'kimi-k2.5:cloud',
})
saveMockGlobalConfig(current => ({
...current,
providerProfiles: [activeProfile],
activeProviderProfileId: activeProfile.id,
}))
applyProviderProfileToProcessEnv(activeProfile)
const updated = persistActiveProviderProfileModel('minimax-m2.5:cloud')
expect(updated?.id).toBe(activeProfile.id)
expect(updated?.model).toBe('minimax-m2.5:cloud')
expect(process.env.OPENAI_MODEL).toBe('minimax-m2.5:cloud')
expect(process.env.CLAUDE_CODE_PROVIDER_PROFILE_ENV_APPLIED_ID).toBe(
activeProfile.id,
)
const saved = getProviderProfiles().find(
(profile: ProviderProfile) => profile.id === activeProfile.id,
)
expect(saved?.model).toBe('minimax-m2.5:cloud')
})
test('does not mutate process env when session is not profile-managed', async () => {
const {
getProviderProfiles,
persistActiveProviderProfileModel,
} = await importFreshProviderProfileModules()
const activeProfile = buildProfile({
id: 'saved_openai',
model: 'kimi-k2.5:cloud',
})
saveMockGlobalConfig(current => ({
...current,
providerProfiles: [activeProfile],
activeProviderProfileId: activeProfile.id,
}))
process.env.CLAUDE_CODE_USE_OPENAI = '1'
process.env.OPENAI_MODEL = 'cli-model'
delete process.env.CLAUDE_CODE_PROVIDER_PROFILE_ENV_APPLIED
delete process.env.CLAUDE_CODE_PROVIDER_PROFILE_ENV_APPLIED_ID
persistActiveProviderProfileModel('minimax-m2.5:cloud')
expect(process.env.OPENAI_MODEL).toBe('cli-model')
const saved = getProviderProfiles().find(
(profile: ProviderProfile) => profile.id === activeProfile.id,
)
expect(saved?.model).toBe('minimax-m2.5:cloud')
})
})
describe('getProviderPresetDefaults', () => {
test('ollama preset defaults to a local Ollama model', async () => {
+    const { getProviderPresetDefaults } = await importFreshProviderProfileModules()
     delete process.env.OPENAI_MODEL
-    const { getProviderPresetDefaults } = await importFreshProviderModules()
const defaults = getProviderPresetDefaults('ollama')
@@ -205,21 +386,25 @@ describe('getProviderPresetDefaults', () => {
describe('deleteProviderProfile', () => {
test('deleting final profile clears provider env when active profile applied it', async () => {
     const {
-      addProviderProfile,
       applyProviderProfileToProcessEnv,
       deleteProviderProfile,
-    } =
-      await importFreshProviderModules()
-    const profile = addProviderProfile({
-      name: 'Only Profile',
-      provider: 'openai',
-      baseUrl: 'https://api.openai.com/v1',
-      model: 'gpt-4o',
-      apiKey: 'sk-test',
-    })
+    } = await importFreshProviderProfileModules()
+    applyProviderProfileToProcessEnv(
+      buildProfile({
+        id: 'only_profile',
+        baseUrl: 'https://api.openai.com/v1',
+        model: 'gpt-4o',
+        apiKey: 'sk-test',
+      }),
+    )
-    expect(profile).not.toBeNull()
+    saveMockGlobalConfig(current => ({
+      ...current,
+      providerProfiles: [buildProfile({ id: 'only_profile' })],
+      activeProviderProfileId: 'only_profile',
+    }))
-    const result = deleteProviderProfile(profile!.id)
+    const result = deleteProviderProfile('only_profile')
expect(result.removed).toBe(true)
expect(result.activeProfileId).toBeUndefined()
@@ -244,30 +429,24 @@ describe('deleteProviderProfile', () => {
})
test('deleting final profile preserves explicit startup provider env', async () => {
-    const { addProviderProfile, deleteProviderProfile } =
-      await importFreshProviderModules()
-    const profile = addProviderProfile({
-      name: 'Only Profile',
-      provider: 'openai',
-      baseUrl: 'https://api.openai.com/v1',
-      model: 'gpt-4o',
-    })
-    expect(profile).not.toBeNull()
-    process.env.CLAUDE_CODE_PROVIDER_PROFILE_ENV_APPLIED = undefined
+    delete process.env.CLAUDE_CODE_PROVIDER_PROFILE_ENV_APPLIED
+    const { deleteProviderProfile } = await importFreshProviderProfileModules()
     process.env.CLAUDE_CODE_USE_OPENAI = '1'
     process.env.OPENAI_BASE_URL = 'http://localhost:11434/v1'
     process.env.OPENAI_MODEL = 'qwen2.5:3b'
-    const result = deleteProviderProfile(profile!.id)
+    saveMockGlobalConfig(current => ({
+      ...current,
+      providerProfiles: [buildProfile({ id: 'only_profile' })],
+      activeProviderProfileId: 'only_profile',
+    }))
+    const result = deleteProviderProfile('only_profile')
expect(result.removed).toBe(true)
expect(result.activeProfileId).toBeUndefined()
expect(process.env.CLAUDE_CODE_PROVIDER_PROFILE_ENV_APPLIED).toBeUndefined()
-    expect(process.env.CLAUDE_CODE_USE_OPENAI).toBe('1')
+    expect(String(process.env.CLAUDE_CODE_USE_OPENAI)).toBe('1')
expect(process.env.OPENAI_BASE_URL).toBe('http://localhost:11434/v1')
expect(process.env.OPENAI_MODEL).toBe('qwen2.5:3b')
})

View File

@@ -5,6 +5,7 @@ import {
type ProviderProfile,
} from './config.js'
import type { ModelOption } from './model/modelOptions.js'
import { EXPLICIT_PROVIDER_ENV_VAR } from './providerEnvSelection.js'
export type ProviderPreset =
| 'anthropic'
@@ -37,6 +38,7 @@ export type ProviderPresetDefaults = Omit<ProviderProfileInput, 'provider'> & {
const DEFAULT_OLLAMA_BASE_URL = 'http://localhost:11434/v1'
const DEFAULT_OLLAMA_MODEL = 'llama3.1:8b'
const PROFILE_ENV_APPLIED_FLAG = 'CLAUDE_CODE_PROVIDER_PROFILE_ENV_APPLIED'
const PROFILE_ENV_APPLIED_ID = 'CLAUDE_CODE_PROVIDER_PROFILE_ENV_APPLIED_ID'
function trimValue(value: string | undefined): string {
return value?.trim() ?? ''
@@ -255,6 +257,7 @@ function hasProviderSelectionFlags(
processEnv: NodeJS.ProcessEnv = process.env,
): boolean {
return (
processEnv[EXPLICIT_PROVIDER_ENV_VAR] !== undefined ||
processEnv.CLAUDE_CODE_USE_OPENAI !== undefined ||
processEnv.CLAUDE_CODE_USE_GEMINI !== undefined ||
processEnv.CLAUDE_CODE_USE_GITHUB !== undefined ||
@@ -264,6 +267,23 @@ function hasProviderSelectionFlags(
)
}
function hasConflictingProviderFlagsForProfile(
processEnv: NodeJS.ProcessEnv,
profile: ProviderProfile,
): boolean {
if (profile.provider === 'anthropic') {
return hasProviderSelectionFlags(processEnv)
}
return (
processEnv.CLAUDE_CODE_USE_GEMINI !== undefined ||
processEnv.CLAUDE_CODE_USE_GITHUB !== undefined ||
processEnv.CLAUDE_CODE_USE_BEDROCK !== undefined ||
processEnv.CLAUDE_CODE_USE_VERTEX !== undefined ||
processEnv.CLAUDE_CODE_USE_FOUNDRY !== undefined
)
}
function sameOptionalEnvValue(
left: string | undefined,
right: string | undefined,
@@ -284,6 +304,10 @@ function isProcessEnvAlignedWithProfile(
return false
}
if (trimOrUndefined(processEnv[PROFILE_ENV_APPLIED_ID]) !== profile.id) {
return false
}
if (profile.provider === 'anthropic') {
return (
!hasProviderSelectionFlags(processEnv) &&
@@ -339,11 +363,13 @@ export function clearProviderProfileEnvFromProcessEnv(
delete processEnv.ANTHROPIC_MODEL
delete processEnv.ANTHROPIC_API_KEY
delete processEnv[PROFILE_ENV_APPLIED_FLAG]
delete processEnv[PROFILE_ENV_APPLIED_ID]
}
export function applyProviderProfileToProcessEnv(profile: ProviderProfile): void {
clearProviderProfileEnvFromProcessEnv()
process.env[PROFILE_ENV_APPLIED_FLAG] = '1'
process.env[PROFILE_ENV_APPLIED_ID] = profile.id
process.env.ANTHROPIC_MODEL = profile.model
if (profile.provider === 'anthropic') {
@@ -386,12 +412,24 @@ export function applyActiveProviderProfileFromConfig(
return undefined
}
const isCurrentEnvProfileManaged =
processEnv[PROFILE_ENV_APPLIED_FLAG] === '1' &&
trimOrUndefined(processEnv[PROFILE_ENV_APPLIED_ID]) === activeProfile.id
if (!options?.force && hasProviderSelectionFlags(processEnv)) {
-    // Respect explicit startup provider intent. Re-apply only when the
-    // current process env is already profile-managed and aligned.
-    if (!isProcessEnvAlignedWithProfile(processEnv, activeProfile)) {
+    // Respect explicit startup provider intent. Auto-heal only when this
+    // exact active profile previously applied the current env.
+    if (!isCurrentEnvProfileManaged) {
return undefined
}
if (hasConflictingProviderFlagsForProfile(processEnv, activeProfile)) {
return undefined
}
if (isProcessEnvAlignedWithProfile(processEnv, activeProfile)) {
return activeProfile
}
}
applyProviderProfileToProcessEnv(activeProfile)
@@ -496,6 +534,61 @@ export function updateProviderProfile(
return updatedProfile
}
export function persistActiveProviderProfileModel(
model: string,
): ProviderProfile | null {
const nextModel = trimOrUndefined(model)
if (!nextModel) {
return null
}
const activeProfile = getActiveProviderProfile()
if (!activeProfile) {
return null
}
saveGlobalConfig(current => {
const currentProfiles = getProviderProfiles(current)
const profileIndex = currentProfiles.findIndex(
profile => profile.id === activeProfile.id,
)
if (profileIndex < 0) {
return current
}
const currentProfile = currentProfiles[profileIndex]
if (currentProfile.model === nextModel) {
return current
}
const nextProfiles = [...currentProfiles]
nextProfiles[profileIndex] = {
...currentProfile,
model: nextModel,
}
return {
...current,
providerProfiles: nextProfiles,
}
})
const resolvedProfile = getActiveProviderProfile()
if (!resolvedProfile || resolvedProfile.id !== activeProfile.id) {
return null
}
if (
process.env[PROFILE_ENV_APPLIED_FLAG] === '1' &&
trimOrUndefined(process.env[PROFILE_ENV_APPLIED_ID]) === resolvedProfile.id
) {
applyProviderProfileToProcessEnv(resolvedProfile)
}
return resolvedProfile
}
export function setActiveProviderProfile(
profileId: string,
): ProviderProfile | null {

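The new guard in applyActiveProviderProfileFromConfig reads as a four-step decision. A hedged restatement as a standalone sketch — the helpers are the ones defined above, assumed in scope, and trimOrUndefined is simplified to ?.trim():

function resolveActiveProfile(
  processEnv: NodeJS.ProcessEnv,
  activeProfile: ProviderProfile,
  force = false,
): ProviderProfile | undefined {
  const managedByThisProfile =
    processEnv.CLAUDE_CODE_PROVIDER_PROFILE_ENV_APPLIED === '1' &&
    processEnv.CLAUDE_CODE_PROVIDER_PROFILE_ENV_APPLIED_ID?.trim() === activeProfile.id
  if (!force && hasProviderSelectionFlags(processEnv)) {
    // 1. Flags present but not applied by this profile: respect startup intent.
    if (!managedByThisProfile) return undefined
    // 2. Flags conflict with the profile's provider family: bail.
    if (hasConflictingProviderFlagsForProfile(processEnv, activeProfile)) return undefined
    // 3. Env already aligned: reuse the profile without re-applying.
    if (isProcessEnvAlignedWithProfile(processEnv, activeProfile)) return activeProfile
  }
  // 4. Otherwise re-apply the profile to heal drifted env.
  applyProviderProfileToProcessEnv(activeProfile)
  return activeProfile
}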
View File

@@ -97,8 +97,12 @@ export function renderToAnsiString(node: React.ReactNode, columns?: number): Pro
patchConsole: false
});
-    // Wait for the component to exit naturally
-    await instance.waitUntilExit();
+    // Wait for the component to exit naturally, with a timeout guard so
+    // tests never hang indefinitely if a render error prevents exit().
+    await Promise.race([
+      instance.waitUntilExit(),
+      new Promise<void>(resolve => setTimeout(resolve, 3000)),
+    ]);
// Extract only the first frame's content to avoid duplication
// (Ink outputs multiple frames in non-TTY mode)