Compare commits

...

150 Commits

Author SHA1 Message Date
Juan Camilo
037a855528 fix: strip Anthropic-specific params from 3P provider paths
Three silent failure modes affecting all third-party provider users:

1. Thinking blocks serialized as <thinking> text corrupt multi-turn
   context — strip them instead of converting to raw text tags.

2. Unknown models fall through to 200k context window default, so
   auto-compact never triggers — use conservative 8k for unknown
   3P models with a warning log.

3. Session resume with thinking blocks causes 400 or context corruption
   on 3P providers — strip thinking/redacted_thinking content blocks
   from deserialized messages when resuming against a non-Anthropic
   provider.

Addresses findings 2, 3, and 5 from #248.
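
A minimal sketch of the stripping step from fixes 1 and 3 (type and provider names assumed, not taken from the actual diff):

```ts
// Hypothetical sketch: drop Anthropic thinking blocks before sending
// resumed history to a non-Anthropic provider.
type ContentBlock = { type: string; [key: string]: unknown };

function stripThinkingBlocks(blocks: ContentBlock[], provider: string): ContentBlock[] {
  if (provider === 'firstParty') return blocks; // Anthropic understands thinking blocks
  return blocks.filter((b) => b.type !== 'thinking' && b.type !== 'redacted_thinking');
}
```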
2026-04-03 14:05:34 +02:00
Vasanth T
7c0ea68b65 fix: address code scanning alerts (#240) 2026-04-03 14:52:35 +05:30
KRATOS
f3a984dde1 fix(security-review): Handle null shell output (#231)
Normalize shell command stdout and stderr before the prompt-shell path and shared tool-result mappers use string operations. This prevents /security-review from crashing when a shell tool returns null output fields and adds regression coverage for both direct mapper calls and prompt generation.

Fixes #165

Co-authored-by: Claude <noreply@anthropic.com>
2026-04-03 10:14:28 +02:00
Brendan
72c6e97094 fix: route ask-user-question footer actions through useInput (#229) 2026-04-03 10:14:17 +02:00
Preetham
f3ab727ec2 Added LM Studio provider setup guide (#227)
* ## PR: Add LM Studio Provider Support

### Summary
Adds comprehensive LM Studio integration to openclaude, following the same pattern as the existing Ollama provider. LM Studio is a popular local LLM inference tool that exposes an OpenAI-compatible API.

### Changes (4 files, 672 insertions)

**New Files:**
- `lmstudio_provider.py` (377 lines) - Full provider implementation with:
  - Health check functions (`check_lmstudio_running`)
  - Model listing (`list_lmstudio_models`)
  - Chat completion (`lmstudio_chat`)
  - Streaming support (`lmstudio_chat_stream`)
  - Comprehensive docstring with setup instructions, troubleshooting, and model recommendations

- `test_lmstudio_provider.py` (227 lines) - Complete test suite with 12 passing tests covering:
  - API URL construction
  - Server health checks
  - Model listing
  - Chat completion functionality

**Modified Files:**
- `docs/quick-start-mac-linux.md` (+34 lines) - Added Option D: LM Studio with setup instructions and troubleshooting
- `docs/quick-start-windows.md` (+34 lines) - Added Option D: LM Studio with PowerShell syntax and troubleshooting

### Key Features
- No API key required (local inference)
- Default port: 1234 (LM Studio's standard)
- OpenAI-compatible API integration
- Consistent with existing provider patterns (Ollama, Atomic Chat)
- All tests passing (12/12)

### Usage
```bash
export CLAUDE_CODE_USE_OPENAI=1
export OPENAI_BASE_URL=http://localhost:1234/v1
export OPENAI_MODEL=your-model-name
openclaude
```

* made the PR documentation-only for LM Studio

* fixed the docs for recent LM Studio UI changes
2026-04-03 12:45:57 +08:00
Vasanth T
29edece72f docs: refresh repository README (#226) 2026-04-03 11:39:26 +08:00
Vasanth T
6181050811 chore: patch dependabot vulnerabilities (#225) 2026-04-03 11:34:09 +08:00
Adam Solomon
0fd0026a76 feat: (Extension of #175) added cross-platform system-wide environment variable setup guide for all providers (#185)
* added instructions to .env.example to allow openclaude to be used system-wide

* added suggested .env.example changes

Added the .env.example changes suggested earlier in the PR thread.
2026-04-03 11:27:14 +08:00
KRATOS
6919d774f2 fix: custom OPENAI_BASE_URL always wins over Codex model alias detection (#222)
* feat: add --provider CLI flag for multi-provider support

Adds a --provider flag that maps friendly provider names to the
environment variables the codebase uses for provider detection.
No more manual env-var configuration — users can now simply run:

  openclaude --provider openai --model gpt-4o
  openclaude --provider gemini --model gemini-2.0-flash
  openclaude --provider ollama --model llama3.2
  openclaude --provider bedrock
  openclaude --provider vertex

Implementation details:
- providerFlag.ts: core logic — maps provider names to env vars,
  uses ??= so explicit env vars always win over the flag defaults
- providerFlag.test.ts: 18 tests covering all 7 providers,
  error messages, model passthrough, and env-var precedence
- cli.tsx: early fast-path (mirrors --bare pattern) — sets env
  vars before Commander option-building and module constants run
- main.tsx: adds --provider to Commander option chain for --help

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>

* fix: custom OPENAI_BASE_URL always wins over Codex model alias detection

When OPENAI_MODEL=gpt-5.4 (or gpt-5.4-mini) and a custom OPENAI_BASE_URL
is set (Azure, OpenRouter, etc), the transport was incorrectly forced to
codex_responses because gpt-5.4 is in CODEX_ALIAS_MODELS. This caused
requests to be sent with Codex auth instead of the user's API key,
resulting in 401 Unauthorized errors.

Fix: only use codex_responses when the base URL is explicitly the Codex
endpoint, OR when no custom base URL is set and the model is a Codex
alias. An explicit OPENAI_BASE_URL always takes priority over model-name
based Codex detection.
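
A sketch of that precedence rule (the endpoint constant and alias set are assumptions, not copied from the diff):

```ts
// Hypothetical sketch of the routing precedence described above.
const CODEX_ENDPOINT = 'https://chatgpt.com/backend-api/codex';
const CODEX_ALIAS_MODELS = new Set(['gpt-5.4', 'gpt-5.4-mini']);

function useCodexTransport(model: string, customBaseUrl?: string): boolean {
  if (customBaseUrl === CODEX_ENDPOINT) return true; // explicitly the Codex endpoint
  if (customBaseUrl) return false;                   // a custom base URL always wins
  return CODEX_ALIAS_MODELS.has(model);              // alias detection only as fallback
}
```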

Verified locally: gpt-5.4 via OpenRouter now correctly shows
Provider=OpenRouter, Endpoint=https://openrouter.ai/api/v1 instead of
routing to chatgpt.com/backend-api/codex.

Fixes #200, #203

---------

Co-authored-by: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-04-03 11:11:10 +08:00
Vasanth T
aa69e85795 fix: correct prompt identity branding (#224) 2026-04-03 11:06:26 +08:00
Kevin Codex
66bbb75836 Merge pull request #221 from gnanam1990/fix/keyboard-freeze-mcp-notifications
fix: prevent keyboard freeze when MCP notification effects fire
2026-04-03 10:27:11 +08:00
gnanam1990
2c6ec0119e fix: prevent keyboard freeze when MCP notification effects fire
React 19 requires `supportsMicrotasks: true` in the reconciler host
config so it can flush state updates from passive effects via
queueMicrotask. Without this, state updates triggered inside
useMcpConnectivityStatus were silently dropped, corrupting React's
internal executionContext and causing all keyboard input to freeze
after the "N MCP server(s) need auth" notification appeared.
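
A minimal sketch of the host-config change, assuming a standard react-reconciler setup:

```ts
// Hypothetical sketch of the react-reconciler host-config fields involved.
const hostConfig = {
  // ...the existing host-config methods stay unchanged...
  supportsMicrotasks: true, // lets React 19 flush passive-effect state updates
  scheduleMicrotask:
    typeof queueMicrotask === 'function'
      ? queueMicrotask
      : (cb: () => void) => void Promise.resolve().then(cb),
};
```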

Root cause (three-part fix):

1. reconciler.ts: declare supportsMicrotasks + scheduleMicrotask so
   React 19 schedules passive-effect flushes correctly.

2. useMcpConnectivityStatus.tsx: wrap the MCP auth notification effect
   in try/catch so any unexpected throw does not propagate into
   flushPassiveEffects and permanently corrupt executionContext.

3. notifications.tsx: wrap addNotification, removeNotification, and
   processQueue in try/catch for the same reason — these are called
   from 12+ notification hooks across passive effects.

Also fixes a pre-existing test isolation bug in context.test.ts where
assigning `undefined` to process.env produced the string "undefined",
polluting the env for subsequent test files.

Resolves: #169, #205, #77
2026-04-03 07:41:53 +05:30
Kevin Codex
74a25d01a6 Merge pull request #206 from alamnahin/feat/ollama-image-passthrough
feat(ollama): pass Anthropic base64 image blocks to Ollama images payload
2026-04-03 10:06:08 +08:00
Kevin Codex
7cf4c88ab8 docs: add security policy 2026-04-03 09:40:17 +08:00
Kevin Codex
f68b9aa57d Create SECURITY.md 2026-04-03 09:17:21 +08:00
Kevin Codex
20d1ee8427 Merge pull request #207 from alamnahin/feat/router-large-request-modeling
fix(router): use large message size when selecting models
2026-04-03 08:58:29 +08:00
Kevin Codex
089a42fc07 Merge pull request #211 from joetam/fix-image-paste-stubs
fix linux clipboard image paste for jpeg/gif/webp
2026-04-03 08:55:50 +08:00
jmt
f5b20fc517 fix: make clipboard images pasteable in OpenClaude
Images in the clipboard could fail to become pasted image attachments in OpenClaude. User-facing symptom: paste would detect that an image existed, but nothing would appear in the prompt, and bundled builds could also fail while converting BMP clipboard images into a format OpenClaude can send to the model.

Linux clipboard image paste had drifted between detection and extraction. checkImage accepted png/jpeg/jpg/gif/webp/bmp, but saveImage only tried image/png and image/bmp. When the clipboard advertised a JPEG, GIF, or WebP image, OpenClaude concluded that an image was present and then failed to write the temp screenshot file, so the paste path returned null and nothing was inserted into the prompt.

Bundled OpenClaude builds had a second failure mode. The build replaces image-processor-napi and sharp with explicit stub modules in bundled mode. getImageProcessor() treated those stubs as real processors, so BMP clipboard images reached sharp(imageBuffer).png() and then failed before they could be converted into a pasteable PNG for OpenClaude.

Keep the Linux clipboard commands generated from one MIME type list and reject __stub-marked image processors up front instead of failing in the middle of image paste.
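
A sketch of the single-list approach, assuming a Wayland wl-paste path (the command strings are illustrative, not the actual implementation):

```ts
// Hypothetical sketch: one MIME list drives both detection and extraction,
// so checkImage and saveImage cannot drift apart again.
const CLIPBOARD_IMAGE_MIMES = ['image/png', 'image/jpeg', 'image/gif', 'image/webp', 'image/bmp'];

const detectCommand = `wl-paste --list-types | grep -E '${CLIPBOARD_IMAGE_MIMES.join('|')}'`;

function extractCommand(mime: string, outPath: string): string {
  // extraction uses the same list as detection
  if (!CLIPBOARD_IMAGE_MIMES.includes(mime)) throw new Error(`unsupported clipboard type: ${mime}`);
  return `wl-paste --type ${mime} > ${outPath}`;
}
```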
2026-04-02 15:51:49 -07:00
Md.Nahin Alam
184ec250fd test(router): scope FAKE_KEY via pytest monkeypatch fixture 2026-04-03 04:18:20 +06:00
Md.Nahin Alam
43deb49c2c fix(router): use large request size for model selection 2026-04-03 03:45:33 +06:00
Md.Nahin Alam
0e7a2446c7 feat(ollama): pass base64 image blocks through to Ollama payload 2026-04-03 03:29:00 +06:00
Kevin Codex
63ad0196d6 Merge pull request #172 from devNull-bootloader/main
Add OpenClaude VS Code extension with terminal UI and control center
2026-04-03 02:25:03 +08:00
Kevin Codex
32046e9b40 Merge pull request #191 from BrainSlugs83/security/pin-firecrawl-js-dependency
security: pin @mendable/firecrawl-js to exact version
2026-04-03 02:19:17 +08:00
Mikey
7bd7d0f54d security: pin @mendable/firecrawl-js to exact version
Pins @mendable/firecrawl-js from ^4.18.1 to 4.18.1, consistent with
the pinning policy established in #102.
2026-04-02 11:07:54 -07:00
Kevin Codex
cdf4bad95b Merge pull request #182 from wrenchpilot/development
Enhance local provider URL validation for private IP addresses
2026-04-03 01:56:20 +08:00
Urvish Lanje
4158214895 Merge pull request #3 from devNull-bootloader/feat/initial-vscode-extension
fix: address review feedback for launcher behavior and links
2026-04-02 19:53:21 +02:00
Kevin Codex
a6ed57d0f4 Merge pull request #161 from auriti/fix/block-update-for-3p-providers
fix: block update command for 3P providers, align thinking block handling
2026-04-03 01:52:54 +08:00
James Shawn Carnley
7b68eb1acb Enhance local provider URL detection for IPv6 and loopback ranges 2026-04-02 13:46:10 -04:00
Kevin Codex
84950642ae Merge pull request #168 from firecrawl/add-firecrawl
feat: add Firecrawl backend for WebSearch and WebFetch
2026-04-03 01:43:25 +08:00
Kevin Codex
a287597273 Merge pull request #162 from auriti/fix/provider-aware-error-messages
fix: provider-aware error messages and skip Anthropic key approval for 3P
2026-04-03 01:42:15 +08:00
Kevin Codex
1cd4164062 Merge pull request #159 from Ghoul07-bit/main
Android Termux installation guide
2026-04-03 01:17:17 +08:00
Kevin Codex
47c53a18e8 Merge pull request #174 from gnanam1990/feat/provider-aware-rate-limit
feat: provider-aware rate limit reset delay for OpenAI/GitHub/Codex providers
2026-04-03 01:16:58 +08:00
Urvish Lanje
cf90457428 fix: address review feedback for launcher behavior and links
- point all repository links to Gitlawb/openclaude
- make shim opt-in by default to preserve Anthropic-first flow
- add command availability check with first-run install guidance
- render runtime and shim state dynamically in control center
- make command palette shortcut hint platform-aware
2026-04-02 17:14:42 +00:00
James Shawn Carnley
5e77d82620 Merge branch 'Gitlawb:main' into development 2026-04-02 12:55:59 -04:00
Kevin Codex
11d9660a80 Merge pull request #157 from erdemozyol/fix/status-tab-highlight
fix: refresh tab highlight on horizontal navigation
2026-04-03 00:55:33 +08:00
Kevin Codex
1a57335d74 Merge pull request #160 from auriti/fix/shim-ids-azure-safety
fix: crypto.randomUUID for IDs, Azure Foundry detection, safety filter visibility
2026-04-03 00:54:49 +08:00
Kevin Codex
7bc903d875 Merge pull request #156 from auriti/fix/model-lookup-and-llama-context
fix: deterministic prefix matching and correct Llama 3.x context windows
2026-04-03 00:53:42 +08:00
Kevin Codex
4c22de2585 Merge pull request #179 from gnanam1990/fix/gemini-routing
fix: route CLAUDE_CODE_USE_GEMINI through OpenAI-compatible shim
2026-04-03 00:50:21 +08:00
Leonardo Grigorio
63daf33b48 docs: add Firecrawl section to README 2026-04-02 13:47:59 -03:00
James Shawn Carnley
2ee43d7ee8 Merge branch 'Gitlawb:main' into development 2026-04-02 12:43:24 -04:00
Kevin Codex
3581d3f83f Merge pull request #142 from skfallin/fix/anthropic-schema-format
Strip incompatible JSON Schema keywords from tool schemas
2026-04-03 00:26:45 +08:00
James Shawn Carnley
4a4394bb65 feat: enhance local provider URL validation to include private IPv4 and IPv6 addresses 2026-04-02 12:26:23 -04:00
gnanam1990
b4aa27183d fix: route CLAUDE_CODE_USE_GEMINI through OpenAI-compatible shim
The Gemini provider uses Google's OpenAI-compatible endpoint
(generativelanguage.googleapis.com/v1beta/openai) but the client
routing condition in client.ts only checked CLAUDE_CODE_USE_OPENAI
and CLAUDE_CODE_USE_GITHUB — CLAUDE_CODE_USE_GEMINI was missing.

This caused every Gemini request to fall through to the Anthropic
client path. Since ANTHROPIC_API_KEY is not set when using Gemini,
the Anthropic SDK threw:

  "Could not resolve authentication method. Expected either apiKey
   or authToken to be set."

Fix: add CLAUDE_CODE_USE_GEMINI to the OpenAI shim routing condition
so Gemini requests correctly reach createOpenAIShimClient(), which
maps GEMINI_API_KEY → OPENAI_API_KEY and sets OPENAI_BASE_URL to
the Google endpoint.
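
A sketch of the widened routing condition (helper signatures assumed for self-containment):

```ts
// Hypothetical sketch: the client.ts routing condition, with Gemini added.
function shouldUseOpenAIShim(env: NodeJS.ProcessEnv, isEnvTruthy: (v?: string) => boolean): boolean {
  return (
    isEnvTruthy(env.CLAUDE_CODE_USE_OPENAI) ||
    isEnvTruthy(env.CLAUDE_CODE_USE_GITHUB) ||
    isEnvTruthy(env.CLAUDE_CODE_USE_GEMINI) // previously missing, causing the Anthropic fallthrough
  );
}
```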

Closes #176

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-04-02 21:51:26 +05:30
Kevin Codex
96b9e0235b Merge pull request #177 from gnanam1990/feat/env-example
feat: add .env.example with all provider configurations
2026-04-03 00:16:38 +08:00
gnanam1990
7095abb837 feat: add .env.example with all provider configurations
New contributors had to hunt through README and source files to find
required environment variables. This adds a single reference file at
repo root covering all supported providers with placeholder values,
inline comments, and sensible defaults.

Providers covered:
- Anthropic (default)
- OpenAI
- Google Gemini
- GitHub Models
- Ollama (local)
- AWS Bedrock
- Google Vertex AI

Also includes optional tuning vars: CLAUDE_CODE_MAX_RETRIES,
CLAUDE_CODE_UNATTENDED_RETRY, OPENCLAUDE_ENABLE_EXTENDED_KEYS,
OPENCLAUDE_DISABLE_CO_AUTHORED_BY, API_TIMEOUT_MS, CLAUDE_DEBUG.

Updated .gitignore to add !.env.example exception so the template
is not suppressed by the existing .env.* rule.

Closes #175

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-04-02 21:43:49 +05:30
gnanam1990
8501786852 feat: provider-aware rate limit reset delay
Previously getRateLimitResetDelayMs only read the Anthropic-specific
'anthropic-ratelimit-unified-reset' header (Unix timestamp), returning
null for every other provider. This meant OpenAI, GitHub, and Codex
users in persistent retry mode (CLAUDE_CODE_UNATTENDED_RETRY=1) always
fell back to dumb exponential backoff even when the server included an
exact reset time in the response headers.

This change makes the function provider-aware:

- firstParty (Anthropic): existing behaviour preserved — reads
  'anthropic-ratelimit-unified-reset' Unix timestamp
- openai / codex / github: reads 'x-ratelimit-reset-requests' and
  'x-ratelimit-reset-tokens' (OpenAI relative duration strings like
  "1s", "6m0s", "1h30m0s"), picks the larger of the two so retries
  don't fire before both token and request limits have reset
- bedrock / vertex / foundry / gemini: returns null (no standard
  reset header for these providers)

Adds parseOpenAIDuration() as an exported helper to convert OpenAI's
duration format into milliseconds.
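
A minimal sketch of how such a parser might look (not the actual implementation):

```ts
// Hypothetical sketch: parse OpenAI reset durations like "1s", "6m0s", "1h30m0s"
// into milliseconds; returns null for unrecognized input.
export function parseOpenAIDuration(s: string): number | null {
  const m = s.match(/^(?:(\d+)h)?(?:(\d+)m)?(?:(\d+(?:\.\d+)?)s)?$/);
  if (!m || (!m[1] && !m[2] && !m[3])) return null;
  const hours = Number(m[1] ?? 0);
  const mins = Number(m[2] ?? 0);
  const secs = Number(m[3] ?? 0);
  return ((hours * 60 + mins) * 60 + secs) * 1000;
}
```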

16 new tests covering all provider paths and edge cases.

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-04-02 21:30:05 +05:30
skfallin
37d4c21739 fix: make schema sanitization provider-specific 2026-04-02 17:57:42 +02:00
Urvish Lanje
a43023705b Merge pull request #2 from devNull-bootloader/feat/initial-vscode-extension
Initial VS Code Extension for OpenClaude
2026-04-02 17:54:40 +02:00
Kevin Codex
73db9b5fd3 Merge pull request #163 from erdemozyol/feat/codex-status-usage
Add Codex usage to /status
2026-04-02 23:54:07 +08:00
Urvish Lanje
2b5cf9f0c1 feat: initial VS Code extension for OpenClaude
Introduce OpenClaude as a first-class VS Code extension with:

- Built-in Control Center sidebar for seamless workflow integration
- Terminal-first design with authentic monospace UI and ASCII styling
- Quick-launch buttons for OpenClaude terminal, repository access, and command palette
- Status display showing runtime and OpenAI shim configuration
- Dark theme optimized for focus and extended development sessions
- Proper extension manifest with activation events and contribution points
- Debug configuration for local development

This extension provides developers with direct access to OpenClaude
without leaving VS Code, enabling a tighter integration with the editor.
2026-04-02 15:50:56 +00:00
Kevin Codex
4237a72b92 Merge pull request #170 from gnanam1990/fix/security-issue-42
security: fix 5 findings from issue #42 — env leak, ant gate, depth DoS, URL parse, CA cert
2026-04-02 23:38:53 +08:00
gnanam1990
942d09ca9c security: fix 5 findings from issue #42 — env leak, ant gate, depth DoS, URL parse, CA cert
Finding 1 [CRITICAL] — sessionRunner leaks full process.env to child
Extract buildChildEnv() with an explicit allowlist of safe OS/runtime vars.
Child process no longer inherits ANTHROPIC_API_KEY, OPENAI_API_KEY, DB
credentials, or any other secret present in the parent shell environment.
Only CLAUDE_CODE_* bridge vars, PATH, HOME, and standard OS env are passed.

Finding 2 [HIGH] — USER_TYPE=ant activatable by external users
Add isAntEmployee() -> false constant in src/utils/buildConfig.ts.
Replace all three direct process.env.USER_TYPE === 'ant' checks in
setup.ts and onChangeAppState.ts so no external user can activate
Anthropic-internal code paths (commit attribution, system prompt clearing,
dangerously-skip-permissions bypass) by setting USER_TYPE in their shell.

Finding 3 [HIGH] — memoryScan.ts unlimited directory walk
Add MAX_DEPTH=3 guard on readdir({ recursive: true }) results.
Deep or symlink-looped memory directories no longer cause an unbounded
blocking walk before the MAX_MEMORY_FILES cap takes effect.

Finding 5 [HIGH] — buildSdkUrl uses string.includes for protocol detection
Replace apiBaseUrl.includes('localhost') with new URL(apiBaseUrl).hostname
comparison so a remote URL containing 'localhost' in its path no longer
incorrectly gets ws:// (unencrypted) instead of wss://.
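
A sketch of the hostname-based check (function name assumed):

```ts
// Hypothetical sketch: choose ws:// vs wss:// from the parsed hostname,
// never from a substring of the raw URL.
function wsProtocolFor(apiBaseUrl: string): 'ws:' | 'wss:' {
  const { hostname } = new URL(apiBaseUrl);
  const local = hostname === 'localhost' || hostname === '127.0.0.1' || hostname === '[::1]';
  // 'https://evil.example/localhost' parses to hostname 'evil.example' and gets wss://
  return local ? 'ws:' : 'wss:';
}
```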

Finding 6 [HIGH] — upstream proxy writes unvalidated CA cert to disk
Add isValidPemContent() validation before writeFile in the CA cert download
path. A compromised proxy sending non-PEM data (HTML, JSON, scripts) is now
rejected before it can be appended to the system CA bundle.

Each fix is covered by new unit tests (25 tests across 5 new test files).
All 52 tests pass. Build verified clean on v0.1.7.

Fixes #42

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-04-02 21:04:10 +05:30
Leonardo Grigorio
ac4efae870 feat: add Firecrawl backend for WebSearch and WebFetch tools
WebSearch is currently disabled for all non-Anthropic providers (OpenAI
shim, DeepSeek, Ollama, etc.) because those providers have no native
search backend. This adds Firecrawl as a fallback that activates when
FIRECRAWL_API_KEY is set, unlocking web search for every model
openclaude supports.

WebFetch uses basic HTTP + Turndown for HTML-to-markdown conversion,
which fails silently on JS-rendered SPAs and bot-protected pages.
Firecrawl scrape replaces the fetch layer when FIRECRAWL_API_KEY is set,
returning clean markdown that handles dynamic content correctly.

Changes:
- WebSearchTool: add runFirecrawlSearch() using @mendable/firecrawl-js,
  respects allowed_domains (post-filter) and blocked_domains (-site: operators),
  includes result snippets alongside links. shouldUseFirecrawl() ensures
  firstParty/Vertex/Foundry/Codex providers keep their native backends (see
  the sketch after this list).
- WebFetchTool: add scrapeWithFirecrawl(), drops into the existing
  applyPromptToMarkdown() pipeline so prompt processing is unchanged.
- Remove "Web search is only available in the US" restriction from
  prompt when Firecrawl is active (it works globally).
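
A minimal sketch of the shouldUseFirecrawl() gate named above (provider labels taken from the commit text, body assumed):

```ts
// Hypothetical sketch: Firecrawl only activates when a key is present and
// the provider has no native search backend.
const NATIVE_SEARCH_PROVIDERS = new Set(['firstParty', 'vertex', 'foundry', 'codex']);

function shouldUseFirecrawl(provider: string): boolean {
  return Boolean(process.env.FIRECRAWL_API_KEY) && !NATIVE_SEARCH_PROVIDERS.has(provider);
}
```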
2026-04-02 12:18:20 -03:00
Urvish Lanje
4c6adf4774 Merge pull request #1 from devNull-bootloader/copilot/create-vscode-extension-openclaude
Add sleek terminal-style VS Code extension for OpenClaude
2026-04-02 17:13:02 +02:00
copilot-swe-agent[bot]
ff124dcdfb fix: use cryptographic nonce for extension webview CSP
Agent-Logs-Url: https://github.com/devNull-bootloader/openclaude/sessions/30a4694d-1125-4280-a593-74b5e3da601e

Co-authored-by: devNull-bootloader <189463177+devNull-bootloader@users.noreply.github.com>
2026-04-02 15:08:22 +00:00
copilot-swe-agent[bot]
8e8671fc51 feat: add visual OpenClaude control center UI in VS Code extension
Agent-Logs-Url: https://github.com/devNull-bootloader/openclaude/sessions/30a4694d-1125-4280-a593-74b5e3da601e

Co-authored-by: devNull-bootloader <189463177+devNull-bootloader@users.noreply.github.com>
2026-04-02 15:07:20 +00:00
Leonardo Grigorio
4c1ba35aa1 Revert "docs: add MCP servers guide with Firecrawl as featured example"
This reverts commit 5baee3b491.
2026-04-02 12:02:42 -03:00
Leonardo Grigorio
5baee3b491 docs: add MCP servers guide with Firecrawl as featured example
Adds docs/mcp-servers.md — the first documentation on how to configure
MCP servers in OpenClaude. Covers .mcp.json setup, the Firecrawl MCP
server for web scraping and search, available tools, and a pattern for
adding multiple servers.
2026-04-02 12:01:54 -03:00
copilot-swe-agent[bot]
43ba2cbfae feat: add VS Code extension with terminal launcher and custom theme
Agent-Logs-Url: https://github.com/devNull-bootloader/openclaude/sessions/5c0e9230-42be-4cce-a5d6-e85d665ea72a

Co-authored-by: devNull-bootloader <189463177+devNull-bootloader@users.noreply.github.com>
2026-04-02 14:58:36 +00:00
erdemozyol
5c25ac4e9a Add Codex usage to /status 2026-04-02 17:37:07 +03:00
erdemozyol
84ac06bac9 fix: show display version in status 2026-04-02 17:28:34 +03:00
Juan Camilo
c66b859342 fix: provider-aware error messages and skip Anthropic key approval for 3P
1. errors.ts: Add getCustomOffSwitchMessage() that returns a
   provider-neutral message for 3P users instead of the hardcoded
   "Opus is experiencing high load, please use /model to switch to
   Sonnet" which is misleading for OpenAI/Gemini/Ollama users.
   The original constant is preserved for backward-compatible string
   matching in error handlers.

2. Onboarding.tsx: Skip the "approve API key" step when a 3P provider
   is active. Previously, having ANTHROPIC_API_KEY in the environment
   (e.g., from a previous Anthropic setup) triggered an irrelevant
   Anthropic key approval UI even when using Gemini or OpenAI.
2026-04-02 16:23:12 +02:00
Juan Camilo
1709f5c098 fix: block update command for 3P providers, align thinking block handling
1. cli/update.ts: Block the update command for third-party providers.
   The update mechanism downloads from Anthropic's GCS bucket, which
   would silently replace the OpenClaude build (with the OpenAI shim)
   with the upstream Claude Code binary (without it). Now shows an
   actionable message directing users to rebuild from source.

2. codexShim.ts: Filter thinking blocks from assistant history, matching
   the openaiShim behavior. Without this, thinking blocks were included
   as plain text in assistant messages for the Codex transport but
   excluded for the OpenAI transport — causing inconsistent history
   when switching providers mid-session.
2026-04-02 16:18:10 +02:00
Juan Camilo
5d6443799a fix: crypto.randomUUID for IDs, Azure Foundry detection, safety filter visibility
Three targeted fixes:

1. Replace Math.random() with crypto.randomUUID() for message and tool
   call IDs in both openaiShim.ts and codexShim.ts. Math.random() is
   not cryptographically secure and is predictable in seeded environments
   (see the sketch below).

2. Anchor Azure endpoint detection to parsed hostname instead of raw
   URL regex. Adds support for Azure AI Foundry (services.ai.azure.com)
   alongside existing cognitiveservices and openai Azure endpoints.
   Prevents SSRF-style bypass via path segments.

3. Surface content safety filter blocks to the user. When Gemini or
   Azure returns finish_reason 'content_filter' or 'safety', emit a
   visible text block '[Content blocked by provider safety filter]'
   instead of silently returning empty/truncated content with
   stop_reason 'end_turn'. Applied to both streaming and non-streaming.
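
A sketch of fix 1 (ID prefixes assumed):

```ts
import { randomUUID } from 'node:crypto';

// Hypothetical sketch: cryptographically random message and tool-call IDs.
const messageId = `msg_${randomUUID()}`;
const toolCallId = `call_${randomUUID()}`; // was: derived from Math.random()
```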
2026-04-02 16:14:35 +02:00
Ghoul07-bit
3ef09f911e Create ANDROID_INSTALL.md
Installation Guide to run OpenClaude on Android
2026-04-02 15:10:20 +01:00
Kevin Codex
3353101e83 chore: release 0.1.7 2026-04-02 22:07:28 +08:00
erdemozyol
6f4aa02123 fix: refresh tab highlight on horizontal navigation 2026-04-02 16:58:45 +03:00
Juan Camilo
b65921e8c3 fix: deterministic prefix matching and correct Llama 3.x context windows
Two fixes in openaiContextWindows.ts:

1. Sort lookup keys by length descending in lookupByModel() so the most
   specific prefix always wins (sketched below). Without this, 'gpt-4-turbo-preview'
   could match 'gpt-4' (8k) instead of 'gpt-4-turbo' (128k) depending
   on V8's object key iteration order.

2. Update Llama 3.1/3.2/3.3 context windows from 8,192 to 128,000.
   These models support 128k context natively (Meta official specs).
   The previous 8k value was Ollama's default num_ctx, not the model's
   actual capability, causing premature auto-compact warnings.
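
A minimal sketch of fix 1's ordering (table shape assumed):

```ts
// Hypothetical sketch: the longest matching prefix always wins,
// independent of object key iteration order.
function lookupByModel(model: string, table: Record<string, number>): number | undefined {
  const keys = Object.keys(table).sort((a, b) => b.length - a.length);
  const hit = keys.find((k) => model.startsWith(k));
  return hit === undefined ? undefined : table[hit];
}

// lookupByModel('gpt-4-turbo-preview', { 'gpt-4': 8_192, 'gpt-4-turbo': 128_000 }) === 128_000
```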
2026-04-02 15:50:52 +02:00
skfallin
0fe8551d33 Merge branch 'main' into fix/anthropic-schema-format 2026-04-02 15:50:16 +02:00
Kevin Codex
145c99b297 Merge pull request #151 from auriti/fix/gemini-auth-dummy-key-bypass
fix: prevent ANTHROPIC_API_KEY from interfering with Gemini provider auth
2026-04-02 21:43:04 +08:00
skfallin
6319df02f0 Merge upstream/main into fix/anthropic-schema-format 2026-04-02 15:42:28 +02:00
Kevin Codex
3c8c63a78e Merge pull request #153 from auriti/fix/report-openai-cached-tokens
fix: report cached tokens from OpenAI prompt_tokens_details
2026-04-02 21:41:47 +08:00
Kevin Codex
35676be381 Merge pull request #143 from sooth/codex/repl-memory-and-schema-hardening
[codex] fix: trim persisted tool results and sanitize MCP schemas
2026-04-02 21:41:30 +08:00
Juan Camilo
d430ddd568 fix: prevent ANTHROPIC_API_KEY from interfering with Gemini provider auth
Two fixes for issue #133 where setting ANTHROPIC_API_KEY=dummy alongside
CLAUDE_CODE_USE_GEMINI=1 causes "Invalid API key" errors:

1. auth.ts: In the CI branch of getAnthropicApiKeyWithSource(), the
   ANTHROPIC_API_KEY value was returned without checking isUsing3PServices().
   A dummy key leaked into the Anthropic key resolution pipeline even when
   Gemini was the active provider. Now guards with isUsing3PServices().

2. errors.ts: The x-api-key error handler surfaced "Invalid API key" for
   any provider. Added getAPIProvider() === 'firstParty' guard so 3P users
   see the real underlying error instead of a misleading auth message.
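
A sketch of the fix-2 guard (helper names assumed; only getAPIProvider() appears in the commit text):

```ts
// Hypothetical sketch: only firstParty users see the Anthropic-specific auth hint.
function formatAuthError(err: Error, getAPIProvider: () => string): string {
  return getAPIProvider() === 'firstParty'
    ? 'Invalid API key'
    : err.message; // 3P users see the real underlying error
}
```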

Note: The cli.tsx Gemini validation fix (originally part of this PR) was
independently implemented in PR #121 and is already on main.
2026-04-02 15:40:07 +02:00
Kevin Codex
1514220ee7 Merge pull request #144 from Meetpatel006/main
feat: add Codex/OpenAI effort picker, stabilize model/suggestion navigation, and display the current model with effort
2026-04-02 21:25:48 +08:00
Kevin Codex
680cd69d8a Merge pull request #150 from Vasanthdev2004/slash-highlight-fix
fix: make selected slash suggestion visibly highlighted
2026-04-02 21:24:04 +08:00
Meet Patel
0a5849e4d2 Merge branch 'main' of https://github.com/Meetpatel006/openclaude
# Conflicts:
#	src/utils/status.tsx
2026-04-02 18:53:30 +05:30
Juan Camilo
708a0a18fe fix: report cached tokens from OpenAI prompt_tokens_details
OpenAI returns cached token counts in usage.prompt_tokens_details.cached_tokens
but the shim hardcoded cache_read_input_tokens to 0. This made prompt
caching invisible to the cost tracker and session summary even when
OpenAI's automatic caching was actively reducing costs.
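
A minimal sketch of the usage mapping (field names follow the commit text; the function shape is assumed):

```ts
// Hypothetical sketch of the mapping in convertChunkUsage().
interface OpenAIUsage {
  prompt_tokens: number;
  completion_tokens: number;
  prompt_tokens_details?: { cached_tokens?: number };
}

function convertUsage(u: OpenAIUsage) {
  return {
    input_tokens: u.prompt_tokens,
    output_tokens: u.completion_tokens,
    cache_read_input_tokens: u.prompt_tokens_details?.cached_tokens ?? 0, // was hardcoded 0
    cache_creation_input_tokens: 0, // OpenAI auto-caching has no creation cost
  };
}
```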

Changes:
- Extend OpenAIStreamChunk usage interface with prompt_tokens_details
- Map cached_tokens to cache_read_input_tokens in convertChunkUsage()
- Same fix in _convertNonStreamingResponse() for non-streaming path
- cache_creation_input_tokens remains 0 (OpenAI auto-caching has no
  creation cost — it is free and automatic)
2026-04-02 15:21:37 +02:00
sooth
5c4469fe81 fix: trim persisted tool results and sanitize MCP schemas 2026-04-02 09:20:40 -04:00
Meet Patel
8f50f17674 feat: Refactor model handling & reasoning effort across navigation, typeahead, OpenAI/Codex providers, API shim, configs, and UI (adds EffortPicker, new mappings/options, unique suggestion IDs, effort utilities; removes deprecated aliases; defaults Codex to gpt-5.4; improves selection logic and status display) 2026-04-02 18:49:07 +05:30
Kevin Codex
9f48bb4431 Merge pull request #135 from auriti/fix/shim-reliability-and-protocol-compliance
fix: shim reliability and protocol compliance overhaul
2026-04-02 21:15:44 +08:00
Vasanthdev2004
4d0886a4fe fix: keep slash highlight in sync in fullscreen 2026-04-02 18:42:56 +05:30
Kevin Codex
6e311f96a3 Merge pull request #149 from gnanam1990/docs/non-technical-setup-guide
docs: split beginner and advanced setup guides
2026-04-02 21:04:27 +08:00
Kevin Codex
0a1ac92341 Merge pull request #138 from erdemozyol/fix/codex-websearch-and-agent-fallback
fix: support Codex web tools and non-git agents
2026-04-02 21:02:43 +08:00
Kevin Codex
1ee2ce931a Merge pull request #117 from auriti/fix/context-isenvtruthy-mismatch
fix: use isEnvTruthy() for provider detection in context window lookup
2026-04-02 21:01:15 +08:00
Kevin Codex
bc2a4bcdd5 Merge pull request #121 from Vasanthdev2004/provider-setup-wizard
feat: add guided /provider setup for saved profiles
2026-04-02 21:00:41 +08:00
Vasanthdev2004
118b0793e0 fix: move slash suggestion highlight with selection 2026-04-02 18:25:52 +05:30
Vasanthdev2004
5ccda35941 fix: highlight selected slash suggestion 2026-04-02 18:18:48 +05:30
Juan Camilo
f385740bd6 fix: use isEnvTruthy() for provider detection in context window lookup
Replace raw === '1' || === 'true' comparisons with isEnvTruthy() in
context.ts for consistency with getAPIProvider() in providers.ts.
This also covers the newly added CLAUDE_CODE_USE_GITHUB provider.

Add native Gemini model entries (without google/ prefix) to both
context window and max output token tables. Corrects gemini-2.5-pro
and gemini-2.5-flash max output tokens to 65,536 (was 8,192/32,768).
2026-04-02 14:43:03 +02:00
gnanam1990
ef251fe3f5 Merge upstream/main into docs/non-technical-setup-guide 2026-04-02 18:12:28 +05:30
Juan Camilo
f4818dc213 fix: shim reliability and protocol compliance overhaul
Addresses the most critical remaining issues in the provider shim layer,
building on top of #124 (recursive schema normalization + try/finally).

openaiShim.ts:
- Throw APIError via SDK factory instead of plain Error — enables retry
  on 429/503 (was completely broken: zero retries for all 3P providers)
- Guard stop_reason !== null before emitting usage-only message_delta
  (Azure/Groq send usage before finish_reason)
- Fix assistant content: join text parts instead of an invalid as-string cast
  (Mistral rejects array content on the assistant role); see the sketch below
- Expose real HTTP Response in withResponse() for header inspection
- Skip stream_options for local providers (Ollama < 0.5 compatibility)

codexShim.ts:
- Throw APIError at all 4 throw sites (HTTP + 3 streaming errors)
- Add tool_choice 'none' mapping (was silently ignored)
- Forward is_error flag with Error: prefix (matching openaiShim)
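
A sketch of the assistant-content fix (type shape assumed):

```ts
// Hypothetical sketch: concatenate text parts instead of casting the
// content array to a string.
type Part = { type: 'text'; text: string } | { type: string };

function assistantContentToString(content: Part[]): string {
  return content
    .filter((p): p is { type: 'text'; text: string } => p.type === 'text')
    .map((p) => p.text)
    .join('');
}
```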
2026-04-02 14:41:40 +02:00
gnanam1990
aac326fa3f docs(setup): add beginner and advanced guides
Split the setup documentation into a simple beginner path and a separate advanced path. Add OS-specific quick starts for Windows and macOS/Linux so non-technical users can copy and paste the right commands without sorting through Bun and source-build instructions.
2026-04-02 18:09:04 +05:30
Vasanthdev2004
71a3f36e95 Merge origin/main into provider-setup-wizard 2026-04-02 18:03:44 +05:30
Meet Patel
23216ca01c feat: Refactor model handling & reasoning effort across navigation, typeahead, OpenAI/Codex providers, API shim, configs, and UI (adds EffortPicker, new mappings/options, unique suggestion IDs, effort utilities; removes deprecated aliases; defaults Codex to gpt-5.4; improves selection logic and status display) 2026-04-02 17:58:06 +05:30
Kevin Codex
3d72d9e5e2 Merge pull request #137 from gnanam1990/feat/mcp-doctor
feat(mcp): add doctor diagnostics command
2026-04-02 20:25:41 +08:00
Kevin Codex
4260f5bcd7 Merge pull request #123 from auriti/fix/assert-min-version-provider-guard
fix: skip assertMinVersion for third-party providers
2026-04-02 20:24:37 +08:00
Kevin Codex
49b9c043f5 Merge pull request #120 from auriti/fix/migration-provider-guard
fix: skip Anthropic model migration for third-party providers
2026-04-02 20:22:50 +08:00
Kevin Codex
a7ec88b1e5 Merge pull request #122 from auriti/fix/pin-github-actions-sha
security: pin GitHub Actions to immutable SHA digests
2026-04-02 20:21:26 +08:00
Kevin Codex
903a30916a Merge pull request #107 from rithulkamesh/main
feat: GitHub Models provider + interactive onboard (keychain-backed)
2026-04-02 20:14:51 +08:00
Kevin Codex
6b7c0e5339 Merge pull request #74 from Vect0rM/feature/atomic-chat-integration
feat: add support for Atomic Chat provider
2026-04-02 20:13:37 +08:00
skfallin
0c88dea247 Strip incompatible JSON Schema keywords from tool schemas 2026-04-02 13:50:47 +02:00
erdemozyol
cec3629017 fix: support codex web tools and non-git agents 2026-04-02 14:08:22 +03:00
Misha Skvortsov
7c09b1f01c docs: add Atomic Chat to README provider examples and launch profiles
Made-with: Cursor
2026-04-02 13:58:50 +03:00
Rithul Kamesh
0a42839475 fix(github): address PR feedback for onboarding flow
- Set competing provider flags to undefined in updateSettingsForSource to ensure clean GitHub boot
- Fix resolveProviderRequest to default to github:copilot when OPENAI_MODEL is unset
- Hydrate secure tokens and managed settings in system-check.ts to prevent false negatives
- Add models:read scope to GitHub device flow
2026-04-02 15:38:54 +05:30
Misha Skvortsov
64ba7fdb9a refactor: enhance Atomic Chat API URL handling
- Updated the `getAtomicChatApiBaseUrl` function to parse the base URL correctly and ensure the pathname is formatted without trailing version segments.
- Cleared search and hash components from the URL to standardize the output.

This change improves the robustness of the URL handling for the Atomic Chat provider.
2026-04-02 12:27:12 +03:00
gnanam1990
fb27164ddf fix(mcp): await failed transport cleanup on Windows
Wait for failed MCP transport cleanup before command exit so targeted live checks do not crash on Windows.

Co-Authored-By: Claude <noreply@anthropic.com>
2026-04-02 14:55:05 +05:30
gnanam1990
ad1f328672 feat(mcp): add doctor command
Add the MCP doctor subcommand with text and JSON output, config-only mode, and scope filtering so users can diagnose MCP issues from the CLI.

Co-Authored-By: Claude <noreply@anthropic.com>
2026-04-02 14:55:05 +05:30
gnanam1990
001f89f62c feat: add MCP doctor diagnostics service
Add the diagnostics core and report model for MCP health, scope, and config analysis. This creates the structured report used by both text and JSON doctor output.

Co-Authored-By: Claude <noreply@anthropic.com>
2026-04-02 14:55:04 +05:30
Kevin Codex
5cd95f4bb1 Merge pull request #116 from Aarondio/fix/tolerant-json-parser
fix(shim): implement tolerant bracket balancer for truncated tool JSON
2026-04-02 17:12:44 +08:00
Juan Camilo
6c4225f6f4 fix: skip assertMinVersion for third-party providers
The version kill-switch calls Anthropic's GrowthBook endpoint to
enforce a minimum version. This is currently safe for 3P users only
because isAnalyticsDisabled() returns true (disabling GrowthBook).
Adding an explicit provider guard makes this safety independent of the
analytics stub, preventing 3P users from being blocked by Anthropic's
version requirements in case of future upstream merges.
2026-04-02 11:09:20 +02:00
Juan Camilo
3ca6c299d6 security: pin GitHub Actions to immutable SHA digests
Pin all GitHub Actions to commit SHA instead of mutable version tags
to prevent supply chain attacks via tag poisoning. This is especially
important for third-party actions like oven-sh/setup-bun.
2026-04-02 11:09:19 +02:00
Juan Camilo
7a7437b309 fix: skip Anthropic model migration for third-party providers
Add provider guard to migrateSonnet1mToSonnet45() so it only runs for
firstParty (Anthropic) users. Without this, a 3P user with
model='sonnet[1m]' would have it rewritten to an Anthropic-specific
alias that is invalid for OpenAI/Gemini/Ollama providers.
2026-04-02 11:09:18 +02:00
Kevin Codex
c94f9e18c3 Merge pull request #124 from salmanrajz/fix/recursive-schema-normalization
fix: make normalizeSchemaForOpenAI recursive for nested objects
2026-04-02 17:03:37 +08:00
Kevin Codex
e16917614c Merge pull request #136 from rajrasane/fix/graceful-exit-ui-artifacts
fix: Enhance graceful shutdown handling
2026-04-02 17:03:04 +08:00
Kevin Codex
38d35e314f Merge pull request #108 from gnanam1990/fix/openrouter-gemini-model-id-103
docs: replace stale OpenRouter Gemini example
2026-04-02 16:44:06 +08:00
salmanrajz
14de9cf0fb refactor: address code review feedback
- Make getProviderLabel() switch exhaustive with explicit openai/gemini
  arms instead of falling through to env-var checks in default
- Add clarifying comment on additionalProperties override in schema
  normalization
2026-04-02 12:36:05 +04:00
Raj Rasane
7f969200fb Add exit reason types and improve graceful shutdown handling 2026-04-02 14:00:32 +05:30
salmanrajz
e494015e9a fix: wrap streaming reader in try/finally to release lock and prevent resource leaks
Partially addresses #112. The streaming reader in openaiStreamToAnthropic
had no error handling - if an error occurred during streaming, the reader
lock was never released. Wrapped the while loop in try/finally to ensure
reader.releaseLock() is always called.
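
A minimal sketch of the guard (function name assumed):

```ts
// Hypothetical sketch: the reader lock is released on every exit path.
async function drainStream(body: ReadableStream<Uint8Array>, onChunk: (c: Uint8Array) => void) {
  const reader = body.getReader();
  try {
    while (true) {
      const { done, value } = await reader.read();
      if (done) break;
      onChunk(value);
    }
  } finally {
    reader.releaseLock(); // runs even when read() or onChunk throws
  }
}
```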
2026-04-02 12:12:24 +04:00
salmanrajz
5b20fe783d fix: make CostThresholdDialog provider-aware instead of hardcoding Anthropic
Partially addresses #39. The cost threshold dialog hardcoded
'Anthropic API' in the title, which is misleading for users on
OpenAI, Gemini, Ollama, or other providers. Now detects the active
provider via getAPIProvider() and shows the correct label.
2026-04-02 12:00:07 +04:00
salmanrajz
6aec8416cc fix: make normalizeSchemaForOpenAI recursive for nested objects
Fixes #111. normalizeSchemaForOpenAI only processed the top-level
object schema, leaving nested objects untouched. OpenAI strict mode
rejects schemas where nested objects have properties not listed in
their required array, causing 400 errors on tools with nested params.

Now recurses into properties, items, and anyOf/oneOf/allOf combinators
(matching the pattern used by enforceStrictSchema in codexShim.ts).
Also adds additionalProperties: false to nested objects in strict mode.
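
A sketch of the recursion described above (names follow the commit text; the strict-mode required-list handling is an assumption):

```ts
// Hypothetical sketch: recurse into properties, items, and combinators.
function normalizeSchemaForOpenAI(schema: any): any {
  if (!schema || typeof schema !== 'object') return schema;
  const out = { ...schema };
  if (out.type === 'object' && out.properties) {
    out.required = Object.keys(out.properties); // strict mode: every property listed
    out.additionalProperties = false;
    out.properties = Object.fromEntries(
      Object.entries(out.properties).map(([k, v]) => [k, normalizeSchemaForOpenAI(v)])
    );
  }
  if (out.items) out.items = normalizeSchemaForOpenAI(out.items);
  for (const key of ['anyOf', 'oneOf', 'allOf'] as const) {
    if (Array.isArray(out[key])) out[key] = out[key].map(normalizeSchemaForOpenAI);
  }
  return out;
}
```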

Build verified passing.
2026-04-02 11:51:04 +04:00
Vasanthdev2004
08f0b6030e feat: add guided /provider setup 2026-04-02 13:13:50 +05:30
Mike
4f78bde085 Delete hello/world 2026-04-02 10:37:54 +03:00
Misha Skvortsov
3b7b9740f2 fix: update OPENAI_API_KEY message and add Atomic Chat URL check
- Updated the message for the OPENAI_API_KEY check to include Atomic Chat as an allowed local provider.
- Introduced a new function to check if the base URL corresponds to Atomic Chat, enhancing the system's ability to identify local providers.
- Adjusted the Ollama processor mode check to skip processing when an Atomic Chat local provider is detected.
2026-04-02 10:37:54 +03:00
Misha Skvortsov
577e654ae7 feat: add support for Atomic Chat provider
- Introduced a new provider profile for Atomic Chat, allowing it to be used alongside existing providers.
- Updated `package.json` to include a new development script for launching Atomic Chat.
- Modified `smart_router.py` to recognize Atomic Chat as a local provider that does not require an API key.
- Enhanced provider discovery and launch scripts to handle Atomic Chat, including model listing and connection checks.
- Added tests to ensure proper environment setup and behavior for Atomic Chat profiles.

This update expands the functionality of the application to support local LLMs via Atomic Chat, improving versatility for users.
2026-04-02 10:37:54 +03:00
Rithul Kamesh
f07f11b7b6 fix: use bun test for provider-recommendation script to resolve module errors 2026-04-02 12:53:56 +05:30
Aarondio
d156aed32d fix(shim): implement tolerant bracket balancer for truncated tool JSON 2026-04-02 08:14:52 +01:00
gnanam1990
93bc50f8cd docs: replace stale OpenRouter Gemini example
Update the OpenRouter Gemini README example to a model ID that works in current OpenRouter validation, and note that model availability can change over time.
2026-04-02 11:37:26 +05:30
Rithul Kamesh
2619401d34 Remove github-models-pr-draft.md 2026-04-02 11:26:27 +05:30
Rithul Kamesh
25c5987276 feat: add support for GitHub Models provider
- Introduced environment variable CLAUDE_CODE_USE_GITHUB to enable GitHub Models.
- Added checks for GITHUB_TOKEN or GH_TOKEN for authentication.
- Updated base URL handling to include GitHub Models default.
- Enhanced provider detection and error handling for GitHub Models.
- Updated relevant functions and components to accommodate the new provider.
2026-04-02 11:25:28 +05:30
Kevin Codex
1059915c84 Merge pull request #105 from rajrasane/fix/third-party-provider-compatibility
fix: Improve session title handling and Docker compatibility
2026-04-02 13:50:18 +08:00
Kevin Codex
fcb1b82d9b Merge pull request #104 from slx618/fix/azure-max-completion-tokens
fix Azure OpenAI max token parameter
2026-04-02 13:40:23 +08:00
Kevin Codex
e54c39e3cb Merge pull request #100 from Vasanthdev2004/ripgrep-install-hint
fix: add clearer ripgrep install guidance
2026-04-02 13:39:52 +08:00
Kevin Codex
a6ba34a3de Merge pull request #99 from gigachad80/main
Update resume command in gracefulShutdown message
2026-04-02 13:36:45 +08:00
Kevin Codex
7128a938d9 Merge pull request #101 from BrainSlugs83/security/compile-time-telemetry-removal
security: kill GrowthBook phone-home and auto-updater at build time
2026-04-02 13:35:27 +08:00
Raj Rasane
f340b199c8 refactor: simplify session title fallback to static 'Open Claude' 2026-04-02 11:04:35 +05:30
Raj Rasane
63546dcd9c chore: rename default terminal title to Open Claude 2026-04-02 11:04:35 +05:30
Raj Rasane
302d9d4e44 fix: enable session title generation for non-firstParty providers 2026-04-02 11:04:35 +05:30
Raj Rasane
310f1d344a fix: provide local session title fallback for 3P providers
When using non-Anthropic providers (Ollama, Gemini, Codex), the
underlying call to queryHaiku for session title generation fails.
Previously, this caused the catch block to return null, leaving the
terminal tab permanently stuck on 'Claude Code'.

Now, when the API call fails, we gracefully derive a title locally from
the user's first message (first 7 words, sentence-cased), ensuring
users still see a meaningful session title in their terminal tab.
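
A minimal sketch of the local fallback (function name assumed; 'Open Claude' default taken from the sibling commit):

```ts
// Hypothetical sketch: first 7 words of the user's message, sentence-cased.
function deriveLocalTitle(firstUserMessage: string): string {
  const head = firstUserMessage.trim().split(/\s+/).slice(0, 7).join(' ');
  return head ? head[0].toUpperCase() + head.slice(1) : 'Open Claude';
}
```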
2026-04-02 11:04:35 +05:30
Raj Rasane
9590066b5b fix: gracefully handle Docker/remote Ollama in system-check
When Ollama runs inside Docker or a remote container, the native
'ollama ps' command is unavailable on the host. Instead of hard-failing
and blocking CLI startup, downgrade to a pass() with a warning when
the HTTP ping has already confirmed the server is reachable.
2026-04-02 11:04:35 +05:30
Kevin Codex
ad947e996a Merge pull request #102 from BrainSlugs83/security/pin-exact-dependency-versions
security: pin all dependencies to exact versions
2026-04-02 13:34:15 +08:00
Kevin Codex
b2ba2c0cc5 Merge pull request #98 from Vasanthdev2004/windows-csiu-input-fix
fix: support CSI-u printable input on Windows
2026-04-02 13:30:36 +08:00
Mikey
0746802b6a security: kill GrowthBook phone-home and auto-updater at build time
Adds a Bun build plugin that replaces analytics/telemetry modules with
no-op stubs at compile time.

Primary targets (NOT killed by PR #94 or the feature() shim):

  - GrowthBook: phones home to api.anthropic.com on every launch,
    sending account UUID, org UUID, email, device ID, subscription
    type. Refreshes every 6 hours. Now returns defaults without
    making any network call.

  - Auto-updater: contacts storage.googleapis.com and npm registry
    on launch to check for new versions. Now returns null/no-op.

Defense-in-depth (already gated by PR #94 or feature flags, but
now the code itself is replaced with empty functions):

  - Datadog, 1P event logging, BigQuery metrics, Perfetto tracing,
    session tracing, plugin fetch telemetry, transcript sharing.

Deliberately NOT stubbed:

  - Plugin marketplace (downloads.claude.ai) — needed for /plugin
  - User-configurable OTel (CLAUDE_CODE_ENABLE_TELEMETRY) — opt-in

Implementation: separate plugin file (scripts/no-telemetry-plugin.ts)
with a 2-line hook in build.ts. The plugin file does not exist
upstream so it cannot cause merge conflicts.
2026-04-01 21:57:15 -07:00
Vasanthdev2004
2bade922ef fix: add clearer ripgrep install guidance 2026-04-02 10:19:36 +05:30
Dark Yagami
4918caa22b Update resume command in gracefulShutdown message 2026-04-02 10:18:27 +05:30
Vasanthdev2004
ffbc1f8f6e fix: support CSI-u printable input on Windows 2026-04-02 10:05:16 +05:30
Mikey
5f75f67a27 security: pin all dependencies to exact versions
Removes caret (^) ranges from all 74 dependencies in package.json,
locking each to the exact version resolved in bun.lock.

Motivation: the axios supply chain attack of March 31 2026 demonstrated
that caret ranges are a live attack vector. axios@^1.14.0 would have
resolved to the trojanized 1.14.1 (bundled plain-crypto-js RAT, C2
sfrclak.com). Both 1.14.1 and 0.30.4 were unpublished within 24h.

Key pins:
  axios      ^1.14.0  → 1.14.0   (trojanized 1.14.1 blocked)
  undici     ^7.3.0   → 7.24.6   (7 CVEs between 7.3 and 7.24)
  yaml       ^2.7.0   → 2.8.3    (CVE-2026-33532 fix)
  ajv        ^8.17.0  → 8.18.0   (ReDoS fix)
  lodash-es  ^4.17.21 → 4.17.23  (prototype pollution fix)
  zod        ^3.24.0  → 3.25.76  (large range locked)

All 74 deps verified: integrity hashes match npm registry, no known
supply chain incidents, no postinstall scripts in lockfile.
2026-04-01 21:29:42 -07:00
Alex
f3ebd7d256 fix: convert max_tokens to max_completion_tokens for Azure OpenAI
Azure OpenAI API rejects the max_tokens parameter and requires
max_completion_tokens instead. This change ensures the conversion
is robust by validating that max_tokens is a positive number before
using it, preventing edge cases like null or "null" string values
from being incorrectly sent.
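
A sketch of the conversion with that validation (function name assumed):

```ts
// Hypothetical sketch: rename max_tokens for Azure, dropping invalid values.
function toAzureParams(params: { max_tokens?: unknown; [k: string]: unknown }) {
  const { max_tokens, ...rest } = params;
  if (typeof max_tokens === 'number' && max_tokens > 0) {
    return { ...rest, max_completion_tokens: max_tokens };
  }
  return rest; // null / "null" / non-positive values are not forwarded
}
```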

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-04-02 12:01:01 +08:00
Kevin Codex
1a60509fdc Merge pull request #96 from gnanam1990/fix/startup-screen-version-display
fix: show correct version in startup screen
2026-04-02 11:43:42 +08:00
gnanam1990
47b19c9a00 fix: style version number in startup screen accent orange
Apply the existing ACCENT colour (rgb 240 148 100) to the version
string so it stands out against the dim label, matching the warm
orange used throughout the startup screen for stars and status text.

Requested in #95.

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-04-02 09:11:12 +05:30
gnanam1990
8c6a10517f fix: show correct version in startup screen
StartupScreen.ts was reading the version via globalThis['MACRO_DISPLAY_VERSION']
which is never populated — the Bun bundler inlines it as MACRO.DISPLAY_VERSION
(dot notation), not as a globalThis key.

Result: startup screen always showed the hardcoded fallback 'v0.1.4' regardless
of the installed version.

Fix: use MACRO.DISPLAY_VERSION ?? MACRO.VERSION directly, consistent with
cli.tsx, main.tsx, and logoV2Utils.ts.

Fixes #95

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-04-02 09:05:00 +05:30
161 changed files with 12728 additions and 1873 deletions

250
.env.example Normal file

@@ -0,0 +1,250 @@
# =============================================================================
# OpenClaude Environment Configuration
# =============================================================================
# Copy this file to .env and fill in your values:
# cp .env.example .env
#
# Only set the variables for the provider you want to use.
# All other sections can be left commented out.
# =============================================================================
# =============================================================================
# SYSTEM-WIDE SETUP (OPTIONAL)
# =============================================================================
# Instead of using a .env file per project, you can set these variables
# system-wide so OpenClaude works from any directory on your machine.
#
# STEP 1: Pick your provider variables from the list below.
# STEP 2: Set them using the method for your OS (see further down).
#
# ── Provider variables ───────────────────────────────────────────────
#
# Option 1 — Anthropic:
# ANTHROPIC_API_KEY=sk-ant-your-key-here
# ANTHROPIC_MODEL=claude-sonnet-4-5 (optional)
# ANTHROPIC_BASE_URL=https://api.anthropic.com (optional)
#
# Option 2 — OpenAI:
# CLAUDE_CODE_USE_OPENAI=1
# OPENAI_API_KEY=sk-your-key-here
# OPENAI_MODEL=gpt-4o
# OPENAI_BASE_URL=https://api.openai.com/v1 (optional)
#
# Option 3 — Google Gemini:
# CLAUDE_CODE_USE_GEMINI=1
# GEMINI_API_KEY=your-gemini-key-here
# GEMINI_MODEL=gemini-2.0-flash
# GEMINI_BASE_URL=https://generativelanguage.googleapis.com (optional)
#
# Option 4 — GitHub Models:
# CLAUDE_CODE_USE_GITHUB=1
# GITHUB_TOKEN=ghp_your-token-here
#
# Option 5 — Ollama (local):
# CLAUDE_CODE_USE_OPENAI=1
# OPENAI_BASE_URL=http://localhost:11434/v1
# OPENAI_API_KEY=ollama
# OPENAI_MODEL=llama3.2
#
# Option 6 — LM Studio (local):
# CLAUDE_CODE_USE_OPENAI=1
# OPENAI_BASE_URL=http://localhost:1234/v1
# OPENAI_MODEL=your-model-id-here
# OPENAI_API_KEY=lmstudio (optional)
#
# Option 7 — AWS Bedrock (may also need: aws configure):
# CLAUDE_CODE_USE_BEDROCK=1
# AWS_REGION=us-east-1
# AWS_DEFAULT_REGION=us-east-1
# AWS_BEARER_TOKEN_BEDROCK=your-bearer-token-here
# ANTHROPIC_BEDROCK_BASE_URL=https://bedrock-runtime.us-east-1.amazonaws.com
#
# Option 8 — Google Vertex AI:
# CLAUDE_CODE_USE_VERTEX=1
# ANTHROPIC_VERTEX_PROJECT_ID=your-gcp-project-id
# CLOUD_ML_REGION=us-east5
# GOOGLE_CLOUD_PROJECT=your-gcp-project-id
#
# ── How to set variables on each OS ──────────────────────────────────
#
# macOS (zsh):
# 1. Open: nano ~/.zshrc
# 2. Add each variable as: export VAR_NAME=value
# 3. Save and reload: source ~/.zshrc
#
# Linux (bash):
# 1. Open: nano ~/.bashrc
# 2. Add each variable as: export VAR_NAME=value
# 3. Save and reload: source ~/.bashrc
#
# Windows (PowerShell):
# Run for each variable:
# [System.Environment]::SetEnvironmentVariable('VAR_NAME', 'value', 'User')
# Then restart your terminal.
#
# Windows (Command Prompt):
# Run for each variable:
# setx VAR_NAME value
# Then restart your terminal.
#
# Windows (GUI):
# Settings > System > About > Advanced System Settings >
# Environment Variables > under "User variables" click New,
# then add each variable.
#
# ── Important notes ──────────────────────────────────────────────────
#
# LOCAL SERVERS: If using LM Studio or Ollama, the server MUST be
# running with a model loaded before you launch OpenClaude —
# otherwise you'll get connection errors.
#
# SWITCHING PROVIDERS: To temporarily switch, unset the relevant
# variables in your current terminal session:
#
# macOS / Linux:
# unset VAR_NAME
# # e.g.: unset CLAUDE_CODE_USE_OPENAI OPENAI_BASE_URL OPENAI_MODEL
#
# Windows (PowerShell — current session only):
# Remove-Item Env:VAR_NAME
#
# To permanently remove a variable on Windows:
# [System.Environment]::SetEnvironmentVariable('VAR_NAME', $null, 'User')
#
# LOAD ORDER:
# Shell and system environment variables are inherited by the process.
# Project .env files are only used if your launcher or shell loads them
# before starting OpenClaude.
# COMPATIBILITY:
# System-wide variables work regardless of how you run OpenClaude:
# npx, global npm install, bun run, or node directly. Any process
# launched from your terminal inherits your shell's environment.
#
# REMINDER: Make sure .env is in your .gitignore to avoid committing secrets.
# =============================================================================
# =============================================================================
# PROVIDER SELECTION — uncomment ONE block below
# =============================================================================
# -----------------------------------------------------------------------------
# Option 1: Anthropic (default — no provider flag needed)
# -----------------------------------------------------------------------------
ANTHROPIC_API_KEY=sk-ant-your-key-here
# Override the default model (optional)
# ANTHROPIC_MODEL=claude-sonnet-4-5
# Use a custom Anthropic-compatible endpoint (optional)
# ANTHROPIC_BASE_URL=https://api.anthropic.com
# -----------------------------------------------------------------------------
# Option 2: OpenAI
# -----------------------------------------------------------------------------
# CLAUDE_CODE_USE_OPENAI=1
# OPENAI_API_KEY=sk-your-key-here
# OPENAI_MODEL=gpt-4o
# Use a custom OpenAI-compatible endpoint (optional — defaults to api.openai.com)
# OPENAI_BASE_URL=https://api.openai.com/v1
# -----------------------------------------------------------------------------
# Option 3: Google Gemini
# -----------------------------------------------------------------------------
# CLAUDE_CODE_USE_GEMINI=1
# GEMINI_API_KEY=your-gemini-key-here
# GEMINI_MODEL=gemini-2.0-flash
# Use a custom Gemini endpoint (optional)
# GEMINI_BASE_URL=https://generativelanguage.googleapis.com/v1beta/openai
# -----------------------------------------------------------------------------
# Option 4: GitHub Models
# -----------------------------------------------------------------------------
# CLAUDE_CODE_USE_GITHUB=1
# GITHUB_TOKEN=ghp_your-token-here
# -----------------------------------------------------------------------------
# Option 5: Ollama (local models)
# -----------------------------------------------------------------------------
# CLAUDE_CODE_USE_OPENAI=1
# OPENAI_BASE_URL=http://localhost:11434/v1
# OPENAI_API_KEY=ollama
# OPENAI_MODEL=llama3.2
# -----------------------------------------------------------------------------
# Option 6: LM Studio (local models)
# -----------------------------------------------------------------------------
# LM Studio exposes an OpenAI-compatible API, so we use the OpenAI provider.
# Make sure LM Studio is running with the Developer server enabled
# (Developer tab > toggle server ON).
#
# Steps:
# 1. Download and install LM Studio from https://lmstudio.ai
# 2. Search for and download a model (e.g. any coding or instruct model)
# 3. Load the model and start the Developer server
# 4. Set OPENAI_MODEL to the model ID shown in LM Studio's Developer tab
#
# The default server URL is http://localhost:1234 — change the port below
# if you've configured a different one in LM Studio.
#
# OPENAI_API_KEY is optional — LM Studio runs locally and ignores it.
# Some clients require a non-empty value; if you get auth errors, set it
# to any dummy value (e.g. "lmstudio").
#
# CLAUDE_CODE_USE_OPENAI=1
# OPENAI_BASE_URL=http://localhost:1234/v1
# OPENAI_MODEL=your-model-id-here
# -----------------------------------------------------------------------------
# Option 7: AWS Bedrock
# -----------------------------------------------------------------------------
# You may also need AWS CLI credentials configured (run: aws configure)
# or have AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY set in your
# environment in addition to the variables below.
#
# CLAUDE_CODE_USE_BEDROCK=1
# AWS_REGION=us-east-1
# AWS_DEFAULT_REGION=us-east-1
# AWS_BEARER_TOKEN_BEDROCK=your-bearer-token-here
# ANTHROPIC_BEDROCK_BASE_URL=https://bedrock-runtime.us-east-1.amazonaws.com
# -----------------------------------------------------------------------------
# Option 8: Google Vertex AI
# -----------------------------------------------------------------------------
# CLAUDE_CODE_USE_VERTEX=1
# ANTHROPIC_VERTEX_PROJECT_ID=your-gcp-project-id
# CLOUD_ML_REGION=us-east5
# GOOGLE_CLOUD_PROJECT=your-gcp-project-id
# =============================================================================
# OPTIONAL TUNING
# =============================================================================
# Max number of API retries on failure (default: 10)
# CLAUDE_CODE_MAX_RETRIES=10
# Enable persistent retry mode for unattended/CI sessions
# Retries 429/529 indefinitely with smart backoff
# CLAUDE_CODE_UNATTENDED_RETRY=1
# Enable extended key reporting (Kitty keyboard protocol)
# Useful for iTerm2, WezTerm, Ghostty if modifier keys feel off
# OPENCLAUDE_ENABLE_EXTENDED_KEYS=1
# Disable "Co-authored-by" line in git commits made by OpenClaude
# OPENCLAUDE_DISABLE_CO_AUTHORED_BY=1
# Custom timeout for API requests in milliseconds (default: varies)
# API_TIMEOUT_MS=60000
# Enable debug logging
# CLAUDE_DEBUG=1

View File

@@ -6,21 +6,24 @@ on:
branches:
- main
permissions:
contents: read
jobs:
smoke-and-tests:
runs-on: ubuntu-latest
steps:
- name: Check out repository
uses: actions/checkout@v4
uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683 # v4.2.2
- name: Set up Node.js
uses: actions/setup-node@v4
uses: actions/setup-node@49933ea5288caeca8642d1e84afbd3f7d6820020 # v4.4.0
with:
node-version: 22
- name: Set up Bun
uses: oven-sh/setup-bun@v2
uses: oven-sh/setup-bun@4bc047ad259df6fc24a6c9b0f9a0cb08cf17fbe5 # v2.0.1
with:
bun-version: 1.3.11

1
.gitignore vendored
View File

@@ -3,5 +3,6 @@ dist/
*.tsbuildinfo
.env
.env.*
!.env.example
.openclaude-profile.json
reports/

162
ANDROID_INSTALL.md Normal file
View File

@@ -0,0 +1,162 @@
# OpenClaude on Android (Termux)
A complete guide to running OpenClaude on Android using Termux + proot Ubuntu.
---
## Prerequisites
- Android phone with ~700MB free storage
- [Termux](https://f-droid.org/en/packages/com.termux/) installed from **F-Droid** (not Play Store)
- An [OpenRouter](https://openrouter.ai) API key (free, no credit card required)
---
## Why This Setup?
OpenClaude requires [Bun](https://bun.sh) to build, and Bun does not support Android natively. The workaround is running a real Ubuntu environment inside Termux via `proot-distro`, where Bun's Linux binary works correctly.
---
## Installation
### Step 1 — Update Termux
```bash
pkg update && pkg upgrade
```
Press `N` or Enter for any config file conflict prompts.
### Step 2 — Install dependencies
```bash
pkg install nodejs-lts git proot-distro
```
Verify Node.js:
```bash
node --version # should be v20+
```
### Step 3 — Clone OpenClaude
```bash
git clone https://github.com/Gitlawb/openclaude.git
cd openclaude
npm install
npm link
```
### Step 4 — Install Ubuntu via proot
```bash
proot-distro install ubuntu
```
This downloads ~200–400 MB. Wait for it to complete.
### Step 5 — Install Bun inside Ubuntu
```bash
proot-distro login ubuntu
curl -fsSL https://bun.sh/install | bash
source ~/.bashrc
bun --version # should show 1.3.11+
```
### Step 6 — Build OpenClaude
```bash
cd /data/data/com.termux/files/home/openclaude
bun run build
```
You should see:
```
✓ Built openclaude v0.1.6 → dist/cli.mjs
```
### Step 7 — Save env vars permanently
Still inside Ubuntu, add your OpenRouter config to `.bashrc`:
```bash
echo 'export CLAUDE_CODE_USE_OPENAI=1' >> ~/.bashrc
echo 'export OPENAI_API_KEY=your_openrouter_key_here' >> ~/.bashrc
echo 'export OPENAI_BASE_URL=https://openrouter.ai/api/v1' >> ~/.bashrc
echo 'export OPENAI_MODEL=qwen/qwen3.6-plus-preview:free' >> ~/.bashrc
source ~/.bashrc
```
Replace `your_openrouter_key_here` with your actual key from [openrouter.ai/keys](https://openrouter.ai/keys).
### Step 8 — Run OpenClaude
```bash
node dist/cli.mjs
```
Select **3** (3rd-party platform) at the login screen. Your env vars will be detected automatically.
---
## Restarting After Closing Termux
Every time you reopen Termux after killing it, run:
```bash
proot-distro login ubuntu
cd /data/data/com.termux/files/home/openclaude
node dist/cli.mjs
```
---
## Recommended Free Model
**`qwen/qwen3.6-plus-preview:free`** — Best free model on OpenRouter as of April 2026.
- 1M token context window
- Beats Claude 4.5 Opus on Terminal-Bench 2.0 agentic coding (61.6 vs 59.3)
- Built-in chain-of-thought reasoning
- Native tool use and function calling
- $0/M tokens (preview period)
> ⚠️ Free status may change when the preview period ends. Check [openrouter.ai](https://openrouter.ai/qwen/qwen3.6-plus-preview:free) for current pricing.
---
## Alternative Free Models (OpenRouter)
| Model ID | Context | Notes |
|---|---|---|
| `qwen/qwen3-coder:free` | 262K | Best for pure coding tasks |
| `openai/gpt-oss-120b:free` | 131K | OpenAI open model, strong tool calling |
| `nvidia/nemotron-3-super-120b-a12b:free` | 262K | Hybrid MoE, good general use |
| `meta-llama/llama-3.3-70b-instruct:free` | 66K | Reliable, widely tested |
Switch models anytime:
```bash
export OPENAI_MODEL=qwen/qwen3-coder:free
node dist/cli.mjs
```
---
## Why Not Groq or Cerebras?
Both were tested and fail due to OpenClaude's large system prompt (~50K tokens):
- **Groq free tier**: TPM limits too low (6K–12K tokens/min)
- **Cerebras free tier**: TPM limits exceeded, even on `llama3.1-8b`
OpenRouter free models have no TPM restrictions — only 20 req/min and 200 req/day.
---
## Tips
- **Don't swipe Termux away** from recent apps mid-session — use the home button to minimize instead.
- The Ubuntu environment persists between Termux sessions; your build and config are saved.
- Run `bun run build` again only if you pull updates to the OpenClaude repo.

440
README.md
View File

@@ -1,377 +1,211 @@
# OpenClaude
Use Claude Code with **any LLM** — not just Claude.
OpenClaude is an open-source coding-agent CLI that works with more than one model provider.
OpenClaude is a fork of the [Claude Code source leak](https://gitlawb.com/node/repos/z6MkgKkb/instructkr-claude-code) (exposed via npm source maps on March 31, 2026). We added an OpenAI-compatible provider shim so you can plug in GPT-4o, DeepSeek, Gemini, Llama, Mistral, or any model that speaks the OpenAI chat completions API. It now also supports the ChatGPT Codex backend for `codexplan` and `codexspark`.
Use OpenAI-compatible APIs, Gemini, GitHub Models, Codex, Ollama, Atomic Chat, and other supported backends while keeping the same terminal-first workflow: prompts, tools, agents, MCP, slash commands, and streaming output.
All of Claude Code's tools work — bash, file read/write/edit, grep, glob, agents, tasks, MCP — just powered by whatever model you choose.
## Why OpenClaude
---
## Install
### Option A: npm (recommended)
```bash
npm install -g @gitlawb/openclaude
```
### Option B: From source (requires Bun)
Use Bun `1.3.11` or newer for source builds on Windows. Older Bun versions such as `1.3.4` can fail with a large batch of unresolved module errors during `bun run build`.
```bash
# Clone from gitlawb
git clone https://node.gitlawb.com/z6MkqDnb7Siv3Cwj7pGJq4T5EsUisECqR8KpnDLwcaZq5TPr/openclaude.git
cd openclaude
# Install dependencies
bun install
# Build
bun run build
# Link globally (optional)
npm link
```
### Option C: Run directly with Bun (no build step)
```bash
git clone https://node.gitlawb.com/z6MkqDnb7Siv3Cwj7pGJq4T5EsUisECqR8KpnDLwcaZq5TPr/openclaude.git
cd openclaude
bun install
bun run dev
```
- Use one CLI across cloud and local model providers
- Save provider profiles inside the app with `/provider`
- Run locally with Ollama or Atomic Chat
- Keep core coding-agent workflows: bash, file tools, grep, glob, agents, tasks, MCP, and web tools
---
## Quick Start
### 1. Set 3 environment variables
### Install
```bash
npm install -g @gitlawb/openclaude
```
If the npm install path later reports `ripgrep not found`, install ripgrep system-wide and confirm `rg --version` works in the same terminal before starting OpenClaude.
### Start
```bash
openclaude
```
Inside OpenClaude:
- run `/provider` for guided setup of OpenAI-compatible, Gemini, Ollama, or Codex profiles
- run `/onboard-github` for GitHub Models setup
### Fastest OpenAI setup
macOS / Linux:
```bash
export CLAUDE_CODE_USE_OPENAI=1
export OPENAI_API_KEY=sk-your-key-here
export OPENAI_MODEL=gpt-4o
```
### 2. Run it
```bash
# If installed via npm
openclaude
# If built from source
bun run dev
# or after build:
node dist/cli.mjs
```
That's it. The tool system, streaming, file editing, multi-step reasoning — everything works through the model you picked.
The npm package name is `@gitlawb/openclaude`, but the installed CLI command is still `openclaude`.
---
## Provider Examples
### OpenAI
```bash
export CLAUDE_CODE_USE_OPENAI=1
export OPENAI_API_KEY=sk-...
export OPENAI_MODEL=gpt-4o
```
### Codex via ChatGPT auth
`codexplan` maps to GPT-5.4 on the Codex backend with high reasoning.
`codexspark` maps to GPT-5.3 Codex Spark for faster loops.
If you already use the Codex CLI, OpenClaude will read `~/.codex/auth.json`
automatically. You can also point it elsewhere with `CODEX_AUTH_JSON_PATH` or
override the token directly with `CODEX_API_KEY`.
```bash
export CLAUDE_CODE_USE_OPENAI=1
export OPENAI_MODEL=codexplan
# optional if you do not already have ~/.codex/auth.json
export CODEX_API_KEY=...
openclaude
```
### DeepSeek
Windows PowerShell:
```bash
export CLAUDE_CODE_USE_OPENAI=1
export OPENAI_API_KEY=sk-...
export OPENAI_BASE_URL=https://api.deepseek.com/v1
export OPENAI_MODEL=deepseek-chat
```
```powershell
$env:CLAUDE_CODE_USE_OPENAI="1"
$env:OPENAI_API_KEY="sk-your-key-here"
$env:OPENAI_MODEL="gpt-4o"
openclaude
```
### Google Gemini (via OpenRouter)
### Fastest local Ollama setup
macOS / Linux:
```bash
export CLAUDE_CODE_USE_OPENAI=1
export OPENAI_API_KEY=sk-or-...
export OPENAI_BASE_URL=https://openrouter.ai/api/v1
export OPENAI_MODEL=google/gemini-2.0-flash
```
### Ollama (local, free)
```bash
ollama pull llama3.3:70b
export CLAUDE_CODE_USE_OPENAI=1
export OPENAI_BASE_URL=http://localhost:11434/v1
export OPENAI_MODEL=llama3.3:70b
# no API key needed for local models
export OPENAI_MODEL=qwen2.5-coder:7b
openclaude
```
### LM Studio (local)
Windows PowerShell:
```bash
export CLAUDE_CODE_USE_OPENAI=1
export OPENAI_BASE_URL=http://localhost:1234/v1
export OPENAI_MODEL=your-model-name
```
```powershell
$env:CLAUDE_CODE_USE_OPENAI="1"
$env:OPENAI_BASE_URL="http://localhost:11434/v1"
$env:OPENAI_MODEL="qwen2.5-coder:7b"
### Together AI
```bash
export CLAUDE_CODE_USE_OPENAI=1
export OPENAI_API_KEY=...
export OPENAI_BASE_URL=https://api.together.xyz/v1
export OPENAI_MODEL=meta-llama/Llama-3.3-70B-Instruct-Turbo
```
### Groq
```bash
export CLAUDE_CODE_USE_OPENAI=1
export OPENAI_API_KEY=gsk_...
export OPENAI_BASE_URL=https://api.groq.com/openai/v1
export OPENAI_MODEL=llama-3.3-70b-versatile
```
### Mistral
```bash
export CLAUDE_CODE_USE_OPENAI=1
export OPENAI_API_KEY=...
export OPENAI_BASE_URL=https://api.mistral.ai/v1
export OPENAI_MODEL=mistral-large-latest
```
### Azure OpenAI
```bash
export CLAUDE_CODE_USE_OPENAI=1
export OPENAI_API_KEY=your-azure-key
export OPENAI_BASE_URL=https://your-resource.openai.azure.com/openai/deployments/your-deployment/v1
export OPENAI_MODEL=gpt-4o
openclaude
```
---
## Environment Variables
## Setup Guides
| Variable | Required | Description |
|----------|----------|-------------|
| `CLAUDE_CODE_USE_OPENAI` | Yes | Set to `1` to enable the OpenAI provider |
| `OPENAI_API_KEY` | Yes* | Your API key (*not needed for local models like Ollama) |
| `OPENAI_MODEL` | Yes | Model name (e.g. `gpt-4o`, `deepseek-chat`, `llama3.3:70b`) |
| `OPENAI_BASE_URL` | No | API endpoint (defaults to `https://api.openai.com/v1`) |
| `CODEX_API_KEY` | Codex only | Codex/ChatGPT access token override |
| `CODEX_AUTH_JSON_PATH` | Codex only | Path to a Codex CLI `auth.json` file |
| `CODEX_HOME` | Codex only | Alternative Codex home directory (`auth.json` will be read from here) |
| `OPENCLAUDE_DISABLE_CO_AUTHORED_BY` | No | Set to `1` to suppress the default `Co-Authored-By` trailer in generated git commit messages |
Beginner-friendly guides:
You can also use `ANTHROPIC_MODEL` to override the model name. `OPENAI_MODEL` takes priority.
- [Non-Technical Setup](docs/non-technical-setup.md)
- [Windows Quick Start](docs/quick-start-windows.md)
- [macOS / Linux Quick Start](docs/quick-start-mac-linux.md)
OpenClaude PR bodies use OpenClaude branding by default. `OPENCLAUDE_DISABLE_CO_AUTHORED_BY` only affects the commit trailer, not PR attribution text.
Advanced and source-build guides:
- [Advanced Setup](docs/advanced-setup.md)
- [Android Install](ANDROID_INSTALL.md)
---
## Runtime Hardening
## Supported Providers
Use these commands to keep the CLI stable and catch environment mistakes early:
```bash
# quick startup sanity check
bun run smoke
# validate provider env + reachability
bun run doctor:runtime
# print machine-readable runtime diagnostics
bun run doctor:runtime:json
# persist a diagnostics report to reports/doctor-runtime.json
bun run doctor:report
# full local hardening check (smoke + runtime doctor)
bun run hardening:check
# strict hardening (includes project-wide typecheck)
bun run hardening:strict
```
Notes:
- `doctor:runtime` fails fast if `CLAUDE_CODE_USE_OPENAI=1` with a placeholder key (`SUA_CHAVE`) or a missing key for non-local providers.
- Local providers (for example `http://localhost:11434/v1`) can run without `OPENAI_API_KEY`.
- Codex profiles validate `CODEX_API_KEY` or the Codex CLI auth file and probe `POST /responses` instead of `GET /models`.
### Provider Launch Profiles
Use profile launchers to avoid repeated environment setup:
```bash
# one-time profile bootstrap (prefer viable local Ollama, otherwise OpenAI)
bun run profile:init
# preview the best provider/model for your goal
bun run profile:recommend -- --goal coding --benchmark
# auto-apply the best available local/openai provider/model for your goal
bun run profile:auto -- --goal latency
# codex bootstrap (defaults to codexplan and ~/.codex/auth.json)
bun run profile:codex
# openai bootstrap with explicit key
bun run profile:init -- --provider openai --api-key sk-...
# ollama bootstrap with custom model
bun run profile:init -- --provider ollama --model llama3.1:8b
# ollama bootstrap with intelligent model auto-selection
bun run profile:init -- --provider ollama --goal coding
# codex bootstrap with a fast model alias
bun run profile:init -- --provider codex --model codexspark
# launch using persisted profile (.openclaude-profile.json)
bun run dev:profile
# codex profile (uses CODEX_API_KEY or ~/.codex/auth.json)
bun run dev:codex
# OpenAI profile (requires OPENAI_API_KEY in your shell)
bun run dev:openai
# Ollama profile (defaults: localhost:11434, llama3.1:8b)
bun run dev:ollama
```
`profile:recommend` ranks installed Ollama models for `latency`, `balanced`, or `coding`, and `profile:auto` can persist the recommendation directly.
If no profile exists yet, `dev:profile` now uses the same goal-aware defaults when picking the initial model.
Use `--provider ollama` when you want a local-only path. Auto mode falls back to OpenAI when no viable local chat model is installed.
Goal-based Ollama selection only recommends among models that are already installed and reachable from Ollama.
Use `profile:codex` or `--provider codex` when you want the ChatGPT Codex backend.
`dev:openai`, `dev:ollama`, and `dev:codex` run `doctor:runtime` first and only launch the app if checks pass.
For `dev:ollama`, make sure Ollama is running locally before launch.
| Provider | Setup Path | Notes |
| --- | --- | --- |
| OpenAI-compatible | `/provider` or env vars | Works with OpenAI, OpenRouter, DeepSeek, Groq, Mistral, LM Studio, and compatible local `/v1` servers |
| Gemini | `/provider` or env vars | Google Gemini support through the runtime provider layer |
| GitHub Models | `/onboard-github` | Interactive onboarding with saved credentials |
| Codex | `/provider` | Uses existing Codex credentials when available |
| Ollama | `/provider` or env vars | Local inference with no API key |
| Atomic Chat | advanced setup | Local Apple Silicon backend |
| Bedrock / Vertex / Foundry | env vars | Additional provider integrations for supported environments |
---
## What Works
- **All tools**: Bash, FileRead, FileWrite, FileEdit, Glob, Grep, WebFetch, WebSearch, Agent, MCP, LSP, NotebookEdit, Tasks
- **Streaming**: Real-time token streaming
- **Tool calling**: Multi-step tool chains (the model calls tools, gets results, continues)
- **Images**: Base64 and URL images passed to vision models
- **Slash commands**: /commit, /review, /compact, /diff, /doctor, etc.
- **Sub-agents**: AgentTool spawns sub-agents using the same provider
- **Memory**: Persistent memory system
## What's Different
- **No thinking mode**: Anthropic's extended thinking is disabled (OpenAI models use different reasoning)
- **No prompt caching**: Anthropic-specific cache headers are skipped
- **No beta features**: Anthropic-specific beta headers are ignored
- **Token limits**: Defaults to 32K max output — some models may cap lower, which is handled gracefully
- Tool-driven coding workflows
Bash, file read/write/edit, grep, glob, agents, tasks, MCP, and slash commands
- Streaming responses
Real-time token output and tool progress
- Tool calling
Multi-step tool loops with model calls, tool execution, and follow-up responses
- Images
URL and base64 image inputs for providers that support vision
- Provider profiles
Guided setup plus saved `.openclaude-profile.json` support
- Local and remote model backends
Cloud APIs, local servers, and Apple Silicon local inference
---
## How It Works
## Provider Notes
The shim (`src/services/api/openaiShim.ts`) sits between Claude Code and the LLM API:
OpenClaude supports multiple providers, but behavior is not identical across all of them.
```
Claude Code Tool System
|
v
Anthropic SDK interface (duck-typed)
|
v
openaiShim.ts <-- translates formats
|
v
OpenAI Chat Completions API
|
v
Any compatible model
```
- Anthropic-specific features may not exist on other providers
- Tool quality depends heavily on the selected model
- Smaller local models can struggle with long multi-step tool flows
- Some providers impose lower output caps than the CLI defaults, and OpenClaude adapts where possible
It translates:
- Anthropic message blocks → OpenAI messages
- Anthropic tool_use/tool_result → OpenAI function calls
- OpenAI SSE streaming → Anthropic stream events
- Anthropic system prompt arrays → OpenAI system messages
The rest of Claude Code doesn't know it's talking to a different model.
For best results, use models with strong tool/function calling support.
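The mapping in the translation list above is mechanical. Here is a minimal Python sketch of the first two translations, for illustration only (the real shim is TypeScript in `src/services/api/openaiShim.ts` and also handles streaming, images, and system prompts):

```python
# Illustrative sketch only, not the shim's actual code. Assumes content is
# a list of Anthropic blocks (text / tool_use / tool_result).
import json

def anthropic_to_openai(msg: dict) -> list[dict]:
    text_parts, tool_calls, out = [], [], []
    for block in msg.get("content", []):
        if block.get("type") == "text":
            text_parts.append(block["text"])
        elif block.get("type") == "tool_use":
            # Anthropic tool_use -> OpenAI function-style tool call
            tool_calls.append({
                "id": block["id"],
                "type": "function",
                "function": {"name": block["name"],
                             "arguments": json.dumps(block["input"])},
            })
        elif block.get("type") == "tool_result":
            # Anthropic tool_result -> standalone OpenAI "tool" message
            out.append({"role": "tool",
                        "tool_call_id": block["tool_use_id"],
                        "content": str(block.get("content", ""))})
    if text_parts or tool_calls:
        entry = {"role": msg["role"], "content": "\n".join(text_parts)}
        if tool_calls:
            entry["tool_calls"] = tool_calls
        out.insert(0, entry)
    return out
```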
---
## Model Quality Notes
## Web Search and Fetch
Not all models are equal at agentic tool use. Here's a rough guide:
`WebFetch` works out of the box.
| Model | Tool Calling | Code Quality | Speed |
|-------|-------------|-------------|-------|
| GPT-4o | Excellent | Excellent | Fast |
| DeepSeek-V3 | Great | Great | Fast |
| Gemini 2.0 Flash | Great | Good | Very Fast |
| Llama 3.3 70B | Good | Good | Medium |
| Mistral Large | Good | Good | Fast |
| GPT-4o-mini | Good | Good | Very Fast |
| Qwen 2.5 72B | Good | Good | Medium |
| Smaller models (<7B) | Limited | Limited | Very Fast |
`WebSearch` and richer JS-aware fetching work best with a Firecrawl API key:
For best results, use models with strong function/tool calling support.
```bash
export FIRECRAWL_API_KEY=your-key-here
```
With Firecrawl enabled:
- `WebSearch` is available across more provider setups
- `WebFetch` can handle JavaScript-rendered pages more reliably
Firecrawl is optional. Without it, OpenClaude falls back to the built-in behavior.
---
## Files Changed from Original
## Source Build
```
src/services/api/openaiShim.ts — NEW: OpenAI-compatible API shim (724 lines)
src/services/api/client.ts — Routes to shim when CLAUDE_CODE_USE_OPENAI=1
src/utils/model/providers.ts — Added 'openai' provider type
src/utils/model/configs.ts — Added openai model mappings
src/utils/model/model.ts — Respects OPENAI_MODEL for defaults
src/utils/auth.ts — Recognizes OpenAI as valid 3P provider
```
```bash
bun install
bun run build
node dist/cli.mjs
```
6 files changed. 786 lines added. Zero dependencies added.
Helpful commands:
- `bun run dev`
- `bun run smoke`
- `bun run doctor:runtime`
---
## Origin
## VS Code Extension
This is a fork of [instructkr/claude-code](https://gitlawb.com/node/repos/z6MkgKkb/instructkr-claude-code), which mirrored the Claude Code source snapshot that became publicly accessible through an npm source map exposure on March 31, 2026.
The repo includes a VS Code extension in [`vscode-extension/openclaude-vscode`](vscode-extension/openclaude-vscode) for OpenClaude launch integration and theme support.
The original Claude Code source is the property of Anthropic. This repository is not affiliated with or endorsed by Anthropic.
---
## Security
If you believe you found a security issue, see [SECURITY.md](SECURITY.md).
---
## Contributing
Contributions are welcome.
For larger changes, open an issue first so the scope is clear before implementation. Helpful validation commands include:
- `bun run build`
- `bun run smoke`
- focused `bun test ...` runs for touched areas
---
## Disclaimer
OpenClaude is an independent community project and is not affiliated with, endorsed by, or sponsored by Anthropic.
"Claude" and "Claude Code" are trademarks of Anthropic.
---
## License
This repository is provided for educational and research purposes. The original source code is subject to Anthropic's terms. The OpenAI shim additions are public domain.
MIT

69
SECURITY.md Normal file
View File

@@ -0,0 +1,69 @@
# Security Policy
## Supported Versions
OpenClaude is currently maintained on the latest `main` branch and the latest
npm release only.
| Version | Supported |
| ------- | --------- |
| Latest release | :white_check_mark: |
| Older releases | :x: |
| Unreleased forks / modified builds | :x: |
Security fixes are generally released in the next patch version and may also be
landed directly on `main` before a package release is published.
## Reporting a Vulnerability
If you believe you have found a security vulnerability in OpenClaude, please
report it privately.
Preferred reporting channel:
- GitHub Security Advisories / private vulnerability reporting for this
repository
Please include:
- a clear description of the issue
- affected version, commit, or environment
- reproduction steps or a proof of concept
- impact assessment
- any suggested remediation, if available
Please do **not** open a public issue for an unpatched vulnerability.
## Response Process
Our general goals are:
- initial triage acknowledgment within 7 days
- follow-up after validation when we can reproduce the issue
- coordinated disclosure after a fix is available
Severity, exploitability, and maintenance bandwidth may affect timelines.
## Disclosure and CVEs
Valid reports may be fixed privately first and disclosed after a patch is
available.
If a report is accepted and the issue is significant enough to warrant formal
tracking, we may publish a GitHub Security Advisory and request or assign a CVE
through the appropriate channel. CVE issuance is not guaranteed for every
report.
## Scope
This policy applies to:
- the OpenClaude source code in this repository
- official release artifacts published from this repository
- the `@gitlawb/openclaude` npm package
This policy does not cover:
- third-party model providers, endpoints, or hosted services
- local misconfiguration on the reporter's machine
- vulnerabilities in unofficial forks, mirrors, or downstream repackages

146
atomic_chat_provider.py Normal file
View File

@@ -0,0 +1,146 @@
"""
atomic_chat_provider.py
-----------------------
Adds native Atomic Chat support to openclaude.
Lets Claude Code route requests to any locally-running model via
Atomic Chat (Apple Silicon only) at 127.0.0.1:1337.
Atomic Chat exposes an OpenAI-compatible API, so messages are forwarded
directly without translation.
Usage (.env):
PREFERRED_PROVIDER=atomic-chat
ATOMIC_CHAT_BASE_URL=http://127.0.0.1:1337
"""
import httpx
import json
import logging
import os
from typing import AsyncIterator
logger = logging.getLogger(__name__)
ATOMIC_CHAT_BASE_URL = os.getenv("ATOMIC_CHAT_BASE_URL", "http://127.0.0.1:1337")
def _api_url(path: str) -> str:
return f"{ATOMIC_CHAT_BASE_URL}/v1{path}"
async def check_atomic_chat_running() -> bool:
try:
async with httpx.AsyncClient(timeout=3.0) as client:
resp = await client.get(_api_url("/models"))
return resp.status_code == 200
except Exception:
return False
async def list_atomic_chat_models() -> list[str]:
try:
async with httpx.AsyncClient(timeout=5.0) as client:
resp = await client.get(_api_url("/models"))
resp.raise_for_status()
data = resp.json()
return [m["id"] for m in data.get("data", [])]
except Exception as e:
logger.warning(f"Could not list Atomic Chat models: {e}")
return []
async def atomic_chat(
model: str,
messages: list[dict],
system: str | None = None,
max_tokens: int = 4096,
temperature: float = 1.0,
) -> dict:
chat_messages = list(messages)
if system:
chat_messages.insert(0, {"role": "system", "content": system})
payload = {
"model": model,
"messages": chat_messages,
"max_tokens": max_tokens,
"temperature": temperature,
"stream": False,
}
async with httpx.AsyncClient(timeout=120.0) as client:
resp = await client.post(_api_url("/chat/completions"), json=payload)
resp.raise_for_status()
data = resp.json()
choice = data.get("choices", [{}])[0]
assistant_text = choice.get("message", {}).get("content") or ""  # guard against explicit null content
usage = data.get("usage", {})
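# Re-shape the OpenAI-style completion into the Anthropic message dict
# that the rest of the CLI expects.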
return {
"id": data.get("id", "msg_atomic_chat"),
"type": "message",
"role": "assistant",
"content": [{"type": "text", "text": assistant_text}],
"model": model,
"stop_reason": "end_turn",
"stop_sequence": None,
"usage": {
"input_tokens": usage.get("prompt_tokens", 0),
"output_tokens": usage.get("completion_tokens", 0),
},
}
async def atomic_chat_stream(
model: str,
messages: list[dict],
system: str | None = None,
max_tokens: int = 4096,
temperature: float = 1.0,
) -> AsyncIterator[str]:
chat_messages = list(messages)
if system:
chat_messages.insert(0, {"role": "system", "content": system})
payload = {
"model": model,
"messages": chat_messages,
"max_tokens": max_tokens,
"temperature": temperature,
"stream": True,
}
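# Emit the Anthropic-style SSE preamble (message_start, then an empty text
# content block) before relaying any deltas from the upstream server.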
yield "event: message_start\n"
yield f'data: {json.dumps({"type": "message_start", "message": {"id": "msg_atomic_chat_stream", "type": "message", "role": "assistant", "content": [], "model": model, "stop_reason": None, "usage": {"input_tokens": 0, "output_tokens": 0}}})}\n\n'
yield "event: content_block_start\n"
yield f'data: {json.dumps({"type": "content_block_start", "index": 0, "content_block": {"type": "text", "text": ""}})}\n\n'
async with httpx.AsyncClient(timeout=120.0) as client:
async with client.stream("POST", _api_url("/chat/completions"), json=payload) as resp:
resp.raise_for_status()
async for line in resp.aiter_lines():
if not line or not line.startswith("data: "):
continue
raw = line[len("data: "):]
if raw.strip() == "[DONE]":
break
try:
chunk = json.loads(raw)
delta = chunk.get("choices", [{}])[0].get("delta", {})
delta_text = delta.get("content", "")
if delta_text:
yield "event: content_block_delta\n"
yield f'data: {json.dumps({"type": "content_block_delta", "index": 0, "delta": {"type": "text_delta", "text": delta_text}})}\n\n'
finish_reason = chunk.get("choices", [{}])[0].get("finish_reason")
if finish_reason:
usage = chunk.get("usage", {})
yield "event: content_block_stop\n"
yield f'data: {json.dumps({"type": "content_block_stop", "index": 0})}\n\n'
yield "event: message_delta\n"
yield f'data: {json.dumps({"type": "message_delta", "delta": {"stop_reason": "end_turn", "stop_sequence": None}, "usage": {"output_tokens": usage.get("completion_tokens", 0)}})}\n\n'
yield "event: message_stop\n"
yield f'data: {json.dumps({"type": "message_stop"})}\n\n'
break
except json.JSONDecodeError:
continue
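# Example usage (hypothetical model id, assuming Atomic Chat is running locally):
#   msg = await atomic_chat("llama-3.1-8b", [{"role": "user", "content": "hi"}])
#   print(msg["content"][0]["text"])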

161
bun.lock
View File

@@ -5,82 +5,83 @@
"": {
"name": "openclaude",
"dependencies": {
"@alcalzone/ansi-tokenize": "^0.3.0",
"@anthropic-ai/bedrock-sdk": "^0.26.0",
"@anthropic-ai/foundry-sdk": "^0.2.0",
"@anthropic-ai/sandbox-runtime": "^0.0.46",
"@anthropic-ai/sdk": "^0.81.0",
"@anthropic-ai/vertex-sdk": "^0.14.0",
"@commander-js/extra-typings": "^12.0.0",
"@growthbook/growthbook": "^1.3.0",
"@modelcontextprotocol/sdk": "^1.12.0",
"@opentelemetry/api": "^1.9.1",
"@opentelemetry/api-logs": "^0.214.0",
"@opentelemetry/core": "^2.6.1",
"@opentelemetry/exporter-logs-otlp-http": "^0.214.0",
"@opentelemetry/exporter-trace-otlp-grpc": "^0.57.0",
"@opentelemetry/resources": "^2.6.1",
"@opentelemetry/sdk-logs": "^0.214.0",
"@opentelemetry/sdk-metrics": "^2.6.1",
"@opentelemetry/sdk-trace-base": "^2.6.1",
"@opentelemetry/sdk-trace-node": "^2.6.1",
"@opentelemetry/semantic-conventions": "^1.40.0",
"ajv": "^8.17.0",
"auto-bind": "^5.0.1",
"axios": "^1.14.0",
"bidi-js": "^1.0.3",
"chalk": "^5.4.0",
"chokidar": "^4.0.0",
"cli-boxes": "^3.0.0",
"cli-highlight": "^2.1.0",
"code-excerpt": "^4.0.0",
"commander": "^12.0.0",
"diff": "^7.0.0",
"emoji-regex": "^10.4.0",
"env-paths": "^3.0.0",
"execa": "^9.5.0",
"fflate": "^0.8.2",
"figures": "^6.1.0",
"fuse.js": "^7.1.0",
"get-east-asian-width": "^1.3.0",
"google-auth-library": "^9.15.0",
"https-proxy-agent": "^7.0.6",
"ignore": "^7.0.0",
"indent-string": "^5.0.0",
"jsonc-parser": "^3.3.1",
"lodash-es": "^4.17.21",
"lru-cache": "^11.0.0",
"marked": "^15.0.0",
"p-map": "^7.0.3",
"picomatch": "^4.0.0",
"proper-lockfile": "^4.1.2",
"qrcode": "^1.5.4",
"react": "^19.2.4",
"react-compiler-runtime": "^1.0.0",
"react-reconciler": "^0.33.0",
"semver": "^7.6.3",
"shell-quote": "^1.8.2",
"signal-exit": "^4.1.0",
"stack-utils": "^2.0.6",
"strip-ansi": "^7.1.0",
"supports-hyperlinks": "^3.1.0",
"tree-kill": "^1.2.2",
"turndown": "^7.2.0",
"type-fest": "^4.30.0",
"undici": "^7.3.0",
"usehooks-ts": "^3.1.1",
"vscode-languageserver-protocol": "^3.17.5",
"wrap-ansi": "^9.0.0",
"ws": "^8.18.0",
"xss": "^1.0.15",
"yaml": "^2.7.0",
"zod": "^3.24.0",
"@alcalzone/ansi-tokenize": "0.3.0",
"@anthropic-ai/bedrock-sdk": "0.26.4",
"@anthropic-ai/foundry-sdk": "0.2.3",
"@anthropic-ai/sandbox-runtime": "0.0.46",
"@anthropic-ai/sdk": "0.81.0",
"@anthropic-ai/vertex-sdk": "0.14.4",
"@commander-js/extra-typings": "12.1.0",
"@growthbook/growthbook": "1.6.5",
"@mendable/firecrawl-js": "4.18.1",
"@modelcontextprotocol/sdk": "1.29.0",
"@opentelemetry/api": "1.9.1",
"@opentelemetry/api-logs": "0.214.0",
"@opentelemetry/core": "2.6.1",
"@opentelemetry/exporter-logs-otlp-http": "0.214.0",
"@opentelemetry/exporter-trace-otlp-grpc": "0.57.2",
"@opentelemetry/resources": "2.6.1",
"@opentelemetry/sdk-logs": "0.214.0",
"@opentelemetry/sdk-metrics": "2.6.1",
"@opentelemetry/sdk-trace-base": "2.6.1",
"@opentelemetry/sdk-trace-node": "2.6.1",
"@opentelemetry/semantic-conventions": "1.40.0",
"ajv": "8.18.0",
"auto-bind": "5.0.1",
"axios": "1.14.0",
"bidi-js": "1.0.3",
"chalk": "5.6.2",
"chokidar": "4.0.3",
"cli-boxes": "3.0.0",
"cli-highlight": "2.1.11",
"code-excerpt": "4.0.0",
"commander": "12.1.0",
"diff": "8.0.3",
"emoji-regex": "10.6.0",
"env-paths": "3.0.0",
"execa": "9.6.1",
"fflate": "0.8.2",
"figures": "6.1.0",
"fuse.js": "7.1.0",
"get-east-asian-width": "1.5.0",
"google-auth-library": "9.15.1",
"https-proxy-agent": "7.0.6",
"ignore": "7.0.5",
"indent-string": "5.0.0",
"jsonc-parser": "3.3.1",
"lodash-es": "4.18.0",
"lru-cache": "11.2.7",
"marked": "15.0.12",
"p-map": "7.0.4",
"picomatch": "4.0.4",
"proper-lockfile": "4.1.2",
"qrcode": "1.5.4",
"react": "19.2.4",
"react-compiler-runtime": "1.0.0",
"react-reconciler": "0.33.0",
"semver": "7.7.4",
"shell-quote": "1.8.3",
"signal-exit": "4.1.0",
"stack-utils": "2.0.6",
"strip-ansi": "7.2.0",
"supports-hyperlinks": "3.2.0",
"tree-kill": "1.2.2",
"turndown": "7.2.2",
"type-fest": "4.41.0",
"undici": "7.24.6",
"usehooks-ts": "3.1.1",
"vscode-languageserver-protocol": "3.17.5",
"wrap-ansi": "9.0.2",
"ws": "8.20.0",
"xss": "1.0.15",
"yaml": "2.8.3",
"zod": "3.25.76",
},
"devDependencies": {
"@types/bun": "^1.2.0",
"@types/node": "^25.5.0",
"@types/react": "^19.2.14",
"typescript": "^5.7.0",
"@types/bun": "1.3.11",
"@types/node": "25.5.0",
"@types/react": "19.2.14",
"typescript": "5.9.3",
},
},
},
@@ -185,6 +186,8 @@
"@js-sdsl/ordered-map": ["@js-sdsl/ordered-map@4.4.2", "", {}, "sha512-iUKgm52T8HOE/makSxjqoWhe95ZJA1/G1sYsGev2JDKUSS14KAgg1LHb+Ba+IPow0xflbnSkOsZcO08C7w1gYw=="],
"@mendable/firecrawl-js": ["@mendable/firecrawl-js@4.18.1", "", { "dependencies": { "axios": "1.14.0", "firecrawl": "4.16.0", "typescript-event-target": "^1.1.1", "zod": "^3.23.8", "zod-to-json-schema": "^3.23.0" } }, "sha512-NfmJv+xcHoZthj8I3NP/8KAgO8EWcvOcTvCAvszxqs7/6sCs1CRss6Tum6RycZNSwJkr5RzQossN89IlixRfng=="],
"@mixmark-io/domino": ["@mixmark-io/domino@2.2.0", "", {}, "sha512-Y28PR25bHXUg88kCV7nivXrP2Nj2RueZ3/l/jdx6J9f8J4nsEGcgX0Qe6lt7Pa+J79+kPiJU3LguR6O/6zrLOw=="],
"@modelcontextprotocol/sdk": ["@modelcontextprotocol/sdk@1.29.0", "", { "dependencies": { "@hono/node-server": "^1.19.9", "ajv": "^8.17.1", "ajv-formats": "^3.0.1", "content-type": "^1.0.5", "cors": "^2.8.5", "cross-spawn": "^7.0.5", "eventsource": "^3.0.2", "eventsource-parser": "^3.0.0", "express": "^5.2.1", "express-rate-limit": "^8.2.1", "hono": "^4.11.4", "jose": "^6.1.3", "json-schema-typed": "^8.0.2", "pkce-challenge": "^5.0.0", "raw-body": "^3.0.0", "zod": "^3.25 || ^4.0", "zod-to-json-schema": "^3.25.1" }, "peerDependencies": { "@cfworker/json-schema": "^4.1.1" }, "optionalPeers": ["@cfworker/json-schema"] }, "sha512-zo37mZA9hJWpULgkRpowewez1y6ML5GsXJPY8FI0tBBCd77HEvza4jDqRKOXgHNn867PVGCyTdzqpz0izu5ZjQ=="],
@@ -433,7 +436,7 @@
"depd": ["depd@2.0.0", "", {}, "sha512-g7nH6P6dyDioJogAAGprGpCtVImJhpPk/roCzdb3fIh61/s/nPsfR6onyMwkCAR/OlC3yBC0lESvUoQEAssIrw=="],
"diff": ["diff@7.0.0", "", {}, "sha512-PJWHUb1RFevKCwaFA9RlG5tCd+FO5iRh9A8HEtkmBH2Li03iJriB6m6JIN4rGz3K3JLawI7/veA1xzRKP6ISBw=="],
"diff": ["diff@8.0.3", "", {}, "sha512-qejHi7bcSD4hQAZE0tNAawRK1ZtafHDmMTMkrrIGgSLl7hTnQHmKCeB45xAcbfTqK2zowkM3j3bHt/4b/ARbYQ=="],
"dijkstrajs": ["dijkstrajs@1.0.3", "", {}, "sha512-qiSlmBq9+BCdCA/L46dw8Uy93mloxsPSbwnm5yrKn2vMPiy8KyAskTF6zuV/j5BMsmOGZDPs7KjU+mjb670kfA=="],
@@ -495,6 +498,8 @@
"find-up": ["find-up@4.1.0", "", { "dependencies": { "locate-path": "^5.0.0", "path-exists": "^4.0.0" } }, "sha512-PpOwAdQ/YlXQ2vj8a3h8IipDuYRi3wceVQQGYWxNINccq40Anw7BlsEXCMbt1Zt+OLA6Fq9suIpIWD0OsnISlw=="],
"firecrawl": ["firecrawl@4.16.0", "", { "dependencies": { "axios": "^1.13.5", "typescript-event-target": "^1.1.1", "zod": "^3.23.8", "zod-to-json-schema": "^3.23.0" } }, "sha512-7SJ/FWhZBtW2gTCE/BsvU+gbfIpfTq+D9IH82l9MacauLVptaY6EdYAhrK3YSMC9yr5NxvxRcpZKcXG/nqjiiQ=="],
"follow-redirects": ["follow-redirects@1.15.11", "", {}, "sha512-deG2P0JfjrTxl50XGCDyfI97ZGVCxIpfKYmfyrQ54n5FO/0gfIES8C/Psl6kWVDolizcaaxZJnTS0QSMxvnsBQ=="],
"form-data": ["form-data@4.0.5", "", { "dependencies": { "asynckit": "^0.4.0", "combined-stream": "^1.0.8", "es-set-tostringtag": "^2.1.0", "hasown": "^2.0.2", "mime-types": "^2.1.12" } }, "sha512-8RipRLol37bNs2bhoV67fiTEvdTrbMUYcFTiy3+wuuOnUog2QBHCZWXDRijWQfAkhBj2Uf5UnVaiWwA5vdd82w=="],
@@ -591,7 +596,7 @@
"locate-path": ["locate-path@5.0.0", "", { "dependencies": { "p-locate": "^4.1.0" } }, "sha512-t7hw9pI+WvuwNJXwk5zVHpyhIqzg2qTlklJOf0mVxGSbe3Fp2VieZcduNYjaLDoy6p9uGpQEGWG87WpMKlNq8g=="],
"lodash-es": ["lodash-es@4.17.23", "", {}, "sha512-kVI48u3PZr38HdYz98UmfPnXl2DXrpdctLrFLCd3kOx1xUkOmpFPx7gCWWM5MPkL/fD8zb+Ph0QzjGFs4+hHWg=="],
"lodash-es": ["lodash-es@4.18.0", "", {}, "sha512-koAgswPPA+UTaPN64Etp+PGP+WT6oqOS2NMi5yDkMaiGw9qY4VxQbQF0mtKMyr4BlTznWyzePV5UpECTJQmSUA=="],
"lodash.camelcase": ["lodash.camelcase@4.3.0", "", {}, "sha512-TwuEnCnxbc3rAvhf/LbG7tJUDzhqXyFnv3dtzLOPgCG/hODL7WFnsbwktkD7yUV0RrreP/l1PALq/YSg6VvjlA=="],
@@ -767,6 +772,8 @@
"typescript": ["typescript@5.9.3", "", { "bin": { "tsc": "bin/tsc", "tsserver": "bin/tsserver" } }, "sha512-jl1vZzPDinLr9eUt3J/t7V6FgNEw9QjvBPdysz9KfQDD41fQrC2Y4vKQdiaUpFT4bXlb1RHhLpp8wtm6M5TgSw=="],
"typescript-event-target": ["typescript-event-target@1.1.2", "", {}, "sha512-TvkrTUpv7gCPlcnSoEwUVUBwsdheKm+HF5u2tPAKubkIGMfovdSizCTaZRY/NhR8+Ijy8iZZUapbVQAsNrkFrw=="],
"undici": ["undici@7.24.6", "", {}, "sha512-Xi4agocCbRzt0yYMZGMA6ApD7gvtUFaxm4ZmeacWI4cZxaF6C+8I8QfofC20NAePiB/IcvZmzkJ7XPa471AEtA=="],
"undici-types": ["undici-types@7.18.2", "", {}, "sha512-AsuCzffGHJybSaRrmr5eHr81mwJU3kjw6M+uprWvCXiNeN9SOGwQ3Jn8jb8m3Z6izVgknn1R0FTCEAP2QrLY/w=="],
@@ -817,6 +824,8 @@
"zod-to-json-schema": ["zod-to-json-schema@3.25.2", "", { "peerDependencies": { "zod": "^3.25.28 || ^4" } }, "sha512-O/PgfnpT1xKSDeQYSCfRI5Gy3hPf91mKVDuYLUHZJMiDFptvP41MSnWofm8dnCm0256ZNfZIM7DSzuSMAFnjHA=="],
"@anthropic-ai/sandbox-runtime/lodash-es": ["lodash-es@4.17.23", "", {}, "sha512-kVI48u3PZr38HdYz98UmfPnXl2DXrpdctLrFLCd3kOx1xUkOmpFPx7gCWWM5MPkL/fD8zb+Ph0QzjGFs4+hHWg=="],
"@aws-crypto/crc32/@aws-crypto/util": ["@aws-crypto/util@5.2.0", "", { "dependencies": { "@aws-sdk/types": "^3.222.0", "@smithy/util-utf8": "^2.0.0", "tslib": "^2.6.2" } }, "sha512-4RkU9EsI6ZpBve5fseQlGNUWKMa1RLPQ1dnjnQoe07ldfIzcsGb5hC5W0Dm7u423KWzawlrpbjXBrXCEv9zazQ=="],
"@aws-crypto/crc32/tslib": ["tslib@2.8.1", "", {}, "sha512-oJFu94HQb+KVduSUQL7wnpmqnfmLsOA/nAh6b6EH0wCEoK0/mPeXU6c3wKDV83MkOuHPRHtSXKKU99IBazS/2w=="],

262
docs/advanced-setup.md Normal file
View File

@@ -0,0 +1,262 @@
# OpenClaude Advanced Setup
This guide is for users who want source builds, Bun workflows, provider profiles, diagnostics, or more control over runtime behavior.
## Install Options
### Option A: npm
```bash
npm install -g @gitlawb/openclaude
```
### Option B: From source with Bun
Use Bun `1.3.11` or newer for source builds on Windows. Older Bun versions can fail during `bun run build`.
```bash
git clone https://node.gitlawb.com/z6MkqDnb7Siv3Cwj7pGJq4T5EsUisECqR8KpnDLwcaZq5TPr/openclaude.git
cd openclaude
bun install
bun run build
npm link
```
### Option C: Run directly with Bun
```bash
git clone https://node.gitlawb.com/z6MkqDnb7Siv3Cwj7pGJq4T5EsUisECqR8KpnDLwcaZq5TPr/openclaude.git
cd openclaude
bun install
bun run dev
```
## Provider Examples
### OpenAI
```bash
export CLAUDE_CODE_USE_OPENAI=1
export OPENAI_API_KEY=sk-...
export OPENAI_MODEL=gpt-4o
```
### Codex via ChatGPT auth
`codexplan` maps to GPT-5.4 on the Codex backend with high reasoning.
`codexspark` maps to GPT-5.3 Codex Spark for faster loops.
If you already use the Codex CLI, OpenClaude reads `~/.codex/auth.json` automatically. You can also point it elsewhere with `CODEX_AUTH_JSON_PATH` or override the token directly with `CODEX_API_KEY`.
```bash
export CLAUDE_CODE_USE_OPENAI=1
export OPENAI_MODEL=codexplan
# optional if you do not already have ~/.codex/auth.json
export CODEX_API_KEY=...
openclaude
```
### DeepSeek
```bash
export CLAUDE_CODE_USE_OPENAI=1
export OPENAI_API_KEY=sk-...
export OPENAI_BASE_URL=https://api.deepseek.com/v1
export OPENAI_MODEL=deepseek-chat
```
### Google Gemini via OpenRouter
```bash
export CLAUDE_CODE_USE_OPENAI=1
export OPENAI_API_KEY=sk-or-...
export OPENAI_BASE_URL=https://openrouter.ai/api/v1
export OPENAI_MODEL=google/gemini-2.0-flash-001
```
OpenRouter model availability changes over time. If a model stops working, try another current OpenRouter model before assuming the integration is broken.
### Ollama
```bash
ollama pull llama3.3:70b
export CLAUDE_CODE_USE_OPENAI=1
export OPENAI_BASE_URL=http://localhost:11434/v1
export OPENAI_MODEL=llama3.3:70b
```
### Atomic Chat (local, Apple Silicon)
```bash
export CLAUDE_CODE_USE_OPENAI=1
export OPENAI_BASE_URL=http://127.0.0.1:1337/v1
export OPENAI_MODEL=your-model-name
```
No API key is needed for Atomic Chat local models.
Or use the profile launcher:
```bash
bun run dev:atomic-chat
```
Download Atomic Chat from [atomic.chat](https://atomic.chat/). The app must be running with a model loaded before launching.
### LM Studio
```bash
export CLAUDE_CODE_USE_OPENAI=1
export OPENAI_BASE_URL=http://localhost:1234/v1
export OPENAI_MODEL=your-model-name
```
### Together AI
```bash
export CLAUDE_CODE_USE_OPENAI=1
export OPENAI_API_KEY=...
export OPENAI_BASE_URL=https://api.together.xyz/v1
export OPENAI_MODEL=meta-llama/Llama-3.3-70B-Instruct-Turbo
```
### Groq
```bash
export CLAUDE_CODE_USE_OPENAI=1
export OPENAI_API_KEY=gsk_...
export OPENAI_BASE_URL=https://api.groq.com/openai/v1
export OPENAI_MODEL=llama-3.3-70b-versatile
```
### Mistral
```bash
export CLAUDE_CODE_USE_OPENAI=1
export OPENAI_API_KEY=...
export OPENAI_BASE_URL=https://api.mistral.ai/v1
export OPENAI_MODEL=mistral-large-latest
```
### Azure OpenAI
```bash
export CLAUDE_CODE_USE_OPENAI=1
export OPENAI_API_KEY=your-azure-key
export OPENAI_BASE_URL=https://your-resource.openai.azure.com/openai/deployments/your-deployment/v1
export OPENAI_MODEL=gpt-4o
```
## Environment Variables
| Variable | Required | Description |
|----------|----------|-------------|
| `CLAUDE_CODE_USE_OPENAI` | Yes | Set to `1` to enable the OpenAI provider |
| `OPENAI_API_KEY` | Yes* | Your API key (`*` not needed for local models like Ollama or Atomic Chat) |
| `OPENAI_MODEL` | Yes | Model name such as `gpt-4o`, `deepseek-chat`, or `llama3.3:70b` |
| `OPENAI_BASE_URL` | No | API endpoint, defaulting to `https://api.openai.com/v1` |
| `CODEX_API_KEY` | Codex only | Codex or ChatGPT access token override |
| `CODEX_AUTH_JSON_PATH` | Codex only | Path to a Codex CLI `auth.json` file |
| `CODEX_HOME` | Codex only | Alternative Codex home directory |
| `OPENCLAUDE_DISABLE_CO_AUTHORED_BY` | No | Suppress the default `Co-Authored-By` trailer in generated git commits |
You can also use `ANTHROPIC_MODEL` to override the model name. `OPENAI_MODEL` takes priority.
## Runtime Hardening
Use these commands to validate your setup and catch mistakes early:
```bash
# quick startup sanity check
bun run smoke
# validate provider env + reachability
bun run doctor:runtime
# print machine-readable runtime diagnostics
bun run doctor:runtime:json
# persist a diagnostics report to reports/doctor-runtime.json
bun run doctor:report
# full local hardening check (smoke + runtime doctor)
bun run hardening:check
# strict hardening (includes project-wide typecheck)
bun run hardening:strict
```
Notes:
- `doctor:runtime` fails fast if `CLAUDE_CODE_USE_OPENAI=1` with a placeholder key or a missing key for non-local providers.
- Local providers such as `http://localhost:11434/v1`, `http://10.0.0.1:11434/v1`, and `http://127.0.0.1:1337/v1` can run without `OPENAI_API_KEY`.
- Codex profiles validate `CODEX_API_KEY` or the Codex CLI auth file and probe `POST /responses` instead of `GET /models`.
## Provider Launch Profiles
Use profile launchers to avoid repeated environment setup:
```bash
# one-time profile bootstrap (prefer viable local Ollama, otherwise OpenAI)
bun run profile:init
# preview the best provider/model for your goal
bun run profile:recommend -- --goal coding --benchmark
# auto-apply the best available local/openai provider/model for your goal
bun run profile:auto -- --goal latency
# codex bootstrap (defaults to codexplan and ~/.codex/auth.json)
bun run profile:codex
# openai bootstrap with explicit key
bun run profile:init -- --provider openai --api-key sk-...
# ollama bootstrap with custom model
bun run profile:init -- --provider ollama --model llama3.1:8b
# ollama bootstrap with intelligent model auto-selection
bun run profile:init -- --provider ollama --goal coding
# atomic-chat bootstrap (auto-detects running model)
bun run profile:init -- --provider atomic-chat
# codex bootstrap with a fast model alias
bun run profile:init -- --provider codex --model codexspark
# launch using persisted profile (.openclaude-profile.json)
bun run dev:profile
# codex profile (uses CODEX_API_KEY or ~/.codex/auth.json)
bun run dev:codex
# OpenAI profile (requires OPENAI_API_KEY in your shell)
bun run dev:openai
# Ollama profile (defaults: localhost:11434, llama3.1:8b)
bun run dev:ollama
# Atomic Chat profile (Apple Silicon local LLMs at 127.0.0.1:1337)
bun run dev:atomic-chat
```
`profile:recommend` ranks installed Ollama models for `latency`, `balanced`, or `coding`, and `profile:auto` can persist the recommendation directly.
If no profile exists yet, `dev:profile` uses the same goal-aware defaults when picking the initial model.
Use `--provider ollama` when you want a local-only path. Auto mode falls back to OpenAI when no viable local chat model is installed.
Use `--provider atomic-chat` when you want Atomic Chat as the local Apple Silicon provider.
Use `profile:codex` or `--provider codex` when you want the ChatGPT Codex backend.
`dev:openai`, `dev:ollama`, `dev:atomic-chat`, and `dev:codex` run `doctor:runtime` first and only launch the app if checks pass.
For `dev:ollama`, make sure Ollama is running locally before launch.
For `dev:atomic-chat`, make sure Atomic Chat is running with a model loaded before launch.

116
docs/non-technical-setup.md Normal file
View File

@@ -0,0 +1,116 @@
# OpenClaude for Non-Technical Users
This guide is for people who want the easiest setup path.
You do not need to build from source. You do not need Bun. You do not need to understand the full codebase.
If you can copy and paste commands into a terminal, you can set this up.
## What OpenClaude Does
OpenClaude lets you use an AI coding assistant with different model providers such as:
- OpenAI
- DeepSeek
- Gemini
- Ollama
- Codex
For most first-time users, OpenAI is the easiest option.
## Before You Start
You need:
1. Node.js 20 or newer installed
2. A terminal window
3. An API key from your provider, unless you are using a local model like Ollama
## Fastest Path
1. Install OpenClaude with npm
2. Set 3 environment variables
3. Run `openclaude`
## Choose Your Operating System
- Windows: [Windows Quick Start](quick-start-windows.md)
- macOS / Linux: [macOS / Linux Quick Start](quick-start-mac-linux.md)
## Which Provider Should You Choose?
### OpenAI
Choose this if:
- you want the easiest setup
- you already have an OpenAI API key
### Ollama
Choose this if:
- you want to run models locally
- you do not want to depend on a cloud API for testing
### Codex
Choose this if:
- you already use the Codex CLI
- you already have Codex or ChatGPT auth configured
## What Success Looks Like
After you run `openclaude`, the CLI should start and wait for your prompt.
At that point, you can ask it to:
- explain code
- edit files
- run commands
- review changes
## Common Problems
### `openclaude` command not found
Cause:
- npm installed the package, but your terminal session has not picked up the updated PATH yet
Fix:
1. Close the terminal
2. Open a new terminal
3. Run `openclaude` again
### Invalid API key
Cause:
- the key is wrong, expired, or copied incorrectly
Fix:
1. Get a fresh key from your provider
2. Paste it again carefully
3. Re-run `openclaude`
### Ollama not working
Cause:
- Ollama is not installed or not running
Fix:
1. Install Ollama from `https://ollama.com/download`
2. Start Ollama
3. Try again
## Want More Control?
If you want source builds, advanced provider profiles, diagnostics, or Bun-based workflows, use:
- [Advanced Setup](advanced-setup.md)

143
docs/quick-start-mac-linux.md Normal file
View File

@@ -0,0 +1,143 @@
# OpenClaude Quick Start for macOS and Linux
This guide uses a standard shell such as Terminal, iTerm, bash, or zsh.
## 1. Install Node.js
Install Node.js 20 or newer from:
- `https://nodejs.org/`
Then check it:
```bash
node --version
npm --version
```
## 2. Install OpenClaude
```bash
npm install -g @gitlawb/openclaude
```
## 3. Pick One Provider
### Option A: OpenAI
Replace `sk-your-key-here` with your real key.
```bash
export CLAUDE_CODE_USE_OPENAI=1
export OPENAI_API_KEY=sk-your-key-here
export OPENAI_MODEL=gpt-4o
openclaude
```
### Option B: DeepSeek
```bash
export CLAUDE_CODE_USE_OPENAI=1
export OPENAI_API_KEY=sk-your-key-here
export OPENAI_BASE_URL=https://api.deepseek.com/v1
export OPENAI_MODEL=deepseek-chat
openclaude
```
### Option C: Ollama
Install Ollama first from:
- `https://ollama.com/download`
Then run:
```bash
ollama pull llama3.1:8b
export CLAUDE_CODE_USE_OPENAI=1
export OPENAI_BASE_URL=http://localhost:11434/v1
export OPENAI_MODEL=llama3.1:8b
openclaude
```
No API key is needed for Ollama local models.
### Option D: LM Studio
Install LM Studio first from:
- `https://lmstudio.ai/`
Then in LM Studio:
1. Download a model (e.g., Llama 3.1 8B, Mistral 7B)
2. Go to the "Developer" tab
3. Select your model and enable the server via the toggle
Then run:
```bash
export CLAUDE_CODE_USE_OPENAI=1
export OPENAI_BASE_URL=http://localhost:1234/v1
export OPENAI_MODEL=your-model-name
# export OPENAI_API_KEY=lmstudio # optional: some users need a dummy key
openclaude
```
Replace `your-model-name` with the model name shown in LM Studio.
No API key is needed for LM Studio local models (but uncomment the `OPENAI_API_KEY` line if you hit auth errors).
## 4. If `openclaude` Is Not Found
Close the terminal, open a new one, and try again:
```bash
openclaude
```
## 5. If Your Provider Fails
Check the basics:
### For OpenAI or DeepSeek
- make sure the key is real
- make sure you copied it fully
### For Ollama
- make sure Ollama is installed
- make sure Ollama is running
- make sure the model was pulled successfully
### For LM Studio
- make sure LM Studio is installed
- make sure LM Studio is running
- make sure the server is enabled (toggle on in the "Developer" tab)
- make sure a model is loaded in LM Studio
- make sure the model name matches what you set in `OPENAI_MODEL`
## 6. Updating OpenClaude
```bash
npm install -g @gitlawb/openclaude@latest
```
## 7. Uninstalling OpenClaude
```bash
npm uninstall -g @gitlawb/openclaude
```
## Need Advanced Setup?
Use:
- [Advanced Setup](advanced-setup.md)

143
docs/quick-start-windows.md Normal file
View File

@@ -0,0 +1,143 @@
# OpenClaude Quick Start for Windows
This guide uses Windows PowerShell.
## 1. Install Node.js
Install Node.js 20 or newer from:
- `https://nodejs.org/`
Then open PowerShell and check it:
```powershell
node --version
npm --version
```
## 2. Install OpenClaude
```powershell
npm install -g @gitlawb/openclaude
```
## 3. Pick One Provider
### Option A: OpenAI
Replace `sk-your-key-here` with your real key.
```powershell
$env:CLAUDE_CODE_USE_OPENAI="1"
$env:OPENAI_API_KEY="sk-your-key-here"
$env:OPENAI_MODEL="gpt-4o"
openclaude
```
### Option B: DeepSeek
```powershell
$env:CLAUDE_CODE_USE_OPENAI="1"
$env:OPENAI_API_KEY="sk-your-key-here"
$env:OPENAI_BASE_URL="https://api.deepseek.com/v1"
$env:OPENAI_MODEL="deepseek-chat"
openclaude
```
### Option C: Ollama
Install Ollama first from:
- `https://ollama.com/download/windows`
Then run:
```powershell
ollama pull llama3.1:8b
$env:CLAUDE_CODE_USE_OPENAI="1"
$env:OPENAI_BASE_URL="http://localhost:11434/v1"
$env:OPENAI_MODEL="llama3.1:8b"
openclaude
```
No API key is needed for Ollama local models.
### Option D: LM Studio
Install LM Studio first from:
- `https://lmstudio.ai/`
Then in LM Studio:
1. Download a model (e.g., Llama 3.1 8B, Mistral 7B)
2. Go to the "Developer" tab
3. Select your model and enable the server via the toggle
Then run:
```powershell
$env:CLAUDE_CODE_USE_OPENAI="1"
$env:OPENAI_BASE_URL="http://localhost:1234/v1"
$env:OPENAI_MODEL="your-model-name"
# $env:OPENAI_API_KEY="lmstudio" # optional: some users need a dummy key
openclaude
```
Replace `your-model-name` with the model name shown in LM Studio.
No API key is needed for LM Studio local models (but uncomment the `OPENAI_API_KEY` line if you hit auth errors).
## 4. If `openclaude` Is Not Found
Close PowerShell, open a new one, and try again:
```powershell
openclaude
```
## 5. If Your Provider Fails
Check the basics:
### For OpenAI or DeepSeek
- make sure the key is real
- make sure you copied it fully
### For Ollama
- make sure Ollama is installed
- make sure Ollama is running
- make sure the model was pulled successfully
### For LM Studio
- make sure LM Studio is installed
- make sure LM Studio is running
- make sure the server is enabled (toggle on in the "Developer" tab)
- make sure a model is loaded in LM Studio
- make sure the model name matches what you set in `OPENAI_MODEL`
## 6. Updating OpenClaude
```powershell
npm install -g @gitlawb/openclaude@latest
```
## 7. Uninstalling OpenClaude
```powershell
npm uninstall -g @gitlawb/openclaude
```
## Need Advanced Setup?
Use:
- [Advanced Setup](advanced-setup.md)

View File

@@ -49,6 +49,18 @@ def normalize_ollama_model(model_name: str) -> str:
return model_name
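# Pull raw base64 payloads out of Anthropic image blocks; URL-sourced images
# are not extracted and fall back to a "[image]" text placeholder.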
def _extract_ollama_image_data(block: dict) -> str | None:
source = block.get("source")
if not isinstance(source, dict):
return None
if source.get("type") != "base64":
return None
data = source.get("data")
if isinstance(data, str) and data:
return data
return None
def anthropic_to_ollama_messages(messages: list[dict]) -> list[dict]:
ollama_messages = []
for msg in messages:
@@ -58,15 +70,23 @@ def anthropic_to_ollama_messages(messages: list[dict]) -> list[dict]:
ollama_messages.append({"role": role, "content": content})
elif isinstance(content, list):
text_parts = []
image_parts = []
for block in content:
if isinstance(block, dict):
if block.get("type") == "text":
text_parts.append(block.get("text", ""))
elif block.get("type") == "image":
text_parts.append("[image]")
image_data = _extract_ollama_image_data(block)
if image_data:
image_parts.append(image_data)
else:
text_parts.append("[image]")
elif isinstance(block, str):
text_parts.append(block)
ollama_messages.append({"role": role, "content": "\n".join(text_parts)})
ollama_message = {"role": role, "content": "\n".join(text_parts)}
if image_parts:
ollama_message["images"] = image_parts
ollama_messages.append(ollama_message)
return ollama_messages
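# Example (hypothetical values): a user turn with one text block and one
# base64 image block becomes
#   {"role": "user", "content": "describe this", "images": ["<base64 data>"]}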

package.json
View File

@@ -1,6 +1,6 @@
{
"name": "@gitlawb/openclaude",
"version": "0.1.6",
"version": "0.1.7",
"description": "Claude Code opened to any LLM — OpenAI, Gemini, DeepSeek, Ollama, and 200+ models",
"type": "module",
"bin": {
@@ -21,6 +21,7 @@
"dev:gemini": "bun run scripts/provider-launch.ts gemini",
"dev:ollama": "bun run scripts/provider-launch.ts ollama",
"dev:ollama:fast": "bun run scripts/provider-launch.ts ollama --fast --bare",
"dev:atomic-chat": "bun run scripts/provider-launch.ts atomic-chat",
"profile:init": "bun run scripts/provider-bootstrap.ts",
"profile:recommend": "bun run scripts/provider-recommend.ts",
"profile:auto": "bun run scripts/provider-recommend.ts --apply",
@@ -30,7 +31,7 @@
"dev:fast": "bun run profile:fast && bun run dev:ollama:fast",
"dev:code": "bun run profile:code && bun run dev:profile",
"start": "node dist/cli.mjs",
"test:provider-recommendation": "node --test --experimental-strip-types src/utils/providerRecommendation.test.ts src/utils/providerProfile.test.ts",
"test:provider-recommendation": "bun test src/utils/providerRecommendation.test.ts src/utils/providerProfile.test.ts",
"typecheck": "tsc --noEmit",
"smoke": "bun run build && node dist/cli.mjs --version",
"test:provider": "bun test src/services/api/*.test.ts src/utils/context.test.ts",
@@ -42,82 +43,83 @@
"prepack": "npm run build"
},
"dependencies": {
"@alcalzone/ansi-tokenize": "^0.3.0",
"@anthropic-ai/bedrock-sdk": "^0.26.0",
"@anthropic-ai/foundry-sdk": "^0.2.0",
"@anthropic-ai/sandbox-runtime": "^0.0.46",
"@anthropic-ai/sdk": "^0.81.0",
"@anthropic-ai/vertex-sdk": "^0.14.0",
"@commander-js/extra-typings": "^12.0.0",
"@growthbook/growthbook": "^1.3.0",
"@modelcontextprotocol/sdk": "^1.12.0",
"@opentelemetry/api": "^1.9.1",
"@opentelemetry/api-logs": "^0.214.0",
"@opentelemetry/core": "^2.6.1",
"@opentelemetry/exporter-logs-otlp-http": "^0.214.0",
"@opentelemetry/exporter-trace-otlp-grpc": "^0.57.0",
"@opentelemetry/resources": "^2.6.1",
"@opentelemetry/sdk-logs": "^0.214.0",
"@opentelemetry/sdk-metrics": "^2.6.1",
"@opentelemetry/sdk-trace-base": "^2.6.1",
"@opentelemetry/sdk-trace-node": "^2.6.1",
"@opentelemetry/semantic-conventions": "^1.40.0",
"ajv": "^8.17.0",
"auto-bind": "^5.0.1",
"axios": "^1.14.0",
"bidi-js": "^1.0.3",
"chalk": "^5.4.0",
"chokidar": "^4.0.0",
"cli-boxes": "^3.0.0",
"cli-highlight": "^2.1.0",
"code-excerpt": "^4.0.0",
"commander": "^12.0.0",
"diff": "^7.0.0",
"emoji-regex": "^10.4.0",
"env-paths": "^3.0.0",
"execa": "^9.5.0",
"fflate": "^0.8.2",
"figures": "^6.1.0",
"fuse.js": "^7.1.0",
"get-east-asian-width": "^1.3.0",
"google-auth-library": "^9.15.0",
"https-proxy-agent": "^7.0.6",
"ignore": "^7.0.0",
"indent-string": "^5.0.0",
"jsonc-parser": "^3.3.1",
"lodash-es": "^4.17.21",
"lru-cache": "^11.0.0",
"marked": "^15.0.0",
"p-map": "^7.0.3",
"picomatch": "^4.0.0",
"proper-lockfile": "^4.1.2",
"qrcode": "^1.5.4",
"react": "^19.2.4",
"react-compiler-runtime": "^1.0.0",
"react-reconciler": "^0.33.0",
"semver": "^7.6.3",
"shell-quote": "^1.8.2",
"signal-exit": "^4.1.0",
"stack-utils": "^2.0.6",
"strip-ansi": "^7.1.0",
"supports-hyperlinks": "^3.1.0",
"tree-kill": "^1.2.2",
"turndown": "^7.2.0",
"type-fest": "^4.30.0",
"undici": "^7.3.0",
"usehooks-ts": "^3.1.1",
"vscode-languageserver-protocol": "^3.17.5",
"wrap-ansi": "^9.0.0",
"ws": "^8.18.0",
"xss": "^1.0.15",
"yaml": "^2.7.0",
"zod": "^3.24.0"
"@alcalzone/ansi-tokenize": "0.3.0",
"@anthropic-ai/bedrock-sdk": "0.26.4",
"@anthropic-ai/foundry-sdk": "0.2.3",
"@anthropic-ai/sandbox-runtime": "0.0.46",
"@anthropic-ai/sdk": "0.81.0",
"@anthropic-ai/vertex-sdk": "0.14.4",
"@commander-js/extra-typings": "12.1.0",
"@growthbook/growthbook": "1.6.5",
"@mendable/firecrawl-js": "4.18.1",
"@modelcontextprotocol/sdk": "1.29.0",
"@opentelemetry/api": "1.9.1",
"@opentelemetry/api-logs": "0.214.0",
"@opentelemetry/core": "2.6.1",
"@opentelemetry/exporter-logs-otlp-http": "0.214.0",
"@opentelemetry/exporter-trace-otlp-grpc": "0.57.2",
"@opentelemetry/resources": "2.6.1",
"@opentelemetry/sdk-logs": "0.214.0",
"@opentelemetry/sdk-metrics": "2.6.1",
"@opentelemetry/sdk-trace-base": "2.6.1",
"@opentelemetry/sdk-trace-node": "2.6.1",
"@opentelemetry/semantic-conventions": "1.40.0",
"ajv": "8.18.0",
"auto-bind": "5.0.1",
"axios": "1.14.0",
"bidi-js": "1.0.3",
"chalk": "5.6.2",
"chokidar": "4.0.3",
"cli-boxes": "3.0.0",
"cli-highlight": "2.1.11",
"code-excerpt": "4.0.0",
"commander": "12.1.0",
"diff": "8.0.3",
"emoji-regex": "10.6.0",
"env-paths": "3.0.0",
"execa": "9.6.1",
"fflate": "0.8.2",
"figures": "6.1.0",
"fuse.js": "7.1.0",
"get-east-asian-width": "1.5.0",
"google-auth-library": "9.15.1",
"https-proxy-agent": "7.0.6",
"ignore": "7.0.5",
"indent-string": "5.0.0",
"jsonc-parser": "3.3.1",
"lodash-es": "4.18.0",
"lru-cache": "11.2.7",
"marked": "15.0.12",
"p-map": "7.0.4",
"picomatch": "4.0.4",
"proper-lockfile": "4.1.2",
"qrcode": "1.5.4",
"react": "19.2.4",
"react-compiler-runtime": "1.0.0",
"react-reconciler": "0.33.0",
"semver": "7.7.4",
"shell-quote": "1.8.3",
"signal-exit": "4.1.0",
"stack-utils": "2.0.6",
"strip-ansi": "7.2.0",
"supports-hyperlinks": "3.2.0",
"tree-kill": "1.2.2",
"turndown": "7.2.2",
"type-fest": "4.41.0",
"undici": "7.24.6",
"usehooks-ts": "3.1.1",
"vscode-languageserver-protocol": "3.17.5",
"wrap-ansi": "9.0.2",
"ws": "8.20.0",
"xss": "1.0.15",
"yaml": "2.8.3",
"zod": "3.25.76"
},
"devDependencies": {
"@types/bun": "^1.2.0",
"@types/node": "^25.5.0",
"@types/react": "^19.2.14",
"typescript": "^5.7.0"
"@types/bun": "1.3.11",
"@types/node": "25.5.0",
"@types/react": "19.2.14",
"typescript": "5.9.3"
},
"engines": {
"node": ">=20.0.0"

View File

@@ -9,6 +9,7 @@
*/
import { readFileSync } from 'fs'
import { noTelemetryPlugin } from './no-telemetry-plugin'
const pkg = JSON.parse(readFileSync('./package.json', 'utf-8'))
const version = pkg.version
@@ -64,6 +65,7 @@ const result = await Bun.build({
'MACRO.NATIVE_PACKAGE_URL': 'undefined',
},
plugins: [
noTelemetryPlugin,
{
name: 'bun-bundle-shim',
setup(build) {

View File

@@ -0,0 +1,225 @@
/**
* No-Telemetry Build Plugin for OpenClaude
*
* Replaces all analytics, telemetry, and phone-home modules with no-op stubs
* at compile time. Zero runtime cost, zero network calls to Anthropic.
*
* This file is NOT tracked upstream — merge conflicts are impossible.
* Only build.ts needs a one-line import + one-line array entry.
*
* Kills:
* - GrowthBook remote feature flags (api.anthropic.com)
* - Datadog event intake
* - 1P event logging (api.anthropic.com/api/event_logging/batch)
* - BigQuery metrics exporter (api.anthropic.com/api/claude_code/metrics)
* - Perfetto / OpenTelemetry session tracing
* - Auto-updater (storage.googleapis.com, npm registry)
* - Plugin fetch telemetry
* - Transcript / feedback sharing
*/
import type { BunPlugin } from 'bun'
// Module path (relative to src/, without extension) → stub source
const stubs: Record<string, string> = {
// ─── Analytics core ─────────────────────────────────────────────
'services/analytics/index': `
export function stripProtoFields(metadata) { return metadata; }
export function attachAnalyticsSink() {}
export function logEvent() {}
export async function logEventAsync() {}
export function _resetForTesting() {}
`,
'services/analytics/growthbook': `
const noop = () => {};
export function onGrowthBookRefresh() { return noop; }
export function hasGrowthBookEnvOverride() { return false; }
export function getAllGrowthBookFeatures() { return {}; }
export function getGrowthBookConfigOverrides() { return {}; }
export function setGrowthBookConfigOverride() {}
export function clearGrowthBookConfigOverrides() {}
export function getApiBaseUrlHost() { return undefined; }
export const initializeGrowthBook = async () => null;
export async function getFeatureValue_DEPRECATED(feature, defaultValue) { return defaultValue; }
export function getFeatureValue_CACHED_MAY_BE_STALE(feature, defaultValue) { return defaultValue; }
export function getFeatureValue_CACHED_WITH_REFRESH(feature, defaultValue) { return defaultValue; }
export function checkStatsigFeatureGate_CACHED_MAY_BE_STALE() { return false; }
export async function checkSecurityRestrictionGate() { return false; }
export async function checkGate_CACHED_OR_BLOCKING() { return false; }
export function refreshGrowthBookAfterAuthChange() {}
export function resetGrowthBook() {}
export async function refreshGrowthBookFeatures() {}
export function setupPeriodicGrowthBookRefresh() {}
export function stopPeriodicGrowthBookRefresh() {}
export async function getDynamicConfig_BLOCKS_ON_INIT(configName, defaultValue) { return defaultValue; }
export function getDynamicConfig_CACHED_MAY_BE_STALE(configName, defaultValue) { return defaultValue; }
`,
'services/analytics/sink': `
export function initializeAnalyticsGates() {}
export function initializeAnalyticsSink() {}
`,
'services/analytics/config': `
export function isAnalyticsDisabled() { return true; }
export function isFeedbackSurveyDisabled() { return true; }
`,
'services/analytics/datadog': `
export const initializeDatadog = async () => false;
export async function shutdownDatadog() {}
export async function trackDatadogEvent() {}
`,
'services/analytics/firstPartyEventLogger': `
export function getEventSamplingConfig() { return {}; }
export function shouldSampleEvent() { return null; }
export async function shutdown1PEventLogging() {}
export function is1PEventLoggingEnabled() { return false; }
export function logEventTo1P() {}
export function logGrowthBookExperimentTo1P() {}
export function initialize1PEventLogging() {}
export async function reinitialize1PEventLoggingIfConfigChanged() {}
`,
'services/analytics/firstPartyEventLoggingExporter': `
export class FirstPartyEventLoggingExporter {
constructor() {}
async export(logs, resultCallback) { resultCallback({ code: 0 }); }
async getQueuedEventCount() { return 0; }
async shutdown() {}
async forceFlush() {}
}
`,
'services/analytics/metadata': `
export function sanitizeToolNameForAnalytics(toolName) { return toolName; }
export function isToolDetailsLoggingEnabled() { return false; }
export function isAnalyticsToolDetailsLoggingEnabled() { return false; }
export function mcpToolDetailsForAnalytics() { return {}; }
export function extractMcpToolDetails() { return undefined; }
export function extractSkillName() { return undefined; }
export function extractToolInputForTelemetry() { return undefined; }
export function getFileExtensionForAnalytics() { return undefined; }
export function getFileExtensionsFromBashCommand() { return undefined; }
export async function getEventMetadata() { return {}; }
export function to1PEventFormat() { return {}; }
`,
// ─── Telemetry subsystems ───────────────────────────────────────
'utils/telemetry/bigqueryExporter': `
export class BigQueryMetricsExporter {
constructor() {}
async export(metrics, resultCallback) { resultCallback({ code: 0 }); }
async shutdown() {}
async forceFlush() {}
selectAggregationTemporality() { return 0; }
}
`,
'utils/telemetry/perfettoTracing': `
export function initializePerfettoTracing() {}
export function isPerfettoTracingEnabled() { return false; }
export function registerAgent() {}
export function unregisterAgent() {}
export function startLLMRequestPerfettoSpan() { return ''; }
export function endLLMRequestPerfettoSpan() {}
export function startToolPerfettoSpan() { return ''; }
export function endToolPerfettoSpan() {}
export function startUserInputPerfettoSpan() { return ''; }
export function endUserInputPerfettoSpan() {}
export function emitPerfettoInstant() {}
export function emitPerfettoCounter() {}
export function startInteractionPerfettoSpan() { return ''; }
export function endInteractionPerfettoSpan() {}
export function getPerfettoEvents() { return []; }
export function resetPerfettoTracer() {}
export async function triggerPeriodicWriteForTesting() {}
export function evictStaleSpansForTesting() {}
export const MAX_EVENTS_FOR_TESTING = 0;
export function evictOldestEventsForTesting() {}
`,
'utils/telemetry/sessionTracing': `
const noopSpan = {
end() {}, setAttribute() {}, setStatus() {},
recordException() {}, addEvent() {}, isRecording() { return false; },
};
export function isBetaTracingEnabled() { return false; }
export function isEnhancedTelemetryEnabled() { return false; }
export function startInteractionSpan() { return noopSpan; }
export function endInteractionSpan() {}
export function startLLMRequestSpan() { return noopSpan; }
export function endLLMRequestSpan() {}
export function startToolSpan() { return noopSpan; }
export function startToolBlockedOnUserSpan() { return noopSpan; }
export function endToolBlockedOnUserSpan() {}
export function startToolExecutionSpan() { return noopSpan; }
export function endToolExecutionSpan() {}
export function endToolSpan() {}
export function addToolContentEvent() {}
export function getCurrentSpan() { return null; }
export async function executeInSpan(spanName, fn) { return fn(noopSpan); }
export function startHookSpan() { return noopSpan; }
export function endHookSpan() {}
`,
// ─── Auto-updater (phones home to GCS + npm) ──────────────────
'utils/autoUpdater': `
export async function assertMinVersion() {}
export async function getMaxVersion() { return undefined; }
export async function getMaxVersionMessage() { return undefined; }
export function shouldSkipVersion() { return true; }
export function getLockFilePath() { return '/tmp/openclaude-update.lock'; }
export async function checkGlobalInstallPermissions() { return { hasPermissions: false, npmPrefix: null }; }
export async function getLatestVersion() { return null; }
export async function getNpmDistTags() { return { latest: null, stable: null }; }
export async function getLatestVersionFromGcs() { return null; }
export async function getGcsDistTags() { return { latest: null, stable: null }; }
export async function getVersionHistory() { return []; }
export async function installGlobalPackage() { return 'success'; }
`,
// ─── Plugin fetch telemetry (not the marketplace itself) ───────
'utils/plugins/fetchTelemetry': `
export function logPluginFetch() {}
export function classifyFetchError() { return 'disabled'; }
`,
// ─── Transcript / feedback sharing ─────────────────────────────
'components/FeedbackSurvey/submitTranscriptShare': `
export async function submitTranscriptShare() { return { success: false }; }
`,
}
function escapeForResolvedPathRegex(modulePath: string): string {
return modulePath
.replace(/[|\\{}()[\]^$+*?.]/g, '\\$&')
.replace(/\//g, '[/\\\\]')
}
export const noTelemetryPlugin: BunPlugin = {
name: 'no-telemetry',
setup(build) {
for (const [modulePath, contents] of Object.entries(stubs)) {
// Build regex that matches the resolved file path on any OS
// e.g. "services/analytics/growthbook" → /services[/\\]analytics[/\\]growthbook\.(ts|js)$/
const escaped = escapeForResolvedPathRegex(modulePath)
const filter = new RegExp(`${escaped}\\.(ts|js)$`)
build.onLoad({ filter }, () => ({
contents,
loader: 'js',
}))
}
console.log(` 🔇 no-telemetry: stubbed ${Object.keys(stubs).length} modules`)
},
}
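
The build.ts hunk above is the entire integration surface. As a minimal sketch of how the plugin slots into Bun.build (the entrypoint and outdir paths below are placeholders, not the repo's actual build configuration):

```ts
// Sketch only: Bun.build accepts BunPlugin objects directly; paths are
// placeholders, not the repo's real values.
import { noTelemetryPlugin } from './no-telemetry-plugin'

const result = await Bun.build({
  entrypoints: ['./src/entrypoints/cli.tsx'], // placeholder path
  outdir: './dist',
  plugins: [noTelemetryPlugin],
})
if (!result.success) {
  for (const log of result.logs) console.error(log)
  process.exit(1)
}
```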

View File

@@ -1,6 +1,4 @@
// @ts-nocheck
import { writeFileSync } from 'node:fs'
import { resolve } from 'node:path'
import {
resolveCodexApiCredentials,
} from '../src/services/api/providerConfig.js'
@@ -10,18 +8,23 @@ import {
recommendOllamaModel,
} from '../src/utils/providerRecommendation.ts'
import {
buildAtomicChatProfileEnv,
buildCodexProfileEnv,
buildGeminiProfileEnv,
buildOllamaProfileEnv,
buildOpenAIProfileEnv,
createProfileFile,
saveProfileFile,
selectAutoProfile,
type ProfileFile,
type ProviderProfile,
} from '../src/utils/providerProfile.ts'
import {
getAtomicChatChatBaseUrl,
getOllamaChatBaseUrl,
hasLocalAtomicChat,
hasLocalOllama,
listAtomicChatModels,
listOllamaModels,
} from './provider-discovery.ts'
@@ -34,7 +37,7 @@ function parseArg(name: string): string | null {
function parseProviderArg(): ProviderProfile | 'auto' {
const p = parseArg('--provider')?.toLowerCase()
if (p === 'openai' || p === 'ollama' || p === 'codex' || p === 'gemini') return p
if (p === 'openai' || p === 'ollama' || p === 'codex' || p === 'gemini' || p === 'atomic-chat') return p
return 'auto'
}
@@ -102,6 +105,21 @@ async function main(): Promise<void> {
getOllamaChatBaseUrl,
},
)
} else if (selected === 'atomic-chat') {
const model = argModel || (await listAtomicChatModels(argBaseUrl || undefined))[0]
if (!model) {
if (!(await hasLocalAtomicChat(argBaseUrl || undefined))) {
console.error('Atomic Chat is not running (could not connect to 127.0.0.1:1337).\n Download from https://atomic.chat/ and launch the application.')
} else {
console.error('Atomic Chat is running but no model is loaded. Open Atomic Chat and download or start a model first.')
}
process.exit(1)
}
env = buildAtomicChatProfileEnv(model, {
baseUrl: argBaseUrl,
getAtomicChatChatBaseUrl,
})
} else if (selected === 'codex') {
const builtEnv = buildCodexProfileEnv({
model: argModel,
@@ -147,8 +165,7 @@ async function main(): Promise<void> {
const profile = createProfileFile(selected, env)
const outputPath = resolve(process.cwd(), '.openclaude-profile.json')
writeFileSync(outputPath, JSON.stringify(profile, null, 2), { encoding: 'utf8', mode: 0o600 })
const outputPath = saveProfileFile(profile)
console.log(`Saved profile: ${selected}`)
console.log(`Goal: ${goal}`)
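
Both bootstrap and recommend scripts now call saveProfileFile instead of writing the profile inline. A hedged reconstruction of its shape, inferred from the inline code it replaces here and in provider-recommend.ts (same path, same 0o600 mode, and the `const outputPath = saveProfileFile(profile)` call site above); the real helper lives in src/utils/providerProfile.ts and may differ:

```ts
// Inferred sketch, not the repo's actual implementation.
import { writeFileSync } from 'node:fs'
import { resolve } from 'node:path'
import type { ProfileFile } from '../src/utils/providerProfile.ts'

export function saveProfileFile(profile: ProfileFile): string {
  const outputPath = resolve(process.cwd(), '.openclaude-profile.json')
  writeFileSync(outputPath, JSON.stringify(profile, null, 2), {
    encoding: 'utf8',
    mode: 0o600, // owner read/write only — the profile can embed API keys
  })
  return outputPath
}
```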

View File

@@ -1,129 +1,13 @@
import type { OllamaModelDescriptor } from '../src/utils/providerRecommendation.ts'
export const DEFAULT_OLLAMA_BASE_URL = 'http://localhost:11434'
function withTimeoutSignal(timeoutMs: number): {
signal: AbortSignal
clear: () => void
} {
const controller = new AbortController()
const timeout = setTimeout(() => controller.abort(), timeoutMs)
return {
signal: controller.signal,
clear: () => clearTimeout(timeout),
}
}
function trimTrailingSlash(value: string): string {
return value.replace(/\/+$/, '')
}
export function getOllamaApiBaseUrl(baseUrl?: string): string {
const parsed = new URL(
baseUrl || process.env.OLLAMA_BASE_URL || DEFAULT_OLLAMA_BASE_URL,
)
const pathname = trimTrailingSlash(parsed.pathname)
parsed.pathname = pathname.endsWith('/v1')
? pathname.slice(0, -3) || '/'
: pathname || '/'
parsed.search = ''
parsed.hash = ''
return trimTrailingSlash(parsed.toString())
}
export function getOllamaChatBaseUrl(baseUrl?: string): string {
return `${getOllamaApiBaseUrl(baseUrl)}/v1`
}
export async function hasLocalOllama(baseUrl?: string): Promise<boolean> {
const { signal, clear } = withTimeoutSignal(1200)
try {
const response = await fetch(`${getOllamaApiBaseUrl(baseUrl)}/api/tags`, {
method: 'GET',
signal,
})
return response.ok
} catch {
return false
} finally {
clear()
}
}
export async function listOllamaModels(
baseUrl?: string,
): Promise<OllamaModelDescriptor[]> {
const { signal, clear } = withTimeoutSignal(5000)
try {
const response = await fetch(`${getOllamaApiBaseUrl(baseUrl)}/api/tags`, {
method: 'GET',
signal,
})
if (!response.ok) {
return []
}
const data = await response.json() as {
models?: Array<{
name?: string
size?: number
details?: {
family?: string
families?: string[]
parameter_size?: string
quantization_level?: string
}
}>
}
return (data.models ?? [])
.filter(model => Boolean(model.name))
.map(model => ({
name: model.name!,
sizeBytes: typeof model.size === 'number' ? model.size : null,
family: model.details?.family ?? null,
families: model.details?.families ?? [],
parameterSize: model.details?.parameter_size ?? null,
quantizationLevel: model.details?.quantization_level ?? null,
}))
} catch {
return []
} finally {
clear()
}
}
export async function benchmarkOllamaModel(
modelName: string,
baseUrl?: string,
): Promise<number | null> {
const start = Date.now()
const { signal, clear } = withTimeoutSignal(20000)
try {
const response = await fetch(`${getOllamaApiBaseUrl(baseUrl)}/api/chat`, {
method: 'POST',
headers: {
'Content-Type': 'application/json',
},
signal,
body: JSON.stringify({
model: modelName,
stream: false,
messages: [{ role: 'user', content: 'Reply with OK.' }],
options: {
temperature: 0,
num_predict: 8,
},
}),
})
if (!response.ok) {
return null
}
await response.json()
return Date.now() - start
} catch {
return null
} finally {
clear()
}
}
export {
benchmarkOllamaModel,
DEFAULT_ATOMIC_CHAT_BASE_URL,
DEFAULT_OLLAMA_BASE_URL,
getAtomicChatApiBaseUrl,
getAtomicChatChatBaseUrl,
getOllamaApiBaseUrl,
getOllamaChatBaseUrl,
hasLocalAtomicChat,
hasLocalOllama,
listAtomicChatModels,
listOllamaModels,
} from '../src/utils/providerDiscovery.ts'

View File

@@ -1,7 +1,5 @@
// @ts-nocheck
import { spawn } from 'node:child_process'
import { existsSync, readFileSync } from 'node:fs'
import { resolve } from 'node:path'
import {
resolveCodexApiCredentials,
} from '../src/services/api/providerConfig.js'
@@ -11,13 +9,17 @@ import {
} from '../src/utils/providerRecommendation.ts'
import {
buildLaunchEnv,
loadProfileFile,
selectAutoProfile,
type ProfileFile,
type ProviderProfile,
} from '../src/utils/providerProfile.ts'
import {
getAtomicChatChatBaseUrl,
getOllamaChatBaseUrl,
hasLocalAtomicChat,
hasLocalOllama,
listAtomicChatModels,
listOllamaModels,
} from './provider-discovery.ts'
@@ -48,7 +50,7 @@ function parseLaunchOptions(argv: string[]): LaunchOptions {
continue
}
if ((lower === 'auto' || lower === 'openai' || lower === 'ollama' || lower === 'codex' || lower === 'gemini') && requestedProfile === 'auto') {
if ((lower === 'auto' || lower === 'openai' || lower === 'ollama' || lower === 'codex' || lower === 'gemini' || lower === 'atomic-chat') && requestedProfile === 'auto') {
requestedProfile = lower as ProviderProfile | 'auto'
continue
}
@@ -75,17 +77,7 @@ function parseLaunchOptions(argv: string[]): LaunchOptions {
}
function loadPersistedProfile(): ProfileFile | null {
const path = resolve(process.cwd(), '.openclaude-profile.json')
if (!existsSync(path)) return null
try {
const parsed = JSON.parse(readFileSync(path, 'utf8')) as ProfileFile
if (parsed.profile === 'openai' || parsed.profile === 'ollama' || parsed.profile === 'codex' || parsed.profile === 'gemini') {
return parsed
}
return null
} catch {
return null
}
return loadProfileFile()
}
async function resolveOllamaDefaultModel(
@@ -96,6 +88,11 @@ async function resolveOllamaDefaultModel(
return recommended?.name ?? null
}
async function resolveAtomicChatDefaultModel(): Promise<string | null> {
const models = await listAtomicChatModels()
return models[0] ?? null
}
function runCommand(command: string, env: NodeJS.ProcessEnv): Promise<number> {
return runProcess(command, [], env)
}
@@ -127,15 +124,15 @@ function printSummary(profile: ProviderProfile, env: NodeJS.ProcessEnv): void {
console.log(`Launching profile: ${profile}`)
if (profile === 'gemini') {
console.log(`GEMINI_MODEL=${env.GEMINI_MODEL}`)
console.log(`GEMINI_API_KEY_SET=${Boolean(env.GEMINI_API_KEY)}`)
} else if (profile === 'codex') {
console.log(`OPENAI_BASE_URL=${env.OPENAI_BASE_URL}`)
console.log(`OPENAI_MODEL=${env.OPENAI_MODEL}`)
console.log(`CODEX_API_KEY_SET=${Boolean(resolveCodexApiCredentials(env).apiKey)}`)
} else if (profile === 'atomic-chat') {
console.log(`OPENAI_BASE_URL=${env.OPENAI_BASE_URL}`)
console.log(`OPENAI_MODEL=${env.OPENAI_MODEL}`)
} else {
console.log(`OPENAI_BASE_URL=${env.OPENAI_BASE_URL}`)
console.log(`OPENAI_MODEL=${env.OPENAI_MODEL}`)
console.log(`OPENAI_API_KEY_SET=${Boolean(env.OPENAI_API_KEY)}`)
}
}
@@ -143,7 +140,7 @@ async function main(): Promise<void> {
const options = parseLaunchOptions(process.argv.slice(2))
const requestedProfile = options.requestedProfile
if (!requestedProfile) {
console.error('Usage: bun run scripts/provider-launch.ts [openai|ollama|codex|gemini|auto] [--fast] [--goal <latency|balanced|coding>] [-- <cli args>]')
console.error('Usage: bun run scripts/provider-launch.ts [openai|ollama|codex|gemini|atomic-chat|auto] [--fast] [--goal <latency|balanced|coding>] [-- <cli args>]')
process.exit(1)
}
@@ -175,12 +172,30 @@ async function main(): Promise<void> {
}
}
let resolvedAtomicChatModel: string | null = null
if (
profile === 'atomic-chat' &&
(persisted?.profile !== 'atomic-chat' || !persisted?.env?.OPENAI_MODEL)
) {
if (!(await hasLocalAtomicChat())) {
console.error('Atomic Chat is not running (could not connect to 127.0.0.1:1337).\n Download from https://atomic.chat/ and launch the application.')
process.exit(1)
}
resolvedAtomicChatModel = await resolveAtomicChatDefaultModel()
if (!resolvedAtomicChatModel) {
console.error('Atomic Chat is running but no model is loaded. Open Atomic Chat and download or start a model first.')
process.exit(1)
}
}
const env = await buildLaunchEnv({
profile,
persisted,
goal: options.goal,
getOllamaChatBaseUrl,
resolveOllamaDefaultModel: async () => resolvedOllamaModel || 'llama3.1:8b',
getAtomicChatChatBaseUrl,
resolveAtomicChatDefaultModel: async () => resolvedAtomicChatModel,
})
if (options.fast) {
applyFastFlags(env)

View File

@@ -1,6 +1,4 @@
// @ts-nocheck
import { writeFileSync } from 'node:fs'
import { resolve } from 'node:path'
import {
applyBenchmarkLatency,
@@ -16,6 +14,7 @@ import {
buildOllamaProfileEnv,
buildOpenAIProfileEnv,
createProfileFile,
saveProfileFile,
sanitizeApiKey,
type ProfileFile,
type ProviderProfile,
@@ -153,11 +152,7 @@ async function maybeApplyProfile(
const profileFile = createProfileFile(profile, env)
writeFileSync(
resolve(process.cwd(), '.openclaude-profile.json'),
JSON.stringify(profileFile, null, 2),
'utf8',
)
saveProfileFile(profileFile)
return true
}

View File

@@ -93,11 +93,15 @@ function isLocalBaseUrl(baseUrl: string): boolean {
}
const GEMINI_DEFAULT_BASE_URL = 'https://generativelanguage.googleapis.com/v1beta/openai'
const GITHUB_MODELS_DEFAULT_BASE = 'https://models.github.ai/inference'
function currentBaseUrl(): string {
if (isTruthy(process.env.CLAUDE_CODE_USE_GEMINI)) {
return process.env.GEMINI_BASE_URL ?? GEMINI_DEFAULT_BASE_URL
}
if (isTruthy(process.env.CLAUDE_CODE_USE_GITHUB)) {
return process.env.OPENAI_BASE_URL ?? GITHUB_MODELS_DEFAULT_BASE
}
return process.env.OPENAI_BASE_URL ?? 'https://api.openai.com/v1'
}
@@ -126,15 +130,47 @@ function checkGeminiEnv(): CheckResult[] {
return results
}
function checkGithubEnv(): CheckResult[] {
const results: CheckResult[] = []
const baseUrl = process.env.OPENAI_BASE_URL ?? GITHUB_MODELS_DEFAULT_BASE
results.push(pass('Provider mode', 'GitHub Models provider enabled.'))
const token = process.env.GITHUB_TOKEN ?? process.env.GH_TOKEN
if (!token?.trim()) {
results.push(fail('GITHUB_TOKEN', 'Missing. Set GITHUB_TOKEN or GH_TOKEN.'))
} else {
results.push(pass('GITHUB_TOKEN', 'Configured.'))
}
if (!process.env.OPENAI_MODEL) {
results.push(
pass(
'OPENAI_MODEL',
'Not set. Default github:copilot → openai/gpt-4.1 at runtime.',
),
)
} else {
results.push(pass('OPENAI_MODEL', process.env.OPENAI_MODEL))
}
results.push(pass('OPENAI_BASE_URL', baseUrl))
return results
}
function checkOpenAIEnv(): CheckResult[] {
const results: CheckResult[] = []
const useGemini = isTruthy(process.env.CLAUDE_CODE_USE_GEMINI)
const useGithub = isTruthy(process.env.CLAUDE_CODE_USE_GITHUB)
const useOpenAI = isTruthy(process.env.CLAUDE_CODE_USE_OPENAI)
if (useGemini) {
return checkGeminiEnv()
}
if (useGithub && !useOpenAI) {
return checkGithubEnv()
}
if (!useOpenAI) {
results.push(pass('Provider mode', 'Anthropic login flow enabled (CLAUDE_CODE_USE_OPENAI is off).'))
return results
@@ -181,12 +217,21 @@ function checkOpenAIEnv(): CheckResult[] {
}
const key = process.env.OPENAI_API_KEY
const githubToken = process.env.GITHUB_TOKEN ?? process.env.GH_TOKEN
if (key === 'SUA_CHAVE') {
results.push(fail('OPENAI_API_KEY', 'Placeholder value detected: SUA_CHAVE.'))
} else if (!key && !isLocalBaseUrl(request.baseUrl)) {
} else if (
!key &&
!isLocalBaseUrl(request.baseUrl) &&
!(useGithub && githubToken?.trim())
) {
results.push(fail('OPENAI_API_KEY', 'Missing key for non-local provider URL.'))
} else if (!key && useGithub && githubToken?.trim()) {
results.push(
pass('OPENAI_API_KEY', 'Not set; GITHUB_TOKEN/GH_TOKEN will be used for GitHub Models.'),
)
} else if (!key) {
results.push(pass('OPENAI_API_KEY', 'Not set (allowed for local providers like Ollama/LM Studio).'))
results.push(pass('OPENAI_API_KEY', 'Not set (allowed for local providers like Atomic Chat/Ollama/LM Studio).'))
} else {
results.push(pass('OPENAI_API_KEY', 'Configured.'))
}
@@ -197,11 +242,19 @@ function checkOpenAIEnv(): CheckResult[] {
async function checkBaseUrlReachability(): Promise<CheckResult> {
const useGemini = isTruthy(process.env.CLAUDE_CODE_USE_GEMINI)
const useOpenAI = isTruthy(process.env.CLAUDE_CODE_USE_OPENAI)
const useGithub = isTruthy(process.env.CLAUDE_CODE_USE_GITHUB)
if (!useGemini && !useOpenAI) {
if (!useGemini && !useOpenAI && !useGithub) {
return pass('Provider reachability', 'Skipped (OpenAI-compatible mode disabled).')
}
if (useGithub) {
return pass(
'Provider reachability',
'Skipped for GitHub Models (inference endpoint differs from OpenAI /models probe).',
)
}
const geminiBaseUrl = 'https://generativelanguage.googleapis.com/v1beta/openai'
const resolvedBaseUrl = useGemini
? (process.env.GEMINI_BASE_URL ?? geminiBaseUrl)
@@ -271,8 +324,21 @@ async function checkBaseUrlReachability(): Promise<CheckResult> {
}
}
function isAtomicChatUrl(baseUrl: string): boolean {
try {
const parsed = new URL(baseUrl)
return parsed.port === '1337' && isLocalBaseUrl(baseUrl)
} catch {
return false
}
}
function checkOllamaProcessorMode(): CheckResult {
if (!isTruthy(process.env.CLAUDE_CODE_USE_OPENAI) || isTruthy(process.env.CLAUDE_CODE_USE_GEMINI)) {
if (
!isTruthy(process.env.CLAUDE_CODE_USE_OPENAI) ||
isTruthy(process.env.CLAUDE_CODE_USE_GEMINI) ||
isTruthy(process.env.CLAUDE_CODE_USE_GITHUB)
) {
return pass('Ollama processor mode', 'Skipped (OpenAI-compatible mode disabled).')
}
@@ -281,6 +347,10 @@ function checkOllamaProcessorMode(): CheckResult {
return pass('Ollama processor mode', 'Skipped (provider URL is not local).')
}
if (isAtomicChatUrl(baseUrl)) {
return pass('Ollama processor mode', 'Skipped (Atomic Chat local provider detected, not Ollama).')
}
const result = spawnSync('ollama', ['ps'], {
cwd: process.cwd(),
encoding: 'utf8',
@@ -289,7 +359,7 @@ function checkOllamaProcessorMode(): CheckResult {
if (result.status !== 0) {
const detail = (result.stderr || result.stdout || 'Unable to run ollama ps').trim()
return fail('Ollama processor mode', detail)
return pass('Ollama processor mode', `Native CLI check failed (${detail}). Assuming valid Docker/remote backend since HTTP ping passed.`)
}
const output = (result.stdout || '').trim()
@@ -319,6 +389,22 @@ function serializeSafeEnvSummary(): Record<string, string | boolean> {
GEMINI_API_KEY_SET: Boolean(process.env.GEMINI_API_KEY ?? process.env.GOOGLE_API_KEY),
}
}
if (
isTruthy(process.env.CLAUDE_CODE_USE_GITHUB) &&
!isTruthy(process.env.CLAUDE_CODE_USE_OPENAI)
) {
return {
CLAUDE_CODE_USE_GITHUB: true,
OPENAI_MODEL:
process.env.OPENAI_MODEL ??
'(unset, default: github:copilot → openai/gpt-4.1)',
OPENAI_BASE_URL:
process.env.OPENAI_BASE_URL ?? GITHUB_MODELS_DEFAULT_BASE,
GITHUB_TOKEN_SET: Boolean(
process.env.GITHUB_TOKEN ?? process.env.GH_TOKEN,
),
}
}
const request = resolveProviderRequest({
model: process.env.OPENAI_MODEL,
baseUrl: process.env.OPENAI_BASE_URL,
@@ -344,6 +430,7 @@ function writeJsonReport(
options: CliOptions,
results: CheckResult[],
): void {
const envSummary = serializeSafeEnvSummary()
const payload = {
timestamp: new Date().toISOString(),
cwd: process.cwd(),
@@ -352,12 +439,24 @@ function writeJsonReport(
passed: results.filter(result => result.ok).length,
failed: results.filter(result => !result.ok).length,
},
env: serializeSafeEnvSummary(),
env: envSummary,
results,
}
if (options.json) {
console.log(JSON.stringify(payload, null, 2))
console.log(
JSON.stringify(
{
timestamp: payload.timestamp,
cwd: payload.cwd,
summary: payload.summary,
env: '[redacted in console JSON output; use --out-file for the full report]',
results: payload.results,
},
null,
2,
),
)
}
if (options.outFile) {
@@ -374,6 +473,13 @@ async function main(): Promise<void> {
const options = parseOptions(process.argv.slice(2))
const results: CheckResult[] = []
const { enableConfigs } = await import('../src/utils/config.js')
enableConfigs()
const { applySafeConfigEnvironmentVariables } = await import('../src/utils/managedEnv.js')
applySafeConfigEnvironmentVariables()
const { hydrateGithubModelsTokenFromSecureStorage } = await import('../src/utils/githubModelsCredentials.js')
hydrateGithubModelsTokenFromSecureStorage()
results.push(checkNodeVersion())
results.push(checkBunRuntime())
results.push(checkBuildArtifacts())

View File

@@ -57,8 +57,8 @@ class Provider:
@property
def is_configured(self) -> bool:
"""True if the provider has an API key set."""
if self.name == "ollama":
return True # Ollama needs no API key
if self.name in ("ollama", "atomic-chat"):
return True # Local providers need no API key
return bool(self.api_key)
@property
@@ -93,6 +93,7 @@ def build_default_providers() -> list[Provider]:
big = os.getenv("BIG_MODEL", "gpt-4.1")
small = os.getenv("SMALL_MODEL", "gpt-4.1-mini")
ollama_url = os.getenv("OLLAMA_BASE_URL", "http://localhost:11434")
atomic_chat_url = os.getenv("ATOMIC_CHAT_BASE_URL", "http://127.0.0.1:1337")
return [
Provider(
@@ -119,6 +120,14 @@ def build_default_providers() -> list[Provider]:
big_model=big if "gemini" not in big and "gpt" not in big else "llama3:8b",
small_model=small if "gemini" not in small and "gpt" not in small else "llama3:8b",
),
Provider(
name="atomic-chat",
ping_url=f"{atomic_chat_url}/v1/models",
api_key_env="",
cost_per_1k_tokens=0.0, # free — local (Apple Silicon)
big_model=big if "gemini" not in big and "gpt" not in big else "llama3:8b",
small_model=small if "gemini" not in small and "gpt" not in small else "llama3:8b",
),
]
@@ -219,9 +228,14 @@ class SmartRouter:
return min(available, key=lambda p: p.score(self.strategy))
def get_model_for_provider(
self, provider: Provider, claude_model: str
self,
provider: Provider,
claude_model: str,
is_large_request: bool = False,
) -> str:
"""Map a Claude model name to the provider's actual model."""
if is_large_request:
return provider.big_model
is_large = any(
keyword in claude_model.lower()
for keyword in ["opus", "sonnet", "large", "big"]
@@ -280,7 +294,11 @@ class SmartRouter:
)
provider = min(available, key=lambda p: p.score(self.strategy))
model = self.get_model_for_provider(provider, claude_model)
model = self.get_model_for_provider(
provider,
claude_model,
is_large_request=large,
)
logger.debug(
f"SmartRouter: routing to {provider.name}/{model} "

View File

@@ -1,4 +1,4 @@
import { randomBytes } from 'crypto'
import { randomInt } from 'crypto'
import type { AppState } from './state/AppState.js'
import type { AgentId } from './types/ids.js'
import { getTaskOutputPath } from './utils/task/diskOutput.js'
@@ -97,10 +97,9 @@ const TASK_ID_ALPHABET = '0123456789abcdefghijklmnopqrstuvwxyz'
export function generateTaskId(type: TaskType): string {
const prefix = getTaskIdPrefix(type)
const bytes = randomBytes(8)
let id = prefix
for (let i = 0; i < 8; i++) {
id += TASK_ID_ALPHABET[bytes[i]! % TASK_ID_ALPHABET.length]
id += TASK_ID_ALPHABET[randomInt(TASK_ID_ALPHABET.length)]!
}
return id
}
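
The randomBytes-to-randomInt change above removes modulo bias: 256 byte values folded onto a 36-character alphabet cannot divide evenly (256 = 7 × 36 + 4), so the first four characters were roughly 14% more likely than the rest, while crypto.randomInt rejection-samples to a uniform distribution. An illustrative standalone demo (not repo code):

```ts
import { randomBytes, randomInt } from 'node:crypto'

const ALPHABET = '0123456789abcdefghijklmnopqrstuvwxyz' // 36 characters

// Biased: 256 % 36 === 4, so indices 0-3 each receive 8/256 of the mass
// while indices 4-35 receive only 7/256 — '0'-'3' are over-represented.
const biasedChar = ALPHABET[randomBytes(1)[0]! % ALPHABET.length]

// Uniform: randomInt rejection-samples internally, so every index is
// exactly equally likely.
const uniformChar = ALPHABET[randomInt(ALPHABET.length)]
```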

View File

@@ -290,6 +290,14 @@ export type ToolUseContext = {
* resumeAgentBackground threads one reconstructed from sidechain records.
*/
contentReplacementState?: ContentReplacementState
/**
* Interactive REPL only: mirror persisted tool-result replacements back
* into the live transcript so the original oversized payloads can be
* released from heap once the replacement decision is known.
*/
syncToolResultReplacements?: (
replacements: ReadonlyMap<string, string>,
) => void
/**
* Parent's rendered system prompt bytes, frozen at turn start.
* Used by fork subagents to share the parent's prompt cache — re-calling

View File

@@ -0,0 +1,85 @@
import { expect, test } from 'bun:test'
import { buildChildEnv } from './sessionRunner.ts'
// Finding #42-1: sessionRunner spreads the full parent process.env into the
// child process environment, leaking API keys, DB credentials, proxy secrets.
// Only CLAUDE_CODE_OAUTH_TOKEN was stripped. Fix: explicit allowlist.
const baseOpts = {
accessToken: 'test-access-token',
useCcrV2: false as const,
}
test('buildChildEnv does not leak ANTHROPIC_API_KEY to child', () => {
const parentEnv = {
PATH: '/usr/bin',
HOME: '/home/user',
ANTHROPIC_API_KEY: 'sk-ant-secret-key',
CLAUDE_CODE_SESSION_ACCESS_TOKEN: 'will-be-overwritten',
}
const env = buildChildEnv(parentEnv, baseOpts)
expect(env.ANTHROPIC_API_KEY).toBeUndefined()
})
test('buildChildEnv does not leak OPENAI_API_KEY to child', () => {
const parentEnv = {
PATH: '/usr/bin',
HOME: '/home/user',
OPENAI_API_KEY: 'sk-openai-secret',
}
const env = buildChildEnv(parentEnv, baseOpts)
expect(env.OPENAI_API_KEY).toBeUndefined()
})
test('buildChildEnv does not leak arbitrary secrets to child', () => {
const parentEnv = {
PATH: '/usr/bin',
HOME: '/home/user',
DB_PASSWORD: 'super-secret',
AWS_SECRET_ACCESS_KEY: 'aws-secret',
GITHUB_TOKEN: 'ghp_token',
}
const env = buildChildEnv(parentEnv, baseOpts)
expect(env.DB_PASSWORD).toBeUndefined()
expect(env.AWS_SECRET_ACCESS_KEY).toBeUndefined()
expect(env.GITHUB_TOKEN).toBeUndefined()
})
test('buildChildEnv includes PATH and HOME from parent', () => {
const parentEnv = {
PATH: '/usr/bin:/usr/local/bin',
HOME: '/home/user',
ANTHROPIC_API_KEY: 'sk-secret',
}
const env = buildChildEnv(parentEnv, baseOpts)
expect(env.PATH).toBe('/usr/bin:/usr/local/bin')
expect(env.HOME).toBe('/home/user')
})
test('buildChildEnv sets CLAUDE_CODE_SESSION_ACCESS_TOKEN from opts', () => {
const env = buildChildEnv({ PATH: '/usr/bin' }, { ...baseOpts, accessToken: 'my-token' })
expect(env.CLAUDE_CODE_SESSION_ACCESS_TOKEN).toBe('my-token')
})
test('buildChildEnv sets CLAUDE_CODE_ENVIRONMENT_KIND to bridge', () => {
const env = buildChildEnv({ PATH: '/usr/bin' }, baseOpts)
expect(env.CLAUDE_CODE_ENVIRONMENT_KIND).toBe('bridge')
})
test('buildChildEnv does not pass CLAUDE_CODE_OAUTH_TOKEN to child', () => {
const parentEnv = {
PATH: '/usr/bin',
CLAUDE_CODE_OAUTH_TOKEN: 'oauth-token-to-strip',
}
const env = buildChildEnv(parentEnv, baseOpts)
expect(env.CLAUDE_CODE_OAUTH_TOKEN).toBeUndefined()
})
test('buildChildEnv sets CCR v2 vars when useCcrV2 is true', () => {
const env = buildChildEnv(
{ PATH: '/usr/bin' },
{ accessToken: 'tok', useCcrV2: true, workerEpoch: 42 },
)
expect(env.CLAUDE_CODE_USE_CCR_V2).toBe('1')
expect(env.CLAUDE_CODE_WORKER_EPOCH).toBe('42')
})

View File

@@ -16,6 +16,69 @@ import type {
const MAX_ACTIVITIES = 10
const MAX_STDERR_LINES = 10
/**
* Safe OS and runtime variables that the child process needs to function.
* Everything else (API keys, DB passwords, proxy secrets, etc.) must not
* be inherited — the child authenticates via CLAUDE_CODE_SESSION_ACCESS_TOKEN.
*/
const CHILD_ENV_ALLOWLIST = new Set([
// System / shell
'PATH', 'HOME', 'USERPROFILE', 'HOMEPATH', 'HOMEDRIVE',
'USERNAME', 'USER', 'LOGNAME',
'TEMP', 'TMP', 'TMPDIR',
'SYSTEMROOT', 'SYSTEMDRIVE', 'COMSPEC', 'WINDIR',
'LANG', 'LC_ALL', 'LC_CTYPE',
// Node.js runtime
'NODE_OPTIONS', 'NODE_PATH', 'NODE_ENV',
// OpenClaude session / bridge (non-secret)
'CLAUDE_CODE_ENVIRONMENT_KIND',
'CLAUDE_CODE_FORCE_SANDBOX',
'CLAUDE_CODE_BUBBLEWRAP',
'CLAUDE_CODE_ENTRYPOINT',
'CLAUDE_CODE_COORDINATOR_MODE',
'CLAUDE_CODE_PERMISSIONS_VERSION',
'CLAUDE_CODE_PERMISSIONS_SETTING',
// Display / terminal
'TERM', 'COLORTERM', 'FORCE_COLOR', 'NO_COLOR',
])
type BuildChildEnvOpts = {
accessToken: string
useCcrV2: boolean
workerEpoch?: number
sandbox?: boolean
}
/**
* Build the environment for the child CC process from an explicit allowlist.
* This prevents the parent's API keys and credentials from leaking to the child.
*/
export function buildChildEnv(
parentEnv: NodeJS.ProcessEnv,
opts: BuildChildEnvOpts,
): NodeJS.ProcessEnv {
// Start from allowlisted parent vars only
const env: NodeJS.ProcessEnv = {}
for (const key of Object.keys(parentEnv)) {
if (CHILD_ENV_ALLOWLIST.has(key)) {
env[key] = parentEnv[key]
}
}
// Bridge-required overrides
env.CLAUDE_CODE_OAUTH_TOKEN = undefined // explicitly strip
env.CLAUDE_CODE_ENVIRONMENT_KIND = 'bridge'
if (opts.sandbox) env.CLAUDE_CODE_FORCE_SANDBOX = '1'
env.CLAUDE_CODE_SESSION_ACCESS_TOKEN = opts.accessToken
env.CLAUDE_CODE_POST_FOR_SESSION_INGRESS_V2 = '1'
if (opts.useCcrV2) {
env.CLAUDE_CODE_USE_CCR_V2 = '1'
env.CLAUDE_CODE_WORKER_EPOCH = String(opts.workerEpoch)
}
return env
}
/**
* Sanitize a session ID for use in file names.
* Strips any characters that could cause path traversal (e.g. `../`, `/`)
@@ -303,24 +366,12 @@ export function createSessionSpawner(deps: SessionSpawnerDeps): SessionSpawner {
: []),
]
const env: NodeJS.ProcessEnv = {
...deps.env,
// Strip the bridge's OAuth token so the child CC process uses
// the session access token for inference instead.
CLAUDE_CODE_OAUTH_TOKEN: undefined,
CLAUDE_CODE_ENVIRONMENT_KIND: 'bridge',
...(deps.sandbox && { CLAUDE_CODE_FORCE_SANDBOX: '1' }),
CLAUDE_CODE_SESSION_ACCESS_TOKEN: opts.accessToken,
// v1: HybridTransport (WS reads + POST writes) to Session-Ingress.
// Harmless in v2 mode — transportUtils checks CLAUDE_CODE_USE_CCR_V2 first.
CLAUDE_CODE_POST_FOR_SESSION_INGRESS_V2: '1',
// v2: SSETransport + CCRClient to CCR's /v1/code/sessions/* endpoints.
// Same env vars environment-manager sets in the container path.
...(opts.useCcrV2 && {
CLAUDE_CODE_USE_CCR_V2: '1',
CLAUDE_CODE_WORKER_EPOCH: String(opts.workerEpoch),
}),
}
const env = buildChildEnv(deps.env, {
accessToken: opts.accessToken,
useCcrV2: opts.useCcrV2,
workerEpoch: opts.workerEpoch,
sandbox: deps.sandbox,
})
deps.onDebug(
`[bridge:session] Spawning sessionId=${opts.sessionId} sdkUrl=${opts.sdkUrl} accessToken=${opts.accessToken ? 'present' : 'MISSING'}`,

View File

@@ -0,0 +1,36 @@
import { expect, test } from 'bun:test'
import { buildSdkUrl } from './workSecret.ts'
// Finding #42-5: buildSdkUrl uses string.includes() on the full URL,
// so a remote URL containing "localhost" in its path gets ws:// (unencrypted).
test('buildSdkUrl uses wss for remote URL that contains localhost in path', () => {
const url = buildSdkUrl('https://remote.example.com/proxy/localhost/api', 'sess-1')
expect(url).toContain('wss://')
expect(url).not.toContain('ws://')
})
test('buildSdkUrl uses ws for actual localhost hostname', () => {
const url = buildSdkUrl('http://localhost:8080', 'sess-1')
expect(url).toContain('ws://')
})
test('buildSdkUrl uses ws for 127.0.0.1 hostname', () => {
const url = buildSdkUrl('http://127.0.0.1:3000', 'sess-1')
expect(url).toContain('ws://')
})
test('buildSdkUrl uses wss for regular remote hostname', () => {
const url = buildSdkUrl('https://api.example.com', 'sess-1')
expect(url).toContain('wss://')
})
test('buildSdkUrl uses v2 path for localhost', () => {
const url = buildSdkUrl('http://localhost:8080', 'sess-abc')
expect(url).toContain('/v2/session_ingress/ws/sess-abc')
})
test('buildSdkUrl uses v1 path for remote', () => {
const url = buildSdkUrl('https://api.example.com', 'sess-abc')
expect(url).toContain('/v1/session_ingress/ws/sess-abc')
})

View File

@@ -39,8 +39,8 @@ export function decodeWorkSecret(secret: string): WorkSecret {
* and /v1/ for production (Envoy rewrites /v1/ → /v2/).
*/
export function buildSdkUrl(apiBaseUrl: string, sessionId: string): string {
const isLocalhost =
apiBaseUrl.includes('localhost') || apiBaseUrl.includes('127.0.0.1')
const hostname = new URL(apiBaseUrl).hostname
const isLocalhost = hostname === 'localhost' || hostname === '127.0.0.1'
const protocol = isLocalhost ? 'ws' : 'wss'
const version = isLocalhost ? 'v2' : 'v1'
const host = apiBaseUrl.replace(/^https?:\/\//, '').replace(/\/+$/, '')

File diff suppressed because one or more lines are too long

View File

@@ -1,4 +1,5 @@
import chalk from 'chalk'
import { getAPIProvider } from 'src/utils/model/providers.js'
import { logEvent } from 'src/services/analytics/index.js'
import {
getLatestVersion,
@@ -28,6 +29,19 @@ import { gte } from 'src/utils/semver.js'
import { getInitialSettings } from 'src/utils/settings/settings.js'
export async function update() {
// Block updates for third-party providers. The update mechanism downloads
// from Anthropic's distribution bucket, which would silently replace the
// OpenClaude build (with the OpenAI shim) with the upstream Claude Code
// binary (without it).
if (getAPIProvider() !== 'firstParty') {
writeToStdout(
chalk.yellow('Auto-update is not available for third-party provider builds.\n') +
'To update, pull the latest source from the repository and rebuild:\n' +
' git pull && bun install && bun run build\n',
)
return
}
logEvent('tengu_update_check', {})
writeToStdout(`Current version: ${MACRO.VERSION}\n`)

View File

@@ -19,6 +19,7 @@ import cost from './commands/cost/index.js'
import diff from './commands/diff/index.js'
import ctx_viz from './commands/ctx_viz/index.js'
import doctor from './commands/doctor/index.js'
import onboardGithub from './commands/onboard-github/index.js'
import memory from './commands/memory/index.js'
import help from './commands/help/index.js'
import ide from './commands/ide/index.js'
@@ -128,6 +129,7 @@ import plan from './commands/plan/index.js'
import fast from './commands/fast/index.js'
import passes from './commands/passes/index.js'
import privacySettings from './commands/privacy-settings/index.js'
import provider from './commands/provider/index.js'
import hooks from './commands/hooks/index.js'
import files from './commands/files/index.js'
import branch from './commands/branch/index.js'
@@ -288,9 +290,11 @@ const COMMANDS = memoize((): Command[] => [
memory,
mobile,
model,
onboardGithub,
outputStyle,
remoteEnv,
plugin,
provider,
pr_comments,
releaseNotes,
reloadPlugins,

File diff suppressed because one or more lines are too long

File diff suppressed because one or more lines are too long

View File

@@ -0,0 +1,36 @@
import assert from 'node:assert/strict'
import test from 'node:test'
import { extractGitHubRepoSlug } from './repoSlug.ts'
test('keeps owner/repo input as-is', () => {
assert.equal(extractGitHubRepoSlug('Gitlawb/openclaude'), 'Gitlawb/openclaude')
})
test('extracts slug from https GitHub URLs', () => {
assert.equal(
extractGitHubRepoSlug('https://github.com/Gitlawb/openclaude'),
'Gitlawb/openclaude',
)
assert.equal(
extractGitHubRepoSlug('https://www.github.com/Gitlawb/openclaude.git'),
'Gitlawb/openclaude',
)
})
test('extracts slug from ssh GitHub URLs', () => {
assert.equal(
extractGitHubRepoSlug('git@github.com:Gitlawb/openclaude.git'),
'Gitlawb/openclaude',
)
assert.equal(
extractGitHubRepoSlug('ssh://git@github.com/Gitlawb/openclaude'),
'Gitlawb/openclaude',
)
})
test('rejects malformed or non-GitHub URLs', () => {
assert.equal(extractGitHubRepoSlug('https://gitlab.com/Gitlawb/openclaude'), null)
assert.equal(extractGitHubRepoSlug('https://github.com/Gitlawb'), null)
assert.equal(extractGitHubRepoSlug('not actually github.com/Gitlawb/openclaude'), null)
})

View File

@@ -0,0 +1,38 @@
export function extractGitHubRepoSlug(value: string): string | null {
const trimmed = value.trim()
if (/^[a-z][a-z0-9+.-]*:\/\//i.test(trimmed) && !trimmed.includes('github.com')) {
return null
}
if (!trimmed.includes('github.com')) {
return trimmed
}
const sshMatch = trimmed.match(
/^(?:git@|ssh:\/\/git@)(?:www\.)?github\.com[:/](?<owner>[^/:\s]+)\/(?<repo>[^/\s]+?)(?:\.git)?\/?$/i,
)
if (sshMatch?.groups?.owner && sshMatch.groups.repo) {
return `${sshMatch.groups.owner}/${sshMatch.groups.repo}`
}
try {
const parsed = new URL(trimmed)
const hostname = parsed.hostname.toLowerCase()
if (hostname !== 'github.com' && hostname !== 'www.github.com') {
return null
}
const segments = parsed.pathname
.replace(/^\/+|\/+$/g, '')
.split('/')
.filter(Boolean)
if (segments.length < 2) {
return null
}
return `${segments[0]}/${segments[1]}`.replace(/\.git$/i, '')
} catch {
return null
}
}

View File

@@ -0,0 +1,19 @@
import assert from 'node:assert/strict'
import test from 'node:test'
import { Command } from '@commander-js/extra-typings'
import { registerMcpDoctorCommand } from './doctorCommand.js'
test('registerMcpDoctorCommand adds the doctor subcommand with expected options', () => {
const mcp = new Command('mcp')
registerMcpDoctorCommand(mcp)
const doctor = mcp.commands.find(command => command.name() === 'doctor')
assert.ok(doctor)
assert.equal(doctor?.usage(), '[options] [name]')
const optionFlags = doctor?.options.map(option => option.long)
assert.deepEqual(optionFlags, ['--scope', '--config-only', '--json'])
})

View File

@@ -0,0 +1,25 @@
/**
* MCP doctor CLI subcommand.
*/
import { type Command } from '@commander-js/extra-typings'
export function registerMcpDoctorCommand(mcp: Command): void {
mcp
.command('doctor [name]')
.description(
'Diagnose MCP configuration, precedence, disabled/pending state, and connection health. ' +
'Note: unless --config-only is used, stdio servers may be spawned and remote servers may be contacted. ' +
'Only use this command in directories you trust.',
)
.option('-s, --scope <scope>', 'Restrict config analysis to a specific scope (local, project, user, or enterprise)')
.option('--config-only', 'Skip live connection checks and only analyze configuration state')
.option('--json', 'Output the diagnostics report as JSON')
.action(async (name: string | undefined, options: {
scope?: string
configOnly?: boolean
json?: boolean
}) => {
const { mcpDoctorHandler } = await import('../../cli/handlers/mcp.js')
await mcpDoctorHandler(name, options)
})
}

View File

@@ -0,0 +1,11 @@
import type { Command } from '../../commands.js'
const onboardGithub: Command = {
name: 'onboard-github',
description:
'Interactive setup for GitHub Models: device login or PAT, saved to secure storage',
type: 'local-jsx',
load: () => import('./onboard-github.js'),
}
export default onboardGithub

View File

@@ -0,0 +1,237 @@
import * as React from 'react'
import { useCallback, useState } from 'react'
import { Select } from '../../components/CustomSelect/select.js'
import { Spinner } from '../../components/Spinner.js'
import TextInput from '../../components/TextInput.js'
import { Box, Text } from '../../ink.js'
import {
openVerificationUri,
pollAccessToken,
requestDeviceCode,
} from '../../services/github/deviceFlow.js'
import type { LocalJSXCommandCall } from '../../types/command.js'
import {
hydrateGithubModelsTokenFromSecureStorage,
saveGithubModelsToken,
} from '../../utils/githubModelsCredentials.js'
import { updateSettingsForSource } from '../../utils/settings/settings.js'
const DEFAULT_MODEL = 'github:copilot'
type Step =
| 'menu'
| 'device-busy'
| 'pat'
| 'error'
function mergeUserSettingsEnv(model: string): { ok: boolean; detail?: string } {
const { error } = updateSettingsForSource('userSettings', {
env: {
CLAUDE_CODE_USE_GITHUB: '1',
OPENAI_MODEL: model,
CLAUDE_CODE_USE_OPENAI: undefined as any,
CLAUDE_CODE_USE_GEMINI: undefined as any,
CLAUDE_CODE_USE_BEDROCK: undefined as any,
CLAUDE_CODE_USE_VERTEX: undefined as any,
CLAUDE_CODE_USE_FOUNDRY: undefined as any,
},
})
if (error) {
return { ok: false, detail: error.message }
}
return { ok: true }
}
function OnboardGithub(props: {
onDone: Parameters<LocalJSXCommandCall>[0]
onChangeAPIKey: () => void
}): React.ReactNode {
const { onDone, onChangeAPIKey } = props
const [step, setStep] = useState<Step>('menu')
const [errorMsg, setErrorMsg] = useState<string | null>(null)
const [deviceHint, setDeviceHint] = useState<{
user_code: string
verification_uri: string
} | null>(null)
const [patDraft, setPatDraft] = useState('')
const [cursorOffset, setCursorOffset] = useState(0)
const finalize = useCallback(
async (token: string, model: string = DEFAULT_MODEL) => {
const saved = saveGithubModelsToken(token)
if (!saved.success) {
setErrorMsg(saved.warning ?? 'Could not save token to secure storage.')
setStep('error')
return
}
const merged = mergeUserSettingsEnv(model.trim() || DEFAULT_MODEL)
if (!merged.ok) {
setErrorMsg(
`Token saved, but settings were not updated: ${merged.detail ?? 'unknown error'}. ` +
`Add env CLAUDE_CODE_USE_GITHUB=1 and OPENAI_MODEL to ~/.claude/settings.json manually.`,
)
setStep('error')
return
}
process.env.CLAUDE_CODE_USE_GITHUB = '1'
process.env.OPENAI_MODEL = model.trim() || DEFAULT_MODEL
hydrateGithubModelsTokenFromSecureStorage()
onChangeAPIKey()
onDone(
'GitHub Models onboard complete. Token stored in secure storage; user settings updated. Restart if the model does not switch.',
{ display: 'user' },
)
},
[onChangeAPIKey, onDone],
)
const runDeviceFlow = useCallback(async () => {
setStep('device-busy')
setErrorMsg(null)
setDeviceHint(null)
try {
const device = await requestDeviceCode()
setDeviceHint({
user_code: device.user_code,
verification_uri: device.verification_uri,
})
await openVerificationUri(device.verification_uri)
const token = await pollAccessToken(device.device_code, {
initialInterval: device.interval,
timeoutSeconds: device.expires_in,
})
await finalize(token, DEFAULT_MODEL)
} catch (e) {
setErrorMsg(e instanceof Error ? e.message : String(e))
setStep('error')
}
}, [finalize])
if (step === 'error' && errorMsg) {
const options = [
{
label: 'Back to menu',
value: 'back' as const,
},
{
label: 'Exit',
value: 'exit' as const,
},
]
return (
<Box flexDirection="column" gap={1}>
<Text color="red">{errorMsg}</Text>
<Select
options={options}
onChange={(v: string) => {
if (v === 'back') {
setStep('menu')
setErrorMsg(null)
} else {
onDone('GitHub onboard cancelled', { display: 'system' })
}
}}
/>
</Box>
)
}
if (step === 'device-busy') {
return (
<Box flexDirection="column" gap={1}>
<Text>GitHub device login</Text>
{deviceHint ? (
<>
<Text>
Enter code <Text bold>{deviceHint.user_code}</Text> at{' '}
{deviceHint.verification_uri}
</Text>
<Text dimColor>
A browser window may have opened. Waiting for authorization
</Text>
</>
) : (
<Text dimColor>Requesting device code from GitHub</Text>
)}
<Spinner />
</Box>
)
}
if (step === 'pat') {
return (
<Box flexDirection="column" gap={1}>
<Text>Paste a GitHub personal access token with access to GitHub Models.</Text>
<Text dimColor>Input is masked. Enter to submit; Esc to go back.</Text>
<TextInput
value={patDraft}
mask="*"
onChange={setPatDraft}
onSubmit={async (value: string) => {
const t = value.trim()
if (!t) {
return
}
await finalize(t, DEFAULT_MODEL)
}}
onExit={() => {
setStep('menu')
setPatDraft('')
}}
columns={80}
cursorOffset={cursorOffset}
onChangeCursorOffset={setCursorOffset}
/>
</Box>
)
}
const menuOptions = [
{
label: 'Sign in with browser (device code)',
value: 'device' as const,
},
{
label: 'Paste personal access token',
value: 'pat' as const,
},
{
label: 'Cancel',
value: 'cancel' as const,
},
]
return (
<Box flexDirection="column" gap={1}>
<Text bold>GitHub Models setup</Text>
<Text dimColor>
Stores your token in the OS credential store (macOS Keychain when available)
and enables CLAUDE_CODE_USE_GITHUB in your user settings, so no export of
GITHUB_TOKEN is needed for future runs.
</Text>
<Select
options={menuOptions}
onChange={(v: string) => {
if (v === 'cancel') {
onDone('GitHub onboard cancelled', { display: 'system' })
return
}
if (v === 'pat') {
setStep('pat')
return
}
void runDeviceFlow()
}}
/>
</Box>
)
}
export const call: LocalJSXCommandCall = async (onDone, context) => {
return (
<OnboardGithub
onDone={onDone}
onChangeAPIKey={context.onChangeAPIKey}
/>
)
}
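
The deviceFlow helpers imported at the top of this component are not part of this diff. As a hedged sketch of what they presumably wrap, based on GitHub's documented OAuth device flow (RFC 8628) — the client ID and scope below are placeholders, not the repo's values, and the real deviceFlow.ts may differ:

```ts
// Hypothetical sketch only. CLIENT_ID and scope are placeholders.
const CLIENT_ID = '<github-oauth-app-client-id>'

export async function requestDeviceCode(): Promise<{
  device_code: string
  user_code: string
  verification_uri: string
  expires_in: number
  interval: number
}> {
  const res = await fetch('https://github.com/login/device/code', {
    method: 'POST',
    headers: { Accept: 'application/json', 'Content-Type': 'application/json' },
    body: JSON.stringify({ client_id: CLIENT_ID, scope: 'read:user' }),
  })
  return res.json()
}

export async function pollAccessToken(
  deviceCode: string,
  opts: { initialInterval: number; timeoutSeconds: number },
): Promise<string> {
  const deadline = Date.now() + opts.timeoutSeconds * 1000
  let interval = opts.initialInterval
  while (Date.now() < deadline) {
    await new Promise(resolve => setTimeout(resolve, interval * 1000))
    const res = await fetch('https://github.com/login/oauth/access_token', {
      method: 'POST',
      headers: { Accept: 'application/json', 'Content-Type': 'application/json' },
      body: JSON.stringify({
        client_id: CLIENT_ID,
        device_code: deviceCode,
        grant_type: 'urn:ietf:params:oauth:grant-type:device_code',
      }),
    })
    const data = (await res.json()) as {
      access_token?: string
      error?: string
      error_description?: string
    }
    if (data.access_token) return data.access_token
    if (data.error === 'slow_down') interval += 5 // per the spec: back off 5s
    else if (data.error !== 'authorization_pending') {
      throw new Error(data.error_description ?? data.error ?? 'device flow failed')
    }
  }
  throw new Error('Device authorization timed out')
}
```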

View File

@@ -0,0 +1,12 @@
import type { Command } from '../../commands.js'
import { shouldInferenceConfigCommandBeImmediate } from '../../utils/immediateCommand.js'
export default {
type: 'local-jsx',
name: 'provider',
description: 'Set up and save a third-party provider profile for OpenClaude',
get immediate() {
return shouldInferenceConfigCommandBeImmediate()
},
load: () => import('./provider.js'),
} satisfies Command

View File

@@ -0,0 +1,228 @@
import { PassThrough } from 'node:stream'
import { expect, test } from 'bun:test'
import React from 'react'
import stripAnsi from 'strip-ansi'
import { createRoot, render, useApp } from '../../ink.js'
import { AppStateProvider } from '../../state/AppState.js'
import {
buildCurrentProviderSummary,
buildProfileSaveMessage,
getProviderWizardDefaults,
TextEntryDialog,
} from './provider.js'
const SYNC_START = '\x1B[?2026h'
const SYNC_END = '\x1B[?2026l'
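// \x1B[?2026h / \x1B[?2026l are the DEC private mode 2026 "synchronized
// output" begin/end sequences. Each repaint in the captured stdout is
// bracketed by them, so the last non-empty bracketed span is the final
// rendered frame.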
function extractLastFrame(output: string): string {
let lastFrame: string | null = null
let cursor = 0
while (cursor < output.length) {
const start = output.indexOf(SYNC_START, cursor)
if (start === -1) {
break
}
const contentStart = start + SYNC_START.length
const end = output.indexOf(SYNC_END, contentStart)
if (end === -1) {
break
}
const frame = output.slice(contentStart, end)
if (frame.trim().length > 0) {
lastFrame = frame
}
cursor = end + SYNC_END.length
}
return lastFrame ?? output
}
async function renderFinalFrame(node: React.ReactNode): Promise<string> {
const { stdout, stdin, getOutput } = createTestStreams()
const instance = await render(node, {
stdout: stdout as unknown as NodeJS.WriteStream,
stdin: stdin as unknown as NodeJS.ReadStream,
patchConsole: false,
})
await instance.waitUntilExit()
return stripAnsi(extractLastFrame(getOutput()))
}
function createTestStreams(): {
stdout: PassThrough
stdin: PassThrough & {
isTTY: boolean
setRawMode: (mode: boolean) => void
ref: () => void
unref: () => void
}
getOutput: () => string
} {
let output = ''
const stdout = new PassThrough()
const stdin = new PassThrough() as PassThrough & {
isTTY: boolean
setRawMode: (mode: boolean) => void
ref: () => void
unref: () => void
}
stdin.isTTY = true
stdin.setRawMode = () => {}
stdin.ref = () => {}
stdin.unref = () => {}
;(stdout as unknown as { columns: number }).columns = 120
stdout.on('data', chunk => {
output += chunk.toString()
})
return {
stdout,
stdin,
getOutput: () => output,
}
}
function StepChangeHarness(): React.ReactNode {
const { exit } = useApp()
const [step, setStep] = React.useState<'api' | 'model'>('api')
React.useLayoutEffect(() => {
if (step === 'api') {
setStep('model')
return
}
const timer = setTimeout(exit, 0)
return () => clearTimeout(timer)
}, [exit, step])
return (
<AppStateProvider>
<TextEntryDialog
title="Provider"
subtitle={step === 'api' ? 'API key step' : 'Model step'}
description="Enter the next value"
initialValue={step === 'api' ? 'stale-secret-key' : 'fresh-model-name'}
mask={step === 'api' ? '*' : undefined}
onSubmit={() => {}}
onCancel={() => {}}
/>
</AppStateProvider>
)
}
test('TextEntryDialog resets its input state when initialValue changes', async () => {
const output = await renderFinalFrame(<StepChangeHarness />)
expect(output).toContain('Model step')
expect(output).toContain('fresh-model-name')
expect(output).not.toContain('stale-secret-key')
})
test('wizard step remount prevents a typed API key from leaking into the next field', async () => {
const { stdout, stdin, getOutput } = createTestStreams()
const root = await createRoot({
stdout: stdout as unknown as NodeJS.WriteStream,
stdin: stdin as unknown as NodeJS.ReadStream,
patchConsole: false,
})
root.render(
<AppStateProvider>
<TextEntryDialog
resetStateKey="api"
title="Provider"
subtitle="API key step"
description="Enter the API key"
initialValue=""
mask="*"
onSubmit={() => {}}
onCancel={() => {}}
/>
</AppStateProvider>,
)
await Bun.sleep(25)
stdin.write('sk-secret-12345678')
await Bun.sleep(25)
root.render(
<AppStateProvider>
<TextEntryDialog
resetStateKey="model"
title="Provider"
subtitle="Model step"
description="Enter the model"
initialValue=""
onSubmit={() => {}}
onCancel={() => {}}
/>
</AppStateProvider>,
)
await Bun.sleep(25)
root.unmount()
stdin.end()
stdout.end()
await Bun.sleep(25)
const output = stripAnsi(extractLastFrame(getOutput()))
expect(output).toContain('Model step')
expect(output).not.toContain('sk-secret-12345678')
})
test('buildProfileSaveMessage maps provider fields without echoing secrets', () => {
const message = buildProfileSaveMessage(
'openai',
{
OPENAI_API_KEY: 'sk-secret-12345678',
OPENAI_MODEL: 'gpt-4o',
OPENAI_BASE_URL: 'https://api.openai.com/v1',
},
'D:/codings/Opensource/openclaude/.openclaude-profile.json',
)
expect(message).toContain('Saved OpenAI-compatible profile.')
expect(message).toContain('Model: gpt-4o')
expect(message).toContain('Endpoint: https://api.openai.com/v1')
expect(message).toContain('Credentials: configured')
expect(message).not.toContain('sk-secret-12345678')
})
test('buildCurrentProviderSummary redacts poisoned model and endpoint values', () => {
const summary = buildCurrentProviderSummary({
processEnv: {
CLAUDE_CODE_USE_OPENAI: '1',
OPENAI_API_KEY: 'sk-secret-12345678',
OPENAI_MODEL: 'sk-secret-12345678',
OPENAI_BASE_URL: 'sk-secret-12345678',
},
persisted: null,
})
expect(summary.providerLabel).toBe('OpenAI-compatible')
expect(summary.modelLabel).toBe('sk-...5678')
expect(summary.endpointLabel).toBe('sk-...5678')
})
test('getProviderWizardDefaults ignores poisoned current provider values', () => {
const defaults = getProviderWizardDefaults({
OPENAI_API_KEY: 'sk-secret-12345678',
OPENAI_MODEL: 'sk-secret-12345678',
OPENAI_BASE_URL: 'sk-secret-12345678',
GEMINI_API_KEY: 'AIzaSecret12345678',
GEMINI_MODEL: 'AIzaSecret12345678',
})
expect(defaults.openAIModel).toBe('gpt-4o')
expect(defaults.openAIBaseUrl).toBe('https://api.openai.com/v1')
expect(defaults.geminiModel).toBe('gemini-2.0-flash')
})
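The labels asserted above fix the redaction format: a long secret collapses to a 3-character prefix, an ellipsis, and the last 4 characters. A minimal sketch of a masker that satisfies these tests (hypothetical helper; the real logic lives behind `buildCurrentProviderSummary` and is not shown in this diff):

```ts
// Hypothetical masker matching the labels the tests assert; not the real implementation.
function maskSecret(value: string): string {
  // Very short values get no recoverable characters at all.
  if (value.length <= 8) return '...'
  // Keep a short prefix and the last 4 characters, elide the middle.
  return `${value.slice(0, 3)}...${value.slice(-4)}`
}

maskSecret('sk-secret-12345678') // => 'sk-...5678'
```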

File diff suppressed because it is too large

File diff suppressed because one or more lines are too long

View File

@@ -1,50 +1,53 @@
import { c as _c } from "react-compiler-runtime";
import React from 'react';
import { Box, Link, Text } from '../ink.js';
import { Select } from './CustomSelect/index.js';
import { Dialog } from './design-system/Dialog.js';
import React from 'react'
import { Box, Link, Text } from '../ink.js'
import { Select } from './CustomSelect/index.js'
import { Dialog } from './design-system/Dialog.js'
import { getAPIProvider } from '../utils/model/providers.js'
type Props = {
onDone: () => void;
};
export function CostThresholdDialog(t0) {
const $ = _c(7);
const {
onDone
} = t0;
let t1;
if ($[0] === Symbol.for("react.memo_cache_sentinel")) {
t1 = <Box flexDirection="column"><Text>Learn more about how to monitor your spending:</Text><Link url="https://code.claude.com/docs/en/costs" /></Box>;
$[0] = t1;
} else {
t1 = $[0];
}
let t2;
if ($[1] === Symbol.for("react.memo_cache_sentinel")) {
t2 = [{
value: "ok",
label: "Got it, thanks!"
}];
$[1] = t2;
} else {
t2 = $[1];
}
let t3;
if ($[2] !== onDone) {
t3 = <Select options={t2} onChange={onDone} />;
$[2] = onDone;
$[3] = t3;
} else {
t3 = $[3];
}
let t4;
if ($[4] !== onDone || $[5] !== t3) {
t4 = <Dialog title="You've spent $5 on the Anthropic API this session." onCancel={onDone}>{t1}{t3}</Dialog>;
$[4] = onDone;
$[5] = t3;
$[6] = t4;
} else {
t4 = $[6];
}
return t4;
onDone: () => void
}
function getProviderLabel(): string {
const provider = getAPIProvider()
switch (provider) {
case 'firstParty':
return 'Anthropic API'
case 'bedrock':
return 'AWS Bedrock'
case 'vertex':
return 'Google Vertex'
case 'foundry':
return 'Azure Foundry'
case 'openai':
return 'OpenAI-compatible API'
case 'gemini':
return 'Gemini API'
default:
return 'API'
}
}
export function CostThresholdDialog({ onDone }: Props): React.ReactNode {
const providerLabel = getProviderLabel()
return (
<Dialog
title={`You've spent $5 on the ${providerLabel} this session.`}
onCancel={onDone}
>
<Box flexDirection="column">
<Text>Learn more about how to monitor your spending:</Text>
<Link url="https://code.claude.com/docs/en/costs" />
</Box>
<Select
options={[
{
value: 'ok',
label: 'Got it, thanks!',
},
]}
onChange={onDone}
/>
</Dialog>
)
}

View File

@@ -84,44 +84,44 @@ const reducer = <T>(state: State<T>, action: Action<T>): State<T> => {
return state
}
// Wrap to first item if at the end
const next = item.next || state.optionMap.first
// If there's a next item in the list, go to it
if (item.next) {
const needsToScroll = item.next.index >= state.visibleToIndex
if (!next) {
if (!needsToScroll) {
return {
...state,
focusedValue: item.next.value,
}
}
const nextVisibleToIndex = Math.min(
state.optionMap.size,
state.visibleToIndex + 1,
)
const nextVisibleFromIndex = nextVisibleToIndex - state.visibleOptionCount
return {
...state,
focusedValue: item.next.value,
visibleFromIndex: nextVisibleFromIndex,
visibleToIndex: nextVisibleToIndex,
}
}
// No next item - wrap to first item
const firstItem = state.optionMap.first
if (!firstItem) {
return state
}
// When wrapping to first, reset viewport to start
if (!item.next && next === state.optionMap.first) {
return {
...state,
focusedValue: next.value,
visibleFromIndex: 0,
visibleToIndex: state.visibleOptionCount,
}
}
const needsToScroll = next.index >= state.visibleToIndex
if (!needsToScroll) {
return {
...state,
focusedValue: next.value,
}
}
const nextVisibleToIndex = Math.min(
state.optionMap.size,
state.visibleToIndex + 1,
)
const nextVisibleFromIndex = nextVisibleToIndex - state.visibleOptionCount
return {
...state,
focusedValue: next.value,
visibleFromIndex: nextVisibleFromIndex,
visibleToIndex: nextVisibleToIndex,
focusedValue: firstItem.value,
visibleFromIndex: 0,
visibleToIndex: state.visibleOptionCount,
}
}
@@ -136,44 +136,43 @@ const reducer = <T>(state: State<T>, action: Action<T>): State<T> => {
return state
}
// Wrap to last item if at the beginning
const previous = item.previous || state.optionMap.last
// If there's a previous item in the list, go to it
if (item.previous) {
const needsToScroll = item.previous.index < state.visibleFromIndex
if (!previous) {
return state
}
if (!needsToScroll) {
return {
...state,
focusedValue: item.previous.value,
}
}
const nextVisibleFromIndex = Math.max(0, state.visibleFromIndex - 1)
const nextVisibleToIndex = nextVisibleFromIndex + state.visibleOptionCount
// When wrapping to last, reset viewport to end
if (!item.previous && previous === state.optionMap.last) {
const nextVisibleToIndex = state.optionMap.size
const nextVisibleFromIndex = Math.max(
0,
nextVisibleToIndex - state.visibleOptionCount,
)
return {
...state,
focusedValue: previous.value,
focusedValue: item.previous.value,
visibleFromIndex: nextVisibleFromIndex,
visibleToIndex: nextVisibleToIndex,
}
}
const needsToScroll = previous.index <= state.visibleFromIndex
if (!needsToScroll) {
return {
...state,
focusedValue: previous.value,
}
// No previous item - wrap to last item
const lastItem = state.optionMap.last
if (!lastItem) {
return state
}
const nextVisibleFromIndex = Math.max(0, state.visibleFromIndex - 1)
const nextVisibleToIndex = nextVisibleFromIndex + state.visibleOptionCount
// When wrapping to last, reset viewport to end
const nextVisibleToIndex = state.optionMap.size
const nextVisibleFromIndex = Math.max(
0,
nextVisibleToIndex - state.visibleOptionCount,
)
return {
...state,
focusedValue: previous.value,
focusedValue: lastItem.value,
visibleFromIndex: nextVisibleFromIndex,
visibleToIndex: nextVisibleToIndex,
}
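Both wrap branches in the reducer above perform the same viewport reset; extracted into a standalone sketch for clarity (helper name hypothetical, arithmetic copied from the reducer):

```ts
// Hypothetical distillation of the reducer's wrap-around viewport resets.
function wrapViewport(optionCount: number, visibleOptionCount: number) {
  // Wrapping focus to the first item: viewport snaps back to the start.
  const toFirst = { visibleFromIndex: 0, visibleToIndex: visibleOptionCount }
  // Wrapping focus to the last item: viewport snaps to the end, clamped at 0.
  const visibleToIndex = optionCount
  const visibleFromIndex = Math.max(0, visibleToIndex - visibleOptionCount)
  return { toFirst, toLast: { visibleFromIndex, visibleToIndex } }
}

wrapViewport(10, 4).toLast // => { visibleFromIndex: 6, visibleToIndex: 10 }
```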

View File

@@ -0,0 +1,152 @@
import React, { useState } from 'react'
import { Box, Text } from '../ink.js'
import { useMainLoopModel } from '../hooks/useMainLoopModel.js'
import { useAppState, useSetAppState } from '../state/AppState.js'
import type { EffortLevel, OpenAIEffortLevel } from '../utils/effort.js'
import {
getAvailableEffortLevels,
getDisplayedEffortLevel,
getEffortLevelDescription,
getEffortLevelLabel,
getEffortValueDescription,
modelSupportsEffort,
modelUsesOpenAIEffort,
standardEffortToOpenAI,
isOpenAIEffortLevel,
} from '../utils/effort.js'
import { getAPIProvider } from '../utils/model/providers.js'
import { getReasoningEffortForModel } from '../services/api/providerConfig.js'
import { Select } from './CustomSelect/select.js'
import { effortLevelToSymbol } from './EffortIndicator.js'
import { KeyboardShortcutHint } from './design-system/KeyboardShortcutHint.js'
import { Byline } from './design-system/Byline.js'
type EffortOption = {
label: React.ReactNode
value: string
description: string
isAvailable: boolean
}
type Props = {
onSelect: (effort: EffortLevel | undefined) => void
onCancel?: () => void
}
export function EffortPicker({ onSelect, onCancel }: Props) {
const model = useMainLoopModel()
const appStateEffort = useAppState((s: any) => s.effortValue)
const setAppState = useSetAppState()
const provider = getAPIProvider()
const usesOpenAIEffort = modelUsesOpenAIEffort(model)
const availableLevels = getAvailableEffortLevels(model)
const currentDisplayedLevel = getDisplayedEffortLevel(model, appStateEffort)
// For OpenAI/Codex, get the model's default reasoning effort
const modelReasoningEffort = usesOpenAIEffort ? getReasoningEffortForModel(model) : undefined
const options: EffortOption[] = [
{
label: <EffortOptionLabel level="auto" text="Auto" isCurrent={false} />,
value: 'auto',
description: 'Use the default effort level for your model',
isAvailable: true,
},
...availableLevels.map(level => {
const displayLevel = usesOpenAIEffort
? (level === 'xhigh' ? 'max' : level)
: level
const isCurrent = currentDisplayedLevel === displayLevel
return {
label: (
<EffortOptionLabel
level={level as EffortLevel}
text={getEffortLevelLabel(level as EffortLevel)}
isCurrent={isCurrent}
/>
),
value: level,
description: getEffortLevelDescription(level as EffortLevel),
isAvailable: true,
}
}),
]
function handleSelect(value: string) {
if (value === 'auto') {
setAppState(prev => ({
...prev,
effortValue: undefined,
}))
onSelect(undefined)
} else {
const effortLevel = value as EffortLevel
setAppState(prev => ({
...prev,
effortValue: effortLevel,
}))
onSelect(effortLevel)
}
}
function handleCancel() {
onCancel?.()
}
const supportsEffort = modelSupportsEffort(model)
// For OpenAI/Codex, use the model's default reasoning effort as initial focus
// For Claude, use the displayed effort level or 'auto'
const initialFocus = usesOpenAIEffort
? (modelReasoningEffort || 'auto')
: (appStateEffort ? String(appStateEffort) : 'auto')
return (
<Box flexDirection="column">
<Box marginBottom={1} flexDirection="column">
<Text color="remember" bold={true}>Set effort level</Text>
<Text dimColor={true}>
{usesOpenAIEffort
? `OpenAI/Codex provider (${provider})`
: supportsEffort
? `Claude model · ${provider} provider`
: `Effort not supported for this model`
}
</Text>
</Box>
<Box marginBottom={1}>
<Select
options={options}
defaultValue={initialFocus}
onChange={handleSelect}
onCancel={handleCancel}
visibleOptionCount={Math.min(6, options.length)}
inlineDescriptions={true}
/>
</Box>
<Box marginBottom={1}>
<Text dimColor={true} italic={true}>
<Byline>
<KeyboardShortcutHint shortcut="Enter" action="confirm" />
<KeyboardShortcutHint shortcut="Esc" action="cancel" />
</Byline>
</Text>
</Box>
</Box>
)
}
function EffortOptionLabel({ level, text, isCurrent }: { level: EffortLevel | 'auto', text: string, isCurrent: boolean }) {
const symbol = level === 'auto' ? '⊘' : effortLevelToSymbol(level as EffortLevel)
const color = isCurrent ? 'remember' : level === 'auto' ? 'subtle' : 'suggestion'
return (
<>
<Text color={color}>{symbol} </Text>
<Text bold={isCurrent}>{text}</Text>
{isCurrent && <Text dimColor={true}> (current)</Text>}
</>
)
}

View File

@@ -99,7 +99,7 @@ export function Onboarding({
// Add API key step if needed
// On homespace, ANTHROPIC_API_KEY is preserved in process.env for child
// processes but ignored by Claude Code itself (see auth.ts).
if (!process.env.ANTHROPIC_API_KEY || isRunningOnHomespace()) {
if (!process.env.ANTHROPIC_API_KEY || isRunningOnHomespace() || !isAnthropicAuthEnabled()) {
return '';
}
const customApiKeyTruncated = normalizeApiKeyForConfig(process.env.ANTHROPIC_API_KEY);

View File

@@ -0,0 +1,36 @@
import figures from 'figures'
import React from 'react'
import { describe, expect, it } from 'bun:test'
import { renderToString } from '../../utils/staticRender.js'
import {
PromptInputFooterSuggestions,
type SuggestionItem,
} from './PromptInputFooterSuggestions.js'
describe('PromptInputFooterSuggestions', () => {
it('renders a visible marker for the selected suggestion', async () => {
const suggestions: SuggestionItem[] = [
{
id: 'command-help',
displayText: '/help',
description: 'Show help',
},
{
id: 'command-doctor',
displayText: '/doctor',
description: 'Run diagnostics',
},
]
const output = await renderToString(
<PromptInputFooterSuggestions
suggestions={suggestions}
selectedSuggestion={1}
/>,
80,
)
expect(output).toContain(`${figures.pointer} /doctor`)
expect(output).toContain(' /help')
})
})

File diff suppressed because one or more lines are too long

View File

@@ -0,0 +1,211 @@
import * as React from 'react'
import { useEffect, useState } from 'react'
import { useTerminalSize } from '../../hooks/useTerminalSize.js'
import { Box, Text } from '../../ink.js'
import { useKeybinding } from '../../keybindings/useKeybinding.js'
import {
buildCodexUsageRows,
fetchCodexUsage,
formatCodexPlanType,
type CodexUsageData,
type CodexUsageRow,
} from '../../services/api/codexUsage.js'
import { formatResetText } from '../../utils/format.js'
import { logError } from '../../utils/log.js'
import { ConfigurableShortcutHint } from '../ConfigurableShortcutHint.js'
import { Byline } from '../design-system/Byline.js'
import { ProgressBar } from '../design-system/ProgressBar.js'
type CodexUsageLimitBarProps = {
label: string
usedPercent: number
resetsAt?: string
maxWidth: number
}
function CodexUsageLimitBar({
label,
usedPercent,
resetsAt,
maxWidth,
}: CodexUsageLimitBarProps): React.ReactNode {
const normalizedUsedPercent = Math.max(0, Math.min(100, usedPercent))
const usedText = `${Math.floor(normalizedUsedPercent)}% used`
const resetText = resetsAt
? `Resets ${formatResetText(resetsAt, true, true)}`
: undefined
if (maxWidth >= 62) {
return (
<Box flexDirection="column">
<Text bold>{label}</Text>
<Box flexDirection="row" gap={1}>
<ProgressBar
ratio={normalizedUsedPercent / 100}
width={50}
fillColor="rate_limit_fill"
emptyColor="rate_limit_empty"
/>
<Text>{usedText}</Text>
</Box>
{resetText ? <Text dimColor>{resetText}</Text> : null}
</Box>
)
}
return (
<Box flexDirection="column">
<Text>
<Text bold>{label}</Text>
{resetText ? (
<>
<Text> </Text>
<Text dimColor>· {resetText}</Text>
</>
) : null}
</Text>
<ProgressBar
ratio={normalizedUsedPercent / 100}
width={maxWidth}
fillColor="rate_limit_fill"
emptyColor="rate_limit_empty"
/>
<Text>{usedText}</Text>
</Box>
)
}
function CodexUsageTextRow({
label,
value,
}: Extract<CodexUsageRow, { kind: 'text' }>): React.ReactNode {
if (!value) {
return <Text bold>{label}</Text>
}
return (
<Text>
<Text bold>{label}</Text>
<Text dimColor> · {value}</Text>
</Text>
)
}
export function CodexUsage(): React.ReactNode {
const [usage, setUsage] = useState<CodexUsageData | null>(null)
const [error, setError] = useState<string | null>(null)
const [isLoading, setIsLoading] = useState(true)
const { columns } = useTerminalSize()
const availableWidth = columns - 2
const maxWidth = Math.min(availableWidth, 80)
const loadUsage = React.useCallback(async () => {
setIsLoading(true)
setError(null)
try {
setUsage(await fetchCodexUsage())
} catch (err) {
logError(err as Error)
setError(err instanceof Error ? err.message : 'Failed to load Codex usage')
} finally {
setIsLoading(false)
}
}, [])
useEffect(() => {
void loadUsage()
}, [loadUsage])
useKeybinding(
'settings:retry',
() => {
void loadUsage()
},
{
context: 'Settings',
isActive: !!error && !isLoading,
},
)
if (error) {
return (
<Box flexDirection="column" gap={1}>
<Text color="error">Error: {error}</Text>
<Text dimColor>
<Byline>
<ConfigurableShortcutHint
action="settings:retry"
context="Settings"
fallback="r"
description="retry"
/>
<ConfigurableShortcutHint
action="confirm:no"
context="Settings"
fallback="Esc"
description="cancel"
/>
</Byline>
</Text>
</Box>
)
}
if (!usage) {
return (
<Box flexDirection="column" gap={1}>
<Text dimColor>Loading Codex usage data</Text>
<Text dimColor>
<ConfigurableShortcutHint
action="confirm:no"
context="Settings"
fallback="Esc"
description="cancel"
/>
</Text>
</Box>
)
}
const rows = buildCodexUsageRows(usage.snapshots)
const planType = formatCodexPlanType(usage.planType)
return (
<Box flexDirection="column" gap={1} width="100%">
{planType ? <Text dimColor>Plan: {planType}</Text> : null}
{rows.length === 0 ? (
<Text dimColor>Codex usage data is not available for this account.</Text>
) : null}
{rows.map((row, index) =>
row.kind === 'window' ? (
<CodexUsageLimitBar
key={`${row.label}-${index}`}
label={row.label}
usedPercent={row.usedPercent}
resetsAt={row.resetsAt}
maxWidth={maxWidth}
/>
) : (
<CodexUsageTextRow
key={`${row.label}-${index}`}
label={row.label}
value={row.value}
/>
),
)}
<Text dimColor>
<ConfigurableShortcutHint
action="confirm:no"
context="Settings"
fallback="Esc"
description="cancel"
/>
</Text>
</Box>
)
}
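One plausible reading of the `maxWidth >= 62` cutoff above: it is roughly the narrowest width at which the fixed 50-column bar, the 1-column gap, and the worst-case `100% used` label fit on one row (an inference; the code does not state why 62 was chosen):

```ts
// Sanity arithmetic for the wide-layout cutoff (inference, not from the source).
const barWidth = 50                   // ProgressBar width in the wide branch
const gap = 1                         // Box gap between bar and label
const worstLabel = '100% used'.length // 9 columns
console.log(barWidth + gap + worstLabel) // 60, which fits under the 62 cutoff
```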

File diff suppressed because one or more lines are too long

File diff suppressed because one or more lines are too long

View File

@@ -5,6 +5,8 @@
* Addresses: https://github.com/Gitlawb/openclaude/issues/55
*/
declare const MACRO: { VERSION: string; DISPLAY_VERSION?: string }
const ESC = '\x1b['
const RESET = `${ESC}0m`
const DIM = `${ESC}2m`
@@ -78,6 +80,7 @@ const LOGO_CLAUDE = [
function detectProvider(): { name: string; model: string; baseUrl: string; isLocal: boolean } {
const useGemini = process.env.CLAUDE_CODE_USE_GEMINI === '1' || process.env.CLAUDE_CODE_USE_GEMINI === 'true'
const useGithub = process.env.CLAUDE_CODE_USE_GITHUB === '1' || process.env.CLAUDE_CODE_USE_GITHUB === 'true'
const useOpenAI = process.env.CLAUDE_CODE_USE_OPENAI === '1' || process.env.CLAUDE_CODE_USE_OPENAI === 'true'
if (useGemini) {
@@ -86,22 +89,53 @@ function detectProvider(): { name: string; model: string; baseUrl: string; isLoc
return { name: 'Google Gemini', model, baseUrl, isLocal: false }
}
if (useGithub) {
const model = process.env.OPENAI_MODEL || 'github:copilot'
const baseUrl =
process.env.OPENAI_BASE_URL || 'https://models.github.ai/inference'
return { name: 'GitHub Models', model, baseUrl, isLocal: false }
}
if (useOpenAI) {
const model = process.env.OPENAI_MODEL || 'gpt-4o'
const rawModel = process.env.OPENAI_MODEL || 'gpt-4o'
const baseUrl = process.env.OPENAI_BASE_URL || 'https://api.openai.com/v1'
const isLocal = /localhost|127\.0\.0\.1|0\.0\.0\.0/.test(baseUrl)
let name = 'OpenAI'
if (/deepseek/i.test(baseUrl) || /deepseek/i.test(model)) name = 'DeepSeek'
if (/deepseek/i.test(baseUrl) || /deepseek/i.test(rawModel)) name = 'DeepSeek'
else if (/openrouter/i.test(baseUrl)) name = 'OpenRouter'
else if (/together/i.test(baseUrl)) name = 'Together AI'
else if (/groq/i.test(baseUrl)) name = 'Groq'
else if (/mistral/i.test(baseUrl) || /mistral/i.test(model)) name = 'Mistral'
else if (/mistral/i.test(baseUrl) || /mistral/i.test(rawModel)) name = 'Mistral'
else if (/azure/i.test(baseUrl)) name = 'Azure OpenAI'
else if (/localhost:11434/i.test(baseUrl)) name = 'Ollama'
else if (/localhost:1234/i.test(baseUrl)) name = 'LM Studio'
else if (/llama/i.test(model)) name = 'Meta Llama'
else if (/llama/i.test(rawModel)) name = 'Meta Llama'
else if (isLocal) name = 'Local'
return { name, model, baseUrl, isLocal }
// Resolve model alias to actual model name + reasoning effort
let displayModel = rawModel
const codexAliases: Record<string, { model: string; reasoningEffort?: string }> = {
codexplan: { model: 'gpt-5.4', reasoningEffort: 'high' },
'gpt-5.4': { model: 'gpt-5.4', reasoningEffort: 'high' },
'gpt-5.3-codex': { model: 'gpt-5.3-codex', reasoningEffort: 'high' },
'gpt-5.3-codex-spark': { model: 'gpt-5.3-codex-spark' },
codexspark: { model: 'gpt-5.3-codex-spark' },
'gpt-5.2-codex': { model: 'gpt-5.2-codex', reasoningEffort: 'high' },
'gpt-5.1-codex-max': { model: 'gpt-5.1-codex-max', reasoningEffort: 'high' },
'gpt-5.1-codex-mini': { model: 'gpt-5.1-codex-mini' },
'gpt-5.4-mini': { model: 'gpt-5.4-mini', reasoningEffort: 'medium' },
'gpt-5.2': { model: 'gpt-5.2', reasoningEffort: 'medium' },
}
const alias = rawModel.toLowerCase()
if (alias in codexAliases) {
const resolved = codexAliases[alias]
displayModel = resolved.model
if (resolved.reasoningEffort) {
displayModel = `${displayModel} (${resolved.reasoningEffort})`
}
}
return { name, model: displayModel, baseUrl, isLocal }
}
// Default: Anthropic
@@ -172,7 +206,7 @@ export function printStartupScreen(): void {
out.push(boxRow(sRow, W, sLen))
out.push(`${rgb(...BORDER)}\u255a${'\u2550'.repeat(W - 2)}\u255d${RESET}`)
out.push(` ${DIM}${rgb(...DIMCOL)}openclaude v${(globalThis as Record<string, unknown>)['MACRO_DISPLAY_VERSION'] ?? '0.1.4'}${RESET}`)
out.push(` ${DIM}${rgb(...DIMCOL)}openclaude ${RESET}${rgb(...ACCENT)}v${MACRO.DISPLAY_VERSION ?? MACRO.VERSION}${RESET}`)
out.push('')
process.stdout.write(out.join('\n') + '\n')
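The alias table collapses a shorthand `OPENAI_MODEL` into a display name plus reasoning effort. A standalone sketch of that resolution (function name hypothetical; entries taken from the table above):

```ts
// Hypothetical standalone version of the alias resolution in detectProvider.
const aliases: Record<string, { model: string; reasoningEffort?: string }> = {
  codexplan: { model: 'gpt-5.4', reasoningEffort: 'high' },
  codexspark: { model: 'gpt-5.3-codex-spark' },
}

function resolveDisplayModel(rawModel: string): string {
  const resolved = aliases[rawModel.toLowerCase()]
  if (!resolved) return rawModel
  return resolved.reasoningEffort
    ? `${resolved.model} (${resolved.reasoningEffort})`
    : resolved.model
}

resolveDisplayModel('codexplan')  // => 'gpt-5.4 (high)'
resolveDisplayModel('codexspark') // => 'gpt-5.3-codex-spark'
resolveDisplayModel('gpt-4o')     // => 'gpt-4o' (no alias, passed through)
```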

File diff suppressed because one or more lines are too long

File diff suppressed because one or more lines are too long

View File

@@ -0,0 +1,48 @@
import { afterEach, expect, test } from 'bun:test'
import { getSystemPrompt, DEFAULT_AGENT_PROMPT } from './prompts.js'
import { CLI_SYSPROMPT_PREFIXES, getCLISyspromptPrefix } from './system.js'
import { GENERAL_PURPOSE_AGENT } from '../tools/AgentTool/built-in/generalPurposeAgent.js'
import { EXPLORE_AGENT } from '../tools/AgentTool/built-in/exploreAgent.js'
const originalSimpleEnv = process.env.CLAUDE_CODE_SIMPLE
afterEach(() => {
process.env.CLAUDE_CODE_SIMPLE = originalSimpleEnv
})
test('CLI identity prefixes describe OpenClaude instead of Claude Code', () => {
expect(getCLISyspromptPrefix()).toContain('OpenClaude')
expect(getCLISyspromptPrefix()).not.toContain("Anthropic's official CLI for Claude")
for (const prefix of CLI_SYSPROMPT_PREFIXES) {
expect(prefix).toContain('OpenClaude')
expect(prefix).not.toContain("Anthropic's official CLI for Claude")
}
})
test('simple mode identity describes OpenClaude instead of Claude Code', async () => {
process.env.CLAUDE_CODE_SIMPLE = '1'
const prompt = await getSystemPrompt([], 'gpt-4o')
expect(prompt[0]).toContain('OpenClaude')
expect(prompt[0]).not.toContain("Anthropic's official CLI for Claude")
})
test('built-in agent prompts describe OpenClaude instead of Claude Code', () => {
expect(DEFAULT_AGENT_PROMPT).toContain('OpenClaude')
expect(DEFAULT_AGENT_PROMPT).not.toContain("Anthropic's official CLI for Claude")
const generalPrompt = GENERAL_PURPOSE_AGENT.getSystemPrompt({
toolUseContext: { options: {} as never },
})
expect(generalPrompt).toContain('OpenClaude')
expect(generalPrompt).not.toContain("Anthropic's official CLI for Claude")
const explorePrompt = EXPLORE_AGENT.getSystemPrompt({
toolUseContext: { options: {} as never },
})
expect(explorePrompt).toContain('OpenClaude')
expect(explorePrompt).not.toContain("Anthropic's official CLI for Claude")
})

View File

@@ -449,7 +449,7 @@ export async function getSystemPrompt(
): Promise<string[]> {
if (isEnvTruthy(process.env.CLAUDE_CODE_SIMPLE)) {
return [
`You are Claude Code, Anthropic's official CLI for Claude.\n\nCWD: ${getCwd()}\nDate: ${getSessionStartDate()}`,
`You are OpenClaude, an open-source fork of Claude Code.\n\nCWD: ${getCwd()}\nDate: ${getSessionStartDate()}`,
]
}
@@ -755,7 +755,7 @@ export function getUnameSR(): string {
return `${osType()} ${osRelease()}`
}
export const DEFAULT_AGENT_PROMPT = `You are an agent for Claude Code, Anthropic's official CLI for Claude. Given the user's message, you should use the tools available to complete the task. Complete the task fully—don't gold-plate, but don't leave it half-done. When you complete the task, respond with a concise report covering what was done and any key findings — the caller will relay this to the user, so it only needs the essentials.`
export const DEFAULT_AGENT_PROMPT = `You are an agent for OpenClaude, an open-source fork of Claude Code. Given the user's message, you should use the tools available to complete the task. Complete the task fully—don't gold-plate, but don't leave it half-done. When you complete the task, respond with a concise report covering what was done and any key findings — the caller will relay this to the user, so it only needs the essentials.`
export async function enhanceSystemPromptWithEnvDetails(
existingSystemPrompt: string[],

View File

@@ -7,9 +7,12 @@ import { isEnvDefinedFalsy } from '../utils/envUtils.js'
import { getAPIProvider } from '../utils/model/providers.js'
import { getWorkload } from '../utils/workloadContext.js'
const DEFAULT_PREFIX = `You are Claude Code, Anthropic's official CLI for Claude.`
const AGENT_SDK_CLAUDE_CODE_PRESET_PREFIX = `You are Claude Code, Anthropic's official CLI for Claude, running within the Claude Agent SDK.`
const AGENT_SDK_PREFIX = `You are a Claude agent, built on Anthropic's Claude Agent SDK.`
const DEFAULT_PREFIX =
`You are OpenClaude, an open-source fork of Claude Code.`
const AGENT_SDK_CLAUDE_CODE_PRESET_PREFIX =
`You are OpenClaude, an open-source fork of Claude Code, running within the Claude Agent SDK.`
const AGENT_SDK_PREFIX =
`You are a Claude agent running in OpenClaude, built on the Claude Agent SDK.`
const CLI_SYSPROMPT_PREFIX_VALUES = [
DEFAULT_PREFIX,

View File

@@ -1,6 +1,7 @@
import type * as React from 'react';
import { useCallback, useEffect } from 'react';
import { useAppStateStore, useSetAppState } from 'src/state/AppState.js';
import { logError } from '../utils/log.js';
import type { Theme } from '../utils/theme.js';
type Priority = 'low' | 'medium' | 'high' | 'immediate';
type BaseNotification = {
@@ -44,6 +45,7 @@ export function useNotifications(): {
// Process queue when current notification finishes or queue changes
const processQueue = useCallback(() => {
try {
setAppState(prev => {
const next = getNext(prev.notifications.queue);
if (prev.notifications.current !== null || !next) {
@@ -74,8 +76,12 @@ export function useNotifications(): {
}
};
});
} catch (error) {
logError(error);
}
}, [setAppState]);
const addNotification = useCallback<AddNotificationFn>((notif: Notification) => {
try {
// Handle immediate priority notifications
if (notif.priority === 'immediate') {
// Clear any existing timeout since we're showing a new immediate notification
@@ -189,8 +195,12 @@ export function useNotifications(): {
// Process queue after adding the notification
processQueue();
} catch (error) {
logError(error);
}
}, [setAppState, processQueue]);
const removeNotification = useCallback<RemoveNotificationFn>((key: string) => {
try {
setAppState(prev => {
const isCurrent = prev.notifications.current?.key === key;
const inQueue = prev.notifications.queue.some(n => n.key === key);
@@ -210,6 +220,9 @@ export function useNotifications(): {
};
});
processQueue();
} catch (error) {
logError(error);
}
}, [setAppState, processQueue]);
// Process queue on mount if there are notifications in the initial state.

File diff suppressed because one or more lines are too long

View File

@@ -441,3 +441,8 @@ export async function connectRemoteControl(
): Promise<RemoteControlHandle | null> {
throw new Error('not implemented')
}
// Exit reason type, exported so gracefulShutdown can import it without a type error.
export type ExitReason = {}

View File

@@ -1,8 +1,14 @@
import { feature } from 'bun:bundle';
import {
isLocalProviderUrl,
resolveCodexApiCredentials,
resolveProviderRequest,
} from '../services/api/providerConfig.js'
import {
applyProfileEnvToProcessEnv,
buildStartupEnvFromProfile,
redactSecretValueForDisplay,
} from '../utils/providerProfile.js'
// Bugfix for corepack auto-pinning, which adds yarnpkg to peoples' package.jsons
// eslint-disable-next-line custom-rules/no-top-level-side-effects
@@ -35,49 +41,72 @@ function isEnvTruthy(value: string | undefined): boolean {
return normalized !== '' && normalized !== '0' && normalized !== 'false' && normalized !== 'no'
}
function isLocalProviderUrl(baseUrl: string | undefined): boolean {
if (!baseUrl) return false
try {
const parsed = new URL(baseUrl)
return parsed.hostname === 'localhost' || parsed.hostname === '127.0.0.1' || parsed.hostname === '::1'
} catch {
return false
}
}
function getProviderValidationError(
env: NodeJS.ProcessEnv = process.env,
): string | null {
const useOpenAI = isEnvTruthy(env.CLAUDE_CODE_USE_OPENAI)
const useGithub = isEnvTruthy(env.CLAUDE_CODE_USE_GITHUB)
function validateProviderEnvOrExit(): void {
if (!isEnvTruthy(process.env.CLAUDE_CODE_USE_OPENAI)) {
return
if (isEnvTruthy(env.CLAUDE_CODE_USE_GEMINI)) {
if (!(env.GEMINI_API_KEY ?? env.GOOGLE_API_KEY)) {
return 'GEMINI_API_KEY is required when CLAUDE_CODE_USE_GEMINI=1.'
}
return null
}
if (useGithub && !useOpenAI) {
const token = (env.GITHUB_TOKEN?.trim() || env.GH_TOKEN?.trim()) ?? ''
if (!token) {
return 'GITHUB_TOKEN or GH_TOKEN is required when CLAUDE_CODE_USE_GITHUB=1.'
}
return null
}
if (!useOpenAI) {
return null
}
const request = resolveProviderRequest({
model: process.env.OPENAI_MODEL,
baseUrl: process.env.OPENAI_BASE_URL,
model: env.OPENAI_MODEL,
baseUrl: env.OPENAI_BASE_URL,
})
if (process.env.OPENAI_API_KEY === 'SUA_CHAVE') {
console.error('Invalid OPENAI_API_KEY: placeholder value SUA_CHAVE detected. Set a real key or unset for local providers.')
process.exit(1)
if (env.OPENAI_API_KEY === 'SUA_CHAVE') {
return 'Invalid OPENAI_API_KEY: placeholder value SUA_CHAVE detected. Set a real key or unset for local providers.'
}
if (request.transport === 'codex_responses') {
const credentials = resolveCodexApiCredentials()
const credentials = resolveCodexApiCredentials(env)
if (!credentials.apiKey) {
const authHint = credentials.authPath
? ` or put auth.json at ${credentials.authPath}`
: ''
console.error(`Codex auth is required for ${request.requestedModel}. Set CODEX_API_KEY${authHint}.`)
process.exit(1)
const safeModel =
redactSecretValueForDisplay(request.requestedModel, env) ??
'the requested model'
return `Codex auth is required for ${safeModel}. Set CODEX_API_KEY${authHint}.`
}
if (!credentials.accountId) {
console.error('Codex auth is missing chatgpt_account_id. Re-login with Codex or set CHATGPT_ACCOUNT_ID/CODEX_ACCOUNT_ID.')
process.exit(1)
return 'Codex auth is missing chatgpt_account_id. Re-login with Codex or set CHATGPT_ACCOUNT_ID/CODEX_ACCOUNT_ID.'
}
return
return null
}
if (!process.env.OPENAI_API_KEY && !isLocalProviderUrl(request.baseUrl)) {
console.error('OPENAI_API_KEY is required when CLAUDE_CODE_USE_OPENAI=1 and OPENAI_BASE_URL is not local.')
if (!env.OPENAI_API_KEY && !isLocalProviderUrl(request.baseUrl)) {
const hasGithubToken = !!(env.GITHUB_TOKEN?.trim() || env.GH_TOKEN?.trim())
if (useGithub && hasGithubToken) {
return null
}
return 'OPENAI_API_KEY is required when CLAUDE_CODE_USE_OPENAI=1 and OPENAI_BASE_URL is not local.'
}
return null
}
function validateProviderEnvOrExit(): void {
const error = getProviderValidationError()
if (error) {
console.error(error)
process.exit(1)
}
}
@@ -98,6 +127,29 @@ async function main(): Promise<void> {
return;
}
{
const { enableConfigs } = await import('../utils/config.js')
enableConfigs()
const { applySafeConfigEnvironmentVariables } = await import('../utils/managedEnv.js')
applySafeConfigEnvironmentVariables()
const { hydrateGithubModelsTokenFromSecureStorage } = await import('../utils/githubModelsCredentials.js')
hydrateGithubModelsTokenFromSecureStorage()
}
const startupEnv = await buildStartupEnvFromProfile({
processEnv: process.env,
})
if (startupEnv !== process.env) {
const startupProfileError = getProviderValidationError(startupEnv)
if (startupProfileError) {
console.error(
`Warning: ignoring saved provider profile. ${startupProfileError}`,
)
} else {
applyProfileEnvToProcessEnv(process.env, startupEnv)
}
}
validateProviderEnvOrExit()
// Print the gradient startup screen before the Ink UI loads
@@ -347,6 +399,22 @@ async function main(): Promise<void> {
process.env.CLAUDE_CODE_SIMPLE = '1';
}
// --provider: set provider env vars early, before main module loads.
// This mirrors the --bare pattern: env vars must be in place before
// Commander option building and module-level constants are evaluated.
if (args.includes('--provider')) {
const { parseProviderFlag, applyProviderFlag } = await import('../utils/providerFlag.js');
const provider = parseProviderFlag(args);
if (provider) {
const result = applyProviderFlag(provider, args);
if (result.error) {
// biome-ignore lint/suspicious/noConsole: intentional error output
console.error(`Error: ${result.error}`);
process.exit(1);
}
}
}
// No special flags detected, load and run the full CLI
if (process.env.OPENCLAUDE_ENABLE_EARLY_INPUT === '1') {
const {

View File

@@ -1,5 +1,6 @@
import { c as _c } from "react-compiler-runtime";
import * as React from 'react';
import { logError } from '../../utils/log.js';
import { useEffect } from 'react';
import { useNotifications } from 'src/context/notifications.js';
import { getIsRemoteMode } from '../../bootstrap/state.js';
@@ -23,43 +24,47 @@ export function useMcpConnectivityStatus(t0) {
let t3;
if ($[0] !== addNotification || $[1] !== mcpClients) {
t2 = () => {
if (getIsRemoteMode()) {
return;
}
const failedLocalClients = mcpClients.filter(_temp);
const failedClaudeAiClients = mcpClients.filter(_temp2);
const needsAuthLocalServers = mcpClients.filter(_temp3);
const needsAuthClaudeAiServers = mcpClients.filter(_temp4);
if (failedLocalClients.length === 0 && failedClaudeAiClients.length === 0 && needsAuthLocalServers.length === 0 && needsAuthClaudeAiServers.length === 0) {
return;
}
if (failedLocalClients.length > 0) {
addNotification({
key: "mcp-failed",
jsx: <><Text color="error">{failedLocalClients.length} MCP{" "}{failedLocalClients.length === 1 ? "server" : "servers"} failed</Text><Text dimColor={true}> · /mcp</Text></>,
priority: "medium"
});
}
if (failedClaudeAiClients.length > 0) {
addNotification({
key: "mcp-claudeai-failed",
jsx: <><Text color="error">{failedClaudeAiClients.length} claude.ai{" "}{failedClaudeAiClients.length === 1 ? "connector" : "connectors"}{" "}unavailable</Text><Text dimColor={true}> · /mcp</Text></>,
priority: "medium"
});
}
if (needsAuthLocalServers.length > 0) {
addNotification({
key: "mcp-needs-auth",
jsx: <><Text color="warning">{needsAuthLocalServers.length} MCP{" "}{needsAuthLocalServers.length === 1 ? "server needs" : "servers need"}{" "}auth</Text><Text dimColor={true}> · /mcp</Text></>,
priority: "medium"
});
}
if (needsAuthClaudeAiServers.length > 0) {
addNotification({
key: "mcp-claudeai-needs-auth",
jsx: <><Text color="warning">{needsAuthClaudeAiServers.length} claude.ai{" "}{needsAuthClaudeAiServers.length === 1 ? "connector needs" : "connectors need"}{" "}auth</Text><Text dimColor={true}> · /mcp</Text></>,
priority: "medium"
});
try {
if (getIsRemoteMode()) {
return;
}
const failedLocalClients = mcpClients.filter(_temp);
const failedClaudeAiClients = mcpClients.filter(_temp2);
const needsAuthLocalServers = mcpClients.filter(_temp3);
const needsAuthClaudeAiServers = mcpClients.filter(_temp4);
if (failedLocalClients.length === 0 && failedClaudeAiClients.length === 0 && needsAuthLocalServers.length === 0 && needsAuthClaudeAiServers.length === 0) {
return;
}
if (failedLocalClients.length > 0) {
addNotification({
key: "mcp-failed",
jsx: <><Text color="error">{failedLocalClients.length} MCP{" "}{failedLocalClients.length === 1 ? "server" : "servers"} failed</Text><Text dimColor={true}> · /mcp</Text></>,
priority: "medium"
});
}
if (failedClaudeAiClients.length > 0) {
addNotification({
key: "mcp-claudeai-failed",
jsx: <><Text color="error">{failedClaudeAiClients.length} claude.ai{" "}{failedClaudeAiClients.length === 1 ? "connector" : "connectors"}{" "}unavailable</Text><Text dimColor={true}> · /mcp</Text></>,
priority: "medium"
});
}
if (needsAuthLocalServers.length > 0) {
addNotification({
key: "mcp-needs-auth",
jsx: <><Text color="warning">{needsAuthLocalServers.length} MCP{" "}{needsAuthLocalServers.length === 1 ? "server needs" : "servers need"}{" "}auth</Text><Text dimColor={true}> · /mcp</Text></>,
priority: "medium"
});
}
if (needsAuthClaudeAiServers.length > 0) {
addNotification({
key: "mcp-claudeai-needs-auth",
jsx: <><Text color="warning">{needsAuthClaudeAiServers.length} claude.ai{" "}{needsAuthClaudeAiServers.length === 1 ? "connector needs" : "connectors need"}{" "}auth</Text><Text dimColor={true}> · /mcp</Text></>,
priority: "medium"
});
}
} catch (error) {
logError(error);
}
};
t3 = [addNotification, mcpClients];

View File

@@ -1242,17 +1242,25 @@ export function useTypeahead({
const handleAutocompletePrevious = useCallback(() => {
setSuggestionsState(prev => ({
...prev,
selectedSuggestion: prev.selectedSuggestion <= 0 ? suggestions.length - 1 : prev.selectedSuggestion - 1
selectedSuggestion: prev.suggestions.length === 0
? -1
: prev.selectedSuggestion <= 0
? prev.suggestions.length - 1
: Math.min(prev.selectedSuggestion - 1, prev.suggestions.length - 1)
}));
}, [suggestions.length, setSuggestionsState]);
}, [setSuggestionsState]);
// Handler for autocomplete:next - selects next suggestion
const handleAutocompleteNext = useCallback(() => {
setSuggestionsState(prev => ({
...prev,
selectedSuggestion: prev.selectedSuggestion >= suggestions.length - 1 ? 0 : prev.selectedSuggestion + 1
selectedSuggestion: prev.suggestions.length === 0
? -1
: prev.selectedSuggestion >= prev.suggestions.length - 1
? 0
: Math.max(0, prev.selectedSuggestion + 1)
}));
}, [suggestions.length, setSuggestionsState]);
}, [setSuggestionsState]);
// Autocomplete context keybindings - only active when suggestions are visible
const autocompleteHandlers = useMemo(() => ({

View File

@@ -0,0 +1,49 @@
import { expect, test } from 'bun:test'
import {
INITIAL_STATE,
parseMultipleKeypresses,
type ParsedKey,
} from './parse-keypress.ts'
import { InputEvent } from './events/input-event.ts'
function parseInputEvent(sequence: string): InputEvent {
const [items] = parseMultipleKeypresses(INITIAL_STATE, sequence)
expect(items).toHaveLength(1)
const item = items[0]
expect(item?.kind).toBe('key')
return new InputEvent(item as ParsedKey)
}
test('treats CSI-u modifier 0 as unmodified printable input', () => {
const event = parseInputEvent('\x1b[47;0u')
expect(event.input).toBe('/')
expect(event.key.ctrl).toBe(false)
expect(event.key.meta).toBe(false)
expect(event.key.shift).toBe(false)
expect(event.key.super).toBe(false)
})
test('preserves printable Unicode CSI-u input', () => {
const event = parseInputEvent('\x1b[231u')
expect(event.input).toBe('ç')
expect(event.key.ctrl).toBe(false)
expect(event.key.meta).toBe(false)
expect(event.key.shift).toBe(false)
expect(event.key.super).toBe(false)
})
test('preserves printable Unicode CSI-u input with explicit modifier 0', () => {
const event = parseInputEvent('\x1b[231;0u')
expect(event.input).toBe('ç')
expect(event.key.ctrl).toBe(false)
expect(event.key.meta).toBe(false)
expect(event.key.shift).toBe(false)
expect(event.key.super).toBe(false)
})

View File

@@ -468,7 +468,10 @@ function decodeModifier(modifier: number): {
ctrl: boolean
super: boolean
} {
const m = modifier - 1
// Some Windows VT stacks use 0 instead of 1 for an unmodified CSI-u key.
// Clamp to the protocol default so plain printable keys don't look like
// ctrl+meta+shift+super all at once.
const m = Math.max(modifier, 1) - 1
return {
shift: !!(m & 1),
meta: !!(m & 2),
@@ -477,6 +480,14 @@ function decodeModifier(modifier: number): {
}
}
function isPrivateUseCodepoint(codepoint: number): boolean {
return (
(codepoint >= 0xe000 && codepoint <= 0xf8ff) ||
(codepoint >= 0xf0000 && codepoint <= 0xffffd) ||
(codepoint >= 0x100000 && codepoint <= 0x10fffd)
)
}
/**
* Map keycode to key name for modifyOtherKeys/CSI u sequences.
* Handles both ASCII keycodes and Kitty keyboard protocol functional keys.
@@ -536,6 +547,21 @@ function keycodeToName(keycode: number): string | undefined {
if (keycode >= 32 && keycode <= 126) {
return String.fromCharCode(keycode).toLowerCase()
}
// CSI-u can carry printable Unicode codepoints directly on some
// Windows terminals and keyboard layouts. Keep kitty's private-use
// functional key range excluded so special keys still stay non-text.
if (
keycode > 0x1f &&
keycode !== 0x7f &&
(keycode < 0x80 || keycode > 0x9f) &&
keycode <= 0x10ffff &&
(keycode < 0xd800 || keycode > 0xdfff) &&
!isPrivateUseCodepoint(keycode)
) {
return String.fromCodePoint(keycode)
}
return undefined
}
}
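The CSI-u modifier byte is `1 + bitmask`, with shift/meta/ctrl/super on bits 0-3, so the clamp above makes modifier `0` behave like the protocol default `1`. A quick sanity check of that decoding (a standalone copy of the bit math above):

```ts
// Standalone copy of the clamped decoder: modifier 0 and 1 both mean "no modifiers".
function decode(modifier: number) {
  const m = Math.max(modifier, 1) - 1
  return { shift: !!(m & 1), meta: !!(m & 2), ctrl: !!(m & 4), super: !!(m & 8) }
}

decode(0) // all false (clamped: some Windows VT stacks send 0 for unmodified keys)
decode(1) // all false (the protocol default)
decode(5) // ctrl only (1 + 4)
decode(6) // ctrl + shift (1 + 1 + 4)
```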

View File

@@ -433,6 +433,8 @@ const reconciler = createReconciler<
scheduleTimeout: setTimeout,
cancelTimeout: clearTimeout,
noTimeout: -1,
supportsMicrotasks: true,
scheduleMicrotask: queueMicrotask,
getCurrentUpdatePriority: () => dispatcher.currentUpdatePriority,
beforeActiveInstanceBlur() {},
afterActiveInstanceBlur() {},

View File

@@ -139,6 +139,7 @@ import { validateUuid } from './utils/uuid.js';
// Plugin startup checks are now handled non-blockingly in REPL.tsx
import { registerMcpAddCommand } from 'src/commands/mcp/addCommand.js';
import { registerMcpDoctorCommand } from 'src/commands/mcp/doctorCommand.js';
import { registerMcpXaaIdpCommand } from 'src/commands/mcp/xaaIdpCommand.js';
import { logPermissionContextForAnts } from 'src/services/internalLogging.js';
import { fetchClaudeAIMcpConfigsIfEligible } from 'src/services/mcp/claudeai.js';
@@ -983,7 +984,7 @@ async function run(): Promise<CommanderCommand> {
return Number.isFinite(n) ? n : undefined;
}).hideHelp()).option('--from-pr [value]', 'Resume a session linked to a PR by PR number/URL, or open interactive picker with optional search term', value => value || true).option('--no-session-persistence', 'Disable session persistence - sessions will not be saved to disk and cannot be resumed (only works with --print)').addOption(new Option('--resume-session-at <message id>', 'When resuming, only messages up to and including the assistant message with <message.id> (use with --resume in print mode)').argParser(String).hideHelp()).addOption(new Option('--rewind-files <user-message-id>', 'Restore files to state at the specified user message and exit (requires --resume)').hideHelp())
// @[MODEL LAUNCH]: Update the example model ID in the --model help text.
.option('--model <model>', `Model for the current session. Provide an alias for the latest model (e.g. 'sonnet' or 'opus') or a model's full name (e.g. 'claude-sonnet-4-6').`).addOption(new Option('--effort <level>', `Effort level for the current session (low, medium, high, max)`).argParser((rawValue: string) => {
.option('--model <model>', `Model for the current session. Provide an alias for the latest model (e.g. 'sonnet' or 'opus') or a model's full name (e.g. 'claude-sonnet-4-6').`).option('--provider <provider>', `AI provider to use (anthropic, openai, gemini, github, bedrock, vertex, ollama). Reads API keys from environment variables.`).addOption(new Option('--effort <level>', `Effort level for the current session (low, medium, high, max)`).argParser((rawValue: string) => {
const value = rawValue.toLowerCase();
const allowed = ['low', 'medium', 'high', 'max'];
if (!allowed.includes(value)) {
@@ -2313,7 +2314,11 @@ async function run(): Promise<CommanderCommand> {
errors
} = getSettingsWithErrors();
const nonMcpErrors = errors.filter(e => !e.mcpErrorMetadata);
if (nonMcpErrors.length > 0 && !isEnvTruthy(process.env.CLAUDE_CODE_USE_OPENAI)) {
if (
nonMcpErrors.length > 0 &&
!isEnvTruthy(process.env.CLAUDE_CODE_USE_OPENAI) &&
!isEnvTruthy(process.env.CLAUDE_CODE_USE_GITHUB)
) {
await launchInvalidSettingsDialog(root, {
settingsErrors: nonMcpErrors,
onExit: () => gracefulShutdownSync(1)
@@ -3887,6 +3892,7 @@ async function run(): Promise<CommanderCommand> {
// Register the mcp add subcommand (extracted for testability)
registerMcpAddCommand(mcp);
registerMcpDoctorCommand(mcp);
if (isXaaEnabled()) {
registerMcpXaaIdpCommand(mcp);
}

View File

@@ -0,0 +1,59 @@
import { afterEach, expect, test } from 'bun:test'
import { mkdtemp, mkdir, writeFile, rm } from 'fs/promises'
import { join } from 'path'
import { tmpdir } from 'os'
import { scanMemoryFiles } from './memoryScan.ts'
// Finding #42-3: readdir({ recursive: true }) has no depth limit.
// A deeply nested directory in the memory dir causes a full unbounded walk.
let tempDir: string
afterEach(async () => {
if (tempDir) {
await rm(tempDir, { recursive: true, force: true })
}
})
test('scanMemoryFiles finds .md files at shallow depth', async () => {
tempDir = await mkdtemp(join(tmpdir(), 'memoryScan-'))
await writeFile(join(tempDir, 'note.md'), '---\nname: test\ntype: user\n---\nContent')
const controller = new AbortController()
const result = await scanMemoryFiles(tempDir, controller.signal)
expect(result.length).toBe(1)
expect(result[0].filename).toBe('note.md')
})
test('scanMemoryFiles ignores MEMORY.md', async () => {
tempDir = await mkdtemp(join(tmpdir(), 'memoryScan-'))
await writeFile(join(tempDir, 'MEMORY.md'), '# index')
await writeFile(join(tempDir, 'user_role.md'), '---\nname: role\ntype: user\n---\nContent')
const controller = new AbortController()
const result = await scanMemoryFiles(tempDir, controller.signal)
expect(result.length).toBe(1)
expect(result[0].filename).toBe('user_role.md')
})
test('scanMemoryFiles does not return .md files nested beyond max depth', async () => {
tempDir = await mkdtemp(join(tmpdir(), 'memoryScan-'))
// Shallow file - should be found
await writeFile(join(tempDir, 'shallow.md'), '---\nname: shallow\ntype: user\n---\nContent')
// Deeply nested file (depth 5) - should be excluded
const deepDir = join(tempDir, 'd1', 'd2', 'd3', 'd4', 'd5')
await mkdir(deepDir, { recursive: true })
await writeFile(join(deepDir, 'deep.md'), '---\nname: deep\ntype: user\n---\nContent')
const controller = new AbortController()
const result = await scanMemoryFiles(tempDir, controller.signal)
const filenames = result.map(r => r.filename)
expect(filenames).toContain('shallow.md')
// The deeply nested file must not appear
expect(filenames.some(f => f.includes('deep.md'))).toBe(false)
})

View File

@@ -38,8 +38,15 @@ export async function scanMemoryFiles(
): Promise<MemoryHeader[]> {
try {
const entries = await readdir(memoryDir, { recursive: true })
// Limit depth to 3 levels to prevent DoS from deep/symlinked directory trees.
// Relative paths from readdir use the OS separator, so count separators.
const sep = require('path').sep as string
const MAX_DEPTH = 3
const mdFiles = entries.filter(
f => f.endsWith('.md') && basename(f) !== 'MEMORY.md',
f =>
f.endsWith('.md') &&
basename(f) !== 'MEMORY.md' &&
(f.split(sep).length - 1) < MAX_DEPTH,
)
const headerResults = await Promise.allSettled(

View File

@@ -3,6 +3,7 @@ import {
setMainLoopModelOverride,
} from '../bootstrap/state.js'
import { getGlobalConfig, saveGlobalConfig } from '../utils/config.js'
import { getAPIProvider } from '../utils/model/providers.js'
import {
getSettingsForSource,
updateSettingsForSource,
@@ -23,6 +24,10 @@ import {
* tracked by a completion flag in global config.
*/
export function migrateSonnet1mToSonnet45(): void {
if (getAPIProvider() !== 'firstParty') {
return
}
const config = getGlobalConfig()
if (config.sonnet1m45MigrationComplete) {
return

View File

@@ -376,7 +376,7 @@ async function* queryLoop(
const persistReplacements =
querySource.startsWith('agent:') ||
querySource.startsWith('repl_main_thread')
messagesForQuery = await applyToolResultBudget(
const toolResultBudgetResult = await applyToolResultBudget(
messagesForQuery,
toolUseContext.contentReplacementState,
persistReplacements
@@ -392,6 +392,12 @@ async function* queryLoop(
.map(t => t.name),
),
)
messagesForQuery = toolResultBudgetResult.messages
if (toolResultBudgetResult.newlyReplaced.length > 0) {
toolUseContext.syncToolResultReplacements?.(
toolUseContext.contentReplacementState?.replacements ?? new Map(),
)
}
// Apply snip before microcompact (both may run — they are not mutually exclusive).
// snipTokensFreed is plumbed to autocompact so its threshold check reflects

File diff suppressed because it is too large

View File

@@ -0,0 +1,121 @@
import { afterEach, beforeEach, expect, test } from 'bun:test'
import { getAnthropicClient } from './client.js'
type FetchType = typeof globalThis.fetch
type ShimClient = {
beta: {
messages: {
create: (params: Record<string, unknown>) => Promise<unknown>
}
}
}
const originalFetch = globalThis.fetch
const originalMacro = (globalThis as Record<string, unknown>).MACRO
const originalEnv = {
CLAUDE_CODE_USE_GEMINI: process.env.CLAUDE_CODE_USE_GEMINI,
GEMINI_API_KEY: process.env.GEMINI_API_KEY,
GEMINI_MODEL: process.env.GEMINI_MODEL,
GEMINI_BASE_URL: process.env.GEMINI_BASE_URL,
GOOGLE_API_KEY: process.env.GOOGLE_API_KEY,
OPENAI_API_KEY: process.env.OPENAI_API_KEY,
OPENAI_BASE_URL: process.env.OPENAI_BASE_URL,
OPENAI_MODEL: process.env.OPENAI_MODEL,
ANTHROPIC_API_KEY: process.env.ANTHROPIC_API_KEY,
ANTHROPIC_AUTH_TOKEN: process.env.ANTHROPIC_AUTH_TOKEN,
}
beforeEach(() => {
;(globalThis as Record<string, unknown>).MACRO = { VERSION: 'test-version' }
process.env.CLAUDE_CODE_USE_GEMINI = '1'
process.env.GEMINI_API_KEY = 'gemini-test-key'
process.env.GEMINI_MODEL = 'gemini-2.0-flash'
process.env.GEMINI_BASE_URL = 'https://gemini.example/v1beta/openai'
delete process.env.GOOGLE_API_KEY
delete process.env.OPENAI_API_KEY
delete process.env.OPENAI_BASE_URL
delete process.env.OPENAI_MODEL
delete process.env.ANTHROPIC_API_KEY
delete process.env.ANTHROPIC_AUTH_TOKEN
})
afterEach(() => {
;(globalThis as Record<string, unknown>).MACRO = originalMacro
process.env.CLAUDE_CODE_USE_GEMINI = originalEnv.CLAUDE_CODE_USE_GEMINI
process.env.GEMINI_API_KEY = originalEnv.GEMINI_API_KEY
process.env.GEMINI_MODEL = originalEnv.GEMINI_MODEL
process.env.GEMINI_BASE_URL = originalEnv.GEMINI_BASE_URL
process.env.GOOGLE_API_KEY = originalEnv.GOOGLE_API_KEY
process.env.OPENAI_API_KEY = originalEnv.OPENAI_API_KEY
process.env.OPENAI_BASE_URL = originalEnv.OPENAI_BASE_URL
process.env.OPENAI_MODEL = originalEnv.OPENAI_MODEL
process.env.ANTHROPIC_API_KEY = originalEnv.ANTHROPIC_API_KEY
process.env.ANTHROPIC_AUTH_TOKEN = originalEnv.ANTHROPIC_AUTH_TOKEN
globalThis.fetch = originalFetch
})
test('routes Gemini provider requests through the OpenAI-compatible shim', async () => {
let capturedUrl: string | undefined
let capturedHeaders: Headers | undefined
let capturedBody: Record<string, unknown> | undefined
globalThis.fetch = (async (input, init) => {
capturedUrl =
typeof input === 'string'
? input
: input instanceof URL
? input.toString()
: input.url
capturedHeaders = new Headers(init?.headers)
capturedBody = JSON.parse(String(init?.body)) as Record<string, unknown>
return new Response(
JSON.stringify({
id: 'chatcmpl-gemini',
model: 'gemini-2.0-flash',
choices: [
{
message: {
role: 'assistant',
content: 'gemini ok',
},
finish_reason: 'stop',
},
],
usage: {
prompt_tokens: 8,
completion_tokens: 3,
total_tokens: 11,
},
}),
{
headers: {
'Content-Type': 'application/json',
},
},
)
}) as FetchType
const client = (await getAnthropicClient({
maxRetries: 0,
model: 'gemini-2.0-flash',
})) as unknown as ShimClient
const response = await client.beta.messages.create({
model: 'gemini-2.0-flash',
system: 'test system',
messages: [{ role: 'user', content: 'hello' }],
max_tokens: 64,
stream: false,
})
expect(capturedUrl).toBe('https://gemini.example/v1beta/openai/chat/completions')
expect(capturedHeaders?.get('authorization')).toBe('Bearer gemini-test-key')
expect(capturedBody?.model).toBe('gemini-2.0-flash')
expect(response).toMatchObject({
role: 'assistant',
model: 'gemini-2.0-flash',
})
})

View File

@@ -154,7 +154,11 @@ export async function getAnthropicClient({
fetch: resolvedFetch,
}),
}
if (isEnvTruthy(process.env.CLAUDE_CODE_USE_OPENAI)) {
if (
isEnvTruthy(process.env.CLAUDE_CODE_USE_OPENAI) ||
isEnvTruthy(process.env.CLAUDE_CODE_USE_GITHUB) ||
isEnvTruthy(process.env.CLAUDE_CODE_USE_GEMINI)
) {
const { createOpenAIShimClient } = await import('./openaiShim.js')
return createOpenAIShimClient({
defaultHeaders,

View File

@@ -144,6 +144,83 @@ describe('Codex request translation', () => {
])
})
test('removes unsupported uri format from strict Responses schemas', () => {
const tools = convertToolsToResponsesTools([
{
name: 'WebFetch',
description: 'Fetch a URL',
input_schema: {
type: 'object',
properties: {
url: { type: 'string', format: 'uri' },
prompt: { type: 'string' },
},
required: ['url', 'prompt'],
additionalProperties: false,
},
},
])
expect(tools).toEqual([
{
type: 'function',
name: 'WebFetch',
description: 'Fetch a URL',
parameters: {
type: 'object',
properties: {
url: { type: 'string' },
prompt: { type: 'string' },
},
required: ['url', 'prompt'],
additionalProperties: false,
},
strict: true,
},
])
})
test('sanitizes malformed enum/default values for Responses tool schemas', () => {
const tools = convertToolsToResponsesTools([
{
name: 'mcp__clientry__create_task',
description: 'Create a task',
input_schema: {
type: 'object',
properties: {
priority: {
type: 'integer',
description: 'Priority: 0=low, 1=medium, 2=high, 3=urgent',
default: true,
enum: [false, 0, 1, 2, 3],
},
},
},
},
])
expect(tools).toEqual([
{
type: 'function',
name: 'mcp__clientry__create_task',
description: 'Create a task',
parameters: {
type: 'object',
properties: {
priority: {
type: 'integer',
description: 'Priority: 0=low, 1=medium, 2=high, 3=urgent',
enum: [0, 1, 2, 3],
},
},
required: ['priority'],
additionalProperties: false,
},
strict: true,
},
])
})
test('converts assistant tool use and user tool result into Responses items', () => {
const items = convertAnthropicMessagesToResponsesInput([
{

View File

@@ -1,7 +1,9 @@
import { APIError } from '@anthropic-ai/sdk'
import type {
ResolvedCodexCredentials,
ResolvedProviderRequest,
} from './providerConfig.js'
import { sanitizeSchemaForOpenAICompat } from './openaiSchemaSanitizer.js'
export interface AnthropicUsage {
input_tokens: number
@@ -83,7 +85,7 @@ function makeUsage(usage?: {
}
function makeMessageId(): string {
return `msg_${Math.random().toString(36).slice(2)}${Date.now().toString(36)}`
return `msg_${crypto.randomUUID().replace(/-/g, '')}`
}
function normalizeToolUseId(toolUseId: string | undefined): {
@@ -234,7 +236,10 @@ export function convertAnthropicMessagesToResponsesInput(
items.push({
type: 'function_call_output',
call_id: callId,
output: convertToolResultToText(toolResult.content),
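// Prefix failed tool results with "Error:" so the model can tell failures apart from normal output.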
output: (() => {
const out = convertToolResultToText(toolResult.content)
return toolResult.is_error ? `Error: ${out}` : out
})(),
})
}
@@ -259,7 +264,8 @@ export function convertAnthropicMessagesToResponsesInput(
if (role === 'assistant') {
const textBlocks = Array.isArray(content)
? content.filter((block: { type?: string }) => block.type !== 'tool_use')
? content.filter((block: { type?: string }) =>
block.type !== 'tool_use' && block.type !== 'thinking')
: content
const parts = convertContentBlocksToResponsesParts(textBlocks, 'assistant')
if (parts.length > 0) {
@@ -302,16 +308,14 @@ export function convertAnthropicMessagesToResponsesInput(
* - Nested schemas (properties, items, anyOf/oneOf/allOf) are processed too
*/
function enforceStrictSchema(schema: unknown): Record<string, unknown> {
if (!schema || typeof schema !== 'object' || Array.isArray(schema)) {
return (schema ?? {}) as Record<string, unknown>
const record = sanitizeSchemaForOpenAICompat(schema)
// Codex Responses rejects JSON Schema's standard `uri` string format.
// Keep URL validation in the tool layer and send a plain string here.
if (record.format === 'uri') {
delete record.format
}
const record = { ...(schema as Record<string, unknown>) }
// Codex API strict schemas reject these JSON schema keywords
delete record.$schema
delete record.propertyNames
if (record.type === 'object') {
// OpenAI structured outputs completely forbid dynamic additionalProperties.
// They must be set to false unconditionally.
@@ -453,6 +457,7 @@ function convertToolChoice(toolChoice: unknown): unknown {
if (!choice?.type) return undefined
if (choice.type === 'auto') return 'auto'
if (choice.type === 'any') return 'required'
if (choice.type === 'none') return 'none'
if (choice.type === 'tool' && choice.name) {
return {
type: 'function',
@@ -553,7 +558,13 @@ export async function performCodexRequest(options: {
if (!response.ok) {
const errorBody = await response.text().catch(() => 'unknown error')
throw new Error(`Codex API error ${response.status}: ${errorBody}`)
let errorResponse: object | undefined
try { errorResponse = JSON.parse(errorBody) } catch { /* raw text */ }
throw APIError.generate(
response.status, errorResponse,
`Codex API error ${response.status}: ${errorBody}`,
response.headers as unknown as Record<string, string>,
)
}
return response
@@ -633,11 +644,9 @@ export async function collectCodexCompletedResponse(
for await (const event of readSseEvents(response)) {
if (event.event === 'response.failed') {
throw new Error(
event.data?.response?.error?.message ??
event.data?.error?.message ??
'Codex response failed',
)
const msg = event.data?.response?.error?.message ??
event.data?.error?.message ?? 'Codex response failed'
throw APIError.generate(500, undefined, msg, {} as Record<string, string>)
}
if (
@@ -650,7 +659,10 @@ export async function collectCodexCompletedResponse(
}
if (!completedResponse) {
throw new Error('Codex response ended without a completed payload')
throw APIError.generate(
500, undefined, 'Codex response ended without a completed payload',
{} as Record<string, string>,
)
}
return completedResponse
@@ -806,11 +818,9 @@ export async function* codexStreamToAnthropic(
}
if (event.event === 'response.failed') {
throw new Error(
payload?.response?.error?.message ??
payload?.error?.message ??
'Codex response failed',
)
const msg = payload?.response?.error?.message ??
payload?.error?.message ?? 'Codex response failed'
throw APIError.generate(500, undefined, msg, {} as Record<string, string>)
}
}
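One note on the design choice above: converting these failures from plain `Error` to `APIError.generate` preserves `status` and headers, which the retry layer later in this diff inspects. A hedged sketch (the wrapper name is illustrative, not part of this change):

```ts
import { APIError } from '@anthropic-ai/sdk'

// Illustrative only: downstream handlers can now branch on structured
// fields that a plain Error would have flattened into its message string.
async function runCodexCall<T>(call: () => Promise<T>): Promise<T> {
  try {
    return await call()
  } catch (err) {
    if (err instanceof APIError && err.status === 429) {
      // status and headers survive, so rate-limit reset parsing keeps working
      console.error('Codex rate limited; retry layer can read reset headers')
    }
    throw err
  }
}
```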

View File

@@ -0,0 +1,204 @@
import { describe, expect, test } from 'bun:test'
import {
buildCodexUsageRows,
formatCodexPlanType,
getCodexUsageUrl,
normalizeCodexUsagePayload,
} from './codexUsage.js'
describe('normalizeCodexUsagePayload', () => {
test('normalizes live Codex usage payloads from /backend-api/wham/usage', () => {
const usage = normalizeCodexUsagePayload({
plan_type: 'plus',
rate_limit: {
primary_window: {
used_percent: 38,
limit_window_seconds: 18_000,
reset_at: 1_775_154_358,
},
secondary_window: {
used_percent: 32,
limit_window_seconds: 604_800,
reset_at: 1_775_685_041,
},
},
code_review_rate_limit: {
primary_window: {
used_percent: 0,
limit_window_seconds: 604_800,
reset_at: 1_775_744_471,
},
secondary_window: null,
},
credits: {
has_credits: false,
unlimited: false,
balance: '0',
},
})
expect(usage.planType).toBe('plus')
expect(usage.snapshots).toHaveLength(2)
expect(usage.snapshots[0]).toMatchObject({
limitName: 'codex',
primary: {
usedPercent: 38,
windowMinutes: 300,
},
secondary: {
usedPercent: 32,
windowMinutes: 10_080,
},
})
expect(usage.snapshots[1]).toMatchObject({
limitName: 'code review',
primary: {
usedPercent: 0,
windowMinutes: 10_080,
},
})
})
test('supports direct protocol-style snapshot collections', () => {
const usage = normalizeCodexUsagePayload({
rateLimitsByLimitId: {
codex: {
limit_name: 'codex',
primary: {
used_percent: 12,
window_minutes: 300,
resets_at: 1_700_000_000,
},
credits: {
has_credits: true,
unlimited: false,
balance: '25',
},
},
},
})
expect(usage.snapshots).toEqual([
{
limitName: 'codex',
primary: {
usedPercent: 12,
windowMinutes: 300,
resetsAt: new Date(1_700_000_000 * 1000).toISOString(),
},
secondary: undefined,
credits: {
hasCredits: true,
unlimited: false,
balance: '25',
},
},
])
})
})
describe('buildCodexUsageRows', () => {
test('builds Codex-like labels for primary and secondary windows', () => {
const rows = buildCodexUsageRows([
{
limitName: 'codex',
primary: {
usedPercent: 38,
windowMinutes: 300,
resetsAt: '2026-04-02T10:00:00.000Z',
},
secondary: {
usedPercent: 32,
windowMinutes: 10_080,
resetsAt: '2026-04-09T10:00:00.000Z',
},
},
{
limitName: 'code review',
primary: {
usedPercent: 0,
windowMinutes: 10_080,
resetsAt: '2026-04-09T10:00:00.000Z',
},
},
])
expect(rows).toEqual([
{
kind: 'window',
label: '5h limit',
usedPercent: 38,
resetsAt: '2026-04-02T10:00:00.000Z',
},
{
kind: 'window',
label: 'Weekly limit',
usedPercent: 32,
resetsAt: '2026-04-09T10:00:00.000Z',
},
{
kind: 'window',
label: 'Code review Weekly limit',
usedPercent: 0,
resetsAt: '2026-04-09T10:00:00.000Z',
},
])
})
test('renders credits rows only when credits are available', () => {
const rows = buildCodexUsageRows([
{
limitName: 'codex',
credits: {
hasCredits: true,
unlimited: false,
balance: '25.2',
},
},
{
limitName: 'code review',
credits: {
hasCredits: true,
unlimited: true,
},
},
{
limitName: 'other',
credits: {
hasCredits: true,
unlimited: false,
balance: '0',
},
},
])
expect(rows).toEqual([
{
kind: 'text',
label: 'Credits',
value: '25 credits',
},
{
kind: 'text',
label: 'Code review limit',
value: '',
},
{
kind: 'text',
label: 'Credits',
value: 'Unlimited',
},
])
})
})
describe('Codex usage helpers', () => {
test('formats plan labels and usage endpoint url', () => {
expect(formatCodexPlanType('team_max')).toBe('Team Max')
expect(getCodexUsageUrl()).toBe('https://chatgpt.com/backend-api/wham/usage')
expect(getCodexUsageUrl('https://chatgpt.com/backend-api/codex')).toBe(
'https://chatgpt.com/backend-api/wham/usage',
)
})
})

View File

@@ -0,0 +1,434 @@
import {
DEFAULT_CODEX_BASE_URL,
isCodexBaseUrl,
resolveCodexApiCredentials,
resolveProviderRequest,
} from './providerConfig.js'
export type CodexUsageWindow = {
usedPercent: number
windowMinutes?: number
resetsAt?: string
}
export type CodexUsageCredits = {
hasCredits: boolean
unlimited: boolean
balance?: string
}
export type CodexUsageSnapshot = {
limitName: string
primary?: CodexUsageWindow
secondary?: CodexUsageWindow
credits?: CodexUsageCredits
}
export type CodexUsageData = {
planType?: string
snapshots: CodexUsageSnapshot[]
}
export type CodexUsageRow =
| {
kind: 'window'
label: string
usedPercent: number
resetsAt?: string
}
| {
kind: 'text'
label: string
value: string
}
type RecordLike = Record<string, unknown>
function isRecord(value: unknown): value is RecordLike {
return typeof value === 'object' && value !== null
}
function asString(value: unknown): string | undefined {
return typeof value === 'string' && value.trim() ? value.trim() : undefined
}
function asNumber(value: unknown): number | undefined {
return typeof value === 'number' && Number.isFinite(value) ? value : undefined
}
function asBoolean(value: unknown): boolean | undefined {
return typeof value === 'boolean' ? value : undefined
}
function toIsoFromUnixSeconds(value: unknown): string | undefined {
const seconds = asNumber(value)
if (seconds === undefined) return undefined
return new Date(seconds * 1000).toISOString()
}
function normalizeWindow(value: unknown): CodexUsageWindow | undefined {
if (!isRecord(value)) return undefined
const usedPercent =
asNumber(value.used_percent) ?? asNumber(value.usedPercent)
if (usedPercent === undefined) return undefined
const windowMinutes =
asNumber(value.window_minutes) ??
asNumber(value.windowDurationMins) ??
(() => {
const seconds = asNumber(value.limit_window_seconds)
return seconds === undefined ? undefined : Math.round(seconds / 60)
})()
const resetsAt =
toIsoFromUnixSeconds(value.resets_at) ??
toIsoFromUnixSeconds(value.resetsAt) ??
toIsoFromUnixSeconds(value.reset_at)
return {
usedPercent,
windowMinutes,
resetsAt,
}
}
function normalizeCredits(value: unknown): CodexUsageCredits | undefined {
if (!isRecord(value)) return undefined
const hasCredits =
asBoolean(value.has_credits) ?? asBoolean(value.hasCredits) ?? false
const unlimited = asBoolean(value.unlimited) ?? false
const balance = asString(value.balance)
if (!hasCredits && !unlimited && !balance) {
return undefined
}
return {
hasCredits,
unlimited,
balance,
}
}
function normalizeSnapshot(
value: unknown,
fallbackLimitName: string,
): CodexUsageSnapshot | undefined {
if (!isRecord(value)) return undefined
const limitName =
asString(value.limit_name) ??
asString(value.limitName) ??
asString(value.limit_id) ??
asString(value.limitId) ??
fallbackLimitName
const primary =
normalizeWindow(value.primary) ?? normalizeWindow(value.primary_window)
const secondary =
normalizeWindow(value.secondary) ?? normalizeWindow(value.secondary_window)
const credits = normalizeCredits(value.credits)
if (!primary && !secondary && !credits) {
return undefined
}
return {
limitName,
primary,
secondary,
credits,
}
}
function normalizeSnapshotsFromCollection(
value: unknown,
defaultLimitName = 'codex',
): CodexUsageSnapshot[] {
if (Array.isArray(value)) {
return value
.map((item, index) =>
normalizeSnapshot(
item,
index === 0 ? defaultLimitName : `${defaultLimitName}-${index + 1}`,
),
)
.filter((item): item is CodexUsageSnapshot => item !== undefined)
}
if (!isRecord(value)) return []
return Object.entries(value)
.map(([key, entry]) => normalizeSnapshot(entry, key))
.filter((item): item is CodexUsageSnapshot => item !== undefined)
}
function normalizeLiveUsagePayload(payload: RecordLike): CodexUsageData {
const planType = asString(payload.plan_type) ?? asString(payload.planType)
const snapshots: CodexUsageSnapshot[] = []
const codexCredits = normalizeCredits(payload.credits)
const codexSnapshot = normalizeSnapshot(payload.rate_limit, 'codex')
if (codexSnapshot) {
codexSnapshot.credits ??= codexCredits
snapshots.push(codexSnapshot)
} else if (codexCredits) {
snapshots.push({
limitName: 'codex',
credits: codexCredits,
})
}
const codeReviewSnapshot = normalizeSnapshot(
payload.code_review_rate_limit,
'code review',
)
if (codeReviewSnapshot) {
snapshots.push(codeReviewSnapshot)
}
snapshots.push(
...normalizeSnapshotsFromCollection(
payload.additional_rate_limits ?? payload.additionalRateLimits,
'additional',
),
)
return {
planType,
snapshots,
}
}
export function normalizeCodexUsagePayload(payload: unknown): CodexUsageData {
if (Array.isArray(payload)) {
return {
snapshots: normalizeSnapshotsFromCollection(payload),
}
}
if (!isRecord(payload)) {
return { snapshots: [] }
}
if (
'rate_limit' in payload ||
'code_review_rate_limit' in payload ||
'additional_rate_limits' in payload ||
'credits' in payload
) {
return normalizeLiveUsagePayload(payload)
}
const collection =
payload.rate_limits ??
payload.rateLimits ??
payload.rate_limits_by_limit_id ??
payload.rateLimitsByLimitId
if (collection !== undefined) {
return {
planType: asString(payload.plan_type) ?? asString(payload.planType),
snapshots: normalizeSnapshotsFromCollection(collection),
}
}
const snapshot = normalizeSnapshot(payload, 'codex')
return {
planType: asString(payload.plan_type) ?? asString(payload.planType),
snapshots: snapshot ? [snapshot] : [],
}
}
function capitalizeFirst(value: string): string {
if (!value) return value
return value[0]!.toUpperCase() + value.slice(1)
}
function formatWindowDuration(
windowMinutes: number | undefined,
fallback: string,
): string {
if (windowMinutes === undefined || windowMinutes <= 0) {
return fallback
}
if (windowMinutes === 60 * 24 * 7) {
return 'weekly'
}
if (windowMinutes % (60 * 24) === 0) {
return `${windowMinutes / (60 * 24)}d`
}
if (windowMinutes % 60 === 0) {
return `${windowMinutes / 60}h`
}
return `${windowMinutes}m`
}
function formatCreditBalance(rawBalance: string | undefined): string | undefined {
const balance = rawBalance?.trim()
if (!balance) return undefined
const intValue = Number.parseInt(balance, 10)
if (Number.isFinite(intValue) && `${intValue}` === balance && intValue > 0) {
return `${intValue}`
}
const floatValue = Number.parseFloat(balance)
if (Number.isFinite(floatValue) && floatValue > 0) {
return `${Math.round(floatValue)}`
}
return undefined
}
function buildCreditsRow(
credits: CodexUsageCredits | undefined,
): CodexUsageRow | undefined {
if (!credits?.hasCredits) return undefined
if (credits.unlimited) {
return {
kind: 'text',
label: 'Credits',
value: 'Unlimited',
}
}
const displayBalance = formatCreditBalance(credits.balance)
if (!displayBalance) return undefined
return {
kind: 'text',
label: 'Credits',
value: `${displayBalance} credits`,
}
}
export function buildCodexUsageRows(
snapshots: CodexUsageSnapshot[],
): CodexUsageRow[] {
const rows: CodexUsageRow[] = []
for (const snapshot of snapshots) {
const limitBucketLabel = snapshot.limitName.trim() || 'codex'
const creditsRow = buildCreditsRow(snapshot.credits)
const hasRenderableContent =
snapshot.primary !== undefined ||
snapshot.secondary !== undefined ||
creditsRow !== undefined
if (!hasRenderableContent) {
continue
}
const showLimitPrefix = limitBucketLabel.toLowerCase() !== 'codex'
const windowCount =
Number(snapshot.primary !== undefined) +
Number(snapshot.secondary !== undefined)
const combineNonCodexSingleLimit = showLimitPrefix && windowCount === 1
if (showLimitPrefix && !combineNonCodexSingleLimit) {
rows.push({
kind: 'text',
label: `${capitalizeFirst(limitBucketLabel)} limit`,
value: '',
})
}
if (snapshot.primary) {
const durationLabel = capitalizeFirst(
formatWindowDuration(snapshot.primary.windowMinutes, '5h'),
)
rows.push({
kind: 'window',
label: combineNonCodexSingleLimit
? `${capitalizeFirst(limitBucketLabel)} ${durationLabel} limit`
: `${durationLabel} limit`,
usedPercent: snapshot.primary.usedPercent,
resetsAt: snapshot.primary.resetsAt,
})
}
if (snapshot.secondary) {
const durationLabel = capitalizeFirst(
formatWindowDuration(snapshot.secondary.windowMinutes, 'weekly'),
)
rows.push({
kind: 'window',
label: combineNonCodexSingleLimit
? `${capitalizeFirst(limitBucketLabel)} ${durationLabel} limit`
: `${durationLabel} limit`,
usedPercent: snapshot.secondary.usedPercent,
resetsAt: snapshot.secondary.resetsAt,
})
}
if (creditsRow) {
rows.push(creditsRow)
}
}
return rows
}
export function formatCodexPlanType(
planType: string | undefined,
): string | undefined {
if (!planType) return undefined
return planType
.split(/[_\s-]+/)
.filter(Boolean)
.map(part => capitalizeFirst(part.toLowerCase()))
.join(' ')
}
export function getCodexUsageUrl(baseUrl = DEFAULT_CODEX_BASE_URL): string {
return new URL('/backend-api/wham/usage', baseUrl).toString()
}
export async function fetchCodexUsage(): Promise<CodexUsageData> {
const request = resolveProviderRequest({
model: process.env.OPENAI_MODEL,
baseUrl: process.env.OPENAI_BASE_URL,
})
if (!isCodexBaseUrl(request.baseUrl)) {
throw new Error(
'Codex usage is only available with the official ChatGPT Codex backend.',
)
}
const credentials = resolveCodexApiCredentials()
if (!credentials.apiKey) {
const authHint = credentials.authPath
? ` or place a Codex auth.json at ${credentials.authPath}`
: ''
throw new Error(`Codex auth is required. Set CODEX_API_KEY${authHint}.`)
}
if (!credentials.accountId) {
throw new Error(
'Codex auth is missing chatgpt_account_id. Re-login with the Codex CLI or set CHATGPT_ACCOUNT_ID/CODEX_ACCOUNT_ID.',
)
}
const response = await fetch(getCodexUsageUrl(request.baseUrl), {
method: 'GET',
headers: {
Accept: 'application/json',
Authorization: `Bearer ${credentials.apiKey}`,
'chatgpt-account-id': credentials.accountId,
originator: 'openclaude',
},
signal: AbortSignal.timeout(5000),
})
if (!response.ok) {
const errorBody = await response.text().catch(() => 'unknown error')
throw new Error(`Codex usage error ${response.status}: ${errorBody}`)
}
return normalizeCodexUsagePayload(await response.json())
}
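A minimal usage sketch of this module's exports, assuming a Codex base URL and valid auth in the environment:

```ts
import {
  buildCodexUsageRows,
  fetchCodexUsage,
  formatCodexPlanType,
} from './codexUsage.js'

// Fetch the usage payload, then flatten the snapshots into display rows.
const usage = await fetchCodexUsage()
console.log(formatCodexPlanType(usage.planType) ?? 'Unknown plan')
for (const row of buildCodexUsageRows(usage.snapshots)) {
  if (row.kind === 'window') {
    const reset = row.resetsAt ? ` (resets ${row.resetsAt})` : ''
    console.log(`${row.label}: ${row.usedPercent}% used${reset}`)
  } else {
    console.log(`${row.label}: ${row.value}`)
  }
}
```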

View File

@@ -164,6 +164,12 @@ export const TOKEN_REVOKED_ERROR_MESSAGE =
export const CCR_AUTH_ERROR_MESSAGE =
'Authentication error · This may be a temporary network issue, please try again'
export const REPEATED_529_ERROR_MESSAGE = 'Repeated 529 Overloaded errors'
export function getCustomOffSwitchMessage(): string {
return getAPIProvider() === 'firstParty'
? 'Opus is experiencing high load, please use /model to switch to Sonnet'
: 'The API is experiencing high load, please try again shortly or use /model to switch models'
}
// Backward-compatible constant for string matching in error handlers
export const CUSTOM_OFF_SWITCH_MESSAGE =
'Opus is experiencing high load, please use /model to switch to Sonnet'
export const API_TIMEOUT_ERROR_MESSAGE = 'Request timed out'
@@ -457,7 +463,7 @@ export function getAssistantMessageFromError(
error.message.includes(CUSTOM_OFF_SWITCH_MESSAGE)
) {
return createAssistantAPIErrorMessage({
content: CUSTOM_OFF_SWITCH_MESSAGE,
content: getCustomOffSwitchMessage(),
error: 'rate_limit',
})
}
@@ -812,7 +818,8 @@ export function getAssistantMessageFromError(
if (
error instanceof Error &&
error.message.toLowerCase().includes('x-api-key')
error.message.toLowerCase().includes('x-api-key') &&
getAPIProvider() === 'firstParty'
) {
// In CCR mode, auth is via JWTs - this is likely a transient network issue
if (isCCRMode()) {

View File

@@ -0,0 +1 @@
export { sanitizeSchemaForOpenAICompat } from '../../utils/schemaSanitizer.js'

View File

@@ -312,3 +312,128 @@ test('preserves Gemini tool call extra_content from streaming chunks', async ()
},
})
})
test('strips thinking blocks from assistant messages instead of leaking them as text', async () => {
let requestBody: Record<string, unknown> | undefined
globalThis.fetch = (async (_input, init) => {
requestBody = JSON.parse(String(init?.body))
return new Response(
JSON.stringify({
id: 'chatcmpl-1',
model: 'gpt-4o',
choices: [
{
message: { role: 'assistant', content: 'done' },
finish_reason: 'stop',
},
],
usage: { prompt_tokens: 10, completion_tokens: 1, total_tokens: 11 },
}),
{ headers: { 'Content-Type': 'application/json' } },
)
}) as FetchType
const client = createOpenAIShimClient({}) as OpenAIShimClient
await client.beta.messages.create({
model: 'gpt-4o',
system: 'test',
messages: [
{ role: 'user', content: 'hello' },
{
role: 'assistant',
content: [
{ type: 'thinking', thinking: 'secret reasoning' },
{ type: 'text', text: 'visible reply' },
],
},
{ role: 'user', content: 'follow up' },
],
max_tokens: 64,
stream: false,
})
const msgs = requestBody?.messages as Array<{ role: string; content: string }>
const assistantMsg = msgs.find(m => m.role === 'assistant')
// The assistant message should contain only the visible text,
// not <thinking>secret reasoning</thinking>
expect(assistantMsg?.content).toBe('visible reply')
expect(assistantMsg?.content).not.toContain('thinking')
})
test('sanitizes malformed MCP tool schemas before sending them to OpenAI', async () => {
let requestBody: Record<string, unknown> | undefined
globalThis.fetch = (async (_input, init) => {
requestBody = JSON.parse(String(init?.body))
return new Response(
JSON.stringify({
id: 'chatcmpl-1',
model: 'gpt-4o',
choices: [
{
message: {
role: 'assistant',
content: 'ok',
},
finish_reason: 'stop',
},
],
usage: {
prompt_tokens: 10,
completion_tokens: 1,
total_tokens: 11,
},
}),
{
headers: {
'Content-Type': 'application/json',
},
},
)
}) as FetchType
const client = createOpenAIShimClient({}) as OpenAIShimClient
await client.beta.messages.create({
model: 'gpt-4o',
system: 'test system',
messages: [{ role: 'user', content: 'hello' }],
tools: [
{
name: 'mcp__clientry__create_task',
description: 'Create a task',
input_schema: {
type: 'object',
properties: {
priority: {
type: 'integer',
description: 'Priority: 0=low, 1=medium, 2=high, 3=urgent',
default: true,
enum: [false, 0, 1, 2, 3],
},
},
},
},
],
max_tokens: 64,
stream: false,
})
const parameters = (
requestBody?.tools as Array<{ function?: { parameters?: Record<string, unknown> } }>
)?.[0]?.function?.parameters
const properties = parameters?.properties as
| Record<string, { default?: unknown; enum?: unknown[]; type?: string }>
| undefined
expect(parameters?.additionalProperties).toBe(false)
expect(parameters?.required).toEqual(['priority'])
expect(properties?.priority?.type).toBe('integer')
expect(properties?.priority?.enum).toEqual([0, 1, 2, 3])
expect(properties?.priority).not.toHaveProperty('default')
})

View File

@@ -14,8 +14,16 @@
* OPENAI_BASE_URL=http://... — base URL (default: https://api.openai.com/v1)
* OPENAI_MODEL=gpt-4o — default model override
* CODEX_API_KEY / ~/.codex/auth.json — Codex auth for codexplan/codexspark
*
* GitHub Models (models.github.ai), OpenAI-compatible:
* CLAUDE_CODE_USE_GITHUB=1 — enable GitHub inference (no need for USE_OPENAI)
* GITHUB_TOKEN or GH_TOKEN — PAT with models access (mapped to Bearer auth)
* OPENAI_MODEL — optional; use github:copilot or openai/gpt-4.1 style IDs
*/
import { APIError } from '@anthropic-ai/sdk'
import { isEnvTruthy } from '../../utils/envUtils.js'
import { hydrateGithubModelsTokenFromSecureStorage } from '../../utils/githubModelsCredentials.js'
import {
codexStreamToAnthropic,
collectCodexCompletedResponse,
@@ -26,9 +34,31 @@ import {
type ShimCreateParams,
} from './codexShim.js'
import {
isLocalProviderUrl,
resolveCodexApiCredentials,
resolveProviderRequest,
} from './providerConfig.js'
import { sanitizeSchemaForOpenAICompat } from '../../utils/schemaSanitizer.js'
import { redactSecretValueForDisplay } from '../../utils/providerProfile.js'
const GITHUB_MODELS_DEFAULT_BASE = 'https://models.github.ai/inference'
const GITHUB_API_VERSION = '2022-11-28'
const GITHUB_429_MAX_RETRIES = 3
const GITHUB_429_BASE_DELAY_SEC = 1
const GITHUB_429_MAX_DELAY_SEC = 32
function isGithubModelsMode(): boolean {
return isEnvTruthy(process.env.CLAUDE_CODE_USE_GITHUB)
}
function formatRetryAfterHint(response: Response): string {
const ra = response.headers.get('retry-after')
return ra ? ` (Retry-After: ${ra})` : ''
}
function sleepMs(ms: number): Promise<void> {
return new Promise(resolve => setTimeout(resolve, ms))
}
// ---------------------------------------------------------------------------
// Types — minimal subset of Anthropic SDK types we need to produce
@@ -109,10 +139,12 @@ function convertContentBlocks(
// handled separately
break
case 'thinking':
// Append thinking as text with a marker for models that support reasoning
if (block.thinking) {
parts.push({ type: 'text', text: `<thinking>${block.thinking}</thinking>` })
}
case 'redacted_thinking':
// Strip thinking blocks for OpenAI-compatible providers.
// These are Anthropic-specific content types that 3P providers
// don't understand. Serializing them as <thinking> text corrupts
// multi-turn context: the model sees the tags as part of its
// previous reply and may mimic or misattribute them.
break
default:
if (block.text) {
@@ -187,7 +219,10 @@ function convertMessages(
const assistantMsg: OpenAIMessage = {
role: 'assistant',
content: convertContentBlocks(textContent) as string,
content: (() => {
const c = convertContentBlocks(textContent)
return typeof c === 'string'
? c
: Array.isArray(c)
? c.map((p: { text?: string }) => p.text ?? '').join('')
: ''
})(),
}
if (toolUses.length > 0) {
@@ -198,7 +233,7 @@ function convertMessages(
input?: unknown
extra_content?: Record<string, unknown>
}) => ({
id: tu.id ?? `call_${Math.random().toString(36).slice(2)}`,
id: tu.id ?? `call_${crypto.randomUUID().replace(/-/g, '')}`,
type: 'function' as const,
function: {
name: tu.name ?? 'unknown',
@@ -216,7 +251,10 @@ function convertMessages(
} else {
result.push({
role: 'assistant',
content: convertContentBlocks(content) as string,
content: (() => {
const c = convertContentBlocks(content)
return typeof c === 'string'
? c
: Array.isArray(c)
? c.map((p: { text?: string }) => p.text ?? '').join('')
: ''
})(),
})
}
}
@@ -235,28 +273,62 @@ function normalizeSchemaForOpenAI(
schema: Record<string, unknown>,
strict = true,
): Record<string, unknown> {
if (schema.type !== 'object' || !schema.properties) return schema
const properties = schema.properties as Record<string, unknown>
const existingRequired = Array.isArray(schema.required) ? schema.required as string[] : []
// OpenAI strict mode requires every property to be listed in required[].
// Gemini rejects schemas where required[] contains keys absent from properties,
// so only promote keys that actually exist in properties.
if (strict) {
const allKeys = Object.keys(properties)
const required = Array.from(new Set([...existingRequired, ...allKeys]))
return { ...schema, required }
const record = sanitizeSchemaForOpenAICompat(schema)
if (record.type === 'object' && record.properties) {
const properties = record.properties as Record<string, Record<string, unknown>>
const existingRequired = Array.isArray(record.required) ? record.required as string[] : []
// Recurse into each property
const normalizedProps: Record<string, unknown> = {}
for (const [key, value] of Object.entries(properties)) {
normalizedProps[key] = normalizeSchemaForOpenAI(
value as Record<string, unknown>,
strict,
)
}
record.properties = normalizedProps
if (strict) {
// OpenAI strict mode requires every property to be listed in required[]
const allKeys = Object.keys(normalizedProps)
record.required = Array.from(new Set([...existingRequired, ...allKeys]))
// OpenAI strict mode requires additionalProperties: false on all object
// schemas — override unconditionally to ensure nested objects comply.
record.additionalProperties = false
} else {
// For Gemini: keep only existing required keys that are present in properties
record.required = existingRequired.filter(k => k in normalizedProps)
}
}
// For Gemini: keep only existing required keys that are present in properties
const required = existingRequired.filter(k => k in properties)
return { ...schema, required }
// Recurse into array items
if ('items' in record) {
if (Array.isArray(record.items)) {
record.items = (record.items as unknown[]).map(
item => normalizeSchemaForOpenAI(item as Record<string, unknown>, strict),
)
} else {
record.items = normalizeSchemaForOpenAI(record.items as Record<string, unknown>, strict)
}
}
// Recurse into combinators
for (const key of ['anyOf', 'oneOf', 'allOf'] as const) {
if (key in record && Array.isArray(record[key])) {
record[key] = (record[key] as unknown[]).map(
item => normalizeSchemaForOpenAI(item as Record<string, unknown>, strict),
)
}
}
return record
}
function convertTools(
tools: Array<{ name: string; description?: string; input_schema?: Record<string, unknown> }>,
): OpenAITool[] {
const isGemini =
process.env.CLAUDE_CODE_USE_GEMINI === '1' ||
process.env.CLAUDE_CODE_USE_GEMINI === 'true'
const isGemini = isEnvTruthy(process.env.CLAUDE_CODE_USE_GEMINI)
return tools
.filter(t => t.name !== 'ToolSearchTool') // Not relevant for OpenAI
@@ -312,11 +384,14 @@ interface OpenAIStreamChunk {
prompt_tokens?: number
completion_tokens?: number
total_tokens?: number
prompt_tokens_details?: {
cached_tokens?: number
}
}
}
function makeMessageId(): string {
return `msg_${Math.random().toString(36).slice(2)}${Date.now().toString(36)}`
return `msg_${crypto.randomUUID().replace(/-/g, '')}`
}
function convertChunkUsage(
@@ -328,7 +403,7 @@ function convertChunkUsage(
input_tokens: usage.prompt_tokens ?? 0,
output_tokens: usage.completion_tokens ?? 0,
cache_creation_input_tokens: 0,
cache_read_input_tokens: 0,
cache_read_input_tokens: usage.prompt_tokens_details?.cached_tokens ?? 0,
}
}
@@ -342,7 +417,7 @@ async function* openaiStreamToAnthropic(
): AsyncGenerator<AnthropicStreamEvent> {
const messageId = makeMessageId()
let contentBlockIndex = 0
const activeToolCalls = new Map<number, { id: string; name: string; index: number }>()
const activeToolCalls = new Map<number, { id: string; name: string; index: number; jsonBuffer: string }>()
let hasEmittedContentStart = false
let lastStopReason: 'tool_use' | 'max_tokens' | 'end_turn' | null = null
let hasEmittedFinalUsage = false
@@ -374,15 +449,16 @@ async function* openaiStreamToAnthropic(
const decoder = new TextDecoder()
let buffer = ''
while (true) {
const { done, value } = await reader.read()
if (done) break
try {
while (true) {
const { done, value } = await reader.read()
if (done) break
buffer += decoder.decode(value, { stream: true })
const lines = buffer.split('\n')
buffer = lines.pop() ?? ''
buffer += decoder.decode(value, { stream: true })
const lines = buffer.split('\n')
buffer = lines.pop() ?? ''
for (const line of lines) {
for (const line of lines) {
const trimmed = line.trim()
if (!trimmed || trimmed === 'data: [DONE]') continue
if (!trimmed.startsWith('data: ')) continue
@@ -436,6 +512,7 @@ async function* openaiStreamToAnthropic(
id: tc.id,
name: tc.function.name,
index: toolBlockIndex,
jsonBuffer: tc.function.arguments ?? '',
})
yield {
@@ -466,6 +543,9 @@ async function* openaiStreamToAnthropic(
// Continuation of existing tool call
const active = activeToolCalls.get(tc.index)
if (active) {
if (tc.function.arguments) {
active.jsonBuffer += tc.function.arguments
}
yield {
type: 'content_block_delta',
index: active.index,
@@ -493,6 +573,36 @@ async function* openaiStreamToAnthropic(
}
// Close active tool calls
for (const [, tc] of activeToolCalls) {
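// Some providers truncate streamed tool-call arguments mid-JSON. If the
// buffered payload fails to parse, probe a few plausible closing-bracket
// suffixes and emit the repair as one final input_json_delta.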
let suffixToAdd = ''
if (tc.jsonBuffer) {
try {
JSON.parse(tc.jsonBuffer)
} catch {
const str = tc.jsonBuffer.trimEnd()
const combinations = [
'}', '"}', ']}', '"]}', '}}', '"}}', ']}}', '"]}}', '"]}]}', '}]}'
]
for (const combo of combinations) {
try {
JSON.parse(str + combo)
suffixToAdd = combo
break
} catch {}
}
}
}
if (suffixToAdd) {
yield {
type: 'content_block_delta',
index: tc.index,
delta: {
type: 'input_json_delta',
partial_json: suffixToAdd,
},
}
}
yield { type: 'content_block_stop', index: tc.index }
}
@@ -502,6 +612,23 @@ async function* openaiStreamToAnthropic(
: choice.finish_reason === 'length'
? 'max_tokens'
: 'end_turn'
if (choice.finish_reason === 'content_filter' || choice.finish_reason === 'safety') {
// Gemini/Azure content safety filter blocked the response.
// Emit a visible text block so the user knows why output was truncated.
if (!hasEmittedContentStart) {
yield {
type: 'content_block_start',
index: contentBlockIndex,
content_block: { type: 'text', text: '' },
}
hasEmittedContentStart = true
}
yield {
type: 'content_block_delta',
index: contentBlockIndex,
delta: { type: 'text_delta', text: '\n\n[Content blocked by provider safety filter]' },
}
}
lastStopReason = stopReason
yield {
@@ -518,7 +645,8 @@ async function* openaiStreamToAnthropic(
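// Emit the trailing usage delta only once a finish_reason has been seen;
// some providers send usage-only chunks before the stream actually finishes.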
if (
!hasEmittedFinalUsage &&
chunkUsage &&
(chunk.choices?.length ?? 0) === 0
(chunk.choices?.length ?? 0) === 0 &&
lastStopReason !== null
) {
yield {
type: 'message_delta',
@@ -528,6 +656,9 @@ async function* openaiStreamToAnthropic(
hasEmittedFinalUsage = true
}
}
}
} finally {
reader.releaseLock()
}
yield { type: 'message_stop' }
@@ -553,9 +684,11 @@ class OpenAIShimStream {
class OpenAIShimMessages {
private defaultHeaders: Record<string, string>
private reasoningEffort?: 'low' | 'medium' | 'high' | 'xhigh'
constructor(defaultHeaders: Record<string, string>) {
constructor(defaultHeaders: Record<string, string>, reasoningEffort?: 'low' | 'medium' | 'high' | 'xhigh') {
this.defaultHeaders = defaultHeaders
this.reasoningEffort = reasoningEffort
}
create(
@@ -564,9 +697,12 @@ class OpenAIShimMessages {
) {
const self = this
let httpResponse: Response | undefined
const promise = (async () => {
const request = resolveProviderRequest({ model: params.model })
const request = resolveProviderRequest({ model: params.model, reasoningEffortOverride: self.reasoningEffort })
const response = await self._doRequest(request, params, options)
httpResponse = response
if (params.stream) {
return new OpenAIShimStream(
@@ -593,8 +729,9 @@ class OpenAIShimMessages {
const data = await promise
return {
data,
response: new Response(),
request_id: makeMessageId(),
response: httpResponse ?? new Response(),
request_id:
httpResponse?.headers.get('x-request-id') ?? makeMessageId(),
}
}
@@ -612,8 +749,11 @@ class OpenAIShimMessages {
const authHint = credentials.authPath
? ` or place a Codex auth.json at ${credentials.authPath}`
: ''
const safeModel =
redactSecretValueForDisplay(request.requestedModel, process.env) ??
'the requested model'
throw new Error(
`Codex auth is required for ${request.requestedModel}. Set CODEX_API_KEY${authHint}.`,
`Codex auth is required for ${safeModel}. Set CODEX_API_KEY${authHint}.`,
)
}
if (!credentials.accountId) {
@@ -656,16 +796,32 @@ class OpenAIShimMessages {
messages: openaiMessages,
stream: params.stream ?? false,
}
if (params.max_tokens !== undefined) {
body.max_completion_tokens = params.max_tokens
} else if ((params as Record<string, unknown>).max_completion_tokens !== undefined) {
body.max_completion_tokens = (params as Record<string, unknown>).max_completion_tokens
// Convert max_tokens to max_completion_tokens for OpenAI API compatibility.
// Azure OpenAI requires max_completion_tokens and does not accept max_tokens.
// Ensure max_tokens is a valid positive number before using it.
const maxTokensValue = typeof params.max_tokens === 'number' && params.max_tokens > 0
? params.max_tokens
: undefined
const maxCompletionTokensValue = typeof (params as Record<string, unknown>).max_completion_tokens === 'number'
? (params as Record<string, unknown>).max_completion_tokens as number
: undefined
if (maxTokensValue !== undefined) {
body.max_completion_tokens = maxTokensValue
} else if (maxCompletionTokensValue !== undefined) {
body.max_completion_tokens = maxCompletionTokensValue
}
if (params.stream) {
if (params.stream && !isLocalProviderUrl(request.baseUrl)) {
body.stream_options = { include_usage: true }
}
const isGithub = isGithubModelsMode()
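// GitHub Models' OpenAI-compatible endpoint still expects the legacy
// max_tokens field, so undo the max_completion_tokens rename for it.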
if (isGithub && body.max_completion_tokens !== undefined) {
body.max_tokens = body.max_completion_tokens
delete body.max_completion_tokens
}
if (params.temperature !== undefined) body.temperature = params.temperature
if (params.top_p !== undefined) body.top_p = params.top_p
@@ -704,7 +860,14 @@ class OpenAIShimMessages {
}
const apiKey = process.env.OPENAI_API_KEY ?? ''
const isAzure = /cognitiveservices\.azure\.com|openai\.azure\.com/.test(request.baseUrl)
// Detect Azure endpoints by hostname (not raw URL) to prevent bypass via
// path segments like https://evil.com/cognitiveservices.azure.com/
let isAzure = false
try {
const { hostname } = new URL(request.baseUrl)
isAzure = hostname.endsWith('.azure.com') &&
(hostname.includes('cognitiveservices') || hostname.includes('openai') || hostname.includes('services.ai'))
} catch { /* malformed URL — not Azure */ }
if (apiKey) {
if (isAzure) {
@@ -715,6 +878,11 @@ class OpenAIShimMessages {
}
}
if (isGithub) {
headers.Accept = 'application/vnd.github.v3+json'
headers['X-GitHub-Api-Version'] = GITHUB_API_VERSION
}
// Build the chat completions URL
// Azure Cognitive Services / Azure OpenAI require a deployment-specific path
// and an api-version query parameter.
@@ -737,19 +905,50 @@ class OpenAIShimMessages {
chatCompletionsUrl = `${request.baseUrl}/chat/completions`
}
const response = await fetch(chatCompletionsUrl, {
method: 'POST',
const fetchInit = {
method: 'POST' as const,
headers,
body: JSON.stringify(body),
signal: options?.signal,
})
if (!response.ok) {
const errorBody = await response.text().catch(() => 'unknown error')
throw new Error(`OpenAI API error ${response.status}: ${errorBody}`)
}
return response
const maxAttempts = isGithub ? GITHUB_429_MAX_RETRIES : 1
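// Retry GitHub Models 429s with exponential backoff (1s, 2s, ... capped at
// GITHUB_429_MAX_DELAY_SEC); every other provider gets a single attempt.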
let response: Response | undefined
for (let attempt = 0; attempt < maxAttempts; attempt++) {
response = await fetch(chatCompletionsUrl, fetchInit)
if (response.ok) {
return response
}
if (
isGithub &&
response.status === 429 &&
attempt < maxAttempts - 1
) {
await response.text().catch(() => {})
const delaySec = Math.min(
GITHUB_429_BASE_DELAY_SEC * 2 ** attempt,
GITHUB_429_MAX_DELAY_SEC,
)
await sleepMs(delaySec * 1000)
continue
}
const errorBody = await response.text().catch(() => 'unknown error')
const rateHint =
isGithub && response.status === 429 ? formatRetryAfterHint(response) : ''
let errorResponse: object | undefined
try { errorResponse = JSON.parse(errorBody) } catch { /* raw text */ }
throw APIError.generate(
response.status,
errorResponse,
`OpenAI API error ${response.status}: ${errorBody}${rateHint}`,
response.headers as unknown as Record<string, string>,
)
}
throw APIError.generate(
500, undefined, 'OpenAI shim: request loop exited unexpectedly',
{} as Record<string, string>,
)
}
private _convertNonStreamingResponse(
@@ -759,7 +958,10 @@ class OpenAIShimMessages {
choices?: Array<{
message?: {
role?: string
content?: string | null
content?:
| string
| null
| Array<{ type?: string; text?: string }>
tool_calls?: Array<{
id: string
function: { name: string; arguments: string }
@@ -771,6 +973,9 @@ class OpenAIShimMessages {
usage?: {
prompt_tokens?: number
completion_tokens?: number
prompt_tokens_details?: {
cached_tokens?: number
}
}
},
model: string,
@@ -778,8 +983,25 @@ class OpenAIShimMessages {
const choice = data.choices?.[0]
const content: Array<Record<string, unknown>> = []
if (choice?.message?.content) {
content.push({ type: 'text', text: choice.message.content })
const rawContent = choice?.message?.content
if (typeof rawContent === 'string' && rawContent) {
content.push({ type: 'text', text: rawContent })
} else if (Array.isArray(rawContent) && rawContent.length > 0) {
const parts: string[] = []
for (const part of rawContent) {
if (
part &&
typeof part === 'object' &&
part.type === 'text' &&
typeof part.text === 'string'
) {
parts.push(part.text)
}
}
const joined = parts.join('\n')
if (joined) {
content.push({ type: 'text', text: joined })
}
}
if (choice?.message?.tool_calls) {
@@ -807,6 +1029,13 @@ class OpenAIShimMessages {
? 'max_tokens'
: 'end_turn'
if (choice?.finish_reason === 'content_filter' || choice?.finish_reason === 'safety') {
content.push({
type: 'text',
text: '\n\n[Content blocked by provider safety filter]',
})
}
return {
id: data.id ?? makeMessageId(),
type: 'message',
@@ -819,7 +1048,7 @@ class OpenAIShimMessages {
input_tokens: data.usage?.prompt_tokens ?? 0,
output_tokens: data.usage?.completion_tokens ?? 0,
cache_creation_input_tokens: 0,
cache_read_input_tokens: 0,
cache_read_input_tokens: data.usage?.prompt_tokens_details?.cached_tokens ?? 0,
},
}
}
@@ -827,9 +1056,11 @@ class OpenAIShimMessages {
class OpenAIShimBeta {
messages: OpenAIShimMessages
reasoningEffort?: 'low' | 'medium' | 'high' | 'xhigh'
constructor(defaultHeaders: Record<string, string>) {
this.messages = new OpenAIShimMessages(defaultHeaders)
constructor(defaultHeaders: Record<string, string>, reasoningEffort?: 'low' | 'medium' | 'high' | 'xhigh') {
this.messages = new OpenAIShimMessages(defaultHeaders, reasoningEffort)
this.reasoningEffort = reasoningEffort
}
}
@@ -837,13 +1068,13 @@ export function createOpenAIShimClient(options: {
defaultHeaders?: Record<string, string>
maxRetries?: number
timeout?: number
reasoningEffort?: 'low' | 'medium' | 'high' | 'xhigh'
}): unknown {
hydrateGithubModelsTokenFromSecureStorage()
// When Gemini provider is active, map Gemini env vars to OpenAI-compatible ones
// so the existing providerConfig.ts infrastructure picks them up correctly.
if (
process.env.CLAUDE_CODE_USE_GEMINI === '1' ||
process.env.CLAUDE_CODE_USE_GEMINI === 'true'
) {
if (isEnvTruthy(process.env.CLAUDE_CODE_USE_GEMINI)) {
process.env.OPENAI_BASE_URL ??=
process.env.GEMINI_BASE_URL ??
'https://generativelanguage.googleapis.com/v1beta/openai'
@@ -852,11 +1083,15 @@ export function createOpenAIShimClient(options: {
if (process.env.GEMINI_MODEL && !process.env.OPENAI_MODEL) {
process.env.OPENAI_MODEL = process.env.GEMINI_MODEL
}
} else if (isEnvTruthy(process.env.CLAUDE_CODE_USE_GITHUB)) {
process.env.OPENAI_BASE_URL ??= GITHUB_MODELS_DEFAULT_BASE
process.env.OPENAI_API_KEY ??=
process.env.GITHUB_TOKEN ?? process.env.GH_TOKEN ?? ''
}
const beta = new OpenAIShimBeta({
...(options.defaultHeaders ?? {}),
})
}, options.reasoningEffort)
return {
beta,

View File

@@ -0,0 +1,41 @@
import { afterEach, expect, test } from 'bun:test'
import {
DEFAULT_GITHUB_MODELS_API_MODEL,
normalizeGithubModelsApiModel,
resolveProviderRequest,
} from './providerConfig.js'
const originalUseGithub = process.env.CLAUDE_CODE_USE_GITHUB
afterEach(() => {
if (originalUseGithub === undefined) {
delete process.env.CLAUDE_CODE_USE_GITHUB
} else {
process.env.CLAUDE_CODE_USE_GITHUB = originalUseGithub
}
})
test.each([
['copilot', DEFAULT_GITHUB_MODELS_API_MODEL],
['github:copilot', DEFAULT_GITHUB_MODELS_API_MODEL],
['', DEFAULT_GITHUB_MODELS_API_MODEL],
['github:gpt-4o', 'gpt-4o'],
['gpt-4o', 'gpt-4o'],
['github:copilot?reasoning=high', DEFAULT_GITHUB_MODELS_API_MODEL],
] as const)('normalizeGithubModelsApiModel(%s) -> %s', (input, expected) => {
expect(normalizeGithubModelsApiModel(input)).toBe(expected)
})
test('resolveProviderRequest applies GitHub normalization when CLAUDE_CODE_USE_GITHUB=1', () => {
process.env.CLAUDE_CODE_USE_GITHUB = '1'
const r = resolveProviderRequest({ model: 'github:gpt-4o' })
expect(r.resolvedModel).toBe('gpt-4o')
expect(r.transport).toBe('chat_completions')
})
test('resolveProviderRequest leaves model unchanged without GitHub flag', () => {
delete process.env.CLAUDE_CODE_USE_GITHUB
const r = resolveProviderRequest({ model: 'github:gpt-4o' })
expect(r.resolvedModel).toBe('github:gpt-4o')
})

View File

@@ -0,0 +1,35 @@
import { expect, test } from 'bun:test'
import { isLocalProviderUrl } from './providerConfig.js'
test('treats localhost endpoints as local', () => {
expect(isLocalProviderUrl('http://localhost:11434/v1')).toBe(true)
expect(isLocalProviderUrl('http://127.0.0.1:11434/v1')).toBe(true)
expect(isLocalProviderUrl('http://0.0.0.0:11434/v1')).toBe(true)
// Full 127.0.0.0/8 loopback range should be treated as local
expect(isLocalProviderUrl('http://127.0.0.2:11434/v1')).toBe(true)
expect(isLocalProviderUrl('http://127.1.2.3:11434/v1')).toBe(true)
expect(isLocalProviderUrl('http://127.255.255.255:11434/v1')).toBe(true)
})
test('treats private IPv4 endpoints as local', () => {
expect(isLocalProviderUrl('http://10.0.0.1:11434/v1')).toBe(true)
expect(isLocalProviderUrl('http://172.16.0.1:11434/v1')).toBe(true)
expect(isLocalProviderUrl('http://192.168.0.1:11434/v1')).toBe(true)
})
test('treats .local hostnames as local', () => {
expect(isLocalProviderUrl('http://ollama.local:11434/v1')).toBe(true)
})
test('treats private IPv6 endpoints as local', () => {
expect(isLocalProviderUrl('http://[fd00::1]:11434/v1')).toBe(true)
expect(isLocalProviderUrl('http://[fe80::1]:11434/v1')).toBe(true)
expect(isLocalProviderUrl('http://[::1]:11434/v1')).toBe(true)
})
test('treats public hosts as remote', () => {
expect(isLocalProviderUrl('http://203.0.113.1:11434/v1')).toBe(false)
expect(isLocalProviderUrl('https://example.com/v1')).toBe(false)
expect(isLocalProviderUrl('http://[2001:4860:4860::8888]:11434/v1')).toBe(false)
})
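These heuristics gate behavior elsewhere in the shim. A hedged sketch of the pattern at the stream_options call site shown earlier in this diff (the base URL is a placeholder):

```ts
import { isLocalProviderUrl } from './providerConfig.js'

// Local OpenAI-compatible servers often reject stream_options, so the
// shim only requests usage reporting from remote endpoints.
const baseUrl = process.env.OPENAI_BASE_URL ?? 'http://localhost:11434/v1'
const body: Record<string, unknown> = { stream: true }
if (!isLocalProviderUrl(baseUrl)) {
  body.stream_options = { include_usage: true }
}
```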

View File

@@ -1,9 +1,14 @@
import { existsSync, readFileSync } from 'node:fs'
import { isIP } from 'node:net'
import { homedir } from 'node:os'
import { join } from 'node:path'
import { isEnvTruthy } from '../../utils/envUtils.js'
export const DEFAULT_OPENAI_BASE_URL = 'https://api.openai.com/v1'
export const DEFAULT_CODEX_BASE_URL = 'https://chatgpt.com/backend-api/codex'
/** Default GitHub Models API model when user selects copilot / github:copilot */
export const DEFAULT_GITHUB_MODELS_API_MODEL = 'openai/gpt-4.1'
const CODEX_ALIAS_MODELS: Record<
string,
@@ -16,13 +21,43 @@ const CODEX_ALIAS_MODELS: Record<
model: 'gpt-5.4',
reasoningEffort: 'high',
},
'gpt-5.4': {
model: 'gpt-5.4',
reasoningEffort: 'high',
},
'gpt-5.3-codex': {
model: 'gpt-5.3-codex',
reasoningEffort: 'high',
},
'gpt-5.3-codex-spark': {
model: 'gpt-5.3-codex-spark',
},
codexspark: {
model: 'gpt-5.3-codex-spark',
},
'gpt-5.2-codex': {
model: 'gpt-5.2-codex',
reasoningEffort: 'high',
},
'gpt-5.1-codex-max': {
model: 'gpt-5.1-codex-max',
reasoningEffort: 'high',
},
'gpt-5.1-codex-mini': {
model: 'gpt-5.1-codex-mini',
},
'gpt-5.4-mini': {
model: 'gpt-5.4-mini',
reasoningEffort: 'medium',
},
'gpt-5.2': {
model: 'gpt-5.2',
reasoningEffort: 'medium',
},
} as const
type CodexAlias = keyof typeof CODEX_ALIAS_MODELS
type ReasoningEffort = 'low' | 'medium' | 'high'
type ReasoningEffort = 'low' | 'medium' | 'high' | 'xhigh'
export type ProviderTransport = 'chat_completions' | 'codex_responses'
@@ -53,6 +88,29 @@ type ModelDescriptor = {
const LOCALHOST_HOSTNAMES = new Set(['localhost', '127.0.0.1', '::1'])
function isPrivateIpv4Address(hostname: string): boolean {
const octets = hostname.split('.').map(part => Number.parseInt(part, 10))
if (octets.length !== 4 || octets.some(octet => Number.isNaN(octet))) {
return false
}
return (
octets[0] === 10 ||
(octets[0] === 172 && octets[1] >= 16 && octets[1] <= 31) ||
(octets[0] === 192 && octets[1] === 168)
)
}
function isPrivateIpv6Address(hostname: string): boolean {
const firstHextet = hostname.split(':', 1)[0]
if (!firstHextet) return false
const prefix = Number.parseInt(firstHextet, 16)
if (Number.isNaN(prefix)) return false
return (prefix & 0xfe00) === 0xfc00 || (prefix & 0xffc0) === 0xfe80
}
function asTrimmedString(value: unknown): string | undefined {
return typeof value === 'string' && value.trim() ? value.trim() : undefined
}
@@ -98,7 +156,7 @@ function decodeJwtPayload(token: string): Record<string, unknown> | undefined {
function parseReasoningEffort(value: string | undefined): ReasoningEffort | undefined {
if (!value) return undefined
const normalized = value.trim().toLowerCase()
if (normalized === 'low' || normalized === 'medium' || normalized === 'high') {
if (normalized === 'low' || normalized === 'medium' || normalized === 'high' || normalized === 'xhigh') {
return normalized
}
return undefined
@@ -152,7 +210,37 @@ function isCodexAlias(model: string): boolean {
export function isLocalProviderUrl(baseUrl: string | undefined): boolean {
if (!baseUrl) return false
try {
return LOCALHOST_HOSTNAMES.has(new URL(baseUrl).hostname)
let hostname = new URL(baseUrl).hostname.toLowerCase()
// Strip IPv6 brackets added by the URL parser (e.g. "[::1]" -> "::1")
if (hostname.startsWith('[') && hostname.endsWith(']')) {
hostname = hostname.slice(1, -1)
}
// Strip RFC6874 IPv6 zone identifiers (e.g. "fe80::1%25en0" -> "fe80::1")
const zoneIdIndex = hostname.indexOf('%25')
if (zoneIdIndex !== -1) {
hostname = hostname.slice(0, zoneIdIndex)
}
if (LOCALHOST_HOSTNAMES.has(hostname) || hostname === '0.0.0.0') {
return true
}
if (hostname.endsWith('.local')) {
return true
}
const ipVersion = isIP(hostname)
if (ipVersion === 4) {
// Treat the full 127.0.0.0/8 loopback range as local
const firstOctet = Number.parseInt(hostname.split('.', 1)[0] ?? '', 10)
return firstOctet === 127 || isPrivateIpv4Address(hostname)
}
if (ipVersion === 6) {
return isPrivateIpv6Address(hostname)
}
return false
} catch {
return false
}
@@ -171,38 +259,70 @@ export function isCodexBaseUrl(baseUrl: string | undefined): boolean {
}
}
/**
* Normalize user model string for GitHub Models inference (models.github.ai).
* Mirrors runtime devsper `github._normalize_model_id`.
*/
export function normalizeGithubModelsApiModel(requestedModel: string): string {
const noQuery = requestedModel.split('?', 1)[0] ?? requestedModel
const segment =
noQuery.includes(':') ? noQuery.split(':', 2)[1]!.trim() : noQuery.trim()
if (!segment || segment.toLowerCase() === 'copilot') {
return DEFAULT_GITHUB_MODELS_API_MODEL
}
return segment
}
export function resolveProviderRequest(options?: {
model?: string
baseUrl?: string
fallbackModel?: string
reasoningEffortOverride?: ReasoningEffort
}): ResolvedProviderRequest {
const isGithubMode = isEnvTruthy(process.env.CLAUDE_CODE_USE_GITHUB)
const requestedModel =
options?.model?.trim() ||
process.env.OPENAI_MODEL?.trim() ||
options?.fallbackModel?.trim() ||
'gpt-4o'
(isGithubMode ? 'github:copilot' : 'gpt-4o')
const descriptor = parseModelDescriptor(requestedModel)
const rawBaseUrl =
options?.baseUrl ??
process.env.OPENAI_BASE_URL ??
process.env.OPENAI_API_BASE ??
undefined
// Use Codex transport only when:
// - the base URL is explicitly the Codex endpoint, OR
// - the model is a Codex alias AND no custom base URL has been set
// A custom OPENAI_BASE_URL (e.g. Azure, OpenRouter) always wins over
// model-name-based Codex detection to prevent auth failures (#200, #203).
const transport: ProviderTransport =
isCodexAlias(requestedModel) || isCodexBaseUrl(rawBaseUrl)
isCodexBaseUrl(rawBaseUrl) || (!rawBaseUrl && isCodexAlias(requestedModel))
? 'codex_responses'
: 'chat_completions'
const resolvedModel =
transport === 'chat_completions' &&
isEnvTruthy(process.env.CLAUDE_CODE_USE_GITHUB)
? normalizeGithubModelsApiModel(requestedModel)
: descriptor.baseModel
const reasoning = options?.reasoningEffortOverride
? { effort: options.reasoningEffortOverride }
: descriptor.reasoning
return {
transport,
requestedModel,
resolvedModel: descriptor.baseModel,
resolvedModel,
baseUrl:
(rawBaseUrl ??
(transport === 'codex_responses'
? DEFAULT_CODEX_BASE_URL
: DEFAULT_OPENAI_BASE_URL)
).replace(/\/+$/, ''),
reasoning: descriptor.reasoning,
reasoning,
}
}
@@ -311,3 +431,11 @@ export function resolveCodexApiCredentials(
source: 'auth.json',
}
}
export function getReasoningEffortForModel(model: string): ReasoningEffort | undefined {
const normalized = model.trim().toLowerCase()
const base = normalized.split('?', 1)[0] ?? normalized
const alias = base as CodexAlias
const aliasConfig = CODEX_ALIAS_MODELS[alias]
return aliasConfig?.reasoningEffort
}
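A hedged sketch of the resolution rules encoded above, assuming no other conflicting provider variables (such as OPENAI_API_BASE) are set; the custom base URL is deliberately fake:

```ts
import { resolveProviderRequest } from './providerConfig.js'

// A Codex alias with no custom base URL selects the Codex Responses transport.
delete process.env.OPENAI_BASE_URL
const codex = resolveProviderRequest({ model: 'gpt-5.3-codex' })
// codex.transport === 'codex_responses'

// A custom base URL always wins over model-name detection (#200, #203).
process.env.OPENAI_BASE_URL = 'https://openrouter.example.invalid/v1'
const custom = resolveProviderRequest({ model: 'gpt-5.3-codex' })
// custom.transport === 'chat_completions'
```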

View File

@@ -0,0 +1,136 @@
import { describe, expect, test, afterEach } from 'bun:test'
import { getRateLimitResetDelayMs, parseOpenAIDuration } from './withRetry.js'
import { APIError } from '@anthropic-ai/sdk'
// Helper to build a mock APIError with specific headers
function makeError(headers: Record<string, string>): APIError {
const headersObj = new Headers(headers)
return {
headers: headersObj,
status: 429,
message: 'rate limit exceeded',
name: 'APIError',
error: {},
} as unknown as APIError
}
// Save/restore env vars between tests
const originalEnv = { ...process.env }
afterEach(() => {
for (const key of [
'CLAUDE_CODE_USE_OPENAI',
'CLAUDE_CODE_USE_GEMINI',
'CLAUDE_CODE_USE_GITHUB',
'CLAUDE_CODE_USE_BEDROCK',
'CLAUDE_CODE_USE_VERTEX',
'CLAUDE_CODE_USE_FOUNDRY',
]) {
if (originalEnv[key] === undefined) delete process.env[key]
else process.env[key] = originalEnv[key]
}
})
// --- parseOpenAIDuration ---
describe('parseOpenAIDuration', () => {
test('parses seconds: "1s" → 1000', () => {
expect(parseOpenAIDuration('1s')).toBe(1000)
})
test('parses minutes+seconds: "6m0s" → 360000', () => {
expect(parseOpenAIDuration('6m0s')).toBe(360000)
})
test('parses hours+minutes+seconds: "1h30m0s" → 5400000', () => {
expect(parseOpenAIDuration('1h30m0s')).toBe(5400000)
})
test('parses milliseconds: "500ms" → 500', () => {
expect(parseOpenAIDuration('500ms')).toBe(500)
})
test('parses minutes only: "2m" → 120000', () => {
expect(parseOpenAIDuration('2m')).toBe(120000)
})
test('returns null for empty string', () => {
expect(parseOpenAIDuration('')).toBeNull()
})
test('returns null for unrecognized format', () => {
expect(parseOpenAIDuration('invalid')).toBeNull()
})
})
// --- getRateLimitResetDelayMs ---
describe('getRateLimitResetDelayMs - Anthropic (firstParty)', () => {
test('reads anthropic-ratelimit-unified-reset Unix timestamp', () => {
const futureUnixSec = Math.floor(Date.now() / 1000) + 60
const error = makeError({
'anthropic-ratelimit-unified-reset': String(futureUnixSec),
})
const delay = getRateLimitResetDelayMs(error)
expect(delay).not.toBeNull()
expect(delay!).toBeGreaterThan(50_000)
expect(delay!).toBeLessThanOrEqual(60_000)
})
test('returns null when header absent', () => {
const error = makeError({})
expect(getRateLimitResetDelayMs(error)).toBeNull()
})
test('returns null when reset is in the past', () => {
const pastUnixSec = Math.floor(Date.now() / 1000) - 10
const error = makeError({
'anthropic-ratelimit-unified-reset': String(pastUnixSec),
})
expect(getRateLimitResetDelayMs(error)).toBeNull()
})
})
describe('getRateLimitResetDelayMs - OpenAI provider', () => {
test('reads x-ratelimit-reset-requests duration string', () => {
process.env.CLAUDE_CODE_USE_OPENAI = '1'
const error = makeError({ 'x-ratelimit-reset-requests': '30s' })
const delay = getRateLimitResetDelayMs(error)
expect(delay).toBe(30_000)
})
test('reads x-ratelimit-reset-tokens and picks the larger delay', () => {
process.env.CLAUDE_CODE_USE_OPENAI = '1'
const error = makeError({
'x-ratelimit-reset-requests': '10s',
'x-ratelimit-reset-tokens': '1m0s',
})
// Should use the larger of the two so we don't retry before both reset
const delay = getRateLimitResetDelayMs(error)
expect(delay).toBe(60_000)
})
test('returns null when no openai rate limit headers present', () => {
process.env.CLAUDE_CODE_USE_OPENAI = '1'
const error = makeError({})
expect(getRateLimitResetDelayMs(error)).toBeNull()
})
test('works for github provider too', () => {
process.env.CLAUDE_CODE_USE_GITHUB = '1'
const error = makeError({ 'x-ratelimit-reset-requests': '5s' })
expect(getRateLimitResetDelayMs(error)).toBe(5_000)
})
})
describe('getRateLimitResetDelayMs - providers without reset headers', () => {
test('returns null for bedrock', () => {
process.env.CLAUDE_CODE_USE_BEDROCK = '1'
const error = makeError({ 'anthropic-ratelimit-unified-reset': String(Math.floor(Date.now() / 1000) + 60) })
// Bedrock doesn't use this header — should still return null
expect(getRateLimitResetDelayMs(error)).toBeNull()
})
test('returns null for vertex', () => {
process.env.CLAUDE_CODE_USE_VERTEX = '1'
const error = makeError({})
expect(getRateLimitResetDelayMs(error)).toBeNull()
})
})

View File

@@ -11,7 +11,7 @@ import { isAwsCredentialsProviderError } from 'src/utils/aws.js'
import { logForDebugging } from 'src/utils/debug.js'
import { logError } from 'src/utils/log.js'
import { createSystemAPIErrorMessage } from 'src/utils/messages.js'
import { getAPIProviderForStatsig } from 'src/utils/model/providers.js'
import { getAPIProvider, getAPIProviderForStatsig } from 'src/utils/model/providers.js'
import {
clearApiKeyHelperCache,
clearAwsCredentialsCache,
@@ -811,12 +811,49 @@ function getRetryAfterMs(error: APIError): number | null {
return null
}
function getRateLimitResetDelayMs(error: APIError): number | null {
const resetHeader = error.headers?.get?.('anthropic-ratelimit-unified-reset')
if (!resetHeader) return null
const resetUnixSec = Number(resetHeader)
if (!Number.isFinite(resetUnixSec)) return null
const delayMs = resetUnixSec * 1000 - Date.now()
if (delayMs <= 0) return null
return Math.min(delayMs, PERSISTENT_RESET_CAP_MS)
/**
* Parse OpenAI-style relative duration strings into milliseconds.
* Formats: "1s", "6m0s", "1h30m0s", "500ms", "2m"
* Returns null for unrecognized formats.
*/
export function parseOpenAIDuration(s: string): number | null {
if (!s) return null
// Try matching hours/minutes/seconds/milliseconds components
const re = /^(?:(\d+)h)?(?:(\d+)m(?!s))?(?:(\d+)s)?(?:(\d+)ms)?$/
const m = re.exec(s)
if (!m || m[0] === '') return null
const h = parseInt(m[1] ?? '0', 10)
const min = parseInt(m[2] ?? '0', 10)
const sec = parseInt(m[3] ?? '0', 10)
const ms = parseInt(m[4] ?? '0', 10)
const total = h * 3_600_000 + min * 60_000 + sec * 1_000 + ms
return total > 0 ? total : null
}
export function getRateLimitResetDelayMs(error: APIError): number | null {
const provider = getAPIProvider()
if (provider === 'firstParty') {
const resetHeader = error.headers?.get?.('anthropic-ratelimit-unified-reset')
if (!resetHeader) return null
const resetUnixSec = Number(resetHeader)
if (!Number.isFinite(resetUnixSec)) return null
const delayMs = resetUnixSec * 1000 - Date.now()
if (delayMs <= 0) return null
return Math.min(delayMs, PERSISTENT_RESET_CAP_MS)
}
if (provider === 'openai' || provider === 'codex' || provider === 'github') {
const reqHeader = error.headers?.get?.('x-ratelimit-reset-requests')
const tokHeader = error.headers?.get?.('x-ratelimit-reset-tokens')
const reqMs = reqHeader ? parseOpenAIDuration(reqHeader) : null
const tokMs = tokHeader ? parseOpenAIDuration(tokHeader) : null
if (reqMs === null && tokMs === null) return null
// Use the larger delay so we don't retry before both limits reset
const delayMs = Math.max(reqMs ?? 0, tokMs ?? 0)
return Math.min(delayMs, PERSISTENT_RESET_CAP_MS)
}
// bedrock, vertex, foundry, gemini — no standard reset header
return null
}
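// Sketch of how a retry loop might consume this value; `sleep` and
// `backoffMs` are hypothetical stand-ins, not part of this module:
//
//   const resetDelayMs = getRateLimitResetDelayMs(error)
//   if (resetDelayMs !== null) {
//     await sleep(resetDelayMs) // wait out the provider's reset window
//   } else {
//     await sleep(backoffMs) // no reset header: fall back to exponential backoff
//   }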

View File

@@ -0,0 +1,94 @@
import { afterEach, describe, expect, mock, test } from 'bun:test'
import {
GitHubDeviceFlowError,
pollAccessToken,
requestDeviceCode,
} from './deviceFlow.js'
describe('requestDeviceCode', () => {
const originalFetch = globalThis.fetch
afterEach(() => {
globalThis.fetch = originalFetch
})
test('parses successful device code response', async () => {
globalThis.fetch = mock(() =>
Promise.resolve(
new Response(
JSON.stringify({
device_code: 'abc',
user_code: 'ABCD-1234',
verification_uri: 'https://github.com/login/device',
expires_in: 600,
interval: 5,
}),
{ status: 200 },
),
),
)
const r = await requestDeviceCode({
clientId: 'test-client',
fetchImpl: globalThis.fetch,
})
expect(r.device_code).toBe('abc')
expect(r.user_code).toBe('ABCD-1234')
expect(r.verification_uri).toBe('https://github.com/login/device')
expect(r.expires_in).toBe(600)
expect(r.interval).toBe(5)
})
test('throws on HTTP error', async () => {
globalThis.fetch = mock(() =>
Promise.resolve(new Response('bad', { status: 500 })),
)
await expect(
requestDeviceCode({ clientId: 'x', fetchImpl: globalThis.fetch }),
).rejects.toThrow(GitHubDeviceFlowError)
})
})
describe('pollAccessToken', () => {
const originalFetch = globalThis.fetch
afterEach(() => {
globalThis.fetch = originalFetch
})
test('returns token when GitHub responds with access_token immediately', async () => {
let calls = 0
globalThis.fetch = mock(() => {
calls++
return Promise.resolve(
new Response(JSON.stringify({ access_token: 'tok-xyz' }), {
status: 200,
}),
)
})
const token = await pollAccessToken('dev-code', {
clientId: 'cid',
fetchImpl: globalThis.fetch,
})
expect(token).toBe('tok-xyz')
expect(calls).toBe(1)
})
test('throws on access_denied', async () => {
globalThis.fetch = mock(() =>
Promise.resolve(
new Response(JSON.stringify({ error: 'access_denied' }), {
status: 200,
}),
),
)
await expect(
pollAccessToken('dc', {
clientId: 'c',
fetchImpl: globalThis.fetch,
}),
).rejects.toThrow(/denied/)
})
})

View File

@@ -0,0 +1,174 @@
/**
* GitHub OAuth device flow for CLI login (https://docs.github.com/en/apps/oauth-apps/building-oauth-apps/authorizing-oauth-apps#device-flow).
*/
import { execFileNoThrow } from '../../utils/execFileNoThrow.js'
export const DEFAULT_GITHUB_DEVICE_FLOW_CLIENT_ID = 'Ov23liXjWSSui6QIahPl'
export const GITHUB_DEVICE_CODE_URL = 'https://github.com/login/device/code'
export const GITHUB_DEVICE_ACCESS_TOKEN_URL =
'https://github.com/login/oauth/access_token'
/** Keep in sync with the runtime devsper github_oauth DEFAULT_SCOPE. */
export const DEFAULT_GITHUB_DEVICE_SCOPE = 'read:user,models:read'
export class GitHubDeviceFlowError extends Error {
constructor(message: string) {
super(message)
this.name = 'GitHubDeviceFlowError'
}
}
export type DeviceCodeResult = {
device_code: string
user_code: string
verification_uri: string
expires_in: number
interval: number
}
export function getGithubDeviceFlowClientId(): string {
return (
process.env.GITHUB_DEVICE_FLOW_CLIENT_ID?.trim() ||
DEFAULT_GITHUB_DEVICE_FLOW_CLIENT_ID
)
}
function sleep(ms: number): Promise<void> {
return new Promise(resolve => setTimeout(resolve, ms))
}
export async function requestDeviceCode(options?: {
clientId?: string
scope?: string
fetchImpl?: typeof fetch
}): Promise<DeviceCodeResult> {
const clientId = options?.clientId ?? getGithubDeviceFlowClientId()
if (!clientId) {
throw new GitHubDeviceFlowError(
'No OAuth client ID: set GITHUB_DEVICE_FLOW_CLIENT_ID or paste a PAT instead.',
)
}
const fetchFn = options?.fetchImpl ?? fetch
const res = await fetchFn(GITHUB_DEVICE_CODE_URL, {
method: 'POST',
headers: { Accept: 'application/json' },
body: new URLSearchParams({
client_id: clientId,
scope: options?.scope ?? DEFAULT_GITHUB_DEVICE_SCOPE,
}),
})
if (!res.ok) {
const text = await res.text().catch(() => '')
throw new GitHubDeviceFlowError(
`Device code request failed: ${res.status} ${text}`,
)
}
const data = (await res.json()) as Record<string, unknown>
const device_code = data.device_code
const user_code = data.user_code
const verification_uri = data.verification_uri
if (
typeof device_code !== 'string' ||
typeof user_code !== 'string' ||
typeof verification_uri !== 'string'
) {
throw new GitHubDeviceFlowError('Malformed device code response from GitHub')
}
return {
device_code,
user_code,
verification_uri,
expires_in: typeof data.expires_in === 'number' ? data.expires_in : 900,
interval: typeof data.interval === 'number' ? data.interval : 5,
}
}
export type PollOptions = {
clientId?: string
initialInterval?: number
timeoutSeconds?: number
fetchImpl?: typeof fetch
}
export async function pollAccessToken(
deviceCode: string,
options?: PollOptions,
): Promise<string> {
const clientId = options?.clientId ?? getGithubDeviceFlowClientId()
if (!clientId) {
throw new GitHubDeviceFlowError('client_id required for polling')
}
let interval = Math.max(1, options?.initialInterval ?? 5)
const timeoutSeconds = options?.timeoutSeconds ?? 900
const fetchFn = options?.fetchImpl ?? fetch
const start = Date.now()
while ((Date.now() - start) / 1000 < timeoutSeconds) {
const res = await fetchFn(GITHUB_DEVICE_ACCESS_TOKEN_URL, {
method: 'POST',
headers: { Accept: 'application/json' },
body: new URLSearchParams({
client_id: clientId,
device_code: deviceCode,
grant_type: 'urn:ietf:params:oauth:grant-type:device_code',
}),
})
if (!res.ok) {
const text = await res.text().catch(() => '')
throw new GitHubDeviceFlowError(
`Token request failed: ${res.status} ${text}`,
)
}
const data = (await res.json()) as Record<string, unknown>
const err = data.error as string | undefined
if (err == null) {
const token = data.access_token
if (typeof token === 'string' && token) {
return token
}
throw new GitHubDeviceFlowError('No access_token in response')
}
if (err === 'authorization_pending') {
await sleep(interval * 1000)
continue
}
if (err === 'slow_down') {
interval =
typeof data.interval === 'number' ? data.interval : interval + 5
await sleep(interval * 1000)
continue
}
if (err === 'expired_token') {
throw new GitHubDeviceFlowError(
'Device code expired. Start the login flow again.',
)
}
if (err === 'access_denied') {
throw new GitHubDeviceFlowError('Authorization was denied or cancelled.')
}
throw new GitHubDeviceFlowError(`GitHub OAuth error: ${err}`)
}
throw new GitHubDeviceFlowError('Timed out waiting for authorization.')
}
/**
* Best-effort open browser / OS handler for the verification URL.
*/
export async function openVerificationUri(uri: string): Promise<void> {
try {
if (process.platform === 'darwin') {
await execFileNoThrow('open', [uri], { useCwd: false, timeout: 5000 })
} else if (process.platform === 'win32') {
await execFileNoThrow('cmd', ['/c', 'start', '', uri], {
useCwd: false,
timeout: 5000,
})
} else {
await execFileNoThrow('xdg-open', [uri], { useCwd: false, timeout: 5000 })
}
} catch {
// User can open the URL manually
}
}
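/**
 * End-to-end sketch of how the exports compose (illustrative only; the
 * real CLI login wiring lives elsewhere):
 *
 *   const code = await requestDeviceCode()
 *   console.log(`Open ${code.verification_uri} and enter ${code.user_code}`)
 *   await openVerificationUri(code.verification_uri)
 *   const token = await pollAccessToken(code.device_code, {
 *     initialInterval: code.interval,
 *     timeoutSeconds: code.expires_in,
 *   })
 */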

View File

@@ -0,0 +1,48 @@
import assert from 'node:assert/strict'
import test from 'node:test'
import { cleanupFailedConnection } from './client.js'
test('cleanupFailedConnection awaits transport close before resolving', async () => {
let closed = false
let resolveClose: (() => void) | undefined
const transport = {
close: async () =>
await new Promise<void>(resolve => {
resolveClose = () => {
closed = true
resolve()
}
}),
}
const cleanupPromise = cleanupFailedConnection(transport)
assert.equal(closed, false)
resolveClose?.()
await cleanupPromise
assert.equal(closed, true)
})
test('cleanupFailedConnection closes in-process server and transport', async () => {
let inProcessClosed = false
let transportClosed = false
const inProcessServer = {
close: async () => {
inProcessClosed = true
},
}
const transport = {
close: async () => {
transportClosed = true
},
}
await cleanupFailedConnection(transport, inProcessServer)
assert.equal(inProcessClosed, true)
assert.equal(transportClosed, true)
})

View File

@@ -116,8 +116,8 @@ import { getLoggingSafeMcpBaseUrl } from './utils.js'
/* eslint-disable @typescript-eslint/no-require-imports */
const fetchMcpSkillsForClient = feature('MCP_SKILLS')
? (
require('../../skills/mcpSkills.js') as typeof import('../../skills/mcpSkills.js')
).fetchMcpSkillsForClient
require('../../skills/mcpSkills.js') as typeof import('../../skills/mcpSkills.js')
).fetchMcpSkillsForClient
: null
import { UnauthorizedError } from '@modelcontextprotocol/sdk/client/auth.js'
@@ -240,12 +240,12 @@ const claudeInChromeToolRendering =
// GrowthBook tengu_malort_pedway (see gates.ts).
const computerUseWrapper = feature('CHICAGO_MCP')
? (): typeof import('../../utils/computerUse/wrapper.js') =>
require('../../utils/computerUse/wrapper.js')
require('../../utils/computerUse/wrapper.js')
: undefined
const isComputerUseMCPServer = feature('CHICAGO_MCP')
? (
require('../../utils/computerUse/common.js') as typeof import('../../utils/computerUse/common.js')
).isComputerUseMCPServer
require('../../utils/computerUse/common.js') as typeof import('../../utils/computerUse/common.js')
).isComputerUseMCPServer
: undefined
import { mkdir, readFile, unlink, writeFile } from 'fs/promises'
@@ -326,9 +326,9 @@ function mcpBaseUrlAnalytics(serverRef: ScopedMcpServerConfig): {
const url = getLoggingSafeMcpBaseUrl(serverRef)
return url
? {
mcpServerBaseUrl:
url as AnalyticsMetadata_I_VERIFIED_THIS_IS_NOT_CODE_OR_FILEPATHS,
}
mcpServerBaseUrl:
url as AnalyticsMetadata_I_VERIFIED_THIS_IS_NOT_CODE_OR_FILEPATHS,
}
: {}
}
@@ -560,6 +560,22 @@ function getRemoteMcpServerConnectionBatchSize(): number {
)
}
type InProcessMcpServer = {
connect(t: Transport): Promise<void>
close(): Promise<void>
}
export async function cleanupFailedConnection(
transport: Pick<Transport, 'close'>,
inProcessServer?: Pick<InProcessMcpServer, 'close'>,
): Promise<void> {
if (inProcessServer) {
await inProcessServer.close().catch(() => {})
}
await transport.close().catch(() => {})
}
function isLocalMcpServer(config: ScopedMcpServerConfig): boolean {
return !config.type || config.type === 'stdio' || config.type === 'sdk'
}
@@ -606,9 +622,7 @@ export const connectToServer = memoize(
},
): Promise<MCPServerConnection> => {
const connectStartTime = Date.now()
let inProcessServer:
| { connect(t: Transport): Promise<void>; close(): Promise<void> }
| undefined
let inProcessServer: InProcessMcpServer | undefined
try {
let transport
@@ -683,20 +697,20 @@ export const connectToServer = memoize(
const transportOptions: SSEClientTransportOptions =
proxyOptions.dispatcher
? {
eventSourceInit: {
fetch: async (url: string | URL, init?: RequestInit) => {
// eslint-disable-next-line eslint-plugin-n/no-unsupported-features/node-builtins
return fetch(url, {
...init,
...proxyOptions,
headers: {
'User-Agent': getMCPUserAgent(),
...init?.headers,
},
})
},
eventSourceInit: {
fetch: async (url: string | URL, init?: RequestInit) => {
// eslint-disable-next-line eslint-plugin-n/no-unsupported-features/node-builtins
return fetch(url, {
...init,
...proxyOptions,
headers: {
'User-Agent': getMCPUserAgent(),
...init?.headers,
},
})
},
}
},
}
: {}
transport = new SSEClientTransport(
@@ -832,8 +846,8 @@ export const connectToServer = memoize(
'User-Agent': getMCPUserAgent(),
...(sessionIngressToken &&
!hasOAuthTokens && {
Authorization: `Bearer ${sessionIngressToken}`,
}),
Authorization: `Bearer ${sessionIngressToken}`,
}),
...combinedHeaders,
},
},
@@ -842,10 +856,10 @@ export const connectToServer = memoize(
// Redact sensitive headers before logging
const headersForLogging = transportOptions.requestInit?.headers
? mapValues(
transportOptions.requestInit.headers as Record<string, string>,
(value, key) =>
key.toLowerCase() === 'authorization' ? '[REDACTED]' : value,
)
transportOptions.requestInit.headers as Record<string, string>,
(value, key) =>
key.toLowerCase() === 'authorization' ? '[REDACTED]' : value,
)
: undefined
logMCPDebug(
@@ -985,7 +999,7 @@ export const connectToServer = memoize(
const client = new Client(
{
name: 'claude-code',
title: 'Claude Code',
title: 'Open Claude',
version: MACRO.VERSION ?? 'unknown',
description: "Anthropic's agentic coding tool",
websiteUrl: PRODUCT_URL,
@@ -1054,9 +1068,9 @@ export const connectToServer = memoize(
`Connection timeout triggered after ${elapsed}ms (limit: ${getConnectionTimeoutMs()}ms)`,
)
if (inProcessServer) {
inProcessServer.close().catch(() => {})
inProcessServer.close().catch(() => { })
}
transport.close().catch(() => {})
transport.close().catch(() => { })
reject(
new TelemetrySafeError_I_VERIFIED_THIS_IS_NOT_CODE_OR_FILEPATHS(
`MCP server "${name}" connection timed out after ${getConnectionTimeoutMs()}ms`,
@@ -1145,9 +1159,10 @@ export const connectToServer = memoize(
})
}
if (inProcessServer) {
inProcessServer.close().catch(() => {})
await cleanupFailedConnection(transport, inProcessServer)
} else {
await cleanupFailedConnection(transport)
}
transport.close().catch(() => {})
if (stderrOutput) {
logMCPError(name, `Server stderr: ${stderrOutput}`)
}
@@ -1627,7 +1642,7 @@ export const connectToServer = memoize(
logMCPError(name, `Connection failed: ${errorMessage(error)}`)
if (inProcessServer) {
inProcessServer.close().catch(() => {})
inProcessServer.close().catch(() => { })
}
return {
name,
@@ -1779,8 +1794,8 @@ export const fetchToolsForClient = memoizeWithLRU(
searchHint:
typeof tool._meta?.['anthropic/searchHint'] === 'string'
? tool._meta['anthropic/searchHint']
.replace(/\s+/g, ' ')
.trim() || undefined
.replace(/\s+/g, ' ')
.trim() || undefined
: undefined,
alwaysLoad: tool._meta?.['anthropic/alwaysLoad'] === true,
async description() {
@@ -1871,11 +1886,11 @@ export const fetchToolsForClient = memoizeWithLRU(
onProgress:
onProgress && toolUseId
? progressData => {
onProgress({
toolUseID: toolUseId,
data: progressData,
})
}
onProgress({
toolUseID: toolUseId,
data: progressData,
})
}
: undefined,
handleElicitation: context.handleElicitation,
})
@@ -1975,14 +1990,14 @@ export const fetchToolsForClient = memoizeWithLRU(
return `${client.name} - ${displayName} (MCP)`
},
...(isClaudeInChromeMCPServer(client.name) &&
(client.config.type === 'stdio' || !client.config.type)
(client.config.type === 'stdio' || !client.config.type)
? claudeInChromeToolRendering().getClaudeInChromeMCPToolOverrides(
tool.name,
)
tool.name,
)
: {}),
...(feature('CHICAGO_MCP') &&
(client.config.type === 'stdio' || !client.config.type) &&
isComputerUseMCPServer!(client.name)
(client.config.type === 'stdio' || !client.config.type) &&
isComputerUseMCPServer!(client.name)
? computerUseWrapper!().getComputerUseMCPToolOverrides(tool.name)
: {}),
}
@@ -2876,9 +2891,9 @@ export async function callMCPToolWithUrlElicitationRetry({
const errorData = error.data
const rawElicitations =
errorData != null &&
typeof errorData === 'object' &&
'elicitations' in errorData &&
Array.isArray(errorData.elicitations)
typeof errorData === 'object' &&
'elicitations' in errorData &&
Array.isArray(errorData.elicitations)
? (errorData.elicitations as unknown[])
: []
@@ -3101,16 +3116,16 @@ async function callMCPTool({
timeout: timeoutMs,
onprogress: onProgress
? sdkProgress => {
onProgress({
type: 'mcp_progress',
status: 'progress',
serverName: name,
toolName: tool,
progress: sdkProgress.progress,
total: sdkProgress.total,
progressMessage: sdkProgress.message,
})
}
onProgress({
type: 'mcp_progress',
status: 'progress',
serverName: name,
toolName: tool,
progress: sdkProgress.progress,
total: sdkProgress.total,
progressMessage: sdkProgress.message,
})
}
: undefined,
},
),
@@ -3280,7 +3295,7 @@ export async function setupSdkMcpClients(
const client = new Client(
{
name: 'claude-code',
title: 'Claude Code',
title: 'Open Claude',
version: MACRO.VERSION ?? 'unknown',
description: "Anthropic's agentic coding tool",
websiteUrl: PRODUCT_URL,

View File

@@ -0,0 +1,540 @@
import assert from 'node:assert/strict'
import test from 'node:test'
import type { ValidationError } from '../../utils/settings/validation.js'
import {
buildEmptyDoctorReport,
doctorAllServers,
doctorServer,
findingsFromValidationErrors,
type McpDoctorDependencies,
} from './doctor.js'
function stdioConfig(scope: 'local' | 'project' | 'user' | 'enterprise', command: string) {
return {
type: 'stdio' as const,
command,
args: [],
scope,
}
}
function makeDependencies(overrides: Partial<McpDoctorDependencies> = {}): McpDoctorDependencies {
return {
getAllMcpConfigs: async () => ({ servers: {}, errors: [] }),
getMcpConfigsByScope: () => ({ servers: {}, errors: [] }),
getProjectMcpServerStatus: () => 'approved',
isMcpServerDisabled: () => false,
describeMcpConfigFilePath: scope => `scope://${scope}`,
clearServerCache: async () => {},
connectToServer: async (name, config) => ({
name,
type: 'connected',
capabilities: {},
config,
cleanup: async () => {},
}),
...overrides,
}
}
test('buildEmptyDoctorReport returns zeroed summary', () => {
const report = buildEmptyDoctorReport({
configOnly: true,
scopeFilter: 'project',
targetName: 'filesystem',
})
assert.equal(report.targetName, 'filesystem')
assert.equal(report.scopeFilter, 'project')
assert.equal(report.configOnly, true)
assert.deepEqual(report.summary, {
totalReports: 0,
healthy: 0,
warnings: 0,
blocking: 0,
})
assert.deepEqual(report.findings, [])
assert.deepEqual(report.servers, [])
})
test('findingsFromValidationErrors maps missing env warnings into doctor findings', () => {
const validationErrors: ValidationError[] = [
{
file: '.mcp.json',
path: 'mcpServers.filesystem',
message: 'Missing environment variables: API_KEY, API_URL',
suggestion: 'Set the following environment variables: API_KEY, API_URL',
mcpErrorMetadata: {
scope: 'project',
serverName: 'filesystem',
severity: 'warning',
},
},
]
const findings = findingsFromValidationErrors(validationErrors)
assert.equal(findings.length, 1)
assert.deepEqual(findings[0], {
blocking: false,
code: 'config.missing_env_vars',
message: 'Missing environment variables: API_KEY, API_URL',
remediation: 'Set the following environment variables: API_KEY, API_URL',
scope: 'project',
serverName: 'filesystem',
severity: 'warn',
sourcePath: '.mcp.json',
})
})
test('findingsFromValidationErrors maps Windows npx warnings into doctor findings', () => {
const validationErrors: ValidationError[] = [
{
file: '.mcp.json',
path: 'mcpServers.node-tools',
message: "Windows requires 'cmd /c' wrapper to execute npx",
suggestion:
'Change command to "cmd" with args ["/c", "npx", ...]. See: https://code.claude.com/docs/en/mcp#configure-mcp-servers',
mcpErrorMetadata: {
scope: 'project',
serverName: 'node-tools',
severity: 'warning',
},
},
]
const findings = findingsFromValidationErrors(validationErrors)
assert.equal(findings.length, 1)
assert.equal(findings[0]?.code, 'config.windows_npx_wrapper_required')
assert.equal(findings[0]?.serverName, 'node-tools')
assert.equal(findings[0]?.severity, 'warn')
assert.equal(findings[0]?.blocking, false)
})
test('findingsFromValidationErrors maps fatal parse errors into blocking findings', () => {
const validationErrors: ValidationError[] = [
{
file: 'C:/repo/.mcp.json',
path: '',
message: 'MCP config is not a valid JSON',
suggestion: 'Fix the JSON syntax errors in the file',
mcpErrorMetadata: {
scope: 'project',
severity: 'fatal',
},
},
]
const findings = findingsFromValidationErrors(validationErrors)
assert.equal(findings.length, 1)
assert.equal(findings[0]?.code, 'config.invalid_json')
assert.equal(findings[0]?.severity, 'error')
assert.equal(findings[0]?.blocking, true)
})
test('doctorAllServers reports global validation findings once without duplicating them into every server', async () => {
const localConfig = stdioConfig('local', 'node-local')
const deps = makeDependencies({
getAllMcpConfigs: async () => ({
servers: { filesystem: localConfig },
errors: [],
}),
getMcpConfigsByScope: scope =>
scope === 'project'
? {
servers: {},
errors: [
{
file: '.mcp.json',
path: '',
message: 'MCP config is not a valid JSON',
suggestion: 'Fix the JSON syntax errors in the file',
mcpErrorMetadata: {
scope: 'project',
severity: 'fatal',
},
},
],
}
: scope === 'local'
? { servers: { filesystem: localConfig }, errors: [] }
: { servers: {}, errors: [] },
})
const report = await doctorAllServers({ configOnly: true }, deps)
assert.equal(report.summary.totalReports, 1)
assert.equal(report.summary.blocking, 1)
assert.equal(report.findings.length, 1)
assert.equal(report.findings[0]?.code, 'config.invalid_json')
assert.deepEqual(report.servers[0]?.findings, [])
})
test('doctorServer explains same-name shadowing across scopes', async () => {
const localConfig = stdioConfig('local', 'node-local')
const userConfig = stdioConfig('user', 'node-user')
const deps = makeDependencies({
getAllMcpConfigs: async () => ({
servers: {
filesystem: localConfig,
},
errors: [],
}),
getMcpConfigsByScope: scope => {
switch (scope) {
case 'local':
return { servers: { filesystem: localConfig }, errors: [] }
case 'user':
return { servers: { filesystem: userConfig }, errors: [] }
default:
return { servers: {}, errors: [] }
}
},
})
const report = await doctorServer('filesystem', { configOnly: true }, deps)
assert.equal(report.servers.length, 1)
assert.equal(report.servers[0]?.definitions.length, 2)
assert.equal(report.servers[0]?.definitions.find(def => def.sourceType === 'local')?.runtimeActive, true)
assert.equal(report.servers[0]?.definitions.find(def => def.sourceType === 'user')?.runtimeActive, false)
assert.deepEqual(
report.servers[0]?.findings.map(finding => finding.code).sort(),
['duplicate.same_name_multiple_scopes', 'scope.shadowed'],
)
})
test('doctorServer reports project servers pending approval', async () => {
const projectConfig = stdioConfig('project', 'node-project')
const deps = makeDependencies({
getMcpConfigsByScope: scope =>
scope === 'project'
? { servers: { sentry: projectConfig }, errors: [] }
: { servers: {}, errors: [] },
getProjectMcpServerStatus: name => (name === 'sentry' ? 'pending' : 'approved'),
})
const report = await doctorServer('sentry', { configOnly: true }, deps)
assert.equal(report.servers.length, 1)
assert.equal(report.servers[0]?.definitions[0]?.pendingApproval, true)
assert.equal(report.servers[0]?.definitions[0]?.runtimeActive, false)
assert.equal(report.servers[0]?.definitions[0]?.runtimeVisible, false)
assert.equal(
report.servers[0]?.findings.some(finding => finding.code === 'state.pending_project_approval'),
true,
)
})
test('doctorServer does not treat disabled servers as runtime-active or live-check targets', async () => {
let connectCalls = 0
const localConfig = stdioConfig('local', 'node-local')
const deps = makeDependencies({
getAllMcpConfigs: async () => ({
servers: { github: localConfig },
errors: [],
}),
getMcpConfigsByScope: scope =>
scope === 'local'
? { servers: { github: localConfig }, errors: [] }
: { servers: {}, errors: [] },
isMcpServerDisabled: name => name === 'github',
connectToServer: async (name, config) => {
connectCalls += 1
return {
name,
type: 'failed',
config,
error: 'should not connect',
}
},
})
const report = await doctorServer('github', { configOnly: false }, deps)
assert.equal(connectCalls, 0)
assert.equal(report.summary.blocking, 0)
assert.equal(report.summary.warnings, 1)
assert.equal(report.servers[0]?.definitions[0]?.disabled, true)
assert.equal(report.servers[0]?.definitions[0]?.runtimeActive, false)
assert.equal(report.servers[0]?.definitions[0]?.runtimeVisible, false)
assert.equal(report.servers[0]?.liveCheck.result, 'disabled')
assert.equal(
report.servers[0]?.findings.some(finding => finding.code === 'state.disabled' && finding.severity === 'warn'),
true,
)
})
test('doctorAllServers skips live checks in config-only mode', async () => {
let connectCalls = 0
const localConfig = stdioConfig('local', 'node-local')
const deps = makeDependencies({
getAllMcpConfigs: async () => ({
servers: { linear: localConfig },
errors: [],
}),
getMcpConfigsByScope: scope =>
scope === 'local'
? { servers: { linear: localConfig }, errors: [] }
: { servers: {}, errors: [] },
connectToServer: async (name, config) => {
connectCalls += 1
return {
name,
type: 'connected',
capabilities: {},
config,
cleanup: async () => {},
}
},
})
const report = await doctorAllServers({ configOnly: true }, deps)
assert.equal(connectCalls, 0)
assert.equal(report.servers[0]?.liveCheck.attempted, false)
assert.equal(report.servers[0]?.liveCheck.result, 'skipped')
})
test('doctorAllServers honors scopeFilter when collecting names', async () => {
const pluginConfig = {
type: 'http' as const,
url: 'https://example.test/mcp',
scope: 'dynamic' as const,
pluginSource: 'plugin:github@official',
}
const deps = makeDependencies({
getAllMcpConfigs: async () => ({
servers: { 'plugin:github:github': pluginConfig },
errors: [],
}),
})
const report = await doctorAllServers({ configOnly: true, scopeFilter: 'user' }, deps)
assert.equal(report.summary.totalReports, 0)
assert.deepEqual(report.servers, [])
})
test('doctorAllServers honors scopeFilter when collecting validation errors', async () => {
const userConfig = stdioConfig('user', 'node-user')
const deps = makeDependencies({
getAllMcpConfigs: async () => ({
servers: { filesystem: userConfig },
errors: [],
}),
getMcpConfigsByScope: scope => {
switch (scope) {
case 'project':
return {
servers: {},
errors: [
{
file: '.mcp.json',
path: '',
message: 'MCP config is not a valid JSON',
suggestion: 'Fix the JSON syntax errors in the file',
mcpErrorMetadata: {
scope: 'project',
severity: 'fatal',
},
},
],
}
case 'user':
return { servers: { filesystem: userConfig }, errors: [] }
default:
return { servers: {}, errors: [] }
}
},
})
const report = await doctorAllServers({ configOnly: true, scopeFilter: 'user' }, deps)
assert.equal(report.summary.totalReports, 1)
assert.equal(report.summary.blocking, 0)
assert.deepEqual(report.findings, [])
assert.deepEqual(report.servers[0]?.findings, [])
})
test('doctorAllServers includes observed runtime definitions for plugin-only servers', async () => {
const pluginConfig = {
type: 'http' as const,
url: 'https://example.test/mcp',
scope: 'dynamic' as const,
pluginSource: 'plugin:github@official',
}
const deps = makeDependencies({
getAllMcpConfigs: async () => ({
servers: { 'plugin:github:github': pluginConfig },
errors: [],
}),
})
const report = await doctorAllServers({ configOnly: true }, deps)
assert.equal(report.summary.totalReports, 1)
assert.equal(report.servers[0]?.definitions.length, 1)
assert.equal(report.servers[0]?.definitions[0]?.sourceType, 'plugin')
assert.equal(report.servers[0]?.definitions[0]?.runtimeActive, true)
})
test('doctorAllServers reports disabled plugin servers as disabled, not not-found', async () => {
const pluginConfig = {
type: 'http' as const,
url: 'https://example.test/mcp',
scope: 'dynamic' as const,
pluginSource: 'plugin:github@official',
}
const deps = makeDependencies({
getAllMcpConfigs: async () => ({
servers: { 'plugin:github:github': pluginConfig },
errors: [],
}),
isMcpServerDisabled: name => name === 'plugin:github:github',
})
const report = await doctorAllServers({ configOnly: true }, deps)
assert.equal(report.summary.totalReports, 1)
assert.equal(report.summary.warnings, 1)
assert.equal(report.summary.blocking, 0)
assert.equal(report.servers[0]?.definitions.length, 1)
assert.equal(report.servers[0]?.definitions[0]?.sourceType, 'plugin')
assert.equal(report.servers[0]?.definitions[0]?.disabled, true)
assert.equal(report.servers[0]?.definitions[0]?.runtimeActive, false)
assert.equal(
report.servers[0]?.findings.some(finding => finding.code === 'state.disabled' && !finding.blocking),
true,
)
assert.equal(
report.servers[0]?.findings.some(finding => finding.code === 'state.not_found'),
false,
)
})
test('doctorServer converts failed live checks into blocking findings', async () => {
const localConfig = stdioConfig('local', 'node-local')
const deps = makeDependencies({
getAllMcpConfigs: async () => ({
servers: { github: localConfig },
errors: [],
}),
getMcpConfigsByScope: scope =>
scope === 'local'
? { servers: { github: localConfig }, errors: [] }
: { servers: {}, errors: [] },
connectToServer: async (name, config) => ({
name,
type: 'failed',
config,
error: 'command not found: node-local',
}),
})
const report = await doctorServer('github', { configOnly: false }, deps)
assert.equal(report.summary.blocking, 1)
assert.equal(report.servers[0]?.liveCheck.result, 'failed')
assert.equal(
report.servers[0]?.findings.some(
finding => finding.code === 'stdio.command_not_found' && finding.blocking,
),
true,
)
})
test('doctorServer converts needs-auth live checks into warning findings', async () => {
const localConfig = stdioConfig('local', 'node-local')
const deps = makeDependencies({
getAllMcpConfigs: async () => ({
servers: { sentry: localConfig },
errors: [],
}),
getMcpConfigsByScope: scope =>
scope === 'local'
? { servers: { sentry: localConfig }, errors: [] }
: { servers: {}, errors: [] },
connectToServer: async (name, config) => ({
name,
type: 'needs-auth',
config,
}),
})
const report = await doctorServer('sentry', { configOnly: false }, deps)
assert.equal(report.summary.warnings, 1)
assert.equal(report.summary.blocking, 0)
assert.equal(
report.servers[0]?.findings.some(finding => finding.code === 'auth.needs_auth' && finding.severity === 'warn'),
true,
)
})
test('doctorServer includes observed runtime definition for plugin-only targets', async () => {
const pluginConfig = {
type: 'http' as const,
url: 'https://example.test/mcp',
scope: 'dynamic' as const,
pluginSource: 'plugin:github@official',
}
const deps = makeDependencies({
getAllMcpConfigs: async () => ({
servers: { 'plugin:github:github': pluginConfig },
errors: [],
}),
})
const report = await doctorServer('plugin:github:github', { configOnly: true }, deps)
assert.equal(report.summary.totalReports, 1)
assert.equal(report.servers[0]?.definitions.length, 1)
assert.equal(report.servers[0]?.definitions[0]?.sourceType, 'plugin')
assert.equal(report.servers[0]?.definitions[0]?.runtimeActive, true)
})
test('doctorServer with scopeFilter does not leak runtime definition from another scope when target is absent', async () => {
let connectCalls = 0
const localConfig = stdioConfig('local', 'node-local')
const deps = makeDependencies({
getAllMcpConfigs: async () => ({
servers: { github: localConfig },
errors: [],
}),
getMcpConfigsByScope: scope =>
scope === 'local'
? { servers: { github: localConfig }, errors: [] }
: { servers: {}, errors: [] },
connectToServer: async (name, config) => {
connectCalls += 1
return {
name,
type: 'connected',
capabilities: {},
config,
cleanup: async () => {},
}
},
})
const report = await doctorServer('github', { configOnly: false, scopeFilter: 'user' }, deps)
assert.equal(connectCalls, 0)
assert.equal(report.summary.totalReports, 1)
assert.equal(report.summary.blocking, 1)
assert.deepEqual(report.servers[0]?.definitions, [])
assert.equal(report.servers[0]?.liveCheck.result, 'skipped')
assert.equal(
report.servers[0]?.findings.some(finding => finding.code === 'state.not_found' && finding.blocking),
true,
)
})
test('doctorServer reports blocking not-found state when no definition exists', async () => {
const report = await doctorServer('missing-server', { configOnly: true }, makeDependencies())
assert.equal(report.summary.blocking, 1)
assert.equal(report.servers[0]?.findings.some(finding => finding.code === 'state.not_found' && finding.blocking), true)
})

src/services/mcp/doctor.ts (695 lines, new file)
View File

@@ -0,0 +1,695 @@
import type { ValidationError } from '../../utils/settings/validation.js'
import { clearServerCache, connectToServer } from './client.js'
import {
getAllMcpConfigs,
getMcpConfigsByScope,
isMcpServerDisabled,
} from './config.js'
import type {
ConfigScope,
ScopedMcpServerConfig,
} from './types.js'
import { describeMcpConfigFilePath, getProjectMcpServerStatus } from './utils.js'
export type McpDoctorSeverity = 'info' | 'warn' | 'error'
export type McpDoctorScopeFilter = 'local' | 'project' | 'user' | 'enterprise'
export type McpDoctorFinding = {
blocking: boolean
code: string
message: string
remediation?: string
scope?: string
serverName?: string
severity: McpDoctorSeverity
sourcePath?: string
}
export type McpDoctorLiveCheck = {
attempted: boolean
durationMs?: number
error?: string
result?: 'connected' | 'needs-auth' | 'failed' | 'pending' | 'disabled' | 'skipped'
}
export type McpDoctorDefinition = {
name: string
sourceType:
| 'local'
| 'project'
| 'user'
| 'enterprise'
| 'managed'
| 'plugin'
| 'claudeai'
| 'dynamic'
| 'internal'
sourcePath?: string
transport?: string
runtimeVisible: boolean
runtimeActive: boolean
pendingApproval?: boolean
disabled?: boolean
}
export type McpDoctorServerReport = {
serverName: string
requestedByUser: boolean
definitions: McpDoctorDefinition[]
liveCheck: McpDoctorLiveCheck
findings: McpDoctorFinding[]
}
export type McpDoctorDependencies = {
getAllMcpConfigs: typeof getAllMcpConfigs
getMcpConfigsByScope: typeof getMcpConfigsByScope
getProjectMcpServerStatus: typeof getProjectMcpServerStatus
isMcpServerDisabled: typeof isMcpServerDisabled
describeMcpConfigFilePath: typeof describeMcpConfigFilePath
connectToServer: typeof connectToServer
clearServerCache: typeof clearServerCache
}
export type McpDoctorReport = {
generatedAt: string
targetName?: string
scopeFilter?: McpDoctorScopeFilter
configOnly: boolean
summary: {
totalReports: number
healthy: number
warnings: number
blocking: number
}
findings: McpDoctorFinding[]
servers: McpDoctorServerReport[]
}
const DEFAULT_DEPENDENCIES: McpDoctorDependencies = {
getAllMcpConfigs,
getMcpConfigsByScope,
getProjectMcpServerStatus,
isMcpServerDisabled,
describeMcpConfigFilePath,
connectToServer,
clearServerCache,
}
export function buildEmptyDoctorReport(options: {
configOnly: boolean
scopeFilter?: McpDoctorScopeFilter
targetName?: string
}): McpDoctorReport {
return {
generatedAt: new Date().toISOString(),
targetName: options.targetName,
scopeFilter: options.scopeFilter,
configOnly: options.configOnly,
summary: {
totalReports: 0,
healthy: 0,
warnings: 0,
blocking: 0,
},
findings: [],
servers: [],
}
}
function getFindingCode(error: ValidationError): string {
if (error.message === 'MCP config is not a valid JSON') {
return 'config.invalid_json'
}
if (error.message.startsWith('Missing environment variables:')) {
return 'config.missing_env_vars'
}
if (error.message.includes("Windows requires 'cmd /c' wrapper to execute npx")) {
return 'config.windows_npx_wrapper_required'
}
if (error.message === 'Does not adhere to MCP server configuration schema') {
return 'config.invalid_schema'
}
return 'config.validation_error'
}
function getSeverity(error: ValidationError): McpDoctorSeverity {
const severity = error.mcpErrorMetadata?.severity
if (severity === 'fatal') {
return 'error'
}
if (severity === 'warning') {
return 'warn'
}
return 'warn'
}
export function findingsFromValidationErrors(
validationErrors: ValidationError[],
): McpDoctorFinding[] {
return validationErrors.map(error => {
const severity = getSeverity(error)
return {
blocking: severity === 'error',
code: getFindingCode(error),
message: error.message,
remediation: error.suggestion,
scope: error.mcpErrorMetadata?.scope,
serverName: error.mcpErrorMetadata?.serverName,
severity,
sourcePath: error.file,
}
})
}
function splitValidationFindings(validationFindings: McpDoctorFinding[]): {
globalFindings: McpDoctorFinding[]
serverFindingsByName: Map<string, McpDoctorFinding[]>
} {
const globalFindings: McpDoctorFinding[] = []
const serverFindingsByName = new Map<string, McpDoctorFinding[]>()
for (const finding of validationFindings) {
if (!finding.serverName) {
globalFindings.push(finding)
continue
}
const findings = serverFindingsByName.get(finding.serverName) ?? []
findings.push(finding)
serverFindingsByName.set(finding.serverName, findings)
}
return {
globalFindings,
serverFindingsByName,
}
}
function getSourceType(config: ScopedMcpServerConfig): McpDoctorDefinition['sourceType'] {
if (config.scope === 'claudeai') {
return 'claudeai'
}
if (config.scope === 'dynamic') {
return config.pluginSource ? 'plugin' : 'dynamic'
}
if (config.scope === 'managed') {
return 'managed'
}
return config.scope
}
function getTransport(config: ScopedMcpServerConfig): string {
return config.type ?? 'stdio'
}
function getConfigSignature(config: ScopedMcpServerConfig): string {
switch (config.type) {
case 'sse':
case 'http':
case 'ws':
case 'claudeai-proxy':
return `${config.scope}:${config.type}:${config.url}`
case 'sdk':
return `${config.scope}:${config.type}:${config.name}`
default:
return `${config.scope}:${config.type ?? 'stdio'}:${config.command}:${JSON.stringify(config.args ?? [])}`
}
}
function isSameDefinition(
config: ScopedMcpServerConfig,
activeConfig: ScopedMcpServerConfig | undefined,
): boolean {
if (!activeConfig) {
return false
}
return getSourceType(config) === getSourceType(activeConfig) && getConfigSignature(config) === getConfigSignature(activeConfig)
}
function buildScopeDefinitions(
name: string,
scope: ConfigScope,
servers: Record<string, ScopedMcpServerConfig>,
activeConfig: ScopedMcpServerConfig | undefined,
deps: McpDoctorDependencies,
): McpDoctorDefinition[] {
const config = servers[name]
if (!config) {
return []
}
const pendingApproval =
scope === 'project' ? deps.getProjectMcpServerStatus(name) === 'pending' : false
const disabled = deps.isMcpServerDisabled(name)
const runtimeActive = !disabled && isSameDefinition(config, activeConfig)
return [
{
name,
sourceType: getSourceType(config),
sourcePath: deps.describeMcpConfigFilePath(scope),
transport: getTransport(config),
runtimeVisible: runtimeActive,
runtimeActive,
pendingApproval,
disabled,
},
]
}
function shouldIncludeScope(
scope: ConfigScope,
scopeFilter: McpDoctorScopeFilter | undefined,
): boolean {
if (!scopeFilter) {
return scope === 'enterprise' || scope === 'local' || scope === 'project' || scope === 'user'
}
return scope === scopeFilter
}
function getValidationErrorsForSelectedScopes(
scopeResults: {
enterprise: ReturnType<McpDoctorDependencies['getMcpConfigsByScope']>
local: ReturnType<McpDoctorDependencies['getMcpConfigsByScope']>
project: ReturnType<McpDoctorDependencies['getMcpConfigsByScope']>
user: ReturnType<McpDoctorDependencies['getMcpConfigsByScope']>
},
scopeFilter: McpDoctorScopeFilter | undefined,
): ValidationError[] {
return [
...(shouldIncludeScope('enterprise', scopeFilter) ? scopeResults.enterprise.errors : []),
...(shouldIncludeScope('local', scopeFilter) ? scopeResults.local.errors : []),
...(shouldIncludeScope('project', scopeFilter) ? scopeResults.project.errors : []),
...(shouldIncludeScope('user', scopeFilter) ? scopeResults.user.errors : []),
]
}
function buildObservedDefinition(
name: string,
activeConfig: ScopedMcpServerConfig,
options?: {
disabled?: boolean
runtimeActive?: boolean
runtimeVisible?: boolean
},
): McpDoctorDefinition {
return {
name,
sourceType: getSourceType(activeConfig),
sourcePath:
getSourceType(activeConfig) === 'plugin'
? `plugin:${activeConfig.pluginSource ?? 'unknown'}`
: getSourceType(activeConfig) === 'claudeai'
? 'claude.ai'
: activeConfig.scope,
transport: getTransport(activeConfig),
runtimeVisible: options?.runtimeVisible ?? true,
runtimeActive: options?.runtimeActive ?? true,
disabled: options?.disabled ?? false,
}
}
function hasDefinitionForRuntimeSource(
definitions: McpDoctorDefinition[],
runtimeConfig: ScopedMcpServerConfig,
deps: McpDoctorDependencies,
): boolean {
const runtimeSourceType = getSourceType(runtimeConfig)
const runtimeSourcePath =
runtimeSourceType === 'plugin'
? `plugin:${runtimeConfig.pluginSource ?? 'unknown'}`
: runtimeSourceType === 'claudeai'
? 'claude.ai'
: deps.describeMcpConfigFilePath(runtimeConfig.scope)
return definitions.some(
definition =>
definition.sourceType === runtimeSourceType &&
definition.sourcePath === runtimeSourcePath &&
definition.transport === getTransport(runtimeConfig),
)
}
function buildShadowingFindings(definitions: McpDoctorDefinition[]): McpDoctorFinding[] {
const userEditable = definitions.filter(definition =>
definition.sourceType === 'local' ||
definition.sourceType === 'project' ||
definition.sourceType === 'user' ||
definition.sourceType === 'enterprise',
)
if (userEditable.length <= 1) {
return []
}
const active = userEditable.find(definition => definition.runtimeActive) ?? userEditable[0]
return [
{
blocking: false,
code: 'duplicate.same_name_multiple_scopes',
message: `Server is defined in multiple config scopes; active source is ${active.sourceType}`,
remediation: 'Remove or rename one of the duplicate definitions to avoid confusion.',
serverName: active.name,
severity: 'warn',
},
{
blocking: false,
code: 'scope.shadowed',
message: `${active.name} has shadowed definitions in lower-precedence config scopes.`,
remediation: 'Inspect the other definitions and remove the ones you no longer want to keep.',
serverName: active.name,
severity: 'warn',
},
]
}
function buildStateFindings(definitions: McpDoctorDefinition[]): McpDoctorFinding[] {
const findings: McpDoctorFinding[] = []
for (const definition of definitions) {
if (definition.pendingApproval) {
findings.push({
blocking: false,
code: 'state.pending_project_approval',
message: `${definition.name} is declared in project config but pending project approval.`,
remediation: 'Approve the server in the project MCP approval flow before expecting it to become active.',
scope: 'project',
serverName: definition.name,
severity: 'warn',
sourcePath: definition.sourcePath,
})
}
if (definition.disabled) {
findings.push({
blocking: false,
code: 'state.disabled',
message: `${definition.name} is currently disabled.`,
remediation: 'Re-enable the server before expecting it to be available at runtime.',
serverName: definition.name,
severity: 'warn',
sourcePath: definition.sourcePath,
})
}
}
return findings
}
function summarizeReport(report: McpDoctorReport): McpDoctorReport {
const allFindings = [...report.findings, ...report.servers.flatMap(server => server.findings)]
const blocking = allFindings.filter(finding => finding.blocking).length
const warnings = allFindings.filter(finding => finding.severity === 'warn').length
const healthy = report.servers.filter(
server =>
server.liveCheck.result === 'connected' &&
server.findings.every(finding => !finding.blocking && finding.severity !== 'warn'),
).length
return {
...report,
summary: {
totalReports: report.servers.length,
healthy,
warnings,
blocking,
},
}
}
async function getLiveCheck(
name: string,
activeConfig: ScopedMcpServerConfig | undefined,
configOnly: boolean,
definitions: McpDoctorDefinition[],
deps: McpDoctorDependencies,
): Promise<McpDoctorLiveCheck> {
if (configOnly) {
return { attempted: false, result: 'skipped' }
}
if (!activeConfig) {
if (definitions.some(definition => definition.pendingApproval)) {
return { attempted: false, result: 'pending' }
}
if (definitions.some(definition => definition.disabled)) {
return { attempted: false, result: 'disabled' }
}
return { attempted: false, result: 'skipped' }
}
const startedAt = Date.now()
const connection = await deps.connectToServer(name, activeConfig)
const durationMs = Date.now() - startedAt
try {
switch (connection.type) {
case 'connected':
return { attempted: true, result: 'connected', durationMs }
case 'needs-auth':
return { attempted: true, result: 'needs-auth', durationMs }
case 'disabled':
return { attempted: true, result: 'disabled', durationMs }
case 'pending':
return { attempted: true, result: 'pending', durationMs }
case 'failed':
return {
attempted: true,
result: 'failed',
durationMs,
error: connection.error,
}
}
} finally {
await deps.clearServerCache(name, activeConfig).catch(() => {
// Best-effort cleanup for diagnostic connections.
})
}
}
function buildLiveFindings(
name: string,
definitions: McpDoctorDefinition[],
liveCheck: McpDoctorLiveCheck,
): McpDoctorFinding[] {
const activeDefinition = definitions.find(definition => definition.runtimeActive)
if (liveCheck.result === 'needs-auth') {
return [
{
blocking: false,
code: 'auth.needs_auth',
message: `${name} requires authentication before it can be used.`,
remediation: 'Authenticate the server and then rerun the doctor command.',
serverName: name,
severity: 'warn',
sourcePath: activeDefinition?.sourcePath,
},
]
}
if (liveCheck.result === 'failed') {
const commandNotFound =
activeDefinition?.transport === 'stdio' &&
typeof liveCheck.error === 'string' &&
liveCheck.error.toLowerCase().includes('not found')
return [
{
blocking: true,
code: commandNotFound ? 'stdio.command_not_found' : 'health.failed',
message: liveCheck.error
? `${name} failed its live health check: ${liveCheck.error}`
: `${name} failed its live health check.`,
remediation: commandNotFound
? 'Verify the configured executable exists on PATH or use a full executable path.'
: 'Inspect the server configuration and retry the connection once the underlying problem is fixed.',
serverName: name,
severity: 'error',
sourcePath: activeDefinition?.sourcePath,
},
]
}
return []
}
async function buildServerReport(
name: string,
options: {
configOnly: boolean
requestedByUser: boolean
scopeFilter?: McpDoctorScopeFilter
},
validationFindingsByName: Map<string, McpDoctorFinding[]>,
deps: McpDoctorDependencies,
): Promise<McpDoctorServerReport> {
const scopeResults = {
enterprise: deps.getMcpConfigsByScope('enterprise'),
local: deps.getMcpConfigsByScope('local'),
project: deps.getMcpConfigsByScope('project'),
user: deps.getMcpConfigsByScope('user'),
}
const { servers: activeServers } = await deps.getAllMcpConfigs()
const serverDisabled = deps.isMcpServerDisabled(name)
const runtimeConfig = activeServers[name] ?? undefined
const activeConfig = serverDisabled ? undefined : runtimeConfig
const definitions = [
...(shouldIncludeScope('enterprise', options.scopeFilter)
? buildScopeDefinitions(name, 'enterprise', scopeResults.enterprise.servers, activeConfig, deps)
: []),
...(shouldIncludeScope('local', options.scopeFilter)
? buildScopeDefinitions(name, 'local', scopeResults.local.servers, activeConfig, deps)
: []),
...(shouldIncludeScope('project', options.scopeFilter)
? buildScopeDefinitions(name, 'project', scopeResults.project.servers, activeConfig, deps)
: []),
...(shouldIncludeScope('user', options.scopeFilter)
? buildScopeDefinitions(name, 'user', scopeResults.user.servers, activeConfig, deps)
: []),
]
const shouldAddObservedDefinition =
!!runtimeConfig &&
!hasDefinitionForRuntimeSource(definitions, runtimeConfig, deps) &&
((definitions.length === 0 && !options.scopeFilter) ||
(definitions.length > 0 && definitions.every(definition => !definition.runtimeActive)))
if (runtimeConfig && shouldAddObservedDefinition) {
definitions.push(
buildObservedDefinition(name, runtimeConfig, {
disabled: serverDisabled,
runtimeActive: !serverDisabled,
runtimeVisible: !serverDisabled,
}),
)
}
const visibleRuntimeConfig =
definitions.some(definition => definition.runtimeActive) || shouldAddObservedDefinition
? activeConfig
: undefined
const findings: McpDoctorFinding[] = [
...(validationFindingsByName.get(name) ?? []),
...buildShadowingFindings(definitions),
...buildStateFindings(definitions),
]
if (definitions.length === 0 && !shouldAddObservedDefinition) {
findings.push({
blocking: true,
code: 'state.not_found',
message: `${name} was not found in the selected MCP configuration sources.`,
remediation: 'Check the server name and scope, or add the MCP server before retrying.',
serverName: name,
severity: 'error',
})
}
const liveCheck = await getLiveCheck(name, visibleRuntimeConfig, options.configOnly, definitions, deps)
findings.push(...buildLiveFindings(name, definitions, liveCheck))
return {
serverName: name,
requestedByUser: options.requestedByUser,
definitions,
liveCheck,
findings,
}
}
function getServerNames(
scopeServers: Array<Record<string, ScopedMcpServerConfig>>,
activeServers: Record<string, ScopedMcpServerConfig>,
includeActiveServers: boolean,
): string[] {
const names = new Set<string>(includeActiveServers ? Object.keys(activeServers) : [])
for (const servers of scopeServers) {
for (const name of Object.keys(servers)) {
names.add(name)
}
}
return [...names].sort()
}
export async function doctorAllServers(
options: { configOnly: boolean; scopeFilter?: McpDoctorScopeFilter } = {
configOnly: false,
},
deps: McpDoctorDependencies = DEFAULT_DEPENDENCIES,
): Promise<McpDoctorReport> {
const report = buildEmptyDoctorReport(options)
const scopeResults = {
enterprise: deps.getMcpConfigsByScope('enterprise'),
local: deps.getMcpConfigsByScope('local'),
project: deps.getMcpConfigsByScope('project'),
user: deps.getMcpConfigsByScope('user'),
}
const validationFindings = findingsFromValidationErrors(
getValidationErrorsForSelectedScopes(scopeResults, options.scopeFilter),
)
const { globalFindings, serverFindingsByName } = splitValidationFindings(validationFindings)
const { servers: activeServers } = await deps.getAllMcpConfigs()
const names = getServerNames(
[
...(shouldIncludeScope('enterprise', options.scopeFilter) ? [scopeResults.enterprise.servers] : []),
...(shouldIncludeScope('local', options.scopeFilter) ? [scopeResults.local.servers] : []),
...(shouldIncludeScope('project', options.scopeFilter) ? [scopeResults.project.servers] : []),
...(shouldIncludeScope('user', options.scopeFilter) ? [scopeResults.user.servers] : []),
],
activeServers,
!options.scopeFilter,
)
const servers = await Promise.all(
names.map(name =>
buildServerReport(
name,
{
configOnly: options.configOnly,
requestedByUser: false,
scopeFilter: options.scopeFilter,
},
serverFindingsByName,
deps,
),
),
)
report.servers = servers
report.findings = globalFindings
return summarizeReport(report)
}
export async function doctorServer(
name: string,
options: { configOnly: boolean; scopeFilter?: McpDoctorScopeFilter },
deps: McpDoctorDependencies = DEFAULT_DEPENDENCIES,
): Promise<McpDoctorReport> {
const report = buildEmptyDoctorReport({ ...options, targetName: name })
const scopeResults = {
enterprise: deps.getMcpConfigsByScope('enterprise'),
local: deps.getMcpConfigsByScope('local'),
project: deps.getMcpConfigsByScope('project'),
user: deps.getMcpConfigsByScope('user'),
}
const validationFindings = findingsFromValidationErrors(
getValidationErrorsForSelectedScopes(scopeResults, options.scopeFilter),
)
const { globalFindings, serverFindingsByName } = splitValidationFindings(validationFindings)
const server = await buildServerReport(
name,
{
configOnly: options.configOnly,
requestedByUser: true,
scopeFilter: options.scopeFilter,
},
serverFindingsByName,
deps,
)
report.servers = [server]
report.findings = globalFindings
return summarizeReport(report)
}
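// Usage sketch (illustrative): run a config-only diagnosis for one server
// and print any blocking findings.
//
//   const report = await doctorServer('github', { configOnly: true })
//   const blocking = [
//     ...report.findings,
//     ...report.servers.flatMap(server => server.findings),
//   ].filter(finding => finding.blocking)
//   for (const finding of blocking) {
//     console.error(`[${finding.code}] ${finding.message}`)
//   }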

View File

@@ -35,7 +35,7 @@ export async function sendNotification(
})
}
const DEFAULT_TITLE = 'Claude Code'
const DEFAULT_TITLE = 'Open Claude'
async function sendToChannel(
channel: string,

View File

@@ -6,6 +6,7 @@ import {
type AnalyticsMetadata_I_VERIFIED_THIS_IS_NOT_CODE_OR_FILEPATHS,
logEvent,
} from 'src/services/analytics/index.js'
import { isAntEmployee } from 'src/utils/buildConfig.js'
import { getCwd } from 'src/utils/cwd.js'
import { checkForReleaseNotes } from 'src/utils/releaseNotes.js'
import { setCwd } from 'src/utils/Shell.js'
@@ -334,7 +335,7 @@ export async function setup(
// overhead. NOT an early-return: the --dangerously-skip-permissions safety
// gate, tengu_started beacon, and apiKeyHelper prefetch below must still run.
if (!isBareMode()) {
if (process.env.USER_TYPE === 'ant') {
if (isAntEmployee()) {
// Prime repo classification cache for auto-undercover mode. Default is
// undercover ON until proven internal; if this resolves to internal, clear
// the prompt cache so the next turn picks up the OFF state.
@@ -414,7 +415,7 @@ export async function setup(
}
if (
process.env.USER_TYPE === 'ant' &&
isAntEmployee() &&
// Skip for Desktop's local agent mode — same trust model as CCR/BYOC
// (trusted Anthropic-managed launcher intentionally pre-approving everything).
// Precedent: permissionSetup.ts:861, applySettingsChange.ts:55 (PR #19116)

View File

@@ -1,4 +1,5 @@
import { setMainLoopModelOverride } from '../bootstrap/state.js'
import { isAntEmployee } from '../utils/buildConfig.js'
import {
clearApiKeyHelperCache,
clearAwsCredentialsCache,
@@ -140,7 +141,7 @@ export function onChangeAppState({
}
// tungstenPanelVisible (ant-only tmux panel sticky toggle)
if (process.env.USER_TYPE === 'ant') {
if (isAntEmployee()) {
if (
newState.tungstenPanelVisible !== oldState.tungstenPanelVisible &&
newState.tungstenPanelVisible !== undefined &&

View File

@@ -10,7 +10,7 @@
*/
import type { UUID } from 'crypto'
import { randomBytes } from 'crypto'
import { randomInt } from 'crypto'
import {
OUTPUT_FILE_TAG,
STATUS_TAG,
@@ -73,10 +73,9 @@ const DEFAULT_MAIN_SESSION_AGENT: CustomAgentDefinition = {
const TASK_ID_ALPHABET = '0123456789abcdefghijklmnopqrstuvwxyz'
function generateMainSessionTaskId(): string {
const bytes = randomBytes(8)
let id = 's'
for (let i = 0; i < 8; i++) {
id += TASK_ID_ALPHABET[bytes[i]! % TASK_ID_ALPHABET.length]
id += TASK_ID_ALPHABET[randomInt(TASK_ID_ALPHABET.length)]!
}
return id
}

File diff suppressed because one or more lines are too long

View File

@@ -21,7 +21,7 @@ function getExploreSystemPrompt(): string {
? `- Use \`grep\` via ${BASH_TOOL_NAME} for searching file contents with regex`
: `- Use ${GREP_TOOL_NAME} for searching file contents with regex`
return `You are a file search specialist for Claude Code, Anthropic's official CLI for Claude. You excel at thoroughly navigating and exploring codebases.
return `You are a file search specialist for OpenClaude, an open-source fork of Claude Code. You excel at thoroughly navigating and exploring codebases.
=== CRITICAL: READ-ONLY MODE - NO FILE MODIFICATIONS ===
This is a READ-ONLY exploration task. You are STRICTLY PROHIBITED from:

Some files were not shown because too many files have changed in this diff.