Compare commits


70 Commits

Author SHA1 Message Date
Juan Camilo
037a855528 fix: strip Anthropic-specific params from 3P provider paths
Three silent failure modes affecting all third-party provider users:

1. Thinking blocks serialized as <thinking> text corrupt multi-turn
   context — strip them instead of converting to raw text tags.

2. Unknown models fall through to 200k context window default, so
   auto-compact never triggers — use conservative 8k for unknown
   3P models with a warning log.

3. Session resume with thinking blocks causes 400 or context corruption
   on 3P providers — strip thinking/redacted_thinking content blocks
   from deserialized messages when resuming against a non-Anthropic
   provider.

Addresses findings 2, 3, and 5 from #248.
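
A minimal sketch of the resume-path stripping in item 3, assuming Anthropic-style content blocks; the type and function names here are illustrative, not the repo's actual identifiers:

```typescript
// Hypothetical block shapes; the real code deals with Anthropic SDK types.
type ContentBlock =
  | { type: "text"; text: string }
  | { type: "thinking"; thinking: string }
  | { type: "redacted_thinking"; data: string }
  | { type: "tool_use"; id: string; name: string; input: unknown };

interface ChatMessage {
  role: "user" | "assistant";
  content: ContentBlock[];
}

// On resume against a non-Anthropic provider, drop thinking blocks
// instead of serializing them as <thinking> text (failure modes 1 and 3).
function stripThinkingBlocks(messages: ChatMessage[]): ChatMessage[] {
  return messages
    .map((m) => ({
      ...m,
      content: m.content.filter(
        (b) => b.type !== "thinking" && b.type !== "redacted_thinking",
      ),
    }))
    // A message left with no blocks would be rejected by most APIs.
    .filter((m) => m.content.length > 0);
}
```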
2026-04-03 14:05:34 +02:00
Vasanth T
7c0ea68b65 fix: address code scanning alerts (#240) 2026-04-03 14:52:35 +05:30
KRATOS
f3a984dde1 fix(security-review): Handle null shell output (#231)
Normalize shell command stdout and stderr before the prompt-shell path and shared tool-result mappers use string operations. This prevents /security-review from crashing when a shell tool returns null output fields and adds regression coverage for both direct mapper calls and prompt generation.

Fixes #165
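
The normalization reduces to a null-coalescing pass over the output fields before any string operations run; a minimal sketch with illustrative field names:

```typescript
interface ShellOutput {
  stdout?: string | null;
  stderr?: string | null;
  exitCode: number;
}

// Coerce null/undefined output to empty strings so downstream
// .trim()/.slice()/.includes() calls never crash on null.
function normalizeShellOutput(result: ShellOutput) {
  return {
    ...result,
    stdout: result.stdout ?? "",
    stderr: result.stderr ?? "",
  };
}
```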

Co-authored-by: Claude <noreply@anthropic.com>
2026-04-03 10:14:28 +02:00
Brendan
72c6e97094 fix: route ask-user-question footer actions through useInput (#229) 2026-04-03 10:14:17 +02:00
Preetham
f3ab727ec2 Added LM Studio provider setup guide (#227)
* ## PR: Add LM Studio Provider Support

### Summary
Adds comprehensive LM Studio integration to openclaude, following the same pattern as the existing Ollama provider. LM Studio is a popular local LLM inference tool that exposes an OpenAI-compatible API.

### Changes (4 files, 672 insertions)

**New Files:**
- `lmstudio_provider.py` (377 lines) - Full provider implementation with:
  - Health check functions (`check_lmstudio_running`)
  - Model listing (`list_lmstudio_models`)
  - Chat completion (`lmstudio_chat`)
  - Streaming support (`lmstudio_chat_stream`)
  - Comprehensive docstring with setup instructions, troubleshooting, and model recommendations

- `test_lmstudio_provider.py` (227 lines) - Complete test suite with 12 passing tests covering:
  - API URL construction
  - Server health checks
  - Model listing
  - Chat completion functionality

**Modified Files:**
- `docs/quick-start-mac-linux.md` (+34 lines) - Added Option D: LM Studio with setup instructions and troubleshooting
- `docs/quick-start-windows.md` (+34 lines) - Added Option D: LM Studio with PowerShell syntax and troubleshooting

### Key Features
- No API key required (local inference)
- Default port: 1234 (LM Studio's standard)
- OpenAI-compatible API integration
- Consistent with existing provider patterns (Ollama, Atomic Chat)
- All tests passing (12/12)

### Usage
```bash
export CLAUDE_CODE_USE_OPENAI=1
export OPENAI_BASE_URL=http://localhost:1234/v1
export OPENAI_MODEL=your-model-name
openclaude
```

* made PR doc-only for LM Studio

* updated doc for recent LM Studio UI changes
2026-04-03 12:45:57 +08:00
Vasanth T
29edece72f docs: refresh repository README (#226) 2026-04-03 11:39:26 +08:00
Vasanth T
6181050811 chore: patch dependabot vulnerabilities (#225) 2026-04-03 11:34:09 +08:00
Adam Solomon
0fd0026a76 feat: (Extension of #175) added cross-platform system-wide environment variable setup guide for all providers (#185)
* added instructions to .env.example so openclaude can be used system-wide

* added suggested .env.example changes

Applied the .env.example changes suggested earlier in the PR thread.
2026-04-03 11:27:14 +08:00
KRATOS
6919d774f2 fix: custom OPENAI_BASE_URL always wins over Codex model alias detection (#222)
* feat: add --provider CLI flag for multi-provider support

Adds a --provider flag that maps friendly provider names to the
environment variables the codebase uses for provider detection.
No more manual env-var configuration — users can now simply run:

  openclaude --provider openai --model gpt-4o
  openclaude --provider gemini --model gemini-2.0-flash
  openclaude --provider ollama --model llama3.2
  openclaude --provider bedrock
  openclaude --provider vertex

Implementation details:
- providerFlag.ts: core logic — maps provider names to env vars,
  uses ??= so explicit env vars always win over the flag defaults
- providerFlag.test.ts: 18 tests covering all 7 providers,
  error messages, model passthrough, and env-var precedence
- cli.tsx: early fast-path (mirrors --bare pattern) — sets env
  vars before Commander option-building and module constants run
- main.tsx: adds --provider to Commander option chain for --help

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>

* fix: custom OPENAI_BASE_URL always wins over Codex model alias detection

When OPENAI_MODEL=gpt-5.4 (or gpt-5.4-mini) and a custom OPENAI_BASE_URL
is set (Azure, OpenRouter, etc), the transport was incorrectly forced to
codex_responses because gpt-5.4 is in CODEX_ALIAS_MODELS. This caused
requests to be sent with Codex auth instead of the user's API key,
resulting in 401 Unauthorized errors.

Fix: only use codex_responses when the base URL is explicitly the Codex
endpoint, OR when no custom base URL is set and the model is a Codex
alias. An explicit OPENAI_BASE_URL always takes priority over model-name
based Codex detection.

Verified locally: gpt-5.4 via OpenRouter now correctly shows
Provider=OpenRouter, Endpoint=https://openrouter.ai/api/v1 instead of
routing to chatgpt.com/backend-api/codex.

Fixes #200, #203
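
The precedence rule reduces to roughly the following; constants are illustrative (the commit names CODEX_ALIAS_MODELS and the Codex endpoint), and the real selection logic may carry more cases:

```typescript
const CODEX_ALIAS_MODELS = new Set(["gpt-5.4", "gpt-5.4-mini"]); // per the commit
const CODEX_BASE_URL = "https://chatgpt.com/backend-api/codex";

type Transport = "codex_responses" | "chat_completions";

function selectTransport(model: string, baseUrl: string | undefined): Transport {
  // An explicit custom base URL (Azure, OpenRouter, ...) always wins.
  if (baseUrl && baseUrl !== CODEX_BASE_URL) return "chat_completions";
  // No custom URL (or the Codex endpoint itself): alias detection applies.
  return CODEX_ALIAS_MODELS.has(model) ? "codex_responses" : "chat_completions";
}
```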

---------

Co-authored-by: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-04-03 11:11:10 +08:00
Vasanth T
aa69e85795 fix: correct prompt identity branding (#224) 2026-04-03 11:06:26 +08:00
Kevin Codex
66bbb75836 Merge pull request #221 from gnanam1990/fix/keyboard-freeze-mcp-notifications
fix: prevent keyboard freeze when MCP notification effects fire
2026-04-03 10:27:11 +08:00
gnanam1990
2c6ec0119e fix: prevent keyboard freeze when MCP notification effects fire
React 19 requires `supportsMicrotasks: true` in the reconciler host
config so it can flush state updates from passive effects via
queueMicrotask. Without this, state updates triggered inside
useMcpConnectivityStatus were silently dropped, corrupting React's
internal executionContext and causing all keyboard input to freeze
after the "N MCP server(s) need auth" notification appeared.

Root cause (three-part fix):

1. reconciler.ts: declare supportsMicrotasks + scheduleMicrotask so
   React 19 schedules passive-effect flushes correctly.

2. useMcpConnectivityStatus.tsx: wrap the MCP auth notification effect
   in try/catch so any unexpected throw does not propagate into
   flushPassiveEffects and permanently corrupt executionContext.

3. notifications.tsx: wrap addNotification, removeNotification, and
   processQueue in try/catch for the same reason — these are called
   from 12+ notification hooks across passive effects.

Also fixes a pre-existing test isolation bug in context.test.ts where
assigning `undefined` to process.env produced the string "undefined",
polluting the env for subsequent test files.

Resolves: #169, #205, #77
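
The reconciler half of the fix is two host-config fields; a sketch of what part 1 describes, assuming a react-reconciler host config like the repo's existing one:

```typescript
// react-reconciler host-config additions so React 19 can flush
// passive-effect state updates via microtasks.
const hostConfig = {
  // ...existing host-config methods...
  supportsMicrotasks: true,
  scheduleMicrotask:
    typeof queueMicrotask === "function"
      ? queueMicrotask
      : (cb: () => void) => {
          Promise.resolve().then(cb);
        },
};
```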
2026-04-03 07:41:53 +05:30
Kevin Codex
74a25d01a6 Merge pull request #206 from alamnahin/feat/ollama-image-passthrough
feat(ollama): pass Anthropic base64 image blocks to Ollama images payload
2026-04-03 10:06:08 +08:00
Kevin Codex
7cf4c88ab8 docs: add security policy 2026-04-03 09:40:17 +08:00
Kevin Codex
f68b9aa57d Create SECURITY.md 2026-04-03 09:17:21 +08:00
Kevin Codex
20d1ee8427 Merge pull request #207 from alamnahin/feat/router-large-request-modeling
fix(router): use large message size when selecting models
2026-04-03 08:58:29 +08:00
Kevin Codex
089a42fc07 Merge pull request #211 from joetam/fix-image-paste-stubs
fix linux clipboard image paste for jpeg/gif/webp
2026-04-03 08:55:50 +08:00
jmt
f5b20fc517 fix: make clipboard images pasteable in OpenClaude
Images in the clipboard could fail to become pasted image attachments in OpenClaude. User-facing symptom: paste would detect that an image existed, but nothing would appear in the prompt, and bundled builds could also fail while converting BMP clipboard images into a format OpenClaude can send to the model.

Linux clipboard image paste had drifted between detection and extraction. checkImage accepted png/jpeg/jpg/gif/webp/bmp, but saveImage only tried image/png and image/bmp. When the clipboard advertised a JPEG, GIF, or WebP image, OpenClaude concluded that an image was present and then failed to write the temp screenshot file, so the paste path returned null and nothing was inserted into the prompt.

Bundled OpenClaude builds had a second failure mode. The build replaces image-processor-napi and sharp with explicit stub modules in bundled mode. getImageProcessor() treated those stubs as real processors, so BMP clipboard images reached sharp(imageBuffer).png() and then failed before they could be converted into a pasteable PNG for OpenClaude.

Keep the Linux clipboard commands generated from one MIME type list and reject __stub-marked image processors up front instead of failing in the middle of image paste.
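
A sketch of the single-source-of-truth MIME list and the stub guard; the function names, the xclip command, and the exact `__stub` check are illustrative, following the commit's description rather than the repo's code:

```typescript
// One list drives both detection and extraction, so they cannot drift.
const CLIPBOARD_IMAGE_MIMES = [
  "image/png", "image/jpeg", "image/gif", "image/webp", "image/bmp",
];

function isClipboardImageMime(mime: string): boolean {
  return CLIPBOARD_IMAGE_MIMES.includes(mime);
}

// Illustrative extraction command built from the same list, e.g.:
// xclip -selection clipboard -t image/jpeg -o > /tmp/paste.jpg
function saveCommandFor(mime: string, outPath: string): string {
  return `xclip -selection clipboard -t ${mime} -o > ${outPath}`;
}

// Reject __stub-marked image processors up front instead of mid-paste.
function isStubProcessor(mod: unknown): boolean {
  return typeof mod === "object" && mod !== null && "__stub" in mod;
}
```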
2026-04-02 15:51:49 -07:00
Md.Nahin Alam
184ec250fd test(router): scope FAKE_KEY via pytest monkeypatch fixture 2026-04-03 04:18:20 +06:00
Md.Nahin Alam
43deb49c2c fix(router): use large request size for model selection 2026-04-03 03:45:33 +06:00
Md.Nahin Alam
0e7a2446c7 feat(ollama): pass base64 image blocks through to Ollama payload 2026-04-03 03:29:00 +06:00
Kevin Codex
63ad0196d6 Merge pull request #172 from devNull-bootloader/main
Add OpenClaude VS Code extension with terminal UI and control center
2026-04-03 02:25:03 +08:00
Kevin Codex
32046e9b40 Merge pull request #191 from BrainSlugs83/security/pin-firecrawl-js-dependency
security: pin @mendable/firecrawl-js to exact version
2026-04-03 02:19:17 +08:00
Mikey
7bd7d0f54d security: pin @mendable/firecrawl-js to exact version
Pins @mendable/firecrawl-js from ^4.18.1 to 4.18.1, consistent with
the pinning policy established in #102.
2026-04-02 11:07:54 -07:00
Kevin Codex
cdf4bad95b Merge pull request #182 from wrenchpilot/development
Enhance local provider URL validation for private IP addresses
2026-04-03 01:56:20 +08:00
Urvish Lanje
4158214895 Merge pull request #3 from devNull-bootloader/feat/initial-vscode-extension
fix: address review feedback for launcher behavior and links
2026-04-02 19:53:21 +02:00
Kevin Codex
a6ed57d0f4 Merge pull request #161 from auriti/fix/block-update-for-3p-providers
fix: block update command for 3P providers, align thinking block handling
2026-04-03 01:52:54 +08:00
James Shawn Carnley
7b68eb1acb Enhance local provider URL detection for IPv6 and loopback ranges 2026-04-02 13:46:10 -04:00
Kevin Codex
84950642ae Merge pull request #168 from firecrawl/add-firecrawl
feat: add Firecrawl backend for WebSearch and WebFetch
2026-04-03 01:43:25 +08:00
Kevin Codex
a287597273 Merge pull request #162 from auriti/fix/provider-aware-error-messages
fix: provider-aware error messages and skip Anthropic key approval for 3P
2026-04-03 01:42:15 +08:00
Kevin Codex
1cd4164062 Merge pull request #159 from Ghoul07-bit/main
Android Termux installation guide
2026-04-03 01:17:17 +08:00
Kevin Codex
47c53a18e8 Merge pull request #174 from gnanam1990/feat/provider-aware-rate-limit
feat: provider-aware rate limit reset delay for OpenAI/GitHub/Codex providers
2026-04-03 01:16:58 +08:00
Urvish Lanje
cf90457428 fix: address review feedback for launcher behavior and links
- point all repository links to Gitlawb/openclaude
- make shim opt-in by default to preserve Anthropic-first flow
- add command availability check with first-run install guidance
- render runtime and shim state dynamically in control center
- make command palette shortcut hint platform-aware
2026-04-02 17:14:42 +00:00
James Shawn Carnley
5e77d82620 Merge branch 'Gitlawb:main' into development 2026-04-02 12:55:59 -04:00
Kevin Codex
11d9660a80 Merge pull request #157 from erdemozyol/fix/status-tab-highlight
fix: refresh tab highlight on horizontal navigation
2026-04-03 00:55:33 +08:00
Kevin Codex
1a57335d74 Merge pull request #160 from auriti/fix/shim-ids-azure-safety
fix: crypto.randomUUID for IDs, Azure Foundry detection, safety filter visibility
2026-04-03 00:54:49 +08:00
Kevin Codex
7bc903d875 Merge pull request #156 from auriti/fix/model-lookup-and-llama-context
fix: deterministic prefix matching and correct Llama 3.x context windows
2026-04-03 00:53:42 +08:00
Kevin Codex
4c22de2585 Merge pull request #179 from gnanam1990/fix/gemini-routing
fix: route CLAUDE_CODE_USE_GEMINI through OpenAI-compatible shim
2026-04-03 00:50:21 +08:00
Leonardo Grigorio
63daf33b48 docs: add Firecrawl section to README 2026-04-02 13:47:59 -03:00
James Shawn Carnley
2ee43d7ee8 Merge branch 'Gitlawb:main' into development 2026-04-02 12:43:24 -04:00
Kevin Codex
3581d3f83f Merge pull request #142 from skfallin/fix/anthropic-schema-format
Strip incompatible JSON Schema keywords from tool schemas
2026-04-03 00:26:45 +08:00
James Shawn Carnley
4a4394bb65 feat: enhance local provider URL validation to include private IPv4 and IPv6 addresses 2026-04-02 12:26:23 -04:00
gnanam1990
b4aa27183d fix: route CLAUDE_CODE_USE_GEMINI through OpenAI-compatible shim
The Gemini provider uses Google's OpenAI-compatible endpoint
(generativelanguage.googleapis.com/v1beta/openai) but the client
routing condition in client.ts only checked CLAUDE_CODE_USE_OPENAI
and CLAUDE_CODE_USE_GITHUB — CLAUDE_CODE_USE_GEMINI was missing.

This caused every Gemini request to fall through to the Anthropic
client path. Since ANTHROPIC_API_KEY is not set when using Gemini,
the Anthropic SDK threw:

  "Could not resolve authentication method. Expected either apiKey
   or authToken to be set."

Fix: add CLAUDE_CODE_USE_GEMINI to the OpenAI shim routing condition
so Gemini requests correctly reach createOpenAIShimClient(), which
maps GEMINI_API_KEY → OPENAI_API_KEY and sets OPENAI_BASE_URL to
the Google endpoint.

Closes #176
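
In code, the routing condition gains one clause; a sketch assuming the flags are plain truthy env vars:

```typescript
// client.ts routing (sketch): previously only the first two flags were
// checked, so Gemini requests fell through to the Anthropic client.
const useOpenAIShim = Boolean(
  process.env.CLAUDE_CODE_USE_OPENAI ||
  process.env.CLAUDE_CODE_USE_GITHUB ||
  process.env.CLAUDE_CODE_USE_GEMINI, // the missing condition
);
```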

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-04-02 21:51:26 +05:30
Kevin Codex
96b9e0235b Merge pull request #177 from gnanam1990/feat/env-example
feat: add .env.example with all provider configurations
2026-04-03 00:16:38 +08:00
gnanam1990
7095abb837 feat: add .env.example with all provider configurations
New contributors had to hunt through README and source files to find
required environment variables. This adds a single reference file at
repo root covering all supported providers with placeholder values,
inline comments, and sensible defaults.

Providers covered:
- Anthropic (default)
- OpenAI
- Google Gemini
- GitHub Models
- Ollama (local)
- AWS Bedrock
- Google Vertex AI

Also includes optional tuning vars: CLAUDE_CODE_MAX_RETRIES,
CLAUDE_CODE_UNATTENDED_RETRY, OPENCLAUDE_ENABLE_EXTENDED_KEYS,
OPENCLAUDE_DISABLE_CO_AUTHORED_BY, API_TIMEOUT_MS, CLAUDE_DEBUG.

Updated .gitignore to add !.env.example exception so the template
is not suppressed by the existing .env.* rule.

Closes #175

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-04-02 21:43:49 +05:30
gnanam1990
8501786852 feat: provider-aware rate limit reset delay
Previously getRateLimitResetDelayMs only read the Anthropic-specific
'anthropic-ratelimit-unified-reset' header (Unix timestamp), returning
null for every other provider. This meant OpenAI, GitHub, and Codex
users in persistent retry mode (CLAUDE_CODE_UNATTENDED_RETRY=1) always
fell back to dumb exponential backoff even when the server included an
exact reset time in the response headers.

This change makes the function provider-aware:

- firstParty (Anthropic): existing behaviour preserved — reads
  'anthropic-ratelimit-unified-reset' Unix timestamp
- openai / codex / github: reads 'x-ratelimit-reset-requests' and
  'x-ratelimit-reset-tokens' (OpenAI relative duration strings like
  "1s", "6m0s", "1h30m0s"), picks the larger of the two so retries
  don't fire before both token and request limits have reset
- bedrock / vertex / foundry / gemini: returns null (no standard
  reset header for these providers)

Adds parseOpenAIDuration() as an exported helper to convert OpenAI's
duration format into milliseconds.

16 new tests covering all provider paths and edge cases.
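
A sketch of parseOpenAIDuration and the max-of-both-headers rule; the repo's actual parsing may differ in edge cases:

```typescript
// Convert OpenAI relative durations ("1s", "6m0s", "1h30m0s", "59.812s",
// "12ms") into milliseconds; returns null for unrecognized input.
function parseOpenAIDuration(value: string): number | null {
  const re = /(\d+(?:\.\d+)?)(ms|h|m|s)/g;
  let ms = 0;
  let matched = false;
  for (const [, num, unit] of value.matchAll(re)) {
    matched = true;
    const n = parseFloat(num);
    ms +=
      unit === "h" ? n * 3_600_000 :
      unit === "m" ? n * 60_000 :
      unit === "s" ? n * 1_000 : n;
  }
  return matched ? Math.round(ms) : null;
}

// Retry no earlier than when BOTH limits have reset.
function resetDelayMs(headers: Record<string, string | undefined>): number | null {
  const req = parseOpenAIDuration(headers["x-ratelimit-reset-requests"] ?? "");
  const tok = parseOpenAIDuration(headers["x-ratelimit-reset-tokens"] ?? "");
  if (req === null && tok === null) return null;
  return Math.max(req ?? 0, tok ?? 0);
}
```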

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-04-02 21:30:05 +05:30
skfallin
37d4c21739 fix: make schema sanitization provider-specific 2026-04-02 17:57:42 +02:00
Urvish Lanje
a43023705b Merge pull request #2 from devNull-bootloader/feat/initial-vscode-extension
Initial VS Code Extension for OpenClaude
2026-04-02 17:54:40 +02:00
Kevin Codex
73db9b5fd3 Merge pull request #163 from erdemozyol/feat/codex-status-usage
Add Codex usage to /status
2026-04-02 23:54:07 +08:00
Urvish Lanje
2b5cf9f0c1 feat: initial VS Code extension for OpenClaude
Introduce OpenClaude as a first-class VS Code extension with:

- Built-in Control Center sidebar for seamless workflow integration
- Terminal-first design with authentic monospace UI and ASCII styling
- Quick-launch buttons for OpenClaude terminal, repository access, and command palette
- Status display showing runtime and OpenAI shim configuration
- Dark theme optimized for focus and extended development sessions
- Proper extension manifest with activation events and contribution points
- Debug configuration for local development

This extension provides developers with direct access to OpenClaude
without leaving VS Code, enabling a tighter integration with the editor.
2026-04-02 15:50:56 +00:00
Kevin Codex
4237a72b92 Merge pull request #170 from gnanam1990/fix/security-issue-42
security: fix 5 findings from issue #42 — env leak, ant gate, depth DoS, URL parse, CA cert
2026-04-02 23:38:53 +08:00
gnanam1990
942d09ca9c security: fix 5 findings from issue #42 — env leak, ant gate, depth DoS, URL parse, CA cert
Finding 1 [CRITICAL] — sessionRunner leaks full process.env to child
Extract buildChildEnv() with an explicit allowlist of safe OS/runtime vars.
Child process no longer inherits ANTHROPIC_API_KEY, OPENAI_API_KEY, DB
credentials, or any other secret present in the parent shell environment.
Only CLAUDE_CODE_* bridge vars, PATH, HOME, and standard OS env are passed.

Finding 2 [HIGH] — USER_TYPE=ant activatable by external users
Add isAntEmployee() -> false constant in src/utils/buildConfig.ts.
Replace all three direct process.env.USER_TYPE === 'ant' checks in
setup.ts and onChangeAppState.ts so no external user can activate
Anthropic-internal code paths (commit attribution, system prompt clearing,
dangerously-skip-permissions bypass) by setting USER_TYPE in their shell.

Finding 3 [HIGH] — memoryScan.ts unlimited directory walk
Add MAX_DEPTH=3 guard on readdir({ recursive: true }) results.
Deep or symlink-looped memory directories no longer cause an unbounded
blocking walk before the MAX_MEMORY_FILES cap takes effect.

Finding 5 [HIGH] — buildSdkUrl uses string.includes for protocol detection
Replace apiBaseUrl.includes('localhost') with new URL(apiBaseUrl).hostname
comparison so a remote URL containing 'localhost' in its path no longer
incorrectly gets ws:// (unencrypted) instead of wss://.

Finding 6 [HIGH] — upstream proxy writes unvalidated CA cert to disk
Add isValidPemContent() validation before writeFile in the CA cert download
path. A compromised proxy sending non-PEM data (HTML, JSON, scripts) is now
rejected before it can be appended to the system CA bundle.

Each fix is covered by new unit tests (25 tests across 5 new test files).
All 52 tests pass. Build verified clean on v0.1.7.

Fixes #42
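
Minimal sketches of Findings 1 and 5; the allowlist here is abbreviated and the function shapes are illustrative:

```typescript
// Finding 1: explicit allowlist; secrets are simply never copied over.
function buildChildEnv(parent: NodeJS.ProcessEnv): NodeJS.ProcessEnv {
  const allowExact = new Set(["PATH", "HOME", "SHELL", "TERM", "LANG", "TMPDIR"]);
  const child: NodeJS.ProcessEnv = {};
  for (const [key, value] of Object.entries(parent)) {
    if (allowExact.has(key) || key.startsWith("CLAUDE_CODE_")) child[key] = value;
  }
  return child;
}

// Finding 5: parse the URL instead of substring matching, so a remote
// URL with "localhost" in its path no longer downgrades to ws://.
function wsScheme(apiBaseUrl: string): "ws:" | "wss:" {
  const { hostname } = new URL(apiBaseUrl);
  return hostname === "localhost" || hostname === "127.0.0.1" ? "ws:" : "wss:";
}
```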

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-04-02 21:04:10 +05:30
Leonardo Grigorio
ac4efae870 feat: add Firecrawl backend for WebSearch and WebFetch tools
WebSearch is currently disabled for all non-Anthropic providers (OpenAI
shim, DeepSeek, Ollama, etc.) because those providers have no native
search backend. This adds Firecrawl as a fallback that activates when
FIRECRAWL_API_KEY is set, unlocking web search for every model
openclaude supports.

WebFetch uses basic HTTP + Turndown for HTML-to-markdown conversion,
which fails silently on JS-rendered SPAs and bot-protected pages.
Firecrawl scrape replaces the fetch layer when FIRECRAWL_API_KEY is set,
returning clean markdown that handles dynamic content correctly.

Changes:
- WebSearchTool: add runFirecrawlSearch() using @mendable/firecrawl-js,
  respects allowed_domains (post-filter) and blocked_domains (-site: operators),
  includes result snippets alongside links. shouldUseFirecrawl() ensures
  firstParty/Vertex/Foundry/Codex providers keep their native backends.
- WebFetchTool: add scrapeWithFirecrawl(), drops into the existing
  applyPromptToMarkdown() pipeline so prompt processing is unchanged.
- Remove "Web search is only available in the US" restriction from
  prompt when Firecrawl is active (it works globally).
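
The gating reduces to roughly this; provider names follow the commit message, the function shape is illustrative:

```typescript
// Providers with a native search backend keep it; Firecrawl only
// activates for the rest, and only when a key is present.
const NATIVE_SEARCH = new Set(["firstParty", "vertex", "foundry", "codex"]);

function shouldUseFirecrawl(provider: string): boolean {
  return Boolean(process.env.FIRECRAWL_API_KEY) && !NATIVE_SEARCH.has(provider);
}
```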
2026-04-02 12:18:20 -03:00
Urvish Lanje
4c6adf4774 Merge pull request #1 from devNull-bootloader/copilot/create-vscode-extension-openclaude
Add sleek terminal-style VS Code extension for OpenClaude
2026-04-02 17:13:02 +02:00
copilot-swe-agent[bot]
ff124dcdfb fix: use cryptographic nonce for extension webview CSP
Agent-Logs-Url: https://github.com/devNull-bootloader/openclaude/sessions/30a4694d-1125-4280-a593-74b5e3da601e

Co-authored-by: devNull-bootloader <189463177+devNull-bootloader@users.noreply.github.com>
2026-04-02 15:08:22 +00:00
copilot-swe-agent[bot]
8e8671fc51 feat: add visual OpenClaude control center UI in VS Code extension
Agent-Logs-Url: https://github.com/devNull-bootloader/openclaude/sessions/30a4694d-1125-4280-a593-74b5e3da601e

Co-authored-by: devNull-bootloader <189463177+devNull-bootloader@users.noreply.github.com>
2026-04-02 15:07:20 +00:00
Leonardo Grigorio
4c1ba35aa1 Revert "docs: add MCP servers guide with Firecrawl as featured example"
This reverts commit 5baee3b491.
2026-04-02 12:02:42 -03:00
Leonardo Grigorio
5baee3b491 docs: add MCP servers guide with Firecrawl as featured example
Adds docs/mcp-servers.md — the first documentation on how to configure
MCP servers in OpenClaude. Covers .mcp.json setup, the Firecrawl MCP
server for web scraping and search, available tools, and a pattern for
adding multiple servers.
2026-04-02 12:01:54 -03:00
copilot-swe-agent[bot]
43ba2cbfae feat: add VS Code extension with terminal launcher and custom theme
Agent-Logs-Url: https://github.com/devNull-bootloader/openclaude/sessions/5c0e9230-42be-4cce-a5d6-e85d665ea72a

Co-authored-by: devNull-bootloader <189463177+devNull-bootloader@users.noreply.github.com>
2026-04-02 14:58:36 +00:00
erdemozyol
5c25ac4e9a Add Codex usage to /status 2026-04-02 17:37:07 +03:00
erdemozyol
84ac06bac9 fix: show display version in status 2026-04-02 17:28:34 +03:00
Juan Camilo
c66b859342 fix: provider-aware error messages and skip Anthropic key approval for 3P
1. errors.ts: Add getCustomOffSwitchMessage() that returns a
   provider-neutral message for 3P users instead of the hardcoded
   "Opus is experiencing high load, please use /model to switch to
   Sonnet" which is misleading for OpenAI/Gemini/Ollama users.
   The original constant is preserved for backward-compatible string
   matching in error handlers.

2. Onboarding.tsx: Skip the "approve API key" step when a 3P provider
   is active. Previously, having ANTHROPIC_API_KEY in the environment
   (e.g., from a previous Anthropic setup) triggered an irrelevant
   Anthropic key approval UI even when using Gemini or OpenAI.
2026-04-02 16:23:12 +02:00
Juan Camilo
1709f5c098 fix: block update command for 3P providers, align thinking block handling
1. cli/update.ts: Block the update command for third-party providers.
   The update mechanism downloads from Anthropic's GCS bucket, which
   would silently replace the OpenClaude build (with the OpenAI shim)
   with the upstream Claude Code binary (without it). Now shows an
   actionable message directing users to rebuild from source.

2. codexShim.ts: Filter thinking blocks from assistant history, matching
   the openaiShim behavior. Without this, thinking blocks were included
   as plain text in assistant messages for the Codex transport but
   excluded for the OpenAI transport — causing inconsistent history
   when switching providers mid-session.
2026-04-02 16:18:10 +02:00
Juan Camilo
5d6443799a fix: crypto.randomUUID for IDs, Azure Foundry detection, safety filter visibility
Three targeted fixes:

1. Replace Math.random() with crypto.randomUUID() for message and tool
   call IDs in both openaiShim.ts and codexShim.ts. Math.random() is
   not cryptographically secure and predictable in seeded environments.

2. Anchor Azure endpoint detection to parsed hostname instead of raw
   URL regex. Adds support for Azure AI Foundry (services.ai.azure.com)
   alongside existing cognitiveservices and openai Azure endpoints.
   Prevents SSRF-style bypass via path segments.

3. Surface content safety filter blocks to the user. When Gemini or
   Azure returns finish_reason 'content_filter' or 'safety', emit a
   visible text block '[Content blocked by provider safety filter]'
   instead of silently returning empty/truncated content with
   stop_reason 'end_turn'. Applied to both streaming and non-streaming.
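
Sketches of fixes 1 and 2; the Azure suffix list is an assumption based on the endpoints the commit names:

```typescript
import { randomUUID } from "node:crypto";

// Fix 1: unpredictable IDs instead of Math.random().
const toolCallId = `call_${randomUUID()}`;

// Fix 2: anchor Azure detection to the parsed hostname, not a raw
// URL regex, so path segments cannot spoof a match.
const AZURE_SUFFIXES = [
  ".openai.azure.com",
  ".cognitiveservices.azure.com",
  ".services.ai.azure.com", // Azure AI Foundry
];

function isAzureEndpoint(baseUrl: string): boolean {
  try {
    const { hostname } = new URL(baseUrl);
    return AZURE_SUFFIXES.some((s) => hostname.endsWith(s));
  } catch {
    return false;
  }
}
```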
2026-04-02 16:14:35 +02:00
Ghoul07-bit
3ef09f911e Create ANDROID_INSTALL.md
Installation guide for running OpenClaude on Android
2026-04-02 15:10:20 +01:00
erdemozyol
6f4aa02123 fix: refresh tab highlight on horizontal navigation 2026-04-02 16:58:45 +03:00
Juan Camilo
b65921e8c3 fix: deterministic prefix matching and correct Llama 3.x context windows
Two fixes in openaiContextWindows.ts:

1. Sort lookup keys by length descending in lookupByModel() so the most
   specific prefix always wins. Without this, 'gpt-4-turbo-preview'
   could match 'gpt-4' (8k) instead of 'gpt-4-turbo' (128k) depending
   on V8's object key iteration order.

2. Update Llama 3.1/3.2/3.3 context windows from 8,192 to 128,000.
   These models support 128k context natively (Meta official specs).
   The previous 8k value was Ollama's default num_ctx, not the model's
   actual capability, causing premature auto-compact warnings.
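
The deterministic lookup in fix 1 sorts keys longest-first; a sketch with example window values:

```typescript
const CONTEXT_WINDOWS: Record<string, number> = {
  "gpt-4": 8_192,
  "gpt-4-turbo": 128_000,
  "llama3.1": 128_000, // 3.1/3.2/3.3 support 128k natively
};

// Longest matching prefix wins, regardless of key iteration order.
function lookupByModel(model: string): number | undefined {
  const key = Object.keys(CONTEXT_WINDOWS)
    .sort((a, b) => b.length - a.length)
    .find((k) => model.startsWith(k));
  return key ? CONTEXT_WINDOWS[key] : undefined;
}

// lookupByModel("gpt-4-turbo-preview") -> 128_000, never the 8k "gpt-4" entry
```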
2026-04-02 15:50:52 +02:00
skfallin
0fe8551d33 Merge branch 'main' into fix/anthropic-schema-format 2026-04-02 15:50:16 +02:00
skfallin
6319df02f0 Merge upstream/main into fix/anthropic-schema-format 2026-04-02 15:42:28 +02:00
skfallin
0c88dea247 Strip incompatible JSON Schema keywords from tool schemas 2026-04-02 13:50:47 +02:00
94 changed files with 4477 additions and 610 deletions

250
.env.example Normal file
View File

@@ -0,0 +1,250 @@
# =============================================================================
# OpenClaude Environment Configuration
# =============================================================================
# Copy this file to .env and fill in your values:
# cp .env.example .env
#
# Only set the variables for the provider you want to use.
# All other sections can be left commented out.
# =============================================================================
# =============================================================================
# SYSTEM-WIDE SETUP (OPTIONAL)
# =============================================================================
# Instead of using a .env file per project, you can set these variables
# system-wide so OpenClaude works from any directory on your machine.
#
# STEP 1: Pick your provider variables from the list below.
# STEP 2: Set them using the method for your OS (see further down).
#
# ── Provider variables ───────────────────────────────────────────────
#
# Option 1 — Anthropic:
# ANTHROPIC_API_KEY=sk-ant-your-key-here
# ANTHROPIC_MODEL=claude-sonnet-4-5 (optional)
# ANTHROPIC_BASE_URL=https://api.anthropic.com (optional)
#
# Option 2 — OpenAI:
# CLAUDE_CODE_USE_OPENAI=1
# OPENAI_API_KEY=sk-your-key-here
# OPENAI_MODEL=gpt-4o
# OPENAI_BASE_URL=https://api.openai.com/v1 (optional)
#
# Option 3 — Google Gemini:
# CLAUDE_CODE_USE_GEMINI=1
# GEMINI_API_KEY=your-gemini-key-here
# GEMINI_MODEL=gemini-2.0-flash
# GEMINI_BASE_URL=https://generativelanguage.googleapis.com (optional)
#
# Option 4 — GitHub Models:
# CLAUDE_CODE_USE_GITHUB=1
# GITHUB_TOKEN=ghp_your-token-here
#
# Option 5 — Ollama (local):
# CLAUDE_CODE_USE_OPENAI=1
# OPENAI_BASE_URL=http://localhost:11434/v1
# OPENAI_API_KEY=ollama
# OPENAI_MODEL=llama3.2
#
# Option 6 — LM Studio (local):
# CLAUDE_CODE_USE_OPENAI=1
# OPENAI_BASE_URL=http://localhost:1234/v1
# OPENAI_MODEL=your-model-id-here
# OPENAI_API_KEY=lmstudio (optional)
#
# Option 7 — AWS Bedrock (may also need: aws configure):
# CLAUDE_CODE_USE_BEDROCK=1
# AWS_REGION=us-east-1
# AWS_DEFAULT_REGION=us-east-1
# AWS_BEARER_TOKEN_BEDROCK=your-bearer-token-here
# ANTHROPIC_BEDROCK_BASE_URL=https://bedrock-runtime.us-east-1.amazonaws.com
#
# Option 8 — Google Vertex AI:
# CLAUDE_CODE_USE_VERTEX=1
# ANTHROPIC_VERTEX_PROJECT_ID=your-gcp-project-id
# CLOUD_ML_REGION=us-east5
# GOOGLE_CLOUD_PROJECT=your-gcp-project-id
#
# ── How to set variables on each OS ──────────────────────────────────
#
# macOS (zsh):
# 1. Open: nano ~/.zshrc
# 2. Add each variable as: export VAR_NAME=value
# 3. Save and reload: source ~/.zshrc
#
# Linux (bash):
# 1. Open: nano ~/.bashrc
# 2. Add each variable as: export VAR_NAME=value
# 3. Save and reload: source ~/.bashrc
#
# Windows (PowerShell):
# Run for each variable:
# [System.Environment]::SetEnvironmentVariable('VAR_NAME', 'value', 'User')
# Then restart your terminal.
#
# Windows (Command Prompt):
# Run for each variable:
# setx VAR_NAME value
# Then restart your terminal.
#
# Windows (GUI):
# Settings > System > About > Advanced System Settings >
# Environment Variables > under "User variables" click New,
# then add each variable.
#
# ── Important notes ──────────────────────────────────────────────────
#
# LOCAL SERVERS: If using LM Studio or Ollama, the server MUST be
# running with a model loaded before you launch OpenClaude —
# otherwise you'll get connection errors.
#
# SWITCHING PROVIDERS: To temporarily switch, unset the relevant
# variables in your current terminal session:
#
# macOS / Linux:
# unset VAR_NAME
# # e.g.: unset CLAUDE_CODE_USE_OPENAI OPENAI_BASE_URL OPENAI_MODEL
#
# Windows (PowerShell — current session only):
# Remove-Item Env:VAR_NAME
#
# To permanently remove a variable on Windows:
# [System.Environment]::SetEnvironmentVariable('VAR_NAME', $null, 'User')
#
# LOAD ORDER:
# Shell and system environment variables are inherited by the process.
# Project .env files are only used if your launcher or shell loads them
# before starting OpenClaude.
# COMPATIBILITY:
# System-wide variables work regardless of how you run OpenClaude:
# npx, global npm install, bun run, or node directly. Any process
# launched from your terminal inherits your shell's environment.
#
# REMINDER: Make sure .env is in your .gitignore to avoid committing secrets.
# =============================================================================
# =============================================================================
# PROVIDER SELECTION — uncomment ONE block below
# =============================================================================
# -----------------------------------------------------------------------------
# Option 1: Anthropic (default — no provider flag needed)
# -----------------------------------------------------------------------------
ANTHROPIC_API_KEY=sk-ant-your-key-here
# Override the default model (optional)
# ANTHROPIC_MODEL=claude-sonnet-4-5
# Use a custom Anthropic-compatible endpoint (optional)
# ANTHROPIC_BASE_URL=https://api.anthropic.com
# -----------------------------------------------------------------------------
# Option 2: OpenAI
# -----------------------------------------------------------------------------
# CLAUDE_CODE_USE_OPENAI=1
# OPENAI_API_KEY=sk-your-key-here
# OPENAI_MODEL=gpt-4o
# Use a custom OpenAI-compatible endpoint (optional — defaults to api.openai.com)
# OPENAI_BASE_URL=https://api.openai.com/v1
# -----------------------------------------------------------------------------
# Option 3: Google Gemini
# -----------------------------------------------------------------------------
# CLAUDE_CODE_USE_GEMINI=1
# GEMINI_API_KEY=your-gemini-key-here
# GEMINI_MODEL=gemini-2.0-flash
# Use a custom Gemini endpoint (optional)
# GEMINI_BASE_URL=https://generativelanguage.googleapis.com/v1beta/openai
# -----------------------------------------------------------------------------
# Option 4: GitHub Models
# -----------------------------------------------------------------------------
# CLAUDE_CODE_USE_GITHUB=1
# GITHUB_TOKEN=ghp_your-token-here
# -----------------------------------------------------------------------------
# Option 5: Ollama (local models)
# -----------------------------------------------------------------------------
# CLAUDE_CODE_USE_OPENAI=1
# OPENAI_BASE_URL=http://localhost:11434/v1
# OPENAI_API_KEY=ollama
# OPENAI_MODEL=llama3.2
# -----------------------------------------------------------------------------
# Option 6: LM Studio (local models)
# -----------------------------------------------------------------------------
# LM Studio exposes an OpenAI-compatible API, so we use the OpenAI provider.
# Make sure LM Studio is running with the Developer server enabled
# (Developer tab > toggle server ON).
#
# Steps:
# 1. Download and install LM Studio from https://lmstudio.ai
# 2. Search for and download a model (e.g. any coding or instruct model)
# 3. Load the model and start the Developer server
# 4. Set OPENAI_MODEL to the model ID shown in LM Studio's Developer tab
#
# The default server URL is http://localhost:1234 — change the port below
# if you've configured a different one in LM Studio.
#
# OPENAI_API_KEY is optional — LM Studio runs locally and ignores it.
# Some clients require a non-empty value; if you get auth errors, set it
# to any dummy value (e.g. "lmstudio").
#
# CLAUDE_CODE_USE_OPENAI=1
# OPENAI_BASE_URL=http://localhost:1234/v1
# OPENAI_MODEL=your-model-id-here
# -----------------------------------------------------------------------------
# Option 7: AWS Bedrock
# -----------------------------------------------------------------------------
# You may also need AWS CLI credentials configured (run: aws configure)
# or have AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY set in your
# environment in addition to the variables below.
#
# CLAUDE_CODE_USE_BEDROCK=1
# AWS_REGION=us-east-1
# AWS_DEFAULT_REGION=us-east-1
# AWS_BEARER_TOKEN_BEDROCK=your-bearer-token-here
# ANTHROPIC_BEDROCK_BASE_URL=https://bedrock-runtime.us-east-1.amazonaws.com
# -----------------------------------------------------------------------------
# Option 8: Google Vertex AI
# -----------------------------------------------------------------------------
# CLAUDE_CODE_USE_VERTEX=1
# ANTHROPIC_VERTEX_PROJECT_ID=your-gcp-project-id
# CLOUD_ML_REGION=us-east5
# GOOGLE_CLOUD_PROJECT=your-gcp-project-id
# =============================================================================
# OPTIONAL TUNING
# =============================================================================
# Max number of API retries on failure (default: 10)
# CLAUDE_CODE_MAX_RETRIES=10
# Enable persistent retry mode for unattended/CI sessions
# Retries 429/529 indefinitely with smart backoff
# CLAUDE_CODE_UNATTENDED_RETRY=1
# Enable extended key reporting (Kitty keyboard protocol)
# Useful for iTerm2, WezTerm, Ghostty if modifier keys feel off
# OPENCLAUDE_ENABLE_EXTENDED_KEYS=1
# Disable "Co-authored-by" line in git commits made by OpenClaude
# OPENCLAUDE_DISABLE_CO_AUTHORED_BY=1
# Custom timeout for API requests in milliseconds (default: varies)
# API_TIMEOUT_MS=60000
# Enable debug logging
# CLAUDE_DEBUG=1

View File

@@ -6,6 +6,9 @@ on:
branches:
- main
permissions:
contents: read
jobs:
smoke-and-tests:
runs-on: ubuntu-latest

1
.gitignore vendored
View File

@@ -3,5 +3,6 @@ dist/
*.tsbuildinfo
.env
.env.*
!.env.example
.openclaude-profile.json
reports/

162
ANDROID_INSTALL.md Normal file
View File

@@ -0,0 +1,162 @@
# OpenClaude on Android (Termux)
A complete guide to running OpenClaude on Android using Termux + proot Ubuntu.
---
## Prerequisites
- Android phone with ~700MB free storage
- [Termux](https://f-droid.org/en/packages/com.termux/) installed from **F-Droid** (not Play Store)
- An [OpenRouter](https://openrouter.ai) API key (free, no credit card required)
---
## Why This Setup?
OpenClaude requires [Bun](https://bun.sh) to build, and Bun does not support Android natively. The workaround is running a real Ubuntu environment inside Termux via `proot-distro`, where Bun's Linux binary works correctly.
---
## Installation
### Step 1 — Update Termux
```bash
pkg update && pkg upgrade
```
Press `N` or Enter for any config file conflict prompts.
### Step 2 — Install dependencies
```bash
pkg install nodejs-lts git proot-distro
```
Verify Node.js:
```bash
node --version # should be v20+
```
### Step 3 — Clone OpenClaude
```bash
git clone https://github.com/Gitlawb/openclaude.git
cd openclaude
npm install
npm link
```
### Step 4 — Install Ubuntu via proot
```bash
proot-distro install ubuntu
```
This downloads ~200-400MB. Wait for it to complete.
### Step 5 — Install Bun inside Ubuntu
```bash
proot-distro login ubuntu
curl -fsSL https://bun.sh/install | bash
source ~/.bashrc
bun --version # should show 1.3.11+
```
### Step 6 — Build OpenClaude
```bash
cd /data/data/com.termux/files/home/openclaude
bun run build
```
You should see:
```
✓ Built openclaude v0.1.6 → dist/cli.mjs
```
### Step 7 — Save env vars permanently
Still inside Ubuntu, add your OpenRouter config to `.bashrc`:
```bash
echo 'export CLAUDE_CODE_USE_OPENAI=1' >> ~/.bashrc
echo 'export OPENAI_API_KEY=your_openrouter_key_here' >> ~/.bashrc
echo 'export OPENAI_BASE_URL=https://openrouter.ai/api/v1' >> ~/.bashrc
echo 'export OPENAI_MODEL=qwen/qwen3.6-plus-preview:free' >> ~/.bashrc
source ~/.bashrc
```
Replace `your_openrouter_key_here` with your actual key from [openrouter.ai/keys](https://openrouter.ai/keys).
### Step 8 — Run OpenClaude
```bash
node dist/cli.mjs
```
Select **3** (3rd-party platform) at the login screen. Your env vars will be detected automatically.
---
## Restarting After Closing Termux
Every time you reopen Termux after killing it, run:
```bash
proot-distro login ubuntu
cd /data/data/com.termux/files/home/openclaude
node dist/cli.mjs
```
---
## Recommended Free Model
**`qwen/qwen3.6-plus-preview:free`** — Best free model on OpenRouter as of April 2026.
- 1M token context window
- Beats Claude 4.5 Opus on Terminal-Bench 2.0 agentic coding (61.6 vs 59.3)
- Built-in chain-of-thought reasoning
- Native tool use and function calling
- $0/M tokens (preview period)
> ⚠️ Free status may change when the preview period ends. Check [openrouter.ai](https://openrouter.ai/qwen/qwen3.6-plus-preview:free) for current pricing.
---
## Alternative Free Models (OpenRouter)
| Model ID | Context | Notes |
|---|---|---|
| `qwen/qwen3-coder:free` | 262K | Best for pure coding tasks |
| `openai/gpt-oss-120b:free` | 131K | OpenAI open model, strong tool calling |
| `nvidia/nemotron-3-super-120b-a12b:free` | 262K | Hybrid MoE, good general use |
| `meta-llama/llama-3.3-70b-instruct:free` | 66K | Reliable, widely tested |
Switch models anytime:
```bash
export OPENAI_MODEL=qwen/qwen3-coder:free
node dist/cli.mjs
```
---
## Why Not Groq or Cerebras?
Both were tested and fail due to OpenClaude's large system prompt (~50K tokens):
- **Groq free tier**: TPM limits too low (6K-12K tokens/min)
- **Cerebras free tier**: TPM limits exceeded, even on `llama3.1-8b`
OpenRouter free models have no TPM restrictions — only 20 req/min and 200 req/day.
---
## Tips
- **Don't swipe Termux away** from recent apps mid-session — use the home button to minimize instead.
- The Ubuntu environment persists between Termux sessions; your build and config are saved.
- Run `bun run build` again only if you pull updates to the OpenClaude repo.

263
README.md
View File

@@ -1,64 +1,44 @@
# OpenClaude
Use Claude Code with **any LLM** — not just Claude.
OpenClaude is an open-source coding-agent CLI that works with more than one model provider.
OpenClaude is a fork of the [Claude Code source leak](https://gitlawb.com/node/repos/z6MkgKkb/instructkr-claude-code) (exposed via npm source maps on March 31, 2026). We added an OpenAI-compatible provider shim so you can plug in GPT-4o, DeepSeek, Gemini, Llama, Mistral, or any model that speaks the OpenAI chat completions API. It now also supports the ChatGPT Codex backend for `codexplan` and `codexspark`, and local inference via [Atomic Chat](https://atomic.chat/) on Apple Silicon.
Use OpenAI-compatible APIs, Gemini, GitHub Models, Codex, Ollama, Atomic Chat, and other supported backends while keeping the same terminal-first workflow: prompts, tools, agents, MCP, slash commands, and streaming output.
All of Claude Code's tools work — bash, file read/write/edit, grep, glob, agents, tasks, MCP — just powered by whatever model you choose.
## Why OpenClaude
- Use one CLI across cloud and local model providers
- Save provider profiles inside the app with `/provider`
- Run locally with Ollama or Atomic Chat
- Keep core coding-agent workflows: bash, file tools, grep, glob, agents, tasks, MCP, and web tools
---
## Start Here
## Quick Start
If you are new to terminals or just want the easiest path, start with the beginner guides:
- [Non-Technical Setup](docs/non-technical-setup.md)
- [Windows Quick Start](docs/quick-start-windows.md)
- [macOS / Linux Quick Start](docs/quick-start-mac-linux.md)
If you want source builds, Bun workflows, profile launchers, or full provider examples, use:
- [Advanced Setup](docs/advanced-setup.md)
---
## Beginner Install
For most users, install the npm package:
### Install
```bash
npm install -g @gitlawb/openclaude
```
The package name is `@gitlawb/openclaude`, but the command you run is:
If the npm install path later reports `ripgrep not found`, install ripgrep system-wide and confirm `rg --version` works in the same terminal before starting OpenClaude.
### Start
```bash
openclaude
```
If you install via npm and later see `ripgrep not found`, install ripgrep system-wide and confirm `rg --version` works in the same terminal before starting OpenClaude.
Inside OpenClaude:
---
- run `/provider` for guided setup of OpenAI-compatible, Gemini, Ollama, or Codex profiles
- run `/onboard-github` for GitHub Models setup
## Fastest Setup
### Fastest OpenAI setup
### Windows PowerShell
```powershell
npm install -g @gitlawb/openclaude
$env:CLAUDE_CODE_USE_OPENAI="1"
$env:OPENAI_API_KEY="sk-your-key-here"
$env:OPENAI_MODEL="gpt-4o"
openclaude
```
### macOS / Linux
macOS / Linux:
```bash
npm install -g @gitlawb/openclaude
export CLAUDE_CODE_USE_OPENAI=1
export OPENAI_API_KEY=sk-your-key-here
export OPENAI_MODEL=gpt-4o
@@ -66,135 +46,166 @@ export OPENAI_MODEL=gpt-4o
openclaude
```
That is enough to start with OpenAI.
Windows PowerShell:
```powershell
$env:CLAUDE_CODE_USE_OPENAI="1"
$env:OPENAI_API_KEY="sk-your-key-here"
$env:OPENAI_MODEL="gpt-4o"
openclaude
```
### Fastest local Ollama setup
macOS / Linux:
```bash
export CLAUDE_CODE_USE_OPENAI=1
export OPENAI_BASE_URL=http://localhost:11434/v1
export OPENAI_MODEL=qwen2.5-coder:7b
openclaude
```
Windows PowerShell:
```powershell
$env:CLAUDE_CODE_USE_OPENAI="1"
$env:OPENAI_BASE_URL="http://localhost:11434/v1"
$env:OPENAI_MODEL="qwen2.5-coder:7b"
openclaude
```
---
## Choose Your Guide
## Setup Guides
### Beginner
Beginner-friendly guides:
- Want the easiest setup with copy-paste steps: [Non-Technical Setup](docs/non-technical-setup.md)
- On Windows: [Windows Quick Start](docs/quick-start-windows.md)
- On macOS or Linux: [macOS / Linux Quick Start](docs/quick-start-mac-linux.md)
- [Non-Technical Setup](docs/non-technical-setup.md)
- [Windows Quick Start](docs/quick-start-windows.md)
- [macOS / Linux Quick Start](docs/quick-start-mac-linux.md)
### Advanced
Advanced and source-build guides:
- Want source builds, Bun, local profiles, runtime checks, or more provider choices: [Advanced Setup](docs/advanced-setup.md)
- [Advanced Setup](docs/advanced-setup.md)
- [Android Install](ANDROID_INSTALL.md)
---
## Common Beginner Choices
## Supported Providers
### OpenAI
Best default if you already have an OpenAI API key.
### Ollama
Best if you want to run models locally on your own machine.
### Codex
Best if you already use the Codex CLI or ChatGPT Codex backend.
### Atomic Chat
Best if you want local inference on Apple Silicon with Atomic Chat. See [Advanced Setup](docs/advanced-setup.md).
| Provider | Setup Path | Notes |
| --- | --- | --- |
| OpenAI-compatible | `/provider` or env vars | Works with OpenAI, OpenRouter, DeepSeek, Groq, Mistral, LM Studio, and compatible local `/v1` servers |
| Gemini | `/provider` or env vars | Google Gemini support through the runtime provider layer |
| GitHub Models | `/onboard-github` | Interactive onboarding with saved credentials |
| Codex | `/provider` | Uses existing Codex credentials when available |
| Ollama | `/provider` or env vars | Local inference with no API key |
| Atomic Chat | advanced setup | Local Apple Silicon backend |
| Bedrock / Vertex / Foundry | env vars | Additional provider integrations for supported environments |
---
## What Works
- **All tools**: Bash, FileRead, FileWrite, FileEdit, Glob, Grep, WebFetch, WebSearch, Agent, MCP, LSP, NotebookEdit, Tasks
- **Streaming**: Real-time token streaming
- **Tool calling**: Multi-step tool chains (the model calls tools, gets results, continues)
- **Images**: Base64 and URL images passed to vision models
- **Slash commands**: /commit, /review, /compact, /diff, /doctor, etc.
- **Sub-agents**: AgentTool spawns sub-agents using the same provider
- **Memory**: Persistent memory system
## What's Different
- **No thinking mode**: Anthropic's extended thinking is disabled (OpenAI models use different reasoning)
- **No prompt caching**: Anthropic-specific cache headers are skipped
- **No beta features**: Anthropic-specific beta headers are ignored
- **Token limits**: Defaults to 32K max output — some models may cap lower, which is handled gracefully
- Tool-driven coding workflows
Bash, file read/write/edit, grep, glob, agents, tasks, MCP, and slash commands
- Streaming responses
Real-time token output and tool progress
- Tool calling
Multi-step tool loops with model calls, tool execution, and follow-up responses
- Images
URL and base64 image inputs for providers that support vision
- Provider profiles
Guided setup plus saved `.openclaude-profile.json` support
- Local and remote model backends
Cloud APIs, local servers, and Apple Silicon local inference
---
## How It Works
## Provider Notes
The shim (`src/services/api/openaiShim.ts`) sits between Claude Code and the LLM API:
OpenClaude supports multiple providers, but behavior is not identical across all of them.
```
Claude Code Tool System
|
v
Anthropic SDK interface (duck-typed)
|
v
openaiShim.ts <-- translates formats
|
v
OpenAI Chat Completions API
|
v
Any compatible model
```
- Anthropic-specific features may not exist on other providers
- Tool quality depends heavily on the selected model
- Smaller local models can struggle with long multi-step tool flows
- Some providers impose lower output caps than the CLI defaults, and OpenClaude adapts where possible
It translates:
- Anthropic message blocks → OpenAI messages
- Anthropic tool_use/tool_result → OpenAI function calls
- OpenAI SSE streaming → Anthropic stream events
- Anthropic system prompt arrays → OpenAI system messages
The rest of Claude Code doesn't know it's talking to a different model.
For best results, use models with strong tool/function calling support.
---
## Model Quality Notes
## Web Search and Fetch
Not all models are equal at agentic tool use. Here's a rough guide:
`WebFetch` works out of the box.
| Model | Tool Calling | Code Quality | Speed |
|-------|-------------|-------------|-------|
| GPT-4o | Excellent | Excellent | Fast |
| DeepSeek-V3 | Great | Great | Fast |
| Gemini 2.0 Flash | Great | Good | Very Fast |
| Llama 3.3 70B | Good | Good | Medium |
| Mistral Large | Good | Good | Fast |
| GPT-4o-mini | Good | Good | Very Fast |
| Qwen 2.5 72B | Good | Good | Medium |
| Smaller models (<7B) | Limited | Limited | Very Fast |
`WebSearch` and richer JS-aware fetching work best with a Firecrawl API key:
For best results, use models with strong function/tool calling support.
```bash
export FIRECRAWL_API_KEY=your-key-here
```
With Firecrawl enabled:
- `WebSearch` is available across more provider setups
- `WebFetch` can handle JavaScript-rendered pages more reliably
Firecrawl is optional. Without it, OpenClaude falls back to the built-in behavior.
---
## Files Changed from Original
## Source Build
```
src/services/api/openaiShim.ts — NEW: OpenAI-compatible API shim (724 lines)
src/services/api/client.ts — Routes to shim when CLAUDE_CODE_USE_OPENAI=1
src/utils/model/providers.ts — Added 'openai' provider type
src/utils/model/configs.ts — Added openai model mappings
src/utils/model/model.ts — Respects OPENAI_MODEL for defaults
src/utils/auth.ts — Recognizes OpenAI as valid 3P provider
```bash
bun install
bun run build
node dist/cli.mjs
```
6 files changed. 786 lines added. Zero dependencies added.
Helpful commands:
- `bun run dev`
- `bun run smoke`
- `bun run doctor:runtime`
---
## Origin
## VS Code Extension
This is a fork of [instructkr/claude-code](https://gitlawb.com/node/repos/z6MkgKkb/instructkr-claude-code), which mirrored the Claude Code source snapshot that became publicly accessible through an npm source map exposure on March 31, 2026.
The repo includes a VS Code extension in [`vscode-extension/openclaude-vscode`](vscode-extension/openclaude-vscode) for OpenClaude launch integration and theme support.
The original Claude Code source is the property of Anthropic. This repository is not affiliated with or endorsed by Anthropic.
---
## Security
If you believe you found a security issue, see [SECURITY.md](SECURITY.md).
---
## Contributing
Contributions are welcome.
For larger changes, open an issue first so the scope is clear before implementation. Helpful validation commands include:
- `bun run build`
- `bun run smoke`
- focused `bun test ...` runs for touched areas
---
## Disclaimer
OpenClaude is an independent community project and is not affiliated with, endorsed by, or sponsored by Anthropic.
"Claude" and "Claude Code" are trademarks of Anthropic.
---
## License
This repository is provided for educational and research purposes. The original source code is subject to Anthropic's terms. The OpenAI shim additions are public domain.
MIT

69
SECURITY.md Normal file
View File

@@ -0,0 +1,69 @@
# Security Policy
## Supported Versions
Open Claude is currently maintained on the latest `main` branch and the latest
npm release only.
| Version | Supported |
| ------- | --------- |
| Latest release | :white_check_mark: |
| Older releases | :x: |
| Unreleased forks / modified builds | :x: |
Security fixes are generally released in the next patch version and may also be
landed directly on `main` before a package release is published.
## Reporting a Vulnerability
If you believe you have found a security vulnerability in Open Claude, please
report it privately.
Preferred reporting channel:
- GitHub Security Advisories / private vulnerability reporting for this
repository
Please include:
- a clear description of the issue
- affected version, commit, or environment
- reproduction steps or a proof of concept
- impact assessment
- any suggested remediation, if available
Please do **not** open a public issue for an unpatched vulnerability.
## Response Process
Our general goals are:
- initial triage acknowledgment within 7 days
- follow-up after validation when we can reproduce the issue
- coordinated disclosure after a fix is available
Severity, exploitability, and maintenance bandwidth may affect timelines.
## Disclosure and CVEs
Valid reports may be fixed privately first and disclosed after a patch is
available.
If a report is accepted and the issue is significant enough to warrant formal
tracking, we may publish a GitHub Security Advisory and request or assign a CVE
through the appropriate channel. CVE issuance is not guaranteed for every
report.
## Scope
This policy applies to:
- the Open Claude source code in this repository
- official release artifacts published from this repository
- the `@gitlawb/openclaude` npm package
This policy does not cover:
- third-party model providers, endpoints, or hosted services
- local misconfiguration on the reporter's machine
- vulnerabilities in unofficial forks, mirrors, or downstream repackages

View File

@@ -13,6 +13,7 @@
"@anthropic-ai/vertex-sdk": "0.14.4",
"@commander-js/extra-typings": "12.1.0",
"@growthbook/growthbook": "1.6.5",
"@mendable/firecrawl-js": "4.18.1",
"@modelcontextprotocol/sdk": "1.29.0",
"@opentelemetry/api": "1.9.1",
"@opentelemetry/api-logs": "0.214.0",
@@ -35,7 +36,7 @@
"cli-highlight": "2.1.11",
"code-excerpt": "4.0.0",
"commander": "12.1.0",
"diff": "7.0.0",
"diff": "8.0.3",
"emoji-regex": "10.6.0",
"env-paths": "3.0.0",
"execa": "9.6.1",
@@ -48,7 +49,7 @@
"ignore": "7.0.5",
"indent-string": "5.0.0",
"jsonc-parser": "3.3.1",
"lodash-es": "4.17.23",
"lodash-es": "4.18.0",
"lru-cache": "11.2.7",
"marked": "15.0.12",
"p-map": "7.0.4",
@@ -185,6 +186,8 @@
"@js-sdsl/ordered-map": ["@js-sdsl/ordered-map@4.4.2", "", {}, "sha512-iUKgm52T8HOE/makSxjqoWhe95ZJA1/G1sYsGev2JDKUSS14KAgg1LHb+Ba+IPow0xflbnSkOsZcO08C7w1gYw=="],
"@mendable/firecrawl-js": ["@mendable/firecrawl-js@4.18.1", "", { "dependencies": { "axios": "1.14.0", "firecrawl": "4.16.0", "typescript-event-target": "^1.1.1", "zod": "^3.23.8", "zod-to-json-schema": "^3.23.0" } }, "sha512-NfmJv+xcHoZthj8I3NP/8KAgO8EWcvOcTvCAvszxqs7/6sCs1CRss6Tum6RycZNSwJkr5RzQossN89IlixRfng=="],
"@mixmark-io/domino": ["@mixmark-io/domino@2.2.0", "", {}, "sha512-Y28PR25bHXUg88kCV7nivXrP2Nj2RueZ3/l/jdx6J9f8J4nsEGcgX0Qe6lt7Pa+J79+kPiJU3LguR6O/6zrLOw=="],
"@modelcontextprotocol/sdk": ["@modelcontextprotocol/sdk@1.29.0", "", { "dependencies": { "@hono/node-server": "^1.19.9", "ajv": "^8.17.1", "ajv-formats": "^3.0.1", "content-type": "^1.0.5", "cors": "^2.8.5", "cross-spawn": "^7.0.5", "eventsource": "^3.0.2", "eventsource-parser": "^3.0.0", "express": "^5.2.1", "express-rate-limit": "^8.2.1", "hono": "^4.11.4", "jose": "^6.1.3", "json-schema-typed": "^8.0.2", "pkce-challenge": "^5.0.0", "raw-body": "^3.0.0", "zod": "^3.25 || ^4.0", "zod-to-json-schema": "^3.25.1" }, "peerDependencies": { "@cfworker/json-schema": "^4.1.1" }, "optionalPeers": ["@cfworker/json-schema"] }, "sha512-zo37mZA9hJWpULgkRpowewez1y6ML5GsXJPY8FI0tBBCd77HEvza4jDqRKOXgHNn867PVGCyTdzqpz0izu5ZjQ=="],
@@ -433,7 +436,7 @@
"depd": ["depd@2.0.0", "", {}, "sha512-g7nH6P6dyDioJogAAGprGpCtVImJhpPk/roCzdb3fIh61/s/nPsfR6onyMwkCAR/OlC3yBC0lESvUoQEAssIrw=="],
"diff": ["diff@7.0.0", "", {}, "sha512-PJWHUb1RFevKCwaFA9RlG5tCd+FO5iRh9A8HEtkmBH2Li03iJriB6m6JIN4rGz3K3JLawI7/veA1xzRKP6ISBw=="],
"diff": ["diff@8.0.3", "", {}, "sha512-qejHi7bcSD4hQAZE0tNAawRK1ZtafHDmMTMkrrIGgSLl7hTnQHmKCeB45xAcbfTqK2zowkM3j3bHt/4b/ARbYQ=="],
"dijkstrajs": ["dijkstrajs@1.0.3", "", {}, "sha512-qiSlmBq9+BCdCA/L46dw8Uy93mloxsPSbwnm5yrKn2vMPiy8KyAskTF6zuV/j5BMsmOGZDPs7KjU+mjb670kfA=="],
@@ -495,6 +498,8 @@
"find-up": ["find-up@4.1.0", "", { "dependencies": { "locate-path": "^5.0.0", "path-exists": "^4.0.0" } }, "sha512-PpOwAdQ/YlXQ2vj8a3h8IipDuYRi3wceVQQGYWxNINccq40Anw7BlsEXCMbt1Zt+OLA6Fq9suIpIWD0OsnISlw=="],
"firecrawl": ["firecrawl@4.16.0", "", { "dependencies": { "axios": "^1.13.5", "typescript-event-target": "^1.1.1", "zod": "^3.23.8", "zod-to-json-schema": "^3.23.0" } }, "sha512-7SJ/FWhZBtW2gTCE/BsvU+gbfIpfTq+D9IH82l9MacauLVptaY6EdYAhrK3YSMC9yr5NxvxRcpZKcXG/nqjiiQ=="],
"follow-redirects": ["follow-redirects@1.15.11", "", {}, "sha512-deG2P0JfjrTxl50XGCDyfI97ZGVCxIpfKYmfyrQ54n5FO/0gfIES8C/Psl6kWVDolizcaaxZJnTS0QSMxvnsBQ=="],
"form-data": ["form-data@4.0.5", "", { "dependencies": { "asynckit": "^0.4.0", "combined-stream": "^1.0.8", "es-set-tostringtag": "^2.1.0", "hasown": "^2.0.2", "mime-types": "^2.1.12" } }, "sha512-8RipRLol37bNs2bhoV67fiTEvdTrbMUYcFTiy3+wuuOnUog2QBHCZWXDRijWQfAkhBj2Uf5UnVaiWwA5vdd82w=="],
@@ -591,7 +596,7 @@
"locate-path": ["locate-path@5.0.0", "", { "dependencies": { "p-locate": "^4.1.0" } }, "sha512-t7hw9pI+WvuwNJXwk5zVHpyhIqzg2qTlklJOf0mVxGSbe3Fp2VieZcduNYjaLDoy6p9uGpQEGWG87WpMKlNq8g=="],
"lodash-es": ["lodash-es@4.17.23", "", {}, "sha512-kVI48u3PZr38HdYz98UmfPnXl2DXrpdctLrFLCd3kOx1xUkOmpFPx7gCWWM5MPkL/fD8zb+Ph0QzjGFs4+hHWg=="],
"lodash-es": ["lodash-es@4.18.0", "", {}, "sha512-koAgswPPA+UTaPN64Etp+PGP+WT6oqOS2NMi5yDkMaiGw9qY4VxQbQF0mtKMyr4BlTznWyzePV5UpECTJQmSUA=="],
"lodash.camelcase": ["lodash.camelcase@4.3.0", "", {}, "sha512-TwuEnCnxbc3rAvhf/LbG7tJUDzhqXyFnv3dtzLOPgCG/hODL7WFnsbwktkD7yUV0RrreP/l1PALq/YSg6VvjlA=="],
@@ -767,6 +772,8 @@
"typescript": ["typescript@5.9.3", "", { "bin": { "tsc": "bin/tsc", "tsserver": "bin/tsserver" } }, "sha512-jl1vZzPDinLr9eUt3J/t7V6FgNEw9QjvBPdysz9KfQDD41fQrC2Y4vKQdiaUpFT4bXlb1RHhLpp8wtm6M5TgSw=="],
"typescript-event-target": ["typescript-event-target@1.1.2", "", {}, "sha512-TvkrTUpv7gCPlcnSoEwUVUBwsdheKm+HF5u2tPAKubkIGMfovdSizCTaZRY/NhR8+Ijy8iZZUapbVQAsNrkFrw=="],
"undici": ["undici@7.24.6", "", {}, "sha512-Xi4agocCbRzt0yYMZGMA6ApD7gvtUFaxm4ZmeacWI4cZxaF6C+8I8QfofC20NAePiB/IcvZmzkJ7XPa471AEtA=="],
"undici-types": ["undici-types@7.18.2", "", {}, "sha512-AsuCzffGHJybSaRrmr5eHr81mwJU3kjw6M+uprWvCXiNeN9SOGwQ3Jn8jb8m3Z6izVgknn1R0FTCEAP2QrLY/w=="],
@@ -817,6 +824,8 @@
"zod-to-json-schema": ["zod-to-json-schema@3.25.2", "", { "peerDependencies": { "zod": "^3.25.28 || ^4" } }, "sha512-O/PgfnpT1xKSDeQYSCfRI5Gy3hPf91mKVDuYLUHZJMiDFptvP41MSnWofm8dnCm0256ZNfZIM7DSzuSMAFnjHA=="],
"@anthropic-ai/sandbox-runtime/lodash-es": ["lodash-es@4.17.23", "", {}, "sha512-kVI48u3PZr38HdYz98UmfPnXl2DXrpdctLrFLCd3kOx1xUkOmpFPx7gCWWM5MPkL/fD8zb+Ph0QzjGFs4+hHWg=="],
"@aws-crypto/crc32/@aws-crypto/util": ["@aws-crypto/util@5.2.0", "", { "dependencies": { "@aws-sdk/types": "^3.222.0", "@smithy/util-utf8": "^2.0.0", "tslib": "^2.6.2" } }, "sha512-4RkU9EsI6ZpBve5fseQlGNUWKMa1RLPQ1dnjnQoe07ldfIzcsGb5hC5W0Dm7u423KWzawlrpbjXBrXCEv9zazQ=="],
"@aws-crypto/crc32/tslib": ["tslib@2.8.1", "", {}, "sha512-oJFu94HQb+KVduSUQL7wnpmqnfmLsOA/nAh6b6EH0wCEoK0/mPeXU6c3wKDV83MkOuHPRHtSXKKU99IBazS/2w=="],

View File

@@ -194,7 +194,7 @@ bun run hardening:strict
Notes:
- `doctor:runtime` fails fast when `CLAUDE_CODE_USE_OPENAI=1` is set with a placeholder key, or when the key is missing for a non-local provider.
- Local providers such as `http://localhost:11434/v1` and `http://127.0.0.1:1337/v1` can run without `OPENAI_API_KEY`.
- Local providers such as `http://localhost:11434/v1`, `http://10.0.0.1:11434/v1`, and `http://127.0.0.1:1337/v1` can run without `OPENAI_API_KEY`.
- Codex profiles validate `CODEX_API_KEY` or the Codex CLI auth file and probe `POST /responses` instead of `GET /models`.
## Provider Launch Profiles

View File

@@ -66,6 +66,33 @@ openclaude
No API key is needed for Ollama local models.
### Option D: LM Studio
Install LM Studio first from:
- `https://lmstudio.ai/`
Then in LM Studio:
1. Download a model (e.g., Llama 3.1 8B, Mistral 7B)
2. Go to the "Developer" tab
3. Select your model and enable the server via the toggle
Then run:
```bash
export CLAUDE_CODE_USE_OPENAI=1
export OPENAI_BASE_URL=http://localhost:1234/v1
export OPENAI_MODEL=your-model-name
# export OPENAI_API_KEY=lmstudio # optional: some users need a dummy key
openclaude
```
Replace `your-model-name` with the exact model identifier shown in LM Studio's "Developer" tab.
No API key is needed for LM Studio local models (but uncomment the `OPENAI_API_KEY` line if you hit auth errors).
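To verify the server is reachable and to get the exact identifier to use for `OPENAI_MODEL`, you can query LM Studio's OpenAI-compatible model list. A minimal sketch using the built-in `fetch` (Node 18+/Bun); it assumes the default server address `http://localhost:1234`:

```ts
// List the models LM Studio is serving; each `id` is a valid OPENAI_MODEL value.
// Assumes the default port 1234; adjust if you changed it in the Developer tab.
const res = await fetch('http://localhost:1234/v1/models')
if (!res.ok) throw new Error(`LM Studio server not reachable: HTTP ${res.status}`)
const { data } = (await res.json()) as { data: Array<{ id: string }> }
for (const model of data) console.log(model.id)
```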
## 4. If `openclaude` Is Not Found
Close the terminal, open a new one, and try again:
@@ -89,6 +116,14 @@ Check the basics:
- make sure Ollama is running
- make sure the model was pulled successfully
### For LM Studio
- make sure LM Studio is installed
- make sure LM Studio is running
- make sure the server is enabled (toggle on in the "Developer" tab)
- make sure a model is loaded in LM Studio
- make sure the model name matches what you set in `OPENAI_MODEL`
## 6. Updating OpenClaude
```bash

View File

@@ -66,6 +66,33 @@ openclaude
No API key is needed for Ollama local models.
### Option D: LM Studio
Install LM Studio first from:
- `https://lmstudio.ai/`
Then in LM Studio:
1. Download a model (e.g., Llama 3.1 8B, Mistral 7B)
2. Go to the "Developer" tab
3. Select your model and enable the server via the toggle
Then run:
```powershell
$env:CLAUDE_CODE_USE_OPENAI="1"
$env:OPENAI_BASE_URL="http://localhost:1234/v1"
$env:OPENAI_MODEL="your-model-name"
# $env:OPENAI_API_KEY="lmstudio" # optional: some users need a dummy key
openclaude
```
Replace `your-model-name` with the exact model identifier shown in LM Studio's "Developer" tab.
No API key is needed for LM Studio local models (but uncomment the `OPENAI_API_KEY` line if you hit auth errors).
## 4. If `openclaude` Is Not Found
Close PowerShell, open a new one, and try again:
@@ -89,6 +116,14 @@ Check the basics:
- make sure Ollama is running
- make sure the model was pulled successfully
### For LM Studio
- make sure LM Studio is installed
- make sure LM Studio is running
- make sure the server is enabled (toggle on in the "Developer" tab)
- make sure a model is loaded in LM Studio
- make sure the model name matches what you set in `OPENAI_MODEL`
## 6. Updating OpenClaude
```powershell

View File

@@ -49,6 +49,18 @@ def normalize_ollama_model(model_name: str) -> str:
return model_name
def _extract_ollama_image_data(block: dict) -> str | None:
source = block.get("source")
if not isinstance(source, dict):
return None
if source.get("type") != "base64":
return None
data = source.get("data")
if isinstance(data, str) and data:
return data
return None
def anthropic_to_ollama_messages(messages: list[dict]) -> list[dict]:
ollama_messages = []
for msg in messages:
@@ -58,15 +70,23 @@ def anthropic_to_ollama_messages(messages: list[dict]) -> list[dict]:
ollama_messages.append({"role": role, "content": content})
elif isinstance(content, list):
text_parts = []
image_parts = []
for block in content:
if isinstance(block, dict):
if block.get("type") == "text":
text_parts.append(block.get("text", ""))
elif block.get("type") == "image":
text_parts.append("[image]")
image_data = _extract_ollama_image_data(block)
if image_data:
image_parts.append(image_data)
else:
text_parts.append("[image]")
elif isinstance(block, str):
text_parts.append(block)
ollama_messages.append({"role": role, "content": "\n".join(text_parts)})
ollama_message = {"role": role, "content": "\n".join(text_parts)}
if image_parts:
ollama_message["images"] = image_parts
ollama_messages.append(ollama_message)
return ollama_messages

View File

@@ -51,6 +51,7 @@
"@anthropic-ai/vertex-sdk": "0.14.4",
"@commander-js/extra-typings": "12.1.0",
"@growthbook/growthbook": "1.6.5",
"@mendable/firecrawl-js": "4.18.1",
"@modelcontextprotocol/sdk": "1.29.0",
"@opentelemetry/api": "1.9.1",
"@opentelemetry/api-logs": "0.214.0",
@@ -73,7 +74,7 @@
"cli-highlight": "2.1.11",
"code-excerpt": "4.0.0",
"commander": "12.1.0",
"diff": "7.0.0",
"diff": "8.0.3",
"emoji-regex": "10.6.0",
"env-paths": "3.0.0",
"execa": "9.6.1",
@@ -86,7 +87,7 @@
"ignore": "7.0.5",
"indent-string": "5.0.0",
"jsonc-parser": "3.3.1",
"lodash-es": "4.17.23",
"lodash-es": "4.18.0",
"lru-cache": "11.2.7",
"marked": "15.0.12",
"p-map": "7.0.4",

View File

@@ -199,15 +199,19 @@ export async function submitTranscriptShare() { return { success: false }; }
`,
}
function escapeForResolvedPathRegex(modulePath: string): string {
return modulePath
.replace(/[|\\{}()[\]^$+*?.]/g, '\\$&')
.replace(/\//g, '[/\\\\]')
}
export const noTelemetryPlugin: BunPlugin = {
name: 'no-telemetry',
setup(build) {
for (const [modulePath, contents] of Object.entries(stubs)) {
// Build regex that matches the resolved file path on any OS
// e.g. "services/analytics/growthbook" → /services[/\\]analytics[/\\]growthbook\.(ts|js)$/
const escaped = modulePath
.replace(/\//g, '[/\\\\]')
.replace(/\./g, '\\.')
const escaped = escapeForResolvedPathRegex(modulePath)
const filter = new RegExp(`${escaped}\\.(ts|js)$`)
build.onLoad({ filter }, () => ({
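The inline escaping this replaces handled only `/` and `.`, so a module path containing any other regex metacharacter (`+`, `(`, `$`, and so on) produced a broken or over-matching filter. A standalone check of the extracted helper (a sketch; the function body is copied from the hunk above):

```ts
// Copied from the hunk above so this snippet runs on its own.
function escapeForResolvedPathRegex(modulePath: string): string {
  return modulePath
    .replace(/[|\\{}()[\]^$+*?.]/g, '\\$&')
    .replace(/\//g, '[/\\\\]')
}

const filter = new RegExp(`${escapeForResolvedPathRegex('services/analytics/growthbook')}\\.(ts|js)$`)
console.log(filter.test('/repo/src/services/analytics/growthbook.ts'))        // true (POSIX path)
console.log(filter.test('C:\\repo\\src\\services\\analytics\\growthbook.js')) // true (Windows path)

// Metacharacters in the module path are escaped instead of left live; the old
// two-replace version would have thrown on "c++" (nothing to repeat).
const plus = new RegExp(`${escapeForResolvedPathRegex('vendor/c++')}\\.(ts|js)$`)
console.log(plus.test('/repo/vendor/c++.ts')) // true
```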

View File

@@ -124,19 +124,15 @@ function printSummary(profile: ProviderProfile, env: NodeJS.ProcessEnv): void {
console.log(`Launching profile: ${profile}`)
if (profile === 'gemini') {
console.log(`GEMINI_MODEL=${env.GEMINI_MODEL}`)
console.log(`GEMINI_API_KEY_SET=${Boolean(env.GEMINI_API_KEY)}`)
} else if (profile === 'codex') {
console.log(`OPENAI_BASE_URL=${env.OPENAI_BASE_URL}`)
console.log(`OPENAI_MODEL=${env.OPENAI_MODEL}`)
console.log(`CODEX_API_KEY_SET=${Boolean(resolveCodexApiCredentials(env).apiKey)}`)
} else if (profile === 'atomic-chat') {
console.log(`OPENAI_BASE_URL=${env.OPENAI_BASE_URL}`)
console.log(`OPENAI_MODEL=${env.OPENAI_MODEL}`)
console.log('OPENAI_API_KEY_SET=false (local provider, no key required)')
} else {
console.log(`OPENAI_BASE_URL=${env.OPENAI_BASE_URL}`)
console.log(`OPENAI_MODEL=${env.OPENAI_MODEL}`)
console.log(`OPENAI_API_KEY_SET=${Boolean(env.OPENAI_API_KEY)}`)
}
}

View File

@@ -430,6 +430,7 @@ function writeJsonReport(
options: CliOptions,
results: CheckResult[],
): void {
const envSummary = serializeSafeEnvSummary()
const payload = {
timestamp: new Date().toISOString(),
cwd: process.cwd(),
@@ -438,12 +439,24 @@ function writeJsonReport(
passed: results.filter(result => result.ok).length,
failed: results.filter(result => !result.ok).length,
},
env: serializeSafeEnvSummary(),
env: envSummary,
results,
}
if (options.json) {
console.log(JSON.stringify(payload, null, 2))
console.log(
JSON.stringify(
{
timestamp: payload.timestamp,
cwd: payload.cwd,
summary: payload.summary,
env: '[redacted in console JSON output; use --out-file for the full report]',
results: payload.results,
},
null,
2,
),
)
}
if (options.outFile) {

View File

@@ -228,9 +228,14 @@ class SmartRouter:
return min(available, key=lambda p: p.score(self.strategy))
def get_model_for_provider(
self, provider: Provider, claude_model: str
self,
provider: Provider,
claude_model: str,
is_large_request: bool = False,
) -> str:
"""Map a Claude model name to the provider's actual model."""
if is_large_request:
return provider.big_model
is_large = any(
keyword in claude_model.lower()
for keyword in ["opus", "sonnet", "large", "big"]
@@ -289,7 +294,11 @@ class SmartRouter:
)
provider = min(available, key=lambda p: p.score(self.strategy))
model = self.get_model_for_provider(provider, claude_model)
model = self.get_model_for_provider(
provider,
claude_model,
is_large_request=large,
)
logger.debug(
f"SmartRouter: routing to {provider.name}/{model} "

View File

@@ -1,4 +1,4 @@
import { randomBytes } from 'crypto'
import { randomInt } from 'crypto'
import type { AppState } from './state/AppState.js'
import type { AgentId } from './types/ids.js'
import { getTaskOutputPath } from './utils/task/diskOutput.js'
@@ -97,10 +97,9 @@ const TASK_ID_ALPHABET = '0123456789abcdefghijklmnopqrstuvwxyz'
export function generateTaskId(type: TaskType): string {
const prefix = getTaskIdPrefix(type)
const bytes = randomBytes(8)
let id = prefix
for (let i = 0; i < 8; i++) {
id += TASK_ID_ALPHABET[bytes[i]! % TASK_ID_ALPHABET.length]
id += TASK_ID_ALPHABET[randomInt(TASK_ID_ALPHABET.length)]!
}
return id
}
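The switch from `randomBytes` with `% 36` to `randomInt(36)` removes modulo bias: 256 = 7 × 36 + 4, so under the old `byte % 36` scheme the first four alphabet characters (`0` through `3`) each appeared with probability 8/256 while the other 32 got 7/256. `crypto.randomInt` rejection-samples internally, so every character is equally likely. A quick illustration (a sketch, not part of the diff):

```ts
import { randomBytes, randomInt } from 'crypto'

const N = 1_000_000
// Old scheme: 8 of 256 byte values map to character 0, but only 7 map to character 35.
const biased = Array.from(randomBytes(N), b => b % 36)
// New scheme: uniform over 0..35.
const uniform = Array.from({ length: N }, () => randomInt(36))

const freq = (xs: number[], v: number) => xs.filter(x => x === v).length / xs.length
console.log(freq(biased, 0).toFixed(4))  // ≈ 0.0313 (8/256)
console.log(freq(uniform, 0).toFixed(4)) // ≈ 0.0278 (1/36)
```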

View File

@@ -0,0 +1,85 @@
import { expect, test } from 'bun:test'
import { buildChildEnv } from './sessionRunner.ts'
// Finding #42-1: sessionRunner spreads the full parent process.env into the
// child process environment, leaking API keys, DB credentials, proxy secrets.
// Only CLAUDE_CODE_OAUTH_TOKEN was stripped. Fix: explicit allowlist.
const baseOpts = {
accessToken: 'test-access-token',
useCcrV2: false as const,
}
test('buildChildEnv does not leak ANTHROPIC_API_KEY to child', () => {
const parentEnv = {
PATH: '/usr/bin',
HOME: '/home/user',
ANTHROPIC_API_KEY: 'sk-ant-secret-key',
CLAUDE_CODE_SESSION_ACCESS_TOKEN: 'will-be-overwritten',
}
const env = buildChildEnv(parentEnv, baseOpts)
expect(env.ANTHROPIC_API_KEY).toBeUndefined()
})
test('buildChildEnv does not leak OPENAI_API_KEY to child', () => {
const parentEnv = {
PATH: '/usr/bin',
HOME: '/home/user',
OPENAI_API_KEY: 'sk-openai-secret',
}
const env = buildChildEnv(parentEnv, baseOpts)
expect(env.OPENAI_API_KEY).toBeUndefined()
})
test('buildChildEnv does not leak arbitrary secrets to child', () => {
const parentEnv = {
PATH: '/usr/bin',
HOME: '/home/user',
DB_PASSWORD: 'super-secret',
AWS_SECRET_ACCESS_KEY: 'aws-secret',
GITHUB_TOKEN: 'ghp_token',
}
const env = buildChildEnv(parentEnv, baseOpts)
expect(env.DB_PASSWORD).toBeUndefined()
expect(env.AWS_SECRET_ACCESS_KEY).toBeUndefined()
expect(env.GITHUB_TOKEN).toBeUndefined()
})
test('buildChildEnv includes PATH and HOME from parent', () => {
const parentEnv = {
PATH: '/usr/bin:/usr/local/bin',
HOME: '/home/user',
ANTHROPIC_API_KEY: 'sk-secret',
}
const env = buildChildEnv(parentEnv, baseOpts)
expect(env.PATH).toBe('/usr/bin:/usr/local/bin')
expect(env.HOME).toBe('/home/user')
})
test('buildChildEnv sets CLAUDE_CODE_SESSION_ACCESS_TOKEN from opts', () => {
const env = buildChildEnv({ PATH: '/usr/bin' }, { ...baseOpts, accessToken: 'my-token' })
expect(env.CLAUDE_CODE_SESSION_ACCESS_TOKEN).toBe('my-token')
})
test('buildChildEnv sets CLAUDE_CODE_ENVIRONMENT_KIND to bridge', () => {
const env = buildChildEnv({ PATH: '/usr/bin' }, baseOpts)
expect(env.CLAUDE_CODE_ENVIRONMENT_KIND).toBe('bridge')
})
test('buildChildEnv does not pass CLAUDE_CODE_OAUTH_TOKEN to child', () => {
const parentEnv = {
PATH: '/usr/bin',
CLAUDE_CODE_OAUTH_TOKEN: 'oauth-token-to-strip',
}
const env = buildChildEnv(parentEnv, baseOpts)
expect(env.CLAUDE_CODE_OAUTH_TOKEN).toBeUndefined()
})
test('buildChildEnv sets CCR v2 vars when useCcrV2 is true', () => {
const env = buildChildEnv(
{ PATH: '/usr/bin' },
{ accessToken: 'tok', useCcrV2: true, workerEpoch: 42 },
)
expect(env.CLAUDE_CODE_USE_CCR_V2).toBe('1')
expect(env.CLAUDE_CODE_WORKER_EPOCH).toBe('42')
})

View File

@@ -16,6 +16,69 @@ import type {
const MAX_ACTIVITIES = 10
const MAX_STDERR_LINES = 10
/**
* Safe OS and runtime variables that the child process needs to function.
* Everything else (API keys, DB passwords, proxy secrets, etc.) must not
* be inherited — the child authenticates via CLAUDE_CODE_SESSION_ACCESS_TOKEN.
*/
const CHILD_ENV_ALLOWLIST = new Set([
// System / shell
'PATH', 'HOME', 'USERPROFILE', 'HOMEPATH', 'HOMEDRIVE',
'USERNAME', 'USER', 'LOGNAME',
'TEMP', 'TMP', 'TMPDIR',
'SYSTEMROOT', 'SYSTEMDRIVE', 'COMSPEC', 'WINDIR',
'LANG', 'LC_ALL', 'LC_CTYPE',
// Node.js runtime
'NODE_OPTIONS', 'NODE_PATH', 'NODE_ENV',
// OpenClaude session / bridge (non-secret)
'CLAUDE_CODE_ENVIRONMENT_KIND',
'CLAUDE_CODE_FORCE_SANDBOX',
'CLAUDE_CODE_BUBBLEWRAP',
'CLAUDE_CODE_ENTRYPOINT',
'CLAUDE_CODE_COORDINATOR_MODE',
'CLAUDE_CODE_PERMISSIONS_VERSION',
'CLAUDE_CODE_PERMISSIONS_SETTING',
// Display / terminal
'TERM', 'COLORTERM', 'FORCE_COLOR', 'NO_COLOR',
])
type BuildChildEnvOpts = {
accessToken: string
useCcrV2: boolean
workerEpoch?: number
sandbox?: boolean
}
/**
* Build the environment for the child CC process from an explicit allowlist.
* This prevents the parent's API keys and credentials from leaking to the child.
*/
export function buildChildEnv(
parentEnv: NodeJS.ProcessEnv,
opts: BuildChildEnvOpts,
): NodeJS.ProcessEnv {
// Start from allowlisted parent vars only
const env: NodeJS.ProcessEnv = {}
for (const key of Object.keys(parentEnv)) {
if (CHILD_ENV_ALLOWLIST.has(key)) {
env[key] = parentEnv[key]
}
}
// Bridge-required overrides
env.CLAUDE_CODE_OAUTH_TOKEN = undefined // explicitly strip
env.CLAUDE_CODE_ENVIRONMENT_KIND = 'bridge'
if (opts.sandbox) env.CLAUDE_CODE_FORCE_SANDBOX = '1'
env.CLAUDE_CODE_SESSION_ACCESS_TOKEN = opts.accessToken
env.CLAUDE_CODE_POST_FOR_SESSION_INGRESS_V2 = '1'
if (opts.useCcrV2) {
env.CLAUDE_CODE_USE_CCR_V2 = '1'
env.CLAUDE_CODE_WORKER_EPOCH = String(opts.workerEpoch)
}
return env
}
/**
* Sanitize a session ID for use in file names.
* Strips any characters that could cause path traversal (e.g. `../`, `/`)
@@ -303,24 +366,12 @@ export function createSessionSpawner(deps: SessionSpawnerDeps): SessionSpawner {
: []),
]
const env: NodeJS.ProcessEnv = {
...deps.env,
// Strip the bridge's OAuth token so the child CC process uses
// the session access token for inference instead.
CLAUDE_CODE_OAUTH_TOKEN: undefined,
CLAUDE_CODE_ENVIRONMENT_KIND: 'bridge',
...(deps.sandbox && { CLAUDE_CODE_FORCE_SANDBOX: '1' }),
CLAUDE_CODE_SESSION_ACCESS_TOKEN: opts.accessToken,
// v1: HybridTransport (WS reads + POST writes) to Session-Ingress.
// Harmless in v2 mode — transportUtils checks CLAUDE_CODE_USE_CCR_V2 first.
CLAUDE_CODE_POST_FOR_SESSION_INGRESS_V2: '1',
// v2: SSETransport + CCRClient to CCR's /v1/code/sessions/* endpoints.
// Same env vars environment-manager sets in the container path.
...(opts.useCcrV2 && {
CLAUDE_CODE_USE_CCR_V2: '1',
CLAUDE_CODE_WORKER_EPOCH: String(opts.workerEpoch),
}),
}
const env = buildChildEnv(deps.env, {
accessToken: opts.accessToken,
useCcrV2: opts.useCcrV2,
workerEpoch: opts.workerEpoch,
sandbox: deps.sandbox,
})
deps.onDebug(
`[bridge:session] Spawning sessionId=${opts.sessionId} sdkUrl=${opts.sdkUrl} accessToken=${opts.accessToken ? 'present' : 'MISSING'}`,

View File

@@ -0,0 +1,36 @@
import { expect, test } from 'bun:test'
import { buildSdkUrl } from './workSecret.ts'
// Finding #42-5: buildSdkUrl uses string.includes() on the full URL,
// so a remote URL containing "localhost" in its path gets ws:// (unencrypted).
test('buildSdkUrl uses wss for remote URL that contains localhost in path', () => {
const url = buildSdkUrl('https://remote.example.com/proxy/localhost/api', 'sess-1')
expect(url).toContain('wss://')
expect(url).not.toContain('ws://')
})
test('buildSdkUrl uses ws for actual localhost hostname', () => {
const url = buildSdkUrl('http://localhost:8080', 'sess-1')
expect(url).toContain('ws://')
})
test('buildSdkUrl uses ws for 127.0.0.1 hostname', () => {
const url = buildSdkUrl('http://127.0.0.1:3000', 'sess-1')
expect(url).toContain('ws://')
})
test('buildSdkUrl uses wss for regular remote hostname', () => {
const url = buildSdkUrl('https://api.example.com', 'sess-1')
expect(url).toContain('wss://')
})
test('buildSdkUrl uses v2 path for localhost', () => {
const url = buildSdkUrl('http://localhost:8080', 'sess-abc')
expect(url).toContain('/v2/session_ingress/ws/sess-abc')
})
test('buildSdkUrl uses v1 path for remote', () => {
const url = buildSdkUrl('https://api.example.com', 'sess-abc')
expect(url).toContain('/v1/session_ingress/ws/sess-abc')
})

View File

@@ -39,8 +39,8 @@ export function decodeWorkSecret(secret: string): WorkSecret {
* and /v1/ for production (Envoy rewrites /v1/ → /v2/).
*/
export function buildSdkUrl(apiBaseUrl: string, sessionId: string): string {
const isLocalhost =
apiBaseUrl.includes('localhost') || apiBaseUrl.includes('127.0.0.1')
const hostname = new URL(apiBaseUrl).hostname
const isLocalhost = hostname === 'localhost' || hostname === '127.0.0.1'
const protocol = isLocalhost ? 'ws' : 'wss'
const version = isLocalhost ? 'v2' : 'v1'
const host = apiBaseUrl.replace(/^https?:\/\//, '').replace(/\/+$/, '')

View File

@@ -11,7 +11,7 @@ import { MCPServerDesktopImportDialog } from '../../components/MCPServerDesktopI
import { render } from '../../ink.js';
import { KeybindingSetup } from '../../keybindings/KeybindingProviderSetup.js';
import { type AnalyticsMetadata_I_VERIFIED_THIS_IS_NOT_CODE_OR_FILEPATHS, logEvent } from '../../services/analytics/index.js';
import { clearMcpClientConfig, clearServerTokensFromLocalStorage, getMcpClientConfig, readClientSecret, saveMcpClientSecret } from '../../services/mcp/auth.js';
import { clearMcpClientConfig, clearServerTokensFromLocalStorage, readClientSecret, saveMcpClientSecret } from '../../services/mcp/auth.js';
import { doctorAllServers, doctorServer, type McpDoctorReport, type McpDoctorScopeFilter } from '../../services/mcp/doctor.js';
import { connectToServer, getMcpServerConnectionBatchSize } from '../../services/mcp/client.js';
import { addMcpConfig, getAllMcpConfigs, getMcpConfigByName, getMcpConfigsByScope, removeMcpConfig } from '../../services/mcp/config.js';
@@ -323,9 +323,7 @@ export async function mcpGetHandler(name: string): Promise<void> {
if (server.oauth?.clientId || server.oauth?.callbackPort) {
const parts: string[] = [];
if (server.oauth.clientId) {
parts.push('client_id configured');
const clientConfig = getMcpClientConfig(name, server);
if (clientConfig?.clientSecret) parts.push('client_secret configured');
parts.push('oauth client configured');
}
if (server.oauth.callbackPort) parts.push(`callback_port ${server.oauth.callbackPort}`);
// biome-ignore lint/suspicious/noConsole:: intentional console output
@@ -347,9 +345,7 @@ export async function mcpGetHandler(name: string): Promise<void> {
if (server.oauth?.clientId || server.oauth?.callbackPort) {
const parts: string[] = [];
if (server.oauth.clientId) {
parts.push('client_id configured');
const clientConfig = getMcpClientConfig(name, server);
if (clientConfig?.clientSecret) parts.push('client_secret configured');
parts.push('oauth client configured');
}
if (server.oauth.callbackPort) parts.push(`callback_port ${server.oauth.callbackPort}`);
// biome-ignore lint/suspicious/noConsole:: intentional console output

View File

@@ -1,4 +1,5 @@
import chalk from 'chalk'
import { getAPIProvider } from 'src/utils/model/providers.js'
import { logEvent } from 'src/services/analytics/index.js'
import {
getLatestVersion,
@@ -28,6 +29,19 @@ import { gte } from 'src/utils/semver.js'
import { getInitialSettings } from 'src/utils/settings/settings.js'
export async function update() {
// Block updates for third-party providers. The update mechanism downloads
// from Anthropic's distribution bucket, which would silently replace the
// OpenClaude build (with the OpenAI shim) with the upstream Claude Code
// binary (without it).
if (getAPIProvider() !== 'firstParty') {
writeToStdout(
chalk.yellow('Auto-update is not available for third-party provider builds.\n') +
'To update, pull the latest source from the repository and rebuild:\n' +
' git pull && bun install && bun run build\n',
)
return
}
logEvent('tengu_update_check', {})
writeToStdout(`Current version: ${MACRO.VERSION}\n`)

View File

@@ -21,6 +21,7 @@ import { ErrorStep } from './ErrorStep.js';
import { ExistingWorkflowStep } from './ExistingWorkflowStep.js';
import { InstallAppStep } from './InstallAppStep.js';
import { OAuthFlowStep } from './OAuthFlowStep.js';
import { extractGitHubRepoSlug } from './repoSlug.js';
import { SuccessStep } from './SuccessStep.js';
import { setupGitHubActions } from './setupGitHubActions.js';
import type { State, Warning, Workflow } from './types.js';
@@ -282,15 +283,15 @@ function InstallGitHubApp(props: {
}
const repoWarnings: Warning[] = [];
if (repoName_1.includes('github.com')) {
const match = repoName_1.match(/github\.com[:/]([^/]+\/[^/]+)(\.git)?$/);
if (!match) {
const slug = extractGitHubRepoSlug(repoName_1);
if (!slug) {
repoWarnings.push({
title: 'Invalid GitHub URL format',
message: 'The repository URL format appears to be invalid.',
instructions: ['Use format: owner/repo or https://github.com/owner/repo', 'Example: anthropics/claude-cli']
});
} else {
repoName_1 = match[1]?.replace(/\.git$/, '') || '';
repoName_1 = slug;
}
}
if (!repoName_1.includes('/')) {

View File

@@ -0,0 +1,36 @@
import assert from 'node:assert/strict'
import test from 'node:test'
import { extractGitHubRepoSlug } from './repoSlug.ts'
test('keeps owner/repo input as-is', () => {
assert.equal(extractGitHubRepoSlug('Gitlawb/openclaude'), 'Gitlawb/openclaude')
})
test('extracts slug from https GitHub URLs', () => {
assert.equal(
extractGitHubRepoSlug('https://github.com/Gitlawb/openclaude'),
'Gitlawb/openclaude',
)
assert.equal(
extractGitHubRepoSlug('https://www.github.com/Gitlawb/openclaude.git'),
'Gitlawb/openclaude',
)
})
test('extracts slug from ssh GitHub URLs', () => {
assert.equal(
extractGitHubRepoSlug('git@github.com:Gitlawb/openclaude.git'),
'Gitlawb/openclaude',
)
assert.equal(
extractGitHubRepoSlug('ssh://git@github.com/Gitlawb/openclaude'),
'Gitlawb/openclaude',
)
})
test('rejects malformed or non-GitHub URLs', () => {
assert.equal(extractGitHubRepoSlug('https://gitlab.com/Gitlawb/openclaude'), null)
assert.equal(extractGitHubRepoSlug('https://github.com/Gitlawb'), null)
assert.equal(extractGitHubRepoSlug('not actually github.com/Gitlawb/openclaude'), null)
})

View File

@@ -0,0 +1,38 @@
export function extractGitHubRepoSlug(value: string): string | null {
const trimmed = value.trim()
if (/^[a-z][a-z0-9+.-]*:\/\//i.test(trimmed) && !trimmed.includes('github.com')) {
return null
}
if (!trimmed.includes('github.com')) {
return trimmed
}
const sshMatch = trimmed.match(
/^(?:git@|ssh:\/\/git@)(?:www\.)?github\.com[:/](?<owner>[^/:\s]+)\/(?<repo>[^/\s]+?)(?:\.git)?\/?$/i,
)
if (sshMatch?.groups?.owner && sshMatch.groups.repo) {
return `${sshMatch.groups.owner}/${sshMatch.groups.repo}`
}
try {
const parsed = new URL(trimmed)
const hostname = parsed.hostname.toLowerCase()
if (hostname !== 'github.com' && hostname !== 'www.github.com') {
return null
}
const segments = parsed.pathname
.replace(/^\/+|\/+$/g, '')
.split('/')
.filter(Boolean)
if (segments.length < 2) {
return null
}
return `${segments[0]}/${segments[1]}`.replace(/\.git$/i, '')
} catch {
return null
}
}

View File

@@ -99,7 +99,7 @@ export function Onboarding({
// Add API key step if needed
// On homespace, ANTHROPIC_API_KEY is preserved in process.env for child
// processes but ignored by Claude Code itself (see auth.ts).
if (!process.env.ANTHROPIC_API_KEY || isRunningOnHomespace()) {
if (!process.env.ANTHROPIC_API_KEY || isRunningOnHomespace() || !isAnthropicAuthEnabled()) {
return '';
}
const customApiKeyTruncated = normalizeApiKeyForConfig(process.env.ANTHROPIC_API_KEY);

View File

@@ -0,0 +1,211 @@
import * as React from 'react'
import { useEffect, useState } from 'react'
import { useTerminalSize } from '../../hooks/useTerminalSize.js'
import { Box, Text } from '../../ink.js'
import { useKeybinding } from '../../keybindings/useKeybinding.js'
import {
buildCodexUsageRows,
fetchCodexUsage,
formatCodexPlanType,
type CodexUsageData,
type CodexUsageRow,
} from '../../services/api/codexUsage.js'
import { formatResetText } from '../../utils/format.js'
import { logError } from '../../utils/log.js'
import { ConfigurableShortcutHint } from '../ConfigurableShortcutHint.js'
import { Byline } from '../design-system/Byline.js'
import { ProgressBar } from '../design-system/ProgressBar.js'
type CodexUsageLimitBarProps = {
label: string
usedPercent: number
resetsAt?: string
maxWidth: number
}
function CodexUsageLimitBar({
label,
usedPercent,
resetsAt,
maxWidth,
}: CodexUsageLimitBarProps): React.ReactNode {
const normalizedUsedPercent = Math.max(0, Math.min(100, usedPercent))
const usedText = `${Math.floor(normalizedUsedPercent)}% used`
const resetText = resetsAt
? `Resets ${formatResetText(resetsAt, true, true)}`
: undefined
if (maxWidth >= 62) {
return (
<Box flexDirection="column">
<Text bold>{label}</Text>
<Box flexDirection="row" gap={1}>
<ProgressBar
ratio={normalizedUsedPercent / 100}
width={50}
fillColor="rate_limit_fill"
emptyColor="rate_limit_empty"
/>
<Text>{usedText}</Text>
</Box>
{resetText ? <Text dimColor>{resetText}</Text> : null}
</Box>
)
}
return (
<Box flexDirection="column">
<Text>
<Text bold>{label}</Text>
{resetText ? (
<>
<Text> </Text>
<Text dimColor>· {resetText}</Text>
</>
) : null}
</Text>
<ProgressBar
ratio={normalizedUsedPercent / 100}
width={maxWidth}
fillColor="rate_limit_fill"
emptyColor="rate_limit_empty"
/>
<Text>{usedText}</Text>
</Box>
)
}
function CodexUsageTextRow({
label,
value,
}: Extract<CodexUsageRow, { kind: 'text' }>): React.ReactNode {
if (!value) {
return <Text bold>{label}</Text>
}
return (
<Text>
<Text bold>{label}</Text>
<Text dimColor> · {value}</Text>
</Text>
)
}
export function CodexUsage(): React.ReactNode {
const [usage, setUsage] = useState<CodexUsageData | null>(null)
const [error, setError] = useState<string | null>(null)
const [isLoading, setIsLoading] = useState(true)
const { columns } = useTerminalSize()
const availableWidth = columns - 2
const maxWidth = Math.min(availableWidth, 80)
const loadUsage = React.useCallback(async () => {
setIsLoading(true)
setError(null)
try {
setUsage(await fetchCodexUsage())
} catch (err) {
logError(err as Error)
setError(err instanceof Error ? err.message : 'Failed to load Codex usage')
} finally {
setIsLoading(false)
}
}, [])
useEffect(() => {
void loadUsage()
}, [loadUsage])
useKeybinding(
'settings:retry',
() => {
void loadUsage()
},
{
context: 'Settings',
isActive: !!error && !isLoading,
},
)
if (error) {
return (
<Box flexDirection="column" gap={1}>
<Text color="error">Error: {error}</Text>
<Text dimColor>
<Byline>
<ConfigurableShortcutHint
action="settings:retry"
context="Settings"
fallback="r"
description="retry"
/>
<ConfigurableShortcutHint
action="confirm:no"
context="Settings"
fallback="Esc"
description="cancel"
/>
</Byline>
</Text>
</Box>
)
}
if (!usage) {
return (
<Box flexDirection="column" gap={1}>
<Text dimColor>Loading Codex usage data</Text>
<Text dimColor>
<ConfigurableShortcutHint
action="confirm:no"
context="Settings"
fallback="Esc"
description="cancel"
/>
</Text>
</Box>
)
}
const rows = buildCodexUsageRows(usage.snapshots)
const planType = formatCodexPlanType(usage.planType)
return (
<Box flexDirection="column" gap={1} width="100%">
{planType ? <Text dimColor>Plan: {planType}</Text> : null}
{rows.length === 0 ? (
<Text dimColor>Codex usage data is not available for this account.</Text>
) : null}
{rows.map((row, index) =>
row.kind === 'window' ? (
<CodexUsageLimitBar
key={`${row.label}-${index}`}
label={row.label}
usedPercent={row.usedPercent}
resetsAt={row.resetsAt}
maxWidth={maxWidth}
/>
) : (
<CodexUsageTextRow
key={`${row.label}-${index}`}
label={row.label}
value={row.value}
/>
),
)}
<Text dimColor>
<ConfigurableShortcutHint
action="confirm:no"
context="Settings"
fallback="Esc"
description="cancel"
/>
</Text>
</Box>
)
}

View File

@@ -22,7 +22,7 @@ function buildPrimarySection(): Property[] {
const nameValue = customTitle ?? <Text dimColor>/rename to add a name</Text>;
return [{
label: 'Version',
value: MACRO.VERSION
value: MACRO.DISPLAY_VERSION ?? MACRO.VERSION
}, {
label: 'Session name',
value: nameValue

View File

@@ -10,11 +10,13 @@ import { useKeybinding } from '../../keybindings/useKeybinding.js';
import { type ExtraUsage, fetchUtilization, type RateLimit, type Utilization } from '../../services/api/usage.js';
import { formatResetText } from '../../utils/format.js';
import { logError } from '../../utils/log.js';
import { getAPIProvider } from '../../utils/model/providers.js';
import { jsonStringify } from '../../utils/slowOperations.js';
import { ConfigurableShortcutHint } from '../ConfigurableShortcutHint.js';
import { Byline } from '../design-system/Byline.js';
import { ProgressBar } from '../design-system/ProgressBar.js';
import { isEligibleForOverageCreditGrant, OverageCreditUpsell } from '../LogoV2/OverageCreditUpsell.js';
import { CodexUsage } from './CodexUsage.js';
type LimitBarProps = {
title: string;
limit: RateLimit;
@@ -171,7 +173,7 @@ function LimitBar(t0) {
return t8;
}
}
export function Usage(): React.ReactNode {
function AnthropicUsage(): React.ReactNode {
const [utilization, setUtilization] = useState<Utilization | null>(null);
const [error, setError] = useState<string | null>(null);
const [isLoading, setIsLoading] = useState(true);
@@ -263,6 +265,12 @@ export function Usage(): React.ReactNode {
</Text>
</Box>;
}
export function Usage(): React.ReactNode {
if (getAPIProvider() === 'codex') {
return <CodexUsage />;
}
return <AnthropicUsage />;
}
type ExtraUsageSectionProps = {
extraUsage: ExtraUsage;
maxWidth: number;

View File

@@ -199,7 +199,7 @@ export function Tabs(t0) {
const t12 = 0;
const t13 = true;
const t14 = modalScrollRef ? 0 : undefined;
const t15 = !hidden && <Box flexDirection="row" gap={1} flexShrink={modalScrollRef ? 0 : undefined}>{title !== undefined && <Text bold={true} color={color}>{title}</Text>}{tabs.map((t16, i) => {
const t15 = !hidden && <Box key={`${selectedTabIndex}-${headerFocused ? "focused" : "blurred"}`} flexDirection="row" gap={1} flexShrink={modalScrollRef ? 0 : undefined}>{title !== undefined && <Text bold={true} color={color}>{title}</Text>}{tabs.map((t16, i) => {
const [id, title_0] = t16;
const isCurrent = selectedTabIndex === i;
const hasColorCursor = color && isCurrent && headerFocused;

View File

@@ -1,8 +1,7 @@
import { c as _c } from "react-compiler-runtime";
import figures from 'figures';
import React, { useCallback, useState } from 'react';
import type { KeyboardEvent } from '../../../ink/events/keyboard-event.js';
import { Box, Text } from '../../../ink.js';
import React, { useState } from 'react';
import { Box, Text, useInput } from '../../../ink.js';
import { useAppState } from '../../../state/AppState.js';
import type { Question, QuestionOption } from '../../../tools/AskUserQuestionTool/AskUserQuestionTool.js';
import type { PastedContent } from '../../../utils/config.js';
@@ -95,6 +94,7 @@ export function QuestionView(t0) {
let t4;
if ($[3] === Symbol.for("react.memo_cache_sentinel")) {
t4 = () => {
setFooterIndex(0);
setIsFooterFocused(true);
};
$[3] = t4;
@@ -112,14 +112,15 @@ export function QuestionView(t0) {
t5 = $[4];
}
const handleUpFromFooter = t5;
let t6;
if ($[5] !== footerIndex || $[6] !== isFooterFocused || $[7] !== isInPlanMode || $[8] !== onCancel || $[9] !== onFinishPlanInterview || $[10] !== onRespondToClaude) {
t6 = e => {
useInput(
(input, key, event) => {
if (!isFooterFocused) {
return;
}
if (e.key === "up" || e.ctrl && e.key === "p") {
e.preventDefault();
if (key.upArrow || (key.ctrl && input === 'p')) {
event.stopImmediatePropagation();
if (footerIndex === 0) {
handleUpFromFooter();
} else {
@@ -127,15 +128,17 @@ export function QuestionView(t0) {
}
return;
}
if (e.key === "down" || e.ctrl && e.key === "n") {
e.preventDefault();
if (key.downArrow || (key.ctrl && input === 'n')) {
event.stopImmediatePropagation();
if (isInPlanMode && footerIndex === 0) {
setFooterIndex(1);
}
return;
}
if (e.key === "return") {
e.preventDefault();
if (key.return) {
event.stopImmediatePropagation();
if (footerIndex === 0) {
onRespondToClaude();
} else {
@@ -143,22 +146,15 @@ export function QuestionView(t0) {
}
return;
}
if (e.key === "escape") {
e.preventDefault();
if (key.escape) {
event.stopImmediatePropagation();
onCancel();
}
};
$[5] = footerIndex;
$[6] = isFooterFocused;
$[7] = isInPlanMode;
$[8] = onCancel;
$[9] = onFinishPlanInterview;
$[10] = onRespondToClaude;
$[11] = t6;
} else {
t6 = $[11];
}
const handleKeyDown = t6;
},
{ isActive: isFooterFocused },
);
let handleOpenEditor;
let questionText;
let t7;
@@ -434,9 +430,8 @@ export function QuestionView(t0) {
t25 = $[109];
}
let t26;
if ($[110] !== handleKeyDown || $[111] !== t25 || $[112] !== t8) {
t26 = <Box flexDirection="column" marginTop={0} tabIndex={0} autoFocus={true} onKeyDown={handleKeyDown}>{t8}{t9}{t25}</Box>;
$[110] = handleKeyDown;
if ($[111] !== t25 || $[112] !== t8) {
t26 = <Box flexDirection="column" marginTop={0} tabIndex={0} autoFocus={true}>{t8}{t9}{t25}</Box>;
$[111] = t25;
$[112] = t8;
$[113] = t26;

View File

@@ -0,0 +1,48 @@
import { afterEach, expect, test } from 'bun:test'
import { getSystemPrompt, DEFAULT_AGENT_PROMPT } from './prompts.js'
import { CLI_SYSPROMPT_PREFIXES, getCLISyspromptPrefix } from './system.js'
import { GENERAL_PURPOSE_AGENT } from '../tools/AgentTool/built-in/generalPurposeAgent.js'
import { EXPLORE_AGENT } from '../tools/AgentTool/built-in/exploreAgent.js'
const originalSimpleEnv = process.env.CLAUDE_CODE_SIMPLE
afterEach(() => {
process.env.CLAUDE_CODE_SIMPLE = originalSimpleEnv
})
test('CLI identity prefixes describe OpenClaude instead of Claude Code', () => {
expect(getCLISyspromptPrefix()).toContain('OpenClaude')
expect(getCLISyspromptPrefix()).not.toContain("Anthropic's official CLI for Claude")
for (const prefix of CLI_SYSPROMPT_PREFIXES) {
expect(prefix).toContain('OpenClaude')
expect(prefix).not.toContain("Anthropic's official CLI for Claude")
}
})
test('simple mode identity describes OpenClaude instead of Claude Code', async () => {
process.env.CLAUDE_CODE_SIMPLE = '1'
const prompt = await getSystemPrompt([], 'gpt-4o')
expect(prompt[0]).toContain('OpenClaude')
expect(prompt[0]).not.toContain("Anthropic's official CLI for Claude")
})
test('built-in agent prompts describe OpenClaude instead of Claude Code', () => {
expect(DEFAULT_AGENT_PROMPT).toContain('OpenClaude')
expect(DEFAULT_AGENT_PROMPT).not.toContain("Anthropic's official CLI for Claude")
const generalPrompt = GENERAL_PURPOSE_AGENT.getSystemPrompt({
toolUseContext: { options: {} as never },
})
expect(generalPrompt).toContain('OpenClaude')
expect(generalPrompt).not.toContain("Anthropic's official CLI for Claude")
const explorePrompt = EXPLORE_AGENT.getSystemPrompt({
toolUseContext: { options: {} as never },
})
expect(explorePrompt).toContain('OpenClaude')
expect(explorePrompt).not.toContain("Anthropic's official CLI for Claude")
})

View File

@@ -449,7 +449,7 @@ export async function getSystemPrompt(
): Promise<string[]> {
if (isEnvTruthy(process.env.CLAUDE_CODE_SIMPLE)) {
return [
`You are Claude Code, Anthropic's official CLI for Claude.\n\nCWD: ${getCwd()}\nDate: ${getSessionStartDate()}`,
`You are OpenClaude, an open-source fork of Claude Code.\n\nCWD: ${getCwd()}\nDate: ${getSessionStartDate()}`,
]
}
@@ -755,7 +755,7 @@ export function getUnameSR(): string {
return `${osType()} ${osRelease()}`
}
export const DEFAULT_AGENT_PROMPT = `You are an agent for Claude Code, Anthropic's official CLI for Claude. Given the user's message, you should use the tools available to complete the task. Complete the task fully—don't gold-plate, but don't leave it half-done. When you complete the task, respond with a concise report covering what was done and any key findings — the caller will relay this to the user, so it only needs the essentials.`
export const DEFAULT_AGENT_PROMPT = `You are an agent for OpenClaude, an open-source fork of Claude Code. Given the user's message, you should use the tools available to complete the task. Complete the task fully—don't gold-plate, but don't leave it half-done. When you complete the task, respond with a concise report covering what was done and any key findings — the caller will relay this to the user, so it only needs the essentials.`
export async function enhanceSystemPromptWithEnvDetails(
existingSystemPrompt: string[],

View File

@@ -7,9 +7,12 @@ import { isEnvDefinedFalsy } from '../utils/envUtils.js'
import { getAPIProvider } from '../utils/model/providers.js'
import { getWorkload } from '../utils/workloadContext.js'
const DEFAULT_PREFIX = `You are Claude Code, Anthropic's official CLI for Claude.`
const AGENT_SDK_CLAUDE_CODE_PRESET_PREFIX = `You are Claude Code, Anthropic's official CLI for Claude, running within the Claude Agent SDK.`
const AGENT_SDK_PREFIX = `You are a Claude agent, built on Anthropic's Claude Agent SDK.`
const DEFAULT_PREFIX =
`You are OpenClaude, an open-source fork of Claude Code.`
const AGENT_SDK_CLAUDE_CODE_PRESET_PREFIX =
`You are OpenClaude, an open-source fork of Claude Code, running within the Claude Agent SDK.`
const AGENT_SDK_PREFIX =
`You are a Claude agent running in OpenClaude, built on the Claude Agent SDK.`
const CLI_SYSPROMPT_PREFIX_VALUES = [
DEFAULT_PREFIX,

View File

@@ -1,6 +1,7 @@
import type * as React from 'react';
import { useCallback, useEffect } from 'react';
import { useAppStateStore, useSetAppState } from 'src/state/AppState.js';
import { logError } from '../utils/log.js';
import type { Theme } from '../utils/theme.js';
type Priority = 'low' | 'medium' | 'high' | 'immediate';
type BaseNotification = {
@@ -44,6 +45,7 @@ export function useNotifications(): {
// Process queue when current notification finishes or queue changes
const processQueue = useCallback(() => {
try {
setAppState(prev => {
const next = getNext(prev.notifications.queue);
if (prev.notifications.current !== null || !next) {
@@ -74,8 +76,12 @@ export function useNotifications(): {
}
};
});
} catch (error) {
logError(error);
}
}, [setAppState]);
const addNotification = useCallback<AddNotificationFn>((notif: Notification) => {
try {
// Handle immediate priority notifications
if (notif.priority === 'immediate') {
// Clear any existing timeout since we're showing a new immediate notification
@@ -189,8 +195,12 @@ export function useNotifications(): {
// Process queue after adding the notification
processQueue();
} catch (error) {
logError(error);
}
}, [setAppState, processQueue]);
const removeNotification = useCallback<RemoveNotificationFn>((key: string) => {
try {
setAppState(prev => {
const isCurrent = prev.notifications.current?.key === key;
const inQueue = prev.notifications.queue.some(n => n.key === key);
@@ -210,6 +220,9 @@ export function useNotifications(): {
};
});
processQueue();
} catch (error) {
logError(error);
}
}, [setAppState, processQueue]);
// Process queue on mount if there are notifications in the initial state.

View File

@@ -1,5 +1,6 @@
import { feature } from 'bun:bundle';
import {
isLocalProviderUrl,
resolveCodexApiCredentials,
resolveProviderRequest,
} from '../services/api/providerConfig.js'
@@ -40,16 +41,6 @@ function isEnvTruthy(value: string | undefined): boolean {
return normalized !== '' && normalized !== '0' && normalized !== 'false' && normalized !== 'no'
}
function isLocalProviderUrl(baseUrl: string | undefined): boolean {
if (!baseUrl) return false
try {
const parsed = new URL(baseUrl)
return parsed.hostname === 'localhost' || parsed.hostname === '127.0.0.1' || parsed.hostname === '::1'
} catch {
return false
}
}
function getProviderValidationError(
env: NodeJS.ProcessEnv = process.env,
): string | null {
@@ -408,6 +399,22 @@ async function main(): Promise<void> {
process.env.CLAUDE_CODE_SIMPLE = '1';
}
// --provider: set provider env vars early, before main module loads.
// This mirrors the --bare pattern: env vars must be in place before
// Commander option building and module-level constants are evaluated.
if (args.includes('--provider')) {
const { parseProviderFlag, applyProviderFlag } = await import('../utils/providerFlag.js');
const provider = parseProviderFlag(args);
if (provider) {
const result = applyProviderFlag(provider, args);
if (result.error) {
// biome-ignore lint/suspicious/noConsole:: intentional error output
console.error(`Error: ${result.error}`);
process.exit(1);
}
}
}
// No special flags detected, load and run the full CLI
if (process.env.OPENCLAUDE_ENABLE_EARLY_INPUT === '1') {
const {
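
`parseProviderFlag` and `applyProviderFlag` are imported but not shown in this diff. A minimal sketch of the parsing half, assuming the provider name follows the flag as the next argument and matching the provider list from the `--provider` help text added later in this changeset (names and shapes beyond the import are assumptions):

```ts
// Hypothetical sketch; the real parseProviderFlag is not shown in this diff.
const KNOWN_PROVIDERS = [
  'anthropic', 'openai', 'gemini', 'github', 'bedrock', 'vertex', 'ollama',
] as const
type Provider = (typeof KNOWN_PROVIDERS)[number]

export function parseProviderFlag(args: string[]): Provider | null {
  const i = args.indexOf('--provider')
  const value = i >= 0 ? args[i + 1] : undefined
  return (KNOWN_PROVIDERS as readonly string[]).includes(value ?? '')
    ? (value as Provider)
    : null
}
```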

View File

@@ -1,5 +1,6 @@
import { c as _c } from "react-compiler-runtime";
import * as React from 'react';
import { logError } from '../../utils/log.js';
import { useEffect } from 'react';
import { useNotifications } from 'src/context/notifications.js';
import { getIsRemoteMode } from '../../bootstrap/state.js';
@@ -23,43 +24,47 @@ export function useMcpConnectivityStatus(t0) {
let t3;
if ($[0] !== addNotification || $[1] !== mcpClients) {
t2 = () => {
if (getIsRemoteMode()) {
return;
}
const failedLocalClients = mcpClients.filter(_temp);
const failedClaudeAiClients = mcpClients.filter(_temp2);
const needsAuthLocalServers = mcpClients.filter(_temp3);
const needsAuthClaudeAiServers = mcpClients.filter(_temp4);
if (failedLocalClients.length === 0 && failedClaudeAiClients.length === 0 && needsAuthLocalServers.length === 0 && needsAuthClaudeAiServers.length === 0) {
return;
}
if (failedLocalClients.length > 0) {
addNotification({
key: "mcp-failed",
jsx: <><Text color="error">{failedLocalClients.length} MCP{" "}{failedLocalClients.length === 1 ? "server" : "servers"} failed</Text><Text dimColor={true}> · /mcp</Text></>,
priority: "medium"
});
}
if (failedClaudeAiClients.length > 0) {
addNotification({
key: "mcp-claudeai-failed",
jsx: <><Text color="error">{failedClaudeAiClients.length} claude.ai{" "}{failedClaudeAiClients.length === 1 ? "connector" : "connectors"}{" "}unavailable</Text><Text dimColor={true}> · /mcp</Text></>,
priority: "medium"
});
}
if (needsAuthLocalServers.length > 0) {
addNotification({
key: "mcp-needs-auth",
jsx: <><Text color="warning">{needsAuthLocalServers.length} MCP{" "}{needsAuthLocalServers.length === 1 ? "server needs" : "servers need"}{" "}auth</Text><Text dimColor={true}> · /mcp</Text></>,
priority: "medium"
});
}
if (needsAuthClaudeAiServers.length > 0) {
addNotification({
key: "mcp-claudeai-needs-auth",
jsx: <><Text color="warning">{needsAuthClaudeAiServers.length} claude.ai{" "}{needsAuthClaudeAiServers.length === 1 ? "connector needs" : "connectors need"}{" "}auth</Text><Text dimColor={true}> · /mcp</Text></>,
priority: "medium"
});
try {
if (getIsRemoteMode()) {
return;
}
const failedLocalClients = mcpClients.filter(_temp);
const failedClaudeAiClients = mcpClients.filter(_temp2);
const needsAuthLocalServers = mcpClients.filter(_temp3);
const needsAuthClaudeAiServers = mcpClients.filter(_temp4);
if (failedLocalClients.length === 0 && failedClaudeAiClients.length === 0 && needsAuthLocalServers.length === 0 && needsAuthClaudeAiServers.length === 0) {
return;
}
if (failedLocalClients.length > 0) {
addNotification({
key: "mcp-failed",
jsx: <><Text color="error">{failedLocalClients.length} MCP{" "}{failedLocalClients.length === 1 ? "server" : "servers"} failed</Text><Text dimColor={true}> · /mcp</Text></>,
priority: "medium"
});
}
if (failedClaudeAiClients.length > 0) {
addNotification({
key: "mcp-claudeai-failed",
jsx: <><Text color="error">{failedClaudeAiClients.length} claude.ai{" "}{failedClaudeAiClients.length === 1 ? "connector" : "connectors"}{" "}unavailable</Text><Text dimColor={true}> · /mcp</Text></>,
priority: "medium"
});
}
if (needsAuthLocalServers.length > 0) {
addNotification({
key: "mcp-needs-auth",
jsx: <><Text color="warning">{needsAuthLocalServers.length} MCP{" "}{needsAuthLocalServers.length === 1 ? "server needs" : "servers need"}{" "}auth</Text><Text dimColor={true}> · /mcp</Text></>,
priority: "medium"
});
}
if (needsAuthClaudeAiServers.length > 0) {
addNotification({
key: "mcp-claudeai-needs-auth",
jsx: <><Text color="warning">{needsAuthClaudeAiServers.length} claude.ai{" "}{needsAuthClaudeAiServers.length === 1 ? "connector needs" : "connectors need"}{" "}auth</Text><Text dimColor={true}> · /mcp</Text></>,
priority: "medium"
});
}
} catch (error) {
logError(error);
}
};
t3 = [addNotification, mcpClients];

View File

@@ -433,6 +433,8 @@ const reconciler = createReconciler<
scheduleTimeout: setTimeout,
cancelTimeout: clearTimeout,
noTimeout: -1,
supportsMicrotasks: true,
scheduleMicrotask: queueMicrotask,
getCurrentUpdatePriority: () => dispatcher.currentUpdatePriority,
beforeActiveInstanceBlur() {},
afterActiveInstanceBlur() {},

View File

@@ -984,7 +984,7 @@ async function run(): Promise<CommanderCommand> {
return Number.isFinite(n) ? n : undefined;
}).hideHelp()).option('--from-pr [value]', 'Resume a session linked to a PR by PR number/URL, or open interactive picker with optional search term', value => value || true).option('--no-session-persistence', 'Disable session persistence - sessions will not be saved to disk and cannot be resumed (only works with --print)').addOption(new Option('--resume-session-at <message id>', 'When resuming, only messages up to and including the assistant message with <message.id> (use with --resume in print mode)').argParser(String).hideHelp()).addOption(new Option('--rewind-files <user-message-id>', 'Restore files to state at the specified user message and exit (requires --resume)').hideHelp())
// @[MODEL LAUNCH]: Update the example model ID in the --model help text.
.option('--model <model>', `Model for the current session. Provide an alias for the latest model (e.g. 'sonnet' or 'opus') or a model's full name (e.g. 'claude-sonnet-4-6').`).addOption(new Option('--effort <level>', `Effort level for the current session (low, medium, high, max)`).argParser((rawValue: string) => {
.option('--model <model>', `Model for the current session. Provide an alias for the latest model (e.g. 'sonnet' or 'opus') or a model's full name (e.g. 'claude-sonnet-4-6').`).option('--provider <provider>', `AI provider to use (anthropic, openai, gemini, github, bedrock, vertex, ollama). Reads API keys from environment variables.`).addOption(new Option('--effort <level>', `Effort level for the current session (low, medium, high, max)`).argParser((rawValue: string) => {
const value = rawValue.toLowerCase();
const allowed = ['low', 'medium', 'high', 'max'];
if (!allowed.includes(value)) {

View File

@@ -0,0 +1,59 @@
import { afterEach, expect, test } from 'bun:test'
import { mkdtemp, mkdir, writeFile, rm } from 'fs/promises'
import { join } from 'path'
import { tmpdir } from 'os'
import { scanMemoryFiles } from './memoryScan.ts'
// Finding #42-3: readdir({ recursive: true }) has no depth limit.
// A deeply nested directory in the memory dir causes a full unbounded walk.
let tempDir: string
afterEach(async () => {
if (tempDir) {
await rm(tempDir, { recursive: true, force: true })
}
})
test('scanMemoryFiles finds .md files at shallow depth', async () => {
tempDir = await mkdtemp(join(tmpdir(), 'memoryScan-'))
await writeFile(join(tempDir, 'note.md'), '---\nname: test\ntype: user\n---\nContent')
const controller = new AbortController()
const result = await scanMemoryFiles(tempDir, controller.signal)
expect(result.length).toBe(1)
expect(result[0].filename).toBe('note.md')
})
test('scanMemoryFiles ignores MEMORY.md', async () => {
tempDir = await mkdtemp(join(tmpdir(), 'memoryScan-'))
await writeFile(join(tempDir, 'MEMORY.md'), '# index')
await writeFile(join(tempDir, 'user_role.md'), '---\nname: role\ntype: user\n---\nContent')
const controller = new AbortController()
const result = await scanMemoryFiles(tempDir, controller.signal)
expect(result.length).toBe(1)
expect(result[0].filename).toBe('user_role.md')
})
test('scanMemoryFiles does not return .md files nested beyond max depth', async () => {
tempDir = await mkdtemp(join(tmpdir(), 'memoryScan-'))
// Shallow file - should be found
await writeFile(join(tempDir, 'shallow.md'), '---\nname: shallow\ntype: user\n---\nContent')
// Deeply nested file (depth 5) - should be excluded
const deepDir = join(tempDir, 'd1', 'd2', 'd3', 'd4', 'd5')
await mkdir(deepDir, { recursive: true })
await writeFile(join(deepDir, 'deep.md'), '---\nname: deep\ntype: user\n---\nContent')
const controller = new AbortController()
const result = await scanMemoryFiles(tempDir, controller.signal)
const filenames = result.map(r => r.filename)
expect(filenames).toContain('shallow.md')
// The deeply nested file must not appear
expect(filenames.some(f => f.includes('deep.md'))).toBe(false)
})

View File

@@ -38,8 +38,15 @@ export async function scanMemoryFiles(
): Promise<MemoryHeader[]> {
try {
const entries = await readdir(memoryDir, { recursive: true })
// Limit depth to 3 levels to prevent DoS from deep/symlinked directory trees.
// Relative paths from readdir use the OS separator, so count separators.
const sep = require('path').sep as string
const MAX_DEPTH = 3
const mdFiles = entries.filter(
f => f.endsWith('.md') && basename(f) !== 'MEMORY.md',
f =>
f.endsWith('.md') &&
basename(f) !== 'MEMORY.md' &&
(f.split(sep).length - 1) < MAX_DEPTH,
)
const headerResults = await Promise.allSettled(
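
The separator count is the nesting depth: a file directly in the memory dir has zero separators in its relative path, and each directory level adds one. Checking the cutoff against the paths the tests above exercise (a sketch):

```ts
import { sep } from 'path'

const MAX_DEPTH = 3
const depth = (relPath: string) => relPath.split(sep).length - 1

console.log(depth('note.md') < MAX_DEPTH) // true: kept
console.log(depth(['d1', 'd2', 'deep.md'].join(sep)) < MAX_DEPTH) // true: two levels deep, kept
console.log(depth(['d1', 'd2', 'd3', 'd4', 'd5', 'deep.md'].join(sep)) < MAX_DEPTH) // false: excluded
```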

View File

@@ -0,0 +1,121 @@
import { afterEach, beforeEach, expect, test } from 'bun:test'
import { getAnthropicClient } from './client.js'
type FetchType = typeof globalThis.fetch
type ShimClient = {
beta: {
messages: {
create: (params: Record<string, unknown>) => Promise<unknown>
}
}
}
const originalFetch = globalThis.fetch
const originalMacro = (globalThis as Record<string, unknown>).MACRO
const originalEnv = {
CLAUDE_CODE_USE_GEMINI: process.env.CLAUDE_CODE_USE_GEMINI,
GEMINI_API_KEY: process.env.GEMINI_API_KEY,
GEMINI_MODEL: process.env.GEMINI_MODEL,
GEMINI_BASE_URL: process.env.GEMINI_BASE_URL,
GOOGLE_API_KEY: process.env.GOOGLE_API_KEY,
OPENAI_API_KEY: process.env.OPENAI_API_KEY,
OPENAI_BASE_URL: process.env.OPENAI_BASE_URL,
OPENAI_MODEL: process.env.OPENAI_MODEL,
ANTHROPIC_API_KEY: process.env.ANTHROPIC_API_KEY,
ANTHROPIC_AUTH_TOKEN: process.env.ANTHROPIC_AUTH_TOKEN,
}
beforeEach(() => {
;(globalThis as Record<string, unknown>).MACRO = { VERSION: 'test-version' }
process.env.CLAUDE_CODE_USE_GEMINI = '1'
process.env.GEMINI_API_KEY = 'gemini-test-key'
process.env.GEMINI_MODEL = 'gemini-2.0-flash'
process.env.GEMINI_BASE_URL = 'https://gemini.example/v1beta/openai'
delete process.env.GOOGLE_API_KEY
delete process.env.OPENAI_API_KEY
delete process.env.OPENAI_BASE_URL
delete process.env.OPENAI_MODEL
delete process.env.ANTHROPIC_API_KEY
delete process.env.ANTHROPIC_AUTH_TOKEN
})
afterEach(() => {
;(globalThis as Record<string, unknown>).MACRO = originalMacro
process.env.CLAUDE_CODE_USE_GEMINI = originalEnv.CLAUDE_CODE_USE_GEMINI
process.env.GEMINI_API_KEY = originalEnv.GEMINI_API_KEY
process.env.GEMINI_MODEL = originalEnv.GEMINI_MODEL
process.env.GEMINI_BASE_URL = originalEnv.GEMINI_BASE_URL
process.env.GOOGLE_API_KEY = originalEnv.GOOGLE_API_KEY
process.env.OPENAI_API_KEY = originalEnv.OPENAI_API_KEY
process.env.OPENAI_BASE_URL = originalEnv.OPENAI_BASE_URL
process.env.OPENAI_MODEL = originalEnv.OPENAI_MODEL
process.env.ANTHROPIC_API_KEY = originalEnv.ANTHROPIC_API_KEY
process.env.ANTHROPIC_AUTH_TOKEN = originalEnv.ANTHROPIC_AUTH_TOKEN
globalThis.fetch = originalFetch
})
test('routes Gemini provider requests through the OpenAI-compatible shim', async () => {
let capturedUrl: string | undefined
let capturedHeaders: Headers | undefined
let capturedBody: Record<string, unknown> | undefined
globalThis.fetch = (async (input, init) => {
capturedUrl =
typeof input === 'string'
? input
: input instanceof URL
? input.toString()
: input.url
capturedHeaders = new Headers(init?.headers)
capturedBody = JSON.parse(String(init?.body)) as Record<string, unknown>
return new Response(
JSON.stringify({
id: 'chatcmpl-gemini',
model: 'gemini-2.0-flash',
choices: [
{
message: {
role: 'assistant',
content: 'gemini ok',
},
finish_reason: 'stop',
},
],
usage: {
prompt_tokens: 8,
completion_tokens: 3,
total_tokens: 11,
},
}),
{
headers: {
'Content-Type': 'application/json',
},
},
)
}) as FetchType
const client = (await getAnthropicClient({
maxRetries: 0,
model: 'gemini-2.0-flash',
})) as unknown as ShimClient
const response = await client.beta.messages.create({
model: 'gemini-2.0-flash',
system: 'test system',
messages: [{ role: 'user', content: 'hello' }],
max_tokens: 64,
stream: false,
})
expect(capturedUrl).toBe('https://gemini.example/v1beta/openai/chat/completions')
expect(capturedHeaders?.get('authorization')).toBe('Bearer gemini-test-key')
expect(capturedBody?.model).toBe('gemini-2.0-flash')
expect(response).toMatchObject({
role: 'assistant',
model: 'gemini-2.0-flash',
})
})

View File

@@ -156,7 +156,8 @@ export async function getAnthropicClient({
}
if (
isEnvTruthy(process.env.CLAUDE_CODE_USE_OPENAI) ||
isEnvTruthy(process.env.CLAUDE_CODE_USE_GITHUB)
isEnvTruthy(process.env.CLAUDE_CODE_USE_GITHUB) ||
isEnvTruthy(process.env.CLAUDE_CODE_USE_GEMINI)
) {
const { createOpenAIShimClient } = await import('./openaiShim.js')
return createOpenAIShimClient({

View File

@@ -85,7 +85,7 @@ function makeUsage(usage?: {
}
function makeMessageId(): string {
return `msg_${Math.random().toString(36).slice(2)}${Date.now().toString(36)}`
return `msg_${crypto.randomUUID().replace(/-/g, '')}`
}
function normalizeToolUseId(toolUseId: string | undefined): {
@@ -264,7 +264,8 @@ export function convertAnthropicMessagesToResponsesInput(
if (role === 'assistant') {
const textBlocks = Array.isArray(content)
? content.filter((block: { type?: string }) => block.type !== 'tool_use')
? content.filter((block: { type?: string }) =>
block.type !== 'tool_use' && block.type !== 'thinking')
: content
const parts = convertContentBlocksToResponsesParts(textBlocks, 'assistant')
if (parts.length > 0) {
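
With the added condition, assistant `thinking` blocks are now dropped alongside `tool_use` blocks before the remaining content is converted to Responses input parts. The predicate in isolation (a sketch):

```ts
// Sample assistant content run through the same filter as the hunk above.
const content = [
  { type: 'thinking', thinking: 'internal reasoning' },
  { type: 'text', text: 'visible answer' },
  { type: 'tool_use', id: 'tu_1', name: 'Bash', input: {} },
]
const textBlocks = content.filter(
  (block: { type?: string }) => block.type !== 'tool_use' && block.type !== 'thinking',
)
console.log(textBlocks) // [ { type: 'text', text: 'visible answer' } ]
```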

View File

@@ -0,0 +1,204 @@
import { describe, expect, test } from 'bun:test'
import {
buildCodexUsageRows,
formatCodexPlanType,
getCodexUsageUrl,
normalizeCodexUsagePayload,
} from './codexUsage.js'
describe('normalizeCodexUsagePayload', () => {
test('normalizes live Codex usage payloads from /backend-api/wham/usage', () => {
const usage = normalizeCodexUsagePayload({
plan_type: 'plus',
rate_limit: {
primary_window: {
used_percent: 38,
limit_window_seconds: 18_000,
reset_at: 1_775_154_358,
},
secondary_window: {
used_percent: 32,
limit_window_seconds: 604_800,
reset_at: 1_775_685_041,
},
},
code_review_rate_limit: {
primary_window: {
used_percent: 0,
limit_window_seconds: 604_800,
reset_at: 1_775_744_471,
},
secondary_window: null,
},
credits: {
has_credits: false,
unlimited: false,
balance: '0',
},
})
expect(usage.planType).toBe('plus')
expect(usage.snapshots).toHaveLength(2)
expect(usage.snapshots[0]).toMatchObject({
limitName: 'codex',
primary: {
usedPercent: 38,
windowMinutes: 300,
},
secondary: {
usedPercent: 32,
windowMinutes: 10_080,
},
})
expect(usage.snapshots[1]).toMatchObject({
limitName: 'code review',
primary: {
usedPercent: 0,
windowMinutes: 10_080,
},
})
})
test('supports direct protocol-style snapshot collections', () => {
const usage = normalizeCodexUsagePayload({
rateLimitsByLimitId: {
codex: {
limit_name: 'codex',
primary: {
used_percent: 12,
window_minutes: 300,
resets_at: 1_700_000_000,
},
credits: {
has_credits: true,
unlimited: false,
balance: '25',
},
},
},
})
expect(usage.snapshots).toEqual([
{
limitName: 'codex',
primary: {
usedPercent: 12,
windowMinutes: 300,
resetsAt: new Date(1_700_000_000 * 1000).toISOString(),
},
secondary: undefined,
credits: {
hasCredits: true,
unlimited: false,
balance: '25',
},
},
])
})
})
describe('buildCodexUsageRows', () => {
test('builds Codex-like labels for primary and secondary windows', () => {
const rows = buildCodexUsageRows([
{
limitName: 'codex',
primary: {
usedPercent: 38,
windowMinutes: 300,
resetsAt: '2026-04-02T10:00:00.000Z',
},
secondary: {
usedPercent: 32,
windowMinutes: 10_080,
resetsAt: '2026-04-09T10:00:00.000Z',
},
},
{
limitName: 'code review',
primary: {
usedPercent: 0,
windowMinutes: 10_080,
resetsAt: '2026-04-09T10:00:00.000Z',
},
},
])
expect(rows).toEqual([
{
kind: 'window',
label: '5h limit',
usedPercent: 38,
resetsAt: '2026-04-02T10:00:00.000Z',
},
{
kind: 'window',
label: 'Weekly limit',
usedPercent: 32,
resetsAt: '2026-04-09T10:00:00.000Z',
},
{
kind: 'window',
label: 'Code review Weekly limit',
usedPercent: 0,
resetsAt: '2026-04-09T10:00:00.000Z',
},
])
})
test('renders credits rows only when credits are available', () => {
const rows = buildCodexUsageRows([
{
limitName: 'codex',
credits: {
hasCredits: true,
unlimited: false,
balance: '25.2',
},
},
{
limitName: 'code review',
credits: {
hasCredits: true,
unlimited: true,
},
},
{
limitName: 'other',
credits: {
hasCredits: true,
unlimited: false,
balance: '0',
},
},
])
expect(rows).toEqual([
{
kind: 'text',
label: 'Credits',
value: '25 credits',
},
{
kind: 'text',
label: 'Code review limit',
value: '',
},
{
kind: 'text',
label: 'Credits',
value: 'Unlimited',
},
])
})
})
describe('Codex usage helpers', () => {
test('formats plan labels and usage endpoint url', () => {
expect(formatCodexPlanType('team_max')).toBe('Team Max')
expect(getCodexUsageUrl()).toBe('https://chatgpt.com/backend-api/wham/usage')
expect(getCodexUsageUrl('https://chatgpt.com/backend-api/codex')).toBe(
'https://chatgpt.com/backend-api/wham/usage',
)
})
})


@@ -0,0 +1,434 @@
import {
DEFAULT_CODEX_BASE_URL,
isCodexBaseUrl,
resolveCodexApiCredentials,
resolveProviderRequest,
} from './providerConfig.js'
export type CodexUsageWindow = {
usedPercent: number
windowMinutes?: number
resetsAt?: string
}
export type CodexUsageCredits = {
hasCredits: boolean
unlimited: boolean
balance?: string
}
export type CodexUsageSnapshot = {
limitName: string
primary?: CodexUsageWindow
secondary?: CodexUsageWindow
credits?: CodexUsageCredits
}
export type CodexUsageData = {
planType?: string
snapshots: CodexUsageSnapshot[]
}
export type CodexUsageRow =
| {
kind: 'window'
label: string
usedPercent: number
resetsAt?: string
}
| {
kind: 'text'
label: string
value: string
}
type RecordLike = Record<string, unknown>
function isRecord(value: unknown): value is RecordLike {
return typeof value === 'object' && value !== null
}
function asString(value: unknown): string | undefined {
return typeof value === 'string' && value.trim() ? value.trim() : undefined
}
function asNumber(value: unknown): number | undefined {
return typeof value === 'number' && Number.isFinite(value) ? value : undefined
}
function asBoolean(value: unknown): boolean | undefined {
return typeof value === 'boolean' ? value : undefined
}
function toIsoFromUnixSeconds(value: unknown): string | undefined {
const seconds = asNumber(value)
if (seconds === undefined) return undefined
return new Date(seconds * 1000).toISOString()
}
function normalizeWindow(value: unknown): CodexUsageWindow | undefined {
if (!isRecord(value)) return undefined
const usedPercent =
asNumber(value.used_percent) ?? asNumber(value.usedPercent)
if (usedPercent === undefined) return undefined
const windowMinutes =
asNumber(value.window_minutes) ??
asNumber(value.windowDurationMins) ??
(() => {
const seconds = asNumber(value.limit_window_seconds)
return seconds === undefined ? undefined : Math.round(seconds / 60)
})()
const resetsAt =
toIsoFromUnixSeconds(value.resets_at) ??
toIsoFromUnixSeconds(value.resetsAt) ??
toIsoFromUnixSeconds(value.reset_at)
return {
usedPercent,
windowMinutes,
resetsAt,
}
}
function normalizeCredits(value: unknown): CodexUsageCredits | undefined {
if (!isRecord(value)) return undefined
const hasCredits =
asBoolean(value.has_credits) ?? asBoolean(value.hasCredits) ?? false
const unlimited = asBoolean(value.unlimited) ?? false
const balance = asString(value.balance)
if (!hasCredits && !unlimited && !balance) {
return undefined
}
return {
hasCredits,
unlimited,
balance,
}
}
function normalizeSnapshot(
value: unknown,
fallbackLimitName: string,
): CodexUsageSnapshot | undefined {
if (!isRecord(value)) return undefined
const limitName =
asString(value.limit_name) ??
asString(value.limitName) ??
asString(value.limit_id) ??
asString(value.limitId) ??
fallbackLimitName
const primary =
normalizeWindow(value.primary) ?? normalizeWindow(value.primary_window)
const secondary =
normalizeWindow(value.secondary) ?? normalizeWindow(value.secondary_window)
const credits = normalizeCredits(value.credits)
if (!primary && !secondary && !credits) {
return undefined
}
return {
limitName,
primary,
secondary,
credits,
}
}
function normalizeSnapshotsFromCollection(
value: unknown,
defaultLimitName = 'codex',
): CodexUsageSnapshot[] {
if (Array.isArray(value)) {
return value
.map((item, index) =>
normalizeSnapshot(
item,
index === 0 ? defaultLimitName : `${defaultLimitName}-${index + 1}`,
),
)
.filter((item): item is CodexUsageSnapshot => item !== undefined)
}
if (!isRecord(value)) return []
return Object.entries(value)
.map(([key, entry]) => normalizeSnapshot(entry, key))
.filter((item): item is CodexUsageSnapshot => item !== undefined)
}
function normalizeLiveUsagePayload(payload: RecordLike): CodexUsageData {
const planType = asString(payload.plan_type) ?? asString(payload.planType)
const snapshots: CodexUsageSnapshot[] = []
const codexCredits = normalizeCredits(payload.credits)
const codexSnapshot = normalizeSnapshot(payload.rate_limit, 'codex')
if (codexSnapshot) {
codexSnapshot.credits ??= codexCredits
snapshots.push(codexSnapshot)
} else if (codexCredits) {
snapshots.push({
limitName: 'codex',
credits: codexCredits,
})
}
const codeReviewSnapshot = normalizeSnapshot(
payload.code_review_rate_limit,
'code review',
)
if (codeReviewSnapshot) {
snapshots.push(codeReviewSnapshot)
}
snapshots.push(
...normalizeSnapshotsFromCollection(
payload.additional_rate_limits ?? payload.additionalRateLimits,
'additional',
),
)
return {
planType,
snapshots,
}
}
export function normalizeCodexUsagePayload(payload: unknown): CodexUsageData {
if (Array.isArray(payload)) {
return {
snapshots: normalizeSnapshotsFromCollection(payload),
}
}
if (!isRecord(payload)) {
return { snapshots: [] }
}
if (
'rate_limit' in payload ||
'code_review_rate_limit' in payload ||
'additional_rate_limits' in payload ||
'credits' in payload
) {
return normalizeLiveUsagePayload(payload)
}
const collection =
payload.rate_limits ??
payload.rateLimits ??
payload.rate_limits_by_limit_id ??
payload.rateLimitsByLimitId
if (collection !== undefined) {
return {
planType: asString(payload.plan_type) ?? asString(payload.planType),
snapshots: normalizeSnapshotsFromCollection(collection),
}
}
const snapshot = normalizeSnapshot(payload, 'codex')
return {
planType: asString(payload.plan_type) ?? asString(payload.planType),
snapshots: snapshot ? [snapshot] : [],
}
}
function capitalizeFirst(value: string): string {
if (!value) return value
return value[0]!.toUpperCase() + value.slice(1)
}
function formatWindowDuration(
windowMinutes: number | undefined,
fallback: string,
): string {
if (windowMinutes === undefined || windowMinutes <= 0) {
return fallback
}
if (windowMinutes === 60 * 24 * 7) {
return 'weekly'
}
if (windowMinutes % (60 * 24) === 0) {
return `${windowMinutes / (60 * 24)}d`
}
if (windowMinutes % 60 === 0) {
return `${windowMinutes / 60}h`
}
return `${windowMinutes}m`
}
function formatCreditBalance(rawBalance: string | undefined): string | undefined {
const balance = rawBalance?.trim()
if (!balance) return undefined
const intValue = Number.parseInt(balance, 10)
if (Number.isFinite(intValue) && `${intValue}` === balance && intValue > 0) {
return `${intValue}`
}
const floatValue = Number.parseFloat(balance)
if (Number.isFinite(floatValue) && floatValue > 0) {
return `${Math.round(floatValue)}`
}
return undefined
}
function buildCreditsRow(
credits: CodexUsageCredits | undefined,
): CodexUsageRow | undefined {
if (!credits?.hasCredits) return undefined
if (credits.unlimited) {
return {
kind: 'text',
label: 'Credits',
value: 'Unlimited',
}
}
const displayBalance = formatCreditBalance(credits.balance)
if (!displayBalance) return undefined
return {
kind: 'text',
label: 'Credits',
value: `${displayBalance} credits`,
}
}
export function buildCodexUsageRows(
snapshots: CodexUsageSnapshot[],
): CodexUsageRow[] {
const rows: CodexUsageRow[] = []
for (const snapshot of snapshots) {
const limitBucketLabel = snapshot.limitName.trim() || 'codex'
const creditsRow = buildCreditsRow(snapshot.credits)
const hasRenderableContent =
snapshot.primary !== undefined ||
snapshot.secondary !== undefined ||
creditsRow !== undefined
if (!hasRenderableContent) {
continue
}
const showLimitPrefix = limitBucketLabel.toLowerCase() !== 'codex'
const windowCount =
Number(snapshot.primary !== undefined) +
Number(snapshot.secondary !== undefined)
const combineNonCodexSingleLimit = showLimitPrefix && windowCount === 1
if (showLimitPrefix && !combineNonCodexSingleLimit) {
rows.push({
kind: 'text',
label: `${capitalizeFirst(limitBucketLabel)} limit`,
value: '',
})
}
if (snapshot.primary) {
const durationLabel = capitalizeFirst(
formatWindowDuration(snapshot.primary.windowMinutes, '5h'),
)
rows.push({
kind: 'window',
label: combineNonCodexSingleLimit
? `${capitalizeFirst(limitBucketLabel)} ${durationLabel} limit`
: `${durationLabel} limit`,
usedPercent: snapshot.primary.usedPercent,
resetsAt: snapshot.primary.resetsAt,
})
}
if (snapshot.secondary) {
const durationLabel = capitalizeFirst(
formatWindowDuration(snapshot.secondary.windowMinutes, 'weekly'),
)
rows.push({
kind: 'window',
label: combineNonCodexSingleLimit
? `${capitalizeFirst(limitBucketLabel)} ${durationLabel} limit`
: `${durationLabel} limit`,
usedPercent: snapshot.secondary.usedPercent,
resetsAt: snapshot.secondary.resetsAt,
})
}
if (creditsRow) {
rows.push(creditsRow)
}
}
return rows
}
export function formatCodexPlanType(
planType: string | undefined,
): string | undefined {
if (!planType) return undefined
return planType
.split(/[_\s-]+/)
.filter(Boolean)
.map(part => capitalizeFirst(part.toLowerCase()))
.join(' ')
}
export function getCodexUsageUrl(baseUrl = DEFAULT_CODEX_BASE_URL): string {
return new URL('/backend-api/wham/usage', baseUrl).toString()
}
export async function fetchCodexUsage(): Promise<CodexUsageData> {
const request = resolveProviderRequest({
model: process.env.OPENAI_MODEL,
baseUrl: process.env.OPENAI_BASE_URL,
})
if (!isCodexBaseUrl(request.baseUrl)) {
throw new Error(
'Codex usage is only available with the official ChatGPT Codex backend.',
)
}
const credentials = resolveCodexApiCredentials()
if (!credentials.apiKey) {
const authHint = credentials.authPath
? ` or place a Codex auth.json at ${credentials.authPath}`
: ''
throw new Error(`Codex auth is required. Set CODEX_API_KEY${authHint}.`)
}
if (!credentials.accountId) {
throw new Error(
'Codex auth is missing chatgpt_account_id. Re-login with the Codex CLI or set CHATGPT_ACCOUNT_ID/CODEX_ACCOUNT_ID.',
)
}
const response = await fetch(getCodexUsageUrl(request.baseUrl), {
method: 'GET',
headers: {
Accept: 'application/json',
Authorization: `Bearer ${credentials.apiKey}`,
'chatgpt-account-id': credentials.accountId,
originator: 'openclaude',
},
signal: AbortSignal.timeout(5000),
})
if (!response.ok) {
const errorBody = await response.text().catch(() => 'unknown error')
throw new Error(`Codex usage error ${response.status}: ${errorBody}`)
}
return normalizeCodexUsagePayload(await response.json())
}
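Taken together, these helpers form a small pipeline: normalize the raw `/backend-api/wham/usage` payload, turn the snapshots into display rows, and format the plan label. A minimal sketch of that flow (the payload values are illustrative, matching the shapes exercised in the tests above):

```ts
import {
  buildCodexUsageRows,
  formatCodexPlanType,
  normalizeCodexUsagePayload,
} from './codexUsage.js'

// Illustrative payload in the live /backend-api/wham/usage shape.
const usage = normalizeCodexUsagePayload({
  plan_type: 'plus',
  rate_limit: {
    primary_window: { used_percent: 38, limit_window_seconds: 18_000, reset_at: 1_775_154_358 },
  },
})

console.log(formatCodexPlanType(usage.planType)) // "Plus"
for (const row of buildCodexUsageRows(usage.snapshots)) {
  if (row.kind === 'window') {
    console.log(`${row.label}: ${row.usedPercent}% used`) // "5h limit: 38% used"
  }
}
```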


@@ -164,6 +164,12 @@ export const TOKEN_REVOKED_ERROR_MESSAGE =
export const CCR_AUTH_ERROR_MESSAGE =
'Authentication error · This may be a temporary network issue, please try again'
export const REPEATED_529_ERROR_MESSAGE = 'Repeated 529 Overloaded errors'
export function getCustomOffSwitchMessage(): string {
return getAPIProvider() === 'firstParty'
? 'Opus is experiencing high load, please use /model to switch to Sonnet'
: 'The API is experiencing high load, please try again shortly or use /model to switch models'
}
// Backward-compatible constant for string matching in error handlers
export const CUSTOM_OFF_SWITCH_MESSAGE =
'Opus is experiencing high load, please use /model to switch to Sonnet'
export const API_TIMEOUT_ERROR_MESSAGE = 'Request timed out'
@@ -457,7 +463,7 @@ export function getAssistantMessageFromError(
error.message.includes(CUSTOM_OFF_SWITCH_MESSAGE)
) {
return createAssistantAPIErrorMessage({
content: CUSTOM_OFF_SWITCH_MESSAGE,
content: getCustomOffSwitchMessage(),
error: 'rate_limit',
})
}


@@ -1,216 +1 @@
function isSchemaRecord(value: unknown): value is Record<string, unknown> {
return value !== null && typeof value === 'object' && !Array.isArray(value)
}
function deepEqualJsonValue(a: unknown, b: unknown): boolean {
if (Object.is(a, b)) return true
if (typeof a !== typeof b) return false
if (Array.isArray(a) && Array.isArray(b)) {
return (
a.length === b.length &&
a.every((value, index) => deepEqualJsonValue(value, b[index]))
)
}
if (isSchemaRecord(a) && isSchemaRecord(b)) {
const aKeys = Object.keys(a)
const bKeys = Object.keys(b)
return (
aKeys.length === bKeys.length &&
aKeys.every(key => key in b && deepEqualJsonValue(a[key], b[key]))
)
}
return false
}
function matchesJsonSchemaType(type: string, value: unknown): boolean {
switch (type) {
case 'string':
return typeof value === 'string'
case 'number':
return typeof value === 'number' && Number.isFinite(value)
case 'integer':
return typeof value === 'number' && Number.isInteger(value)
case 'boolean':
return typeof value === 'boolean'
case 'object':
return value !== null && typeof value === 'object' && !Array.isArray(value)
case 'array':
return Array.isArray(value)
case 'null':
return value === null
default:
return true
}
}
function getJsonSchemaTypes(record: Record<string, unknown>): string[] {
const raw = record.type
if (typeof raw === 'string') {
return [raw]
}
if (Array.isArray(raw)) {
return raw.filter((value): value is string => typeof value === 'string')
}
return []
}
function schemaAllowsValue(schema: Record<string, unknown>, value: unknown): boolean {
if (Array.isArray(schema.anyOf)) {
return schema.anyOf.some(item =>
schemaAllowsValue(sanitizeSchemaForOpenAICompat(item), value),
)
}
if (Array.isArray(schema.oneOf)) {
return (
schema.oneOf.filter(item =>
schemaAllowsValue(sanitizeSchemaForOpenAICompat(item), value),
).length === 1
)
}
if (Array.isArray(schema.allOf)) {
return schema.allOf.every(item =>
schemaAllowsValue(sanitizeSchemaForOpenAICompat(item), value),
)
}
if ('const' in schema && !deepEqualJsonValue(schema.const, value)) {
return false
}
if (Array.isArray(schema.enum)) {
if (!schema.enum.some(item => deepEqualJsonValue(item, value))) {
return false
}
}
const types = getJsonSchemaTypes(schema)
if (types.length > 0 && !types.some(type => matchesJsonSchemaType(type, value))) {
return false
}
return true
}
function sanitizeTypeField(record: Record<string, unknown>): void {
const allowed = new Set([
'string',
'number',
'integer',
'boolean',
'object',
'array',
'null',
])
const raw = record.type
if (typeof raw === 'string') {
if (!allowed.has(raw)) delete record.type
return
}
if (!Array.isArray(raw)) return
const filtered = raw.filter(
(value, index): value is string =>
typeof value === 'string' &&
allowed.has(value) &&
raw.indexOf(value) === index,
)
if (filtered.length === 0) {
delete record.type
} else if (filtered.length === 1) {
record.type = filtered[0]
} else {
record.type = filtered
}
}
/**
* Sanitize loose/invalid JSON Schema into a form OpenAI-compatible providers
* are more likely to accept. This is intentionally defensive for external MCP
* servers that may advertise imperfect schemas.
*/
export function sanitizeSchemaForOpenAICompat(
schema: unknown,
): Record<string, unknown> {
if (!isSchemaRecord(schema)) {
return {}
}
const record = { ...schema }
delete record.$schema
delete record.propertyNames
sanitizeTypeField(record)
if (isSchemaRecord(record.properties)) {
const sanitizedProps: Record<string, unknown> = {}
for (const [key, value] of Object.entries(record.properties)) {
sanitizedProps[key] = sanitizeSchemaForOpenAICompat(value)
}
record.properties = sanitizedProps
}
if ('items' in record) {
if (Array.isArray(record.items)) {
record.items = record.items.map(item =>
sanitizeSchemaForOpenAICompat(item),
)
} else {
record.items = sanitizeSchemaForOpenAICompat(record.items)
}
}
for (const key of ['anyOf', 'oneOf', 'allOf'] as const) {
if (Array.isArray(record[key])) {
record[key] = record[key].map(item =>
sanitizeSchemaForOpenAICompat(item),
)
}
}
if (Array.isArray(record.required) && isSchemaRecord(record.properties)) {
record.required = record.required.filter(
(value): value is string =>
typeof value === 'string' && value in record.properties,
)
}
const schemaWithoutEnum = { ...record }
delete schemaWithoutEnum.enum
if (Array.isArray(record.enum)) {
const filteredEnum = record.enum.filter(value =>
schemaAllowsValue(schemaWithoutEnum, value),
)
if (filteredEnum.length > 0) {
record.enum = filteredEnum
} else {
delete record.enum
}
}
const schemaWithoutConst = { ...record }
delete schemaWithoutConst.const
if ('const' in record && !schemaAllowsValue(schemaWithoutConst, record.const)) {
delete record.const
}
const schemaWithoutDefault = { ...record }
delete schemaWithoutDefault.default
if (
'default' in record &&
!schemaAllowsValue(schemaWithoutDefault, record.default)
) {
delete record.default
}
return record
}
export { sanitizeSchemaForOpenAICompat } from '../../utils/schemaSanitizer.js'
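With the sanitizer now living in `src/utils/schemaSanitizer.ts`, a minimal sketch of its effect on a deliberately loose schema (the input is made up; the import path matches the one the shim uses):

```ts
import { sanitizeSchemaForOpenAICompat } from '../../utils/schemaSanitizer.js'

// A made-up loose schema: unsupported $schema/propertyNames keys, a
// `required` entry naming no property, and an enum value violating its type.
const sanitized = sanitizeSchemaForOpenAICompat({
  $schema: 'http://json-schema.org/draft-07/schema#',
  type: 'object',
  propertyNames: { pattern: '^[a-z]+$' },
  properties: {
    count: { type: 'integer', enum: [1, 2, 'three'] },
  },
  required: ['count', 'missing'],
})
// sanitized === {
//   type: 'object',
//   properties: { count: { type: 'integer', enum: [1, 2] } },
//   required: ['count'],
// }
```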


@@ -313,6 +313,57 @@ test('preserves Gemini tool call extra_content from streaming chunks', async ()
})
})
test('strips thinking blocks from assistant messages instead of leaking them as text', async () => {
let requestBody: Record<string, unknown> | undefined
globalThis.fetch = (async (_input, init) => {
requestBody = JSON.parse(String(init?.body))
return new Response(
JSON.stringify({
id: 'chatcmpl-1',
model: 'gpt-4o',
choices: [
{
message: { role: 'assistant', content: 'done' },
finish_reason: 'stop',
},
],
usage: { prompt_tokens: 10, completion_tokens: 1, total_tokens: 11 },
}),
{ headers: { 'Content-Type': 'application/json' } },
)
}) as FetchType
const client = createOpenAIShimClient({}) as OpenAIShimClient
await client.beta.messages.create({
model: 'gpt-4o',
system: 'test',
messages: [
{ role: 'user', content: 'hello' },
{
role: 'assistant',
content: [
{ type: 'thinking', thinking: 'secret reasoning' },
{ type: 'text', text: 'visible reply' },
],
},
{ role: 'user', content: 'follow up' },
],
max_tokens: 64,
stream: false,
})
const msgs = requestBody?.messages as Array<{ role: string; content: string }>
const assistantMsg = msgs.find(m => m.role === 'assistant')
// The assistant message should contain only the visible text,
// not <thinking>secret reasoning</thinking>
expect(assistantMsg?.content).toBe('visible reply')
expect(assistantMsg?.content).not.toContain('thinking')
})
test('sanitizes malformed MCP tool schemas before sending them to OpenAI', async () => {
let requestBody: Record<string, unknown> | undefined


@@ -38,8 +38,8 @@ import {
resolveCodexApiCredentials,
resolveProviderRequest,
} from './providerConfig.js'
import { sanitizeSchemaForOpenAICompat } from '../../utils/schemaSanitizer.js'
import { redactSecretValueForDisplay } from '../../utils/providerProfile.js'
import { sanitizeSchemaForOpenAICompat } from './openaiSchemaSanitizer.js'
const GITHUB_MODELS_DEFAULT_BASE = 'https://models.github.ai/inference'
const GITHUB_API_VERSION = '2022-11-28'
@@ -139,10 +139,12 @@ function convertContentBlocks(
// handled separately
break
case 'thinking':
// Append thinking as text with a marker for models that support reasoning
if (block.thinking) {
parts.push({ type: 'text', text: `<thinking>${block.thinking}</thinking>` })
}
case 'redacted_thinking':
// Strip thinking blocks for OpenAI-compatible providers.
// These are Anthropic-specific content types that 3P providers
// don't understand. Serializing them as <thinking> text corrupts
// multi-turn context: the model sees the tags as part of its
// previous reply and may mimic or misattribute them.
break
default:
if (block.text) {
@@ -231,7 +233,7 @@ function convertMessages(
input?: unknown
extra_content?: Record<string, unknown>
}) => ({
id: tu.id ?? `call_${Math.random().toString(36).slice(2)}`,
id: tu.id ?? `call_${crypto.randomUUID().replace(/-/g, '')}`,
type: 'function' as const,
function: {
name: tu.name ?? 'unknown',
@@ -389,7 +391,7 @@ interface OpenAIStreamChunk {
}
function makeMessageId(): string {
return `msg_${Math.random().toString(36).slice(2)}${Date.now().toString(36)}`
return `msg_${crypto.randomUUID().replace(/-/g, '')}`
}
function convertChunkUsage(
@@ -610,6 +612,23 @@ async function* openaiStreamToAnthropic(
: choice.finish_reason === 'length'
? 'max_tokens'
: 'end_turn'
if (choice.finish_reason === 'content_filter' || choice.finish_reason === 'safety') {
// Gemini/Azure content safety filter blocked the response.
// Emit a visible text block so the user knows why output was truncated.
if (!hasEmittedContentStart) {
yield {
type: 'content_block_start',
index: contentBlockIndex,
content_block: { type: 'text', text: '' },
}
hasEmittedContentStart = true
}
yield {
type: 'content_block_delta',
index: contentBlockIndex,
delta: { type: 'text_delta', text: '\n\n[Content blocked by provider safety filter]' },
}
}
lastStopReason = stopReason
yield {
@@ -841,7 +860,14 @@ class OpenAIShimMessages {
}
const apiKey = process.env.OPENAI_API_KEY ?? ''
const isAzure = /cognitiveservices\.azure\.com|openai\.azure\.com/.test(request.baseUrl)
// Detect Azure endpoints by hostname (not raw URL) to prevent bypass via
// path segments like https://evil.com/cognitiveservices.azure.com/
let isAzure = false
try {
const { hostname } = new URL(request.baseUrl)
isAzure = hostname.endsWith('.azure.com') &&
(hostname.includes('cognitiveservices') || hostname.includes('openai') || hostname.includes('services.ai'))
} catch { /* malformed URL — not Azure */ }
if (apiKey) {
if (isAzure) {
@@ -1003,6 +1029,13 @@ class OpenAIShimMessages {
? 'max_tokens'
: 'end_turn'
if (choice?.finish_reason === 'content_filter' || choice?.finish_reason === 'safety') {
content.push({
type: 'text',
text: '\n\n[Content blocked by provider safety filter]',
})
}
return {
id: data.id ?? makeMessageId(),
type: 'message',


@@ -0,0 +1,35 @@
import { expect, test } from 'bun:test'
import { isLocalProviderUrl } from './providerConfig.js'
test('treats localhost endpoints as local', () => {
expect(isLocalProviderUrl('http://localhost:11434/v1')).toBe(true)
expect(isLocalProviderUrl('http://127.0.0.1:11434/v1')).toBe(true)
expect(isLocalProviderUrl('http://0.0.0.0:11434/v1')).toBe(true)
// Full 127.0.0.0/8 loopback range should be treated as local
expect(isLocalProviderUrl('http://127.0.0.2:11434/v1')).toBe(true)
expect(isLocalProviderUrl('http://127.1.2.3:11434/v1')).toBe(true)
expect(isLocalProviderUrl('http://127.255.255.255:11434/v1')).toBe(true)
})
test('treats private IPv4 endpoints as local', () => {
expect(isLocalProviderUrl('http://10.0.0.1:11434/v1')).toBe(true)
expect(isLocalProviderUrl('http://172.16.0.1:11434/v1')).toBe(true)
expect(isLocalProviderUrl('http://192.168.0.1:11434/v1')).toBe(true)
})
test('treats .local hostnames as local', () => {
expect(isLocalProviderUrl('http://ollama.local:11434/v1')).toBe(true)
})
test('treats private IPv6 endpoints as local', () => {
expect(isLocalProviderUrl('http://[fd00::1]:11434/v1')).toBe(true)
expect(isLocalProviderUrl('http://[fe80::1]:11434/v1')).toBe(true)
expect(isLocalProviderUrl('http://[::1]:11434/v1')).toBe(true)
})
test('treats public hosts as remote', () => {
expect(isLocalProviderUrl('http://203.0.113.1:11434/v1')).toBe(false)
expect(isLocalProviderUrl('https://example.com/v1')).toBe(false)
expect(isLocalProviderUrl('http://[2001:4860:4860::8888]:11434/v1')).toBe(false)
})


@@ -1,4 +1,5 @@
import { existsSync, readFileSync } from 'node:fs'
import { isIP } from 'node:net'
import { homedir } from 'node:os'
import { join } from 'node:path'
@@ -87,6 +88,29 @@ type ModelDescriptor = {
const LOCALHOST_HOSTNAMES = new Set(['localhost', '127.0.0.1', '::1'])
function isPrivateIpv4Address(hostname: string): boolean {
const octets = hostname.split('.').map(part => Number.parseInt(part, 10))
if (octets.length !== 4 || octets.some(octet => Number.isNaN(octet))) {
return false
}
return (
octets[0] === 10 ||
(octets[0] === 172 && octets[1] >= 16 && octets[1] <= 31) ||
(octets[0] === 192 && octets[1] === 168)
)
}
function isPrivateIpv6Address(hostname: string): boolean {
const firstHextet = hostname.split(':', 1)[0]
if (!firstHextet) return false
const prefix = Number.parseInt(firstHextet, 16)
if (Number.isNaN(prefix)) return false
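// fc00::/7 (IPv6 unique-local) or fe80::/10 (link-local)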
return (prefix & 0xfe00) === 0xfc00 || (prefix & 0xffc0) === 0xfe80
}
function asTrimmedString(value: unknown): string | undefined {
return typeof value === 'string' && value.trim() ? value.trim() : undefined
}
@@ -186,7 +210,37 @@ function isCodexAlias(model: string): boolean {
export function isLocalProviderUrl(baseUrl: string | undefined): boolean {
if (!baseUrl) return false
try {
return LOCALHOST_HOSTNAMES.has(new URL(baseUrl).hostname)
let hostname = new URL(baseUrl).hostname.toLowerCase()
// Strip IPv6 brackets added by the URL parser (e.g. "[::1]" -> "::1")
if (hostname.startsWith('[') && hostname.endsWith(']')) {
hostname = hostname.slice(1, -1)
}
// Strip RFC6874 IPv6 zone identifiers (e.g. "fe80::1%25en0" -> "fe80::1")
const zoneIdIndex = hostname.indexOf('%25')
if (zoneIdIndex !== -1) {
hostname = hostname.slice(0, zoneIdIndex)
}
if (LOCALHOST_HOSTNAMES.has(hostname) || hostname === '0.0.0.0') {
return true
}
if (hostname.endsWith('.local')) {
return true
}
const ipVersion = isIP(hostname)
if (ipVersion === 4) {
// Treat the full 127.0.0.0/8 loopback range as local
const firstOctet = Number.parseInt(hostname.split('.', 1)[0] ?? '', 10)
return firstOctet === 127 || isPrivateIpv4Address(hostname)
}
if (ipVersion === 6) {
return isPrivateIpv6Address(hostname)
}
return false
} catch {
return false
}
@@ -237,8 +291,13 @@ export function resolveProviderRequest(options?: {
process.env.OPENAI_BASE_URL ??
process.env.OPENAI_API_BASE ??
undefined
// Use Codex transport only when:
// - the base URL is explicitly the Codex endpoint, OR
// - the model is a Codex alias AND no custom base URL has been set
// A custom OPENAI_BASE_URL (e.g. Azure, OpenRouter) always wins over
// model-name-based Codex detection to prevent auth failures (#200, #203).
const transport: ProviderTransport =
isCodexAlias(requestedModel) || isCodexBaseUrl(rawBaseUrl)
isCodexBaseUrl(rawBaseUrl) || (!rawBaseUrl && isCodexAlias(requestedModel))
? 'codex_responses'
: 'chat_completions'
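Assuming `resolveProviderRequest` keeps returning a `{ model, baseUrl, transport }` shape, the precedence plays out as follows ('gpt-5.1-codex' stands in for a hypothetical Codex alias; the URLs are illustrative):

```ts
// Explicit Codex base URL wins regardless of model name.
resolveProviderRequest({ model: 'gpt-4o', baseUrl: DEFAULT_CODEX_BASE_URL })
// → transport: 'codex_responses'

// Codex alias with no custom base URL set.
resolveProviderRequest({ model: 'gpt-5.1-codex' })
// → transport: 'codex_responses'

// Codex alias but a custom endpoint (Azure, OpenRouter, ...): the custom
// URL wins, avoiding the auth failures from #200 and #203.
resolveProviderRequest({ model: 'gpt-5.1-codex', baseUrl: 'https://openrouter.ai/api/v1' })
// → transport: 'chat_completions'
```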


@@ -0,0 +1,136 @@
import { describe, expect, test, afterEach } from 'bun:test'
import { getRateLimitResetDelayMs, parseOpenAIDuration } from './withRetry.js'
import { APIError } from '@anthropic-ai/sdk'
// Helper to build a mock APIError with specific headers
function makeError(headers: Record<string, string>): APIError {
const headersObj = new Headers(headers)
return {
headers: headersObj,
status: 429,
message: 'rate limit exceeded',
name: 'APIError',
error: {},
} as unknown as APIError
}
// Save/restore env vars between tests
const originalEnv = { ...process.env }
afterEach(() => {
for (const key of [
'CLAUDE_CODE_USE_OPENAI',
'CLAUDE_CODE_USE_GEMINI',
'CLAUDE_CODE_USE_GITHUB',
'CLAUDE_CODE_USE_BEDROCK',
'CLAUDE_CODE_USE_VERTEX',
'CLAUDE_CODE_USE_FOUNDRY',
]) {
if (originalEnv[key] === undefined) delete process.env[key]
else process.env[key] = originalEnv[key]
}
})
// --- parseOpenAIDuration ---
describe('parseOpenAIDuration', () => {
test('parses seconds: "1s" → 1000', () => {
expect(parseOpenAIDuration('1s')).toBe(1000)
})
test('parses minutes+seconds: "6m0s" → 360000', () => {
expect(parseOpenAIDuration('6m0s')).toBe(360000)
})
test('parses hours+minutes+seconds: "1h30m0s" → 5400000', () => {
expect(parseOpenAIDuration('1h30m0s')).toBe(5400000)
})
test('parses milliseconds: "500ms" → 500', () => {
expect(parseOpenAIDuration('500ms')).toBe(500)
})
test('parses minutes only: "2m" → 120000', () => {
expect(parseOpenAIDuration('2m')).toBe(120000)
})
test('returns null for empty string', () => {
expect(parseOpenAIDuration('')).toBeNull()
})
test('returns null for unrecognized format', () => {
expect(parseOpenAIDuration('invalid')).toBeNull()
})
})
// --- getRateLimitResetDelayMs ---
describe('getRateLimitResetDelayMs - Anthropic (firstParty)', () => {
test('reads anthropic-ratelimit-unified-reset Unix timestamp', () => {
const futureUnixSec = Math.floor(Date.now() / 1000) + 60
const error = makeError({
'anthropic-ratelimit-unified-reset': String(futureUnixSec),
})
const delay = getRateLimitResetDelayMs(error)
expect(delay).not.toBeNull()
expect(delay!).toBeGreaterThan(50_000)
expect(delay!).toBeLessThanOrEqual(60_000)
})
test('returns null when header absent', () => {
const error = makeError({})
expect(getRateLimitResetDelayMs(error)).toBeNull()
})
test('returns null when reset is in the past', () => {
const pastUnixSec = Math.floor(Date.now() / 1000) - 10
const error = makeError({
'anthropic-ratelimit-unified-reset': String(pastUnixSec),
})
expect(getRateLimitResetDelayMs(error)).toBeNull()
})
})
describe('getRateLimitResetDelayMs - OpenAI provider', () => {
test('reads x-ratelimit-reset-requests duration string', () => {
process.env.CLAUDE_CODE_USE_OPENAI = '1'
const error = makeError({ 'x-ratelimit-reset-requests': '30s' })
const delay = getRateLimitResetDelayMs(error)
expect(delay).toBe(30_000)
})
test('reads x-ratelimit-reset-tokens and picks the larger delay', () => {
process.env.CLAUDE_CODE_USE_OPENAI = '1'
const error = makeError({
'x-ratelimit-reset-requests': '10s',
'x-ratelimit-reset-tokens': '1m0s',
})
// Should use the larger of the two so we don't retry before both reset
const delay = getRateLimitResetDelayMs(error)
expect(delay).toBe(60_000)
})
test('returns null when no openai rate limit headers present', () => {
process.env.CLAUDE_CODE_USE_OPENAI = '1'
const error = makeError({})
expect(getRateLimitResetDelayMs(error)).toBeNull()
})
test('works for github provider too', () => {
process.env.CLAUDE_CODE_USE_GITHUB = '1'
const error = makeError({ 'x-ratelimit-reset-requests': '5s' })
expect(getRateLimitResetDelayMs(error)).toBe(5_000)
})
})
describe('getRateLimitResetDelayMs - providers without reset headers', () => {
test('returns null for bedrock', () => {
process.env.CLAUDE_CODE_USE_BEDROCK = '1'
const error = makeError({ 'anthropic-ratelimit-unified-reset': String(Math.floor(Date.now() / 1000) + 60) })
// Bedrock doesn't use this header — should still return null
expect(getRateLimitResetDelayMs(error)).toBeNull()
})
test('returns null for vertex', () => {
process.env.CLAUDE_CODE_USE_VERTEX = '1'
const error = makeError({})
expect(getRateLimitResetDelayMs(error)).toBeNull()
})
})


@@ -11,7 +11,7 @@ import { isAwsCredentialsProviderError } from 'src/utils/aws.js'
import { logForDebugging } from 'src/utils/debug.js'
import { logError } from 'src/utils/log.js'
import { createSystemAPIErrorMessage } from 'src/utils/messages.js'
import { getAPIProviderForStatsig } from 'src/utils/model/providers.js'
import { getAPIProvider, getAPIProviderForStatsig } from 'src/utils/model/providers.js'
import {
clearApiKeyHelperCache,
clearAwsCredentialsCache,
@@ -811,12 +811,49 @@ function getRetryAfterMs(error: APIError): number | null {
return null
}
function getRateLimitResetDelayMs(error: APIError): number | null {
const resetHeader = error.headers?.get?.('anthropic-ratelimit-unified-reset')
if (!resetHeader) return null
const resetUnixSec = Number(resetHeader)
if (!Number.isFinite(resetUnixSec)) return null
const delayMs = resetUnixSec * 1000 - Date.now()
if (delayMs <= 0) return null
return Math.min(delayMs, PERSISTENT_RESET_CAP_MS)
/**
* Parse OpenAI-style relative duration strings into milliseconds.
* Formats: "1s", "6m0s", "1h30m0s", "500ms", "2m"
* Returns null for unrecognized formats.
*/
export function parseOpenAIDuration(s: string): number | null {
if (!s) return null
// Try matching hours/minutes/seconds/milliseconds components
const re = /^(?:(\d+)h)?(?:(\d+)m(?!s))?(?:(\d+)s)?(?:(\d+)ms)?$/
const m = re.exec(s)
if (!m || m[0] === '') return null
const h = parseInt(m[1] ?? '0', 10)
const min = parseInt(m[2] ?? '0', 10)
const sec = parseInt(m[3] ?? '0', 10)
const ms = parseInt(m[4] ?? '0', 10)
const total = h * 3_600_000 + min * 60_000 + sec * 1_000 + ms
return total > 0 ? total : null
}
export function getRateLimitResetDelayMs(error: APIError): number | null {
const provider = getAPIProvider()
if (provider === 'firstParty') {
const resetHeader = error.headers?.get?.('anthropic-ratelimit-unified-reset')
if (!resetHeader) return null
const resetUnixSec = Number(resetHeader)
if (!Number.isFinite(resetUnixSec)) return null
const delayMs = resetUnixSec * 1000 - Date.now()
if (delayMs <= 0) return null
return Math.min(delayMs, PERSISTENT_RESET_CAP_MS)
}
if (provider === 'openai' || provider === 'codex' || provider === 'github') {
const reqHeader = error.headers?.get?.('x-ratelimit-reset-requests')
const tokHeader = error.headers?.get?.('x-ratelimit-reset-tokens')
const reqMs = reqHeader ? parseOpenAIDuration(reqHeader) : null
const tokMs = tokHeader ? parseOpenAIDuration(tokHeader) : null
if (reqMs === null && tokMs === null) return null
// Use the larger delay so we don't retry before both limits reset
const delayMs = Math.max(reqMs ?? 0, tokMs ?? 0)
return Math.min(delayMs, PERSISTENT_RESET_CAP_MS)
}
// bedrock, vertex, foundry, gemini — no standard reset header
return null
}
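A worked example of the OpenAI-style branch, assuming a 429 whose headers carry both reset durations:

```ts
// x-ratelimit-reset-requests: '10s'  → parseOpenAIDuration('10s')  = 10_000 ms
// x-ratelimit-reset-tokens:   '1m0s' → parseOpenAIDuration('1m0s') = 60_000 ms
// getRateLimitResetDelayMs returns max(10_000, 60_000) = 60_000 ms, capped at
// PERSISTENT_RESET_CAP_MS, so the retry waits until both buckets have reset.
parseOpenAIDuration('1m0s') // → 60_000
```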


@@ -6,6 +6,7 @@ import {
type AnalyticsMetadata_I_VERIFIED_THIS_IS_NOT_CODE_OR_FILEPATHS,
logEvent,
} from 'src/services/analytics/index.js'
import { isAntEmployee } from 'src/utils/buildConfig.js'
import { getCwd } from 'src/utils/cwd.js'
import { checkForReleaseNotes } from 'src/utils/releaseNotes.js'
import { setCwd } from 'src/utils/Shell.js'
@@ -334,7 +335,7 @@ export async function setup(
// overhead. NOT an early-return: the --dangerously-skip-permissions safety
// gate, tengu_started beacon, and apiKeyHelper prefetch below must still run.
if (!isBareMode()) {
if (process.env.USER_TYPE === 'ant') {
if (isAntEmployee()) {
// Prime repo classification cache for auto-undercover mode. Default is
// undercover ON until proven internal; if this resolves to internal, clear
// the prompt cache so the next turn picks up the OFF state.
@@ -414,7 +415,7 @@ export async function setup(
}
if (
process.env.USER_TYPE === 'ant' &&
isAntEmployee() &&
// Skip for Desktop's local agent mode — same trust model as CCR/BYOC
// (trusted Anthropic-managed launcher intentionally pre-approving everything).
// Precedent: permissionSetup.ts:861, applySettingsChange.ts:55 (PR #19116)


@@ -1,4 +1,5 @@
import { setMainLoopModelOverride } from '../bootstrap/state.js'
import { isAntEmployee } from '../utils/buildConfig.js'
import {
clearApiKeyHelperCache,
clearAwsCredentialsCache,
@@ -140,7 +141,7 @@ export function onChangeAppState({
}
// tungstenPanelVisible (ant-only tmux panel sticky toggle)
if (process.env.USER_TYPE === 'ant') {
if (isAntEmployee()) {
if (
newState.tungstenPanelVisible !== oldState.tungstenPanelVisible &&
newState.tungstenPanelVisible !== undefined &&


@@ -10,7 +10,7 @@
*/
import type { UUID } from 'crypto'
import { randomBytes } from 'crypto'
import { randomInt } from 'crypto'
import {
OUTPUT_FILE_TAG,
STATUS_TAG,
@@ -73,10 +73,9 @@ const DEFAULT_MAIN_SESSION_AGENT: CustomAgentDefinition = {
const TASK_ID_ALPHABET = '0123456789abcdefghijklmnopqrstuvwxyz'
function generateMainSessionTaskId(): string {
const bytes = randomBytes(8)
let id = 's'
for (let i = 0; i < 8; i++) {
id += TASK_ID_ALPHABET[bytes[i]! % TASK_ID_ALPHABET.length]
id += TASK_ID_ALPHABET[randomInt(TASK_ID_ALPHABET.length)]!
}
return id
}


@@ -21,7 +21,7 @@ function getExploreSystemPrompt(): string {
? `- Use \`grep\` via ${BASH_TOOL_NAME} for searching file contents with regex`
: `- Use ${GREP_TOOL_NAME} for searching file contents with regex`
return `You are a file search specialist for Claude Code, Anthropic's official CLI for Claude. You excel at thoroughly navigating and exploring codebases.
return `You are a file search specialist for OpenClaude, an open-source fork of Claude Code. You excel at thoroughly navigating and exploring codebases.
=== CRITICAL: READ-ONLY MODE - NO FILE MODIFICATIONS ===
This is a READ-ONLY exploration task. You are STRICTLY PROHIBITED from:


@@ -1,6 +1,6 @@
import type { BuiltInAgentDefinition } from '../loadAgentsDir.js'
const SHARED_PREFIX = `You are an agent for Claude Code, Anthropic's official CLI for Claude. Given the user's message, you should use the tools available to complete the task. Complete the task fully—don't gold-plate, but don't leave it half-done.`
const SHARED_PREFIX = `You are an agent for OpenClaude, an open-source fork of Claude Code. Given the user's message, you should use the tools available to complete the task. Complete the task fully—don't gold-plate, but don't leave it half-done.`
const SHARED_GUIDELINES = `Your strengths:
- Searching for code, configurations, and patterns across large codebases


@@ -578,10 +578,12 @@ export const BashTool = buildTool({
const block = buildImageToolResult(stdout, toolUseID);
if (block) return block;
}
let processedStdout = stdout;
if (stdout) {
const normalizedStdout = typeof stdout === 'string' ? stdout : '';
const normalizedStderr = typeof stderr === 'string' ? stderr : '';
let processedStdout = normalizedStdout;
if (normalizedStdout) {
// Replace any leading newlines or lines with only whitespace
processedStdout = stdout.replace(/^(\s*\n)+/, '');
processedStdout = normalizedStdout.replace(/^(\s*\n)+/, '');
// Still trim the end as before
processedStdout = processedStdout.trimEnd();
}
@@ -598,9 +600,9 @@ export const BashTool = buildTool({
hasMore: preview.hasMore
});
}
let errorMessage = stderr.trim();
let errorMessage = normalizedStderr.trim();
if (interrupted) {
if (stderr) errorMessage += EOL;
if (normalizedStderr) errorMessage += EOL;
errorMessage += '<error>Command was aborted before completion</error>';
}
let backgroundInfo = '';


@@ -0,0 +1,40 @@
import { expect, test } from 'bun:test'
import { applySedSubstitution, type SedEditInfo } from './sedEditParser.js'
function sedInfo(pattern: string, replacement: string, extendedRegex = false): SedEditInfo {
return {
filePath: 'example.txt',
pattern,
replacement,
flags: 'g',
extendedRegex,
}
}
test('BRE mode keeps unescaped plus literal', () => {
const result = applySedSubstitution(
'a+b and aaab',
sedInfo('a+b', 'literal-plus'),
)
expect(result).toBe('literal-plus and aaab')
})
test('BRE mode treats escaped plus as one-or-more', () => {
const result = applySedSubstitution(
'abbb and a+b',
sedInfo('ab\\+', 'one-or-more'),
)
expect(result).toBe('one-or-more and a+b')
})
test('BRE mode preserves escaped backslashes', () => {
const result = applySedSubstitution(
String.raw`foo\bar foo/bar`,
sedInfo(String.raw`foo\\bar`, 'backslash-match'),
)
expect(result).toBe('backslash-match foo/bar')
})


@@ -7,18 +7,6 @@ import { randomBytes } from 'crypto'
import { tryParseShellCommand } from '../../utils/bash/shellQuote.js'
// BRE→ERE conversion placeholders (null-byte sentinels, never appear in user input)
const BACKSLASH_PLACEHOLDER = '\x00BACKSLASH\x00'
const PLUS_PLACEHOLDER = '\x00PLUS\x00'
const QUESTION_PLACEHOLDER = '\x00QUESTION\x00'
const PIPE_PLACEHOLDER = '\x00PIPE\x00'
const LPAREN_PLACEHOLDER = '\x00LPAREN\x00'
const RPAREN_PLACEHOLDER = '\x00RPAREN\x00'
const BACKSLASH_PLACEHOLDER_RE = new RegExp(BACKSLASH_PLACEHOLDER, 'g')
const PLUS_PLACEHOLDER_RE = new RegExp(PLUS_PLACEHOLDER, 'g')
const QUESTION_PLACEHOLDER_RE = new RegExp(QUESTION_PLACEHOLDER, 'g')
const PIPE_PLACEHOLDER_RE = new RegExp(PIPE_PLACEHOLDER, 'g')
const LPAREN_PLACEHOLDER_RE = new RegExp(LPAREN_PLACEHOLDER, 'g')
const RPAREN_PLACEHOLDER_RE = new RegExp(RPAREN_PLACEHOLDER, 'g')
export type SedEditInfo = {
/** The file path being edited */
@@ -33,6 +21,40 @@ export type SedEditInfo = {
extendedRegex: boolean
}
function convertBrePatternToJs(pattern: string): string {
let result = ''
for (let i = 0; i < pattern.length; i++) {
const char = pattern[i]!
if (char === '\\') {
const next = pattern[i + 1]
if (next === undefined) {
result += '\\\\'
continue
}
if (next === '\\') {
result += '\\\\'
} else if ('+?|()'.includes(next)) {
result += next
} else {
result += `\\${next}`
}
i++
continue
}
if ('+?|()'.includes(char)) {
result += `\\${char}`
continue
}
result += char
}
return result
}
/**
* Check if a command is a sed in-place edit command
* Returns true only for simple sed -i 's/pattern/replacement/flags' file commands
@@ -273,28 +295,7 @@ export function applySedSubstitution(
// ERE/JS: + means "one or more", \+ is literal
// We need to convert BRE escaping to ERE for JavaScript regex
if (!sedInfo.extendedRegex) {
jsPattern = jsPattern
// Step 1: Protect literal backslashes (\\) first - in both BRE and ERE, \\ is literal backslash
.replace(/\\\\/g, BACKSLASH_PLACEHOLDER)
// Step 2: Replace escaped metacharacters with placeholders (these should become unescaped in JS)
.replace(/\\\+/g, PLUS_PLACEHOLDER)
.replace(/\\\?/g, QUESTION_PLACEHOLDER)
.replace(/\\\|/g, PIPE_PLACEHOLDER)
.replace(/\\\(/g, LPAREN_PLACEHOLDER)
.replace(/\\\)/g, RPAREN_PLACEHOLDER)
// Step 3: Escape unescaped metacharacters (these are literal in BRE)
.replace(/\+/g, '\\+')
.replace(/\?/g, '\\?')
.replace(/\|/g, '\\|')
.replace(/\(/g, '\\(')
.replace(/\)/g, '\\)')
// Step 4: Replace placeholders with their JS equivalents
.replace(BACKSLASH_PLACEHOLDER_RE, '\\\\')
.replace(PLUS_PLACEHOLDER_RE, '+')
.replace(QUESTION_PLACEHOLDER_RE, '?')
.replace(PIPE_PLACEHOLDER_RE, '|')
.replace(LPAREN_PLACEHOLDER_RE, '(')
.replace(RPAREN_PLACEHOLDER_RE, ')')
jsPattern = convertBrePatternToJs(jsPattern)
}
// Unescape sed-specific escapes in replacement
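`convertBrePatternToJs` is module-local, so the calls below are a sketch of its mapping rather than a public API:

```ts
convertBrePatternToJs('a+b')    // → 'a\\+b'  (unescaped + is a literal in BRE)
convertBrePatternToJs('ab\\+')  // → 'ab+'    (escaped \+ means one-or-more)
convertBrePatternToJs('a\\|b')  // → 'a|b'    (escaped \| becomes alternation)
convertBrePatternToJs('a\\\\b') // → 'a\\\\b' (escaped backslash stays a literal backslash)
```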


@@ -34,6 +34,17 @@ type SharpCreator = (options: SharpCreatorOptions) => SharpInstance
let imageProcessorModule: { default: SharpFunction } | null = null
let imageCreatorModule: { default: SharpCreator } | null = null
/**
* Error thrown when no image processor is available (e.g., in the open build
* where sharp and image-processor-napi are stubbed out).
*/
export class ImageProcessorUnavailableError extends Error {
constructor() {
super('No image processor available (sharp is not installed)')
this.name = 'ImageProcessorUnavailableError'
}
}
export async function getImageProcessor(): Promise<SharpFunction> {
if (imageProcessorModule) {
return imageProcessorModule.default
@@ -44,10 +55,14 @@ export async function getImageProcessor(): Promise<SharpFunction> {
try {
// Use the native image processor module
const imageProcessor = await import('image-processor-napi')
if ((imageProcessor as { __stub?: boolean }).__stub) {
throw new ImageProcessorUnavailableError()
}
const sharp = imageProcessor.sharp || imageProcessor.default
imageProcessorModule = { default: sharp }
return sharp
} catch {
} catch (e) {
if (e instanceof ImageProcessorUnavailableError) throw e
// Fall back to sharp if native module is not available
// biome-ignore lint/suspicious/noConsole: intentional warning
console.warn(
@@ -58,12 +73,20 @@ export async function getImageProcessor(): Promise<SharpFunction> {
// Use sharp for non-bundled builds or as fallback.
// Single structural cast: our SharpFunction is a subset of sharp's actual type surface.
const imported = (await import(
'sharp'
)) as unknown as MaybeDefault<SharpFunction>
const sharp = unwrapDefault(imported)
imageProcessorModule = { default: sharp }
return sharp
try {
const imported = (await import(
'sharp'
)) as unknown as MaybeDefault<SharpFunction> & { __stub?: boolean }
if (imported && (imported as { __stub?: boolean }).__stub) {
throw new ImageProcessorUnavailableError()
}
const sharp = unwrapDefault(imported as MaybeDefault<SharpFunction>)
imageProcessorModule = { default: sharp }
return sharp
} catch (e) {
if (e instanceof ImageProcessorUnavailableError) throw e
throw new ImageProcessorUnavailableError()
}
}
/**


@@ -396,9 +396,13 @@ export const PowerShellTool = buildTool({
const block = buildImageToolResult(stdout, toolUseID);
if (block) return block;
}
let processedStdout = stdout;
const normalizedStdout = typeof stdout === 'string' ? stdout : '';
const normalizedStderr = typeof stderr === 'string' ? stderr : '';
let processedStdout = normalizedStdout;
if (persistedOutputPath) {
const trimmed = stdout ? stdout.replace(/^(\s*\n)+/, '').trimEnd() : '';
const trimmed = normalizedStdout
? normalizedStdout.replace(/^(\s*\n)+/, '').trimEnd()
: '';
const preview = generatePreview(trimmed, PREVIEW_SIZE_BYTES);
processedStdout = buildLargeToolResultMessage({
filepath: persistedOutputPath,
@@ -407,13 +411,13 @@ export const PowerShellTool = buildTool({
preview: preview.preview,
hasMore: preview.hasMore
});
} else if (stdout) {
processedStdout = stdout.replace(/^(\s*\n)+/, '');
} else if (normalizedStdout) {
processedStdout = normalizedStdout.replace(/^(\s*\n)+/, '');
processedStdout = processedStdout.trimEnd();
}
let errorMessage = stderr.trim();
let errorMessage = normalizedStderr.trim();
if (interrupted) {
if (stderr) errorMessage += EOL;
if (normalizedStderr) errorMessage += EOL;
errorMessage += '<error>Command was aborted before completion</error>';
}
let backgroundInfo = '';


@@ -21,6 +21,18 @@ import {
MAX_MARKDOWN_LENGTH,
} from './utils.js'
function isFirecrawlEnabled(): boolean {
return Boolean(process.env.FIRECRAWL_API_KEY)
}
async function scrapeWithFirecrawl(url: string): Promise<{ markdown: string; bytes: number }> {
const { FirecrawlClient } = await import('@mendable/firecrawl-js')
const app = new FirecrawlClient({ apiKey: process.env.FIRECRAWL_API_KEY! })
const result = await app.scrape(url, { formats: ['markdown'] })
const markdown = (result as { markdown?: string }).markdown ?? ''
return { markdown, bytes: Buffer.byteLength(markdown) }
}
const inputSchema = lazySchema(() =>
z.strictObject({
url: z.string().url().describe('The URL to fetch content from'),
@@ -211,6 +223,27 @@ ${DESCRIPTION}`
) {
const start = Date.now()
if (isFirecrawlEnabled()) {
const { markdown, bytes } = await scrapeWithFirecrawl(url)
const result = await applyPromptToMarkdown(
prompt,
markdown,
abortController.signal,
isNonInteractiveSession,
false,
)
return {
data: {
bytes,
code: 200,
codeText: 'OK',
result,
durationMs: Date.now() - start,
url,
} satisfies Output,
}
}
const response = await getURLMarkdownContent(url, abortController)
// Check if we got a redirect to a different host


@@ -88,6 +88,67 @@ function makeToolSchema(input: Input): BetaWebSearchTool20250305 {
}
}
function isFirecrawlEnabled(): boolean {
return Boolean(process.env.FIRECRAWL_API_KEY)
}
function shouldUseFirecrawl(): boolean {
if (!isFirecrawlEnabled()) return false
// Don't override native search on providers that already have it
if (isCodexResponsesWebSearchEnabled()) return false
const provider = getAPIProvider()
if (provider === 'firstParty' || provider === 'vertex' || provider === 'foundry') return false
return true
}
async function runFirecrawlSearch(input: Input): Promise<Output> {
const startTime = performance.now()
const { FirecrawlClient } = await import('@mendable/firecrawl-js')
const app = new FirecrawlClient({ apiKey: process.env.FIRECRAWL_API_KEY! })
let query = input.query
if (input.blocked_domains?.length) {
const exclusions = input.blocked_domains.map(d => `-site:${d}`).join(' ')
query = `${query} ${exclusions}`
}
const data = await app.search(query, { limit: 10 })
let hits = (data.web ?? []).map((r: { url: string; title?: string }) => ({
title: r.title ?? r.url,
url: r.url,
}))
if (input.allowed_domains?.length) {
hits = hits.filter(h =>
input.allowed_domains!.some(d => {
try {
return new URL(h.url).hostname.endsWith(d)
} catch {
return false
}
}),
)
}
const snippets = (data.web ?? [])
.filter((r: { description?: string }) => r.description)
.map((r: { url: string; title?: string; description?: string }) =>
`**${r.title ?? r.url}** — ${r.description} (${r.url})`,
)
.join('\n')
const results: Output['results'] = []
if (snippets) results.push(snippets)
results.push({ tool_use_id: 'firecrawl-search', content: hits })
return {
query: input.query,
results,
durationSeconds: (performance.now() - startTime) / 1000,
}
}
function isCodexResponsesWebSearchEnabled(): boolean {
if (getAPIProvider() !== 'openai') {
return false
@@ -378,6 +439,10 @@ export const WebSearchTool = buildTool({
return summary ? `Searching for ${summary}` : 'Searching the web'
},
isEnabled() {
if (shouldUseFirecrawl()) {
return true
}
const provider = getAPIProvider()
const model = getMainLoopModel()
@@ -437,7 +502,7 @@ export const WebSearchTool = buildTool({
}
},
async prompt() {
if (isCodexResponsesWebSearchEnabled()) {
if (shouldUseFirecrawl() || isCodexResponsesWebSearchEnabled()) {
return getWebSearchPrompt().replace(
/\n\s*-\s*Web search is only available in the US/,
'',
@@ -474,6 +539,10 @@ export const WebSearchTool = buildTool({
return { result: true }
},
async call(input, context, _canUseTool, _parentMessage, onProgress) {
if (shouldUseFirecrawl()) {
return { data: await runFirecrawlSearch(input) }
}
if (isCodexResponsesWebSearchEnabled()) {
return {
data: await runCodexWebSearch(input, context.abortController.signal),


@@ -0,0 +1,71 @@
import { expect, test } from 'bun:test'
import { BashTool } from './BashTool/BashTool.js'
import { PowerShellTool } from './PowerShellTool/PowerShellTool.js'
test('BashTool result mapper tolerates null stderr', () => {
const result = BashTool.mapToolResultToToolResultBlockParam(
{
stdout: 'ok',
stderr: null as unknown as string,
interrupted: false,
},
'tool-1',
)
expect(result).toMatchObject({
type: 'tool_result',
tool_use_id: 'tool-1',
content: 'ok',
})
})
test('BashTool result mapper tolerates null stdout', () => {
const result = BashTool.mapToolResultToToolResultBlockParam(
{
stdout: null as unknown as string,
stderr: 'problem',
interrupted: false,
},
'tool-2',
)
expect(result).toMatchObject({
type: 'tool_result',
tool_use_id: 'tool-2',
content: 'problem',
})
})
test('PowerShellTool result mapper tolerates null stderr', () => {
const result = PowerShellTool.mapToolResultToToolResultBlockParam(
{
stdout: 'ok',
stderr: null as unknown as string,
interrupted: false,
},
'tool-3',
)
expect(result).toMatchObject({
type: 'tool_result',
tool_use_id: 'tool-3',
content: 'ok',
})
})
test('PowerShellTool result mapper tolerates null stdout', () => {
const result = PowerShellTool.mapToolResultToToolResultBlockParam(
{
stdout: null as unknown as string,
stderr: 'problem',
interrupted: false,
},
'tool-4',
)
expect(result).toMatchObject({
type: 'tool_result',
tool_use_id: 'tool-4',
content: 'problem',
})
})


@@ -0,0 +1,42 @@
import { expect, test } from 'bun:test'
import { isValidPemContent } from './upstreamproxy.ts'
// Finding #42-6: The CA cert downloaded from the upstream proxy is written
// to disk without validation. A compromised server could send arbitrary data.
// Fix: validate it contains only valid PEM certificate blocks before writing.
test('isValidPemContent returns true for a valid PEM certificate block', () => {
const pem = [
'-----BEGIN CERTIFICATE-----',
'MIICpDCCAYwCCQDU+pQ4pHgSpDANBgkqhkiG9w0BAQsFADAUMRIwEAYDVQQDDAls',
'b2NhbGhvc3QwHhcNMjMwMTAxMDAwMDAwWhcNMjQwMTAxMDAwMDAwWjAUMRIwEAYD',
'VQQDDAlsb2NhbGhvc3QwggEiMA0GCSqGSIb3DQEBAQUAA4IBDwAwggEKAoIBAQC7',
'-----END CERTIFICATE-----',
].join('\n')
expect(isValidPemContent(pem)).toBe(true)
})
test('isValidPemContent returns true for multiple PEM blocks', () => {
const block = '-----BEGIN CERTIFICATE-----\nABCD\n-----END CERTIFICATE-----'
const pem = `${block}\n${block}`
expect(isValidPemContent(pem)).toBe(true)
})
test('isValidPemContent returns false for arbitrary text', () => {
expect(isValidPemContent('Hello world')).toBe(false)
expect(isValidPemContent('<html><body>error</body></html>')).toBe(false)
expect(isValidPemContent('{"error":"unauthorized"}')).toBe(false)
})
test('isValidPemContent returns false for empty string', () => {
expect(isValidPemContent('')).toBe(false)
})
test('isValidPemContent returns false for whitespace only', () => {
expect(isValidPemContent(' \n ')).toBe(false)
})
test('isValidPemContent returns false for malformed PEM (no end marker)', () => {
expect(isValidPemContent('-----BEGIN CERTIFICATE-----\nABCD')).toBe(false)
})


@@ -203,6 +203,18 @@ export function resetUpstreamProxyForTests(): void {
state = { enabled: false }
}
/**
* Validate that a string contains only well-formed PEM certificate blocks.
* Used to guard against a compromised upstream proxy sending arbitrary data
* that would be written into the system CA bundle.
*/
export function isValidPemContent(content: string): boolean {
if (!content || !content.trim()) return false
const pemBlockRegex =
/-----BEGIN CERTIFICATE-----[\s\S]+?-----END CERTIFICATE-----/g
// Require at least one certificate block and nothing but whitespace outside
// the blocks, so arbitrary payloads cannot ride along with a valid cert.
const residue = content.replace(pemBlockRegex, '')
return residue !== content && residue.trim() === ''
}
async function readToken(path: string): Promise<string | null> {
try {
const raw = await readFile(path, 'utf8')
@@ -271,6 +283,13 @@ async function downloadCaBundle(
return false
}
const ccrCa = await resp.text()
if (!isValidPemContent(ccrCa)) {
logForDebugging(
`[upstreamproxy] ca-cert response is not valid PEM; proxy disabled`,
{ level: 'warn' },
)
return false
}
const systemCa = await readFile(systemCaPath, 'utf8').catch(() => '')
await mkdir(join(outPath, '..'), { recursive: true })
await writeFile(outPath, systemCa + '\n' + ccrCa, 'utf8')

src/utils/api.test.ts

@@ -0,0 +1,66 @@
import { expect, test } from 'bun:test'
import { z } from 'zod/v4'
import { getEmptyToolPermissionContext, type Tool, type Tools } from '../Tool.js'
import { toolToAPISchema } from './api.js'
test('toolToAPISchema preserves provider-specific schema keywords in input_schema', async () => {
const schema = await toolToAPISchema(
{
name: 'WebFetch',
inputSchema: z.strictObject({}),
inputJSONSchema: {
type: 'object',
properties: {
url: {
type: 'string',
format: 'uri',
description: 'Public HTTP or HTTPS URL',
},
metadata: {
type: 'object',
propertyNames: {
pattern: '^[a-z]+$',
},
properties: {
callback: {
type: 'string',
format: 'uri-reference',
},
},
},
},
},
prompt: async () => 'Fetch a URL',
} as unknown as Tool,
{
getToolPermissionContext: async () => getEmptyToolPermissionContext(),
tools: [] as unknown as Tools,
agents: [],
},
)
expect(schema).toMatchObject({
input_schema: {
type: 'object',
properties: {
url: {
type: 'string',
format: 'uri',
description: 'Public HTTP or HTTPS URL',
},
metadata: {
type: 'object',
propertyNames: {
pattern: '^[a-z]+$',
},
properties: {
callback: {
type: 'string',
format: 'uri-reference',
},
},
},
},
},
})
})


@@ -0,0 +1,20 @@
import { expect, test } from 'bun:test'
import { isAntEmployee } from './buildConfig.ts'
// Finding #42-2: process.env.USER_TYPE === 'ant' is checked directly in multiple
// places, allowing any external user to activate Anthropic-internal code paths.
// In OpenClaude, this must always be false regardless of env var.
test('isAntEmployee always returns false in OpenClaude regardless of USER_TYPE env var', () => {
const original = process.env.USER_TYPE
process.env.USER_TYPE = 'ant'
expect(isAntEmployee()).toBe(false)
process.env.USER_TYPE = original
})
test('isAntEmployee returns false even when USER_TYPE is unset', () => {
const original = process.env.USER_TYPE
delete process.env.USER_TYPE
expect(isAntEmployee()).toBe(false)
process.env.USER_TYPE = original
})

src/utils/buildConfig.ts

@@ -0,0 +1,18 @@
/**
* OpenClaude build-time constants.
*
* These replace process.env checks that were only meaningful in Anthropic's
* internal build. In OpenClaude all such gates are permanently disabled so
* external users cannot activate internal code paths by setting env vars.
*/
/**
* Always false in OpenClaude.
* Replaces all `process.env.USER_TYPE === 'ant'` checks so that no external
* user can activate Anthropic-internal features (commit attribution hooks,
* system-prompt section clearing, dangerously-skip-permissions bypass, etc.)
* by setting USER_TYPE in their shell environment.
*/
export function isAntEmployee(): boolean {
return false
}


@@ -307,10 +307,6 @@ function stripHtmlCommentsFromTokens(tokens: ReturnType<Lexer['lex']>): {
let result = ''
let stripped = false
// A well-formed HTML comment span. Non-greedy so multiple comments on the
// same line are matched independently; [\s\S] to span newlines.
const commentSpan = /<!--[\s\S]*?-->/g
for (const token of tokens) {
if (token.type === 'html') {
const trimmed = token.raw.trimStart()
@@ -318,7 +314,7 @@ function stripHtmlCommentsFromTokens(tokens: ReturnType<Lexer['lex']>): {
// Per CommonMark, a type-2 HTML block ends at the *line* containing
// `-->`, so text after `-->` on that line is part of this token.
// Strip only the comment spans and keep any residual content.
const residue = token.raw.replace(commentSpan, '')
const residue = stripHtmlCommentSpans(token.raw)
stripped = true
if (residue.trim().length > 0) {
// Residual content exists (e.g. `<!-- note --> Use bun`): keep it.
@@ -333,6 +329,20 @@ function stripHtmlCommentsFromTokens(tokens: ReturnType<Lexer['lex']>): {
return { content: result, stripped }
}
function stripHtmlCommentSpans(raw: string): string {
let residue = raw
while (residue.includes('<!--')) {
const updated = residue.replace(/<!--[\s\S]*?-->/g, '')
if (updated === residue) {
break
}
residue = updated
}
return residue
}
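The loop matters because a single removal pass can splice previously separated characters into a new well-formed comment. A hedged sketch (the helper is module-private; assume it were exported for illustration):

```ts
// Pass 1 removes the inner "<!-- x -->", which joins "<!" and "-- y -->"
// into a brand-new well-formed span; pass 2 removes that one too.
// A single replace() call would leave "<!-- y -->" behind.
stripHtmlCommentSpans('<!<!-- x -->-- y -->') // -> ''
// An unterminated opener never shrinks, so the break avoids an infinite loop.
stripHtmlCommentSpans('<!-- dangling') // -> '<!-- dangling'
```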
/**
* Parses raw memory file content into a MemoryFileInfo. Pure function — no I/O.
*
@@ -504,8 +514,7 @@ function extractIncludePathsFromTokens(
const raw = element.raw || ''
const trimmed = raw.trimStart()
if (trimmed.startsWith('<!--') && trimmed.includes('-->')) {
const commentSpan = /<!--[\s\S]*?-->/g
const residue = raw.replace(commentSpan, '')
const residue = stripHtmlCommentSpans(raw)
if (residue.trim().length > 0) {
extractPathsFromText(residue)
}


@@ -12,9 +12,17 @@ const originalEnv = {
}
afterEach(() => {
process.env.CLAUDE_CODE_USE_OPENAI = originalEnv.CLAUDE_CODE_USE_OPENAI
process.env.CLAUDE_CODE_MAX_OUTPUT_TOKENS =
originalEnv.CLAUDE_CODE_MAX_OUTPUT_TOKENS
if (originalEnv.CLAUDE_CODE_USE_OPENAI === undefined) {
delete process.env.CLAUDE_CODE_USE_OPENAI
} else {
process.env.CLAUDE_CODE_USE_OPENAI = originalEnv.CLAUDE_CODE_USE_OPENAI
}
if (originalEnv.CLAUDE_CODE_MAX_OUTPUT_TOKENS === undefined) {
delete process.env.CLAUDE_CODE_MAX_OUTPUT_TOKENS
} else {
process.env.CLAUDE_CODE_MAX_OUTPUT_TOKENS =
originalEnv.CLAUDE_CODE_MAX_OUTPUT_TOKENS
}
})
test('deepseek-chat uses provider-specific context and output caps', () => {


@@ -72,16 +72,23 @@ export function getContextWindowForModel(
return 1_000_000
}
// OpenAI-compatible provider — use known context windows for the model
if (
// OpenAI-compatible provider — use known context windows for the model.
// Unknown models get a conservative 8k default so auto-compact triggers
// before hitting a hard context_window_exceeded error (issue #248 finding 3).
const isOpenAIProvider =
isEnvTruthy(process.env.CLAUDE_CODE_USE_OPENAI) ||
isEnvTruthy(process.env.CLAUDE_CODE_USE_GEMINI) ||
isEnvTruthy(process.env.CLAUDE_CODE_USE_GITHUB)
) {
if (isOpenAIProvider) {
const openaiWindow = getOpenAIContextWindow(model)
if (openaiWindow !== undefined) {
return openaiWindow
}
console.error(
`[context] Warning: model "${model}" not in context window table — using conservative 8k default. ` +
'Add it to src/utils/model/openaiContextWindows.ts for accurate compaction.',
)
return 8_000
}
const cap = getModelCapability(model)
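Sketch of the resulting behavior when an OpenAI-compatible provider is active (exact known-model values are whatever the table in openaiContextWindows.ts holds):

```ts
process.env.CLAUDE_CODE_USE_OPENAI = '1'
getContextWindowForModel('gpt-4o')            // known model: value from the table
getContextWindowForModel('my-local-finetune') // unknown: 8_000, plus a warning log
```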


@@ -24,6 +24,7 @@ import {
type FileHistorySnapshot,
} from './fileHistory.js'
import { logError } from './log.js'
import { getAPIProvider } from './model/providers.js'
import {
createAssistantMessage,
createUserMessage,
@@ -145,6 +146,25 @@ export type DeserializeResult = {
turnInterruptionState: TurnInterruptionState
}
/**
* Remove thinking/redacted_thinking content blocks from assistant messages.
* Messages that become empty after stripping are removed entirely.
*/
function stripThinkingBlocks(messages: NormalizedMessage[]): NormalizedMessage[] {
return messages.reduce<NormalizedMessage[]>((acc, msg) => {
if (msg.type !== 'assistant' || !Array.isArray(msg.message?.content)) {
acc.push(msg)
return acc
}
const filtered = msg.message.content.filter(
(block: { type?: string }) =>
  block.type !== 'thinking' && block.type !== 'redacted_thinking',
)
if (filtered.length === 0) return acc
acc.push({ ...msg, message: { ...msg.message, content: filtered } })
return acc
}, [])
}
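A hedged sketch of the stripping behavior (message shape simplified to the fields the function touches):

```ts
const resumed = [
  { type: 'assistant', message: { content: [
    { type: 'thinking', thinking: '...' },
    { type: 'text', text: 'Done.' },
  ] } },
  { type: 'assistant', message: { content: [
    { type: 'redacted_thinking', data: '...' },
  ] } },
] as unknown as NormalizedMessage[]
// The first message keeps only its text block; the second becomes empty and
// is dropped entirely rather than sent as an empty assistant turn.
stripThinkingBlocks(resumed)
```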
/**
* Deserializes messages from a log file into the format expected by the REPL.
* Filters unresolved tool uses, orphaned thinking messages, and appends a
@@ -195,10 +215,19 @@ export function deserializeMessagesWithInterruptDetection(
filteredToolUses,
) as NormalizedMessage[]
// Strip thinking/redacted_thinking content blocks from assistant messages
// when resuming against a 3P provider. These Anthropic-specific blocks cause
// 400 errors or context corruption on OpenAI-compatible providers (issue #248 finding 5).
const provider = getAPIProvider()
const isThirdPartyProvider =
  provider !== 'firstParty' &&
  provider !== 'bedrock' &&
  provider !== 'vertex' &&
  provider !== 'foundry'
const thinkingStripped = isThirdPartyProvider
? stripThinkingBlocks(filteredThinking)
: filteredThinking
// Filter out assistant messages with only whitespace text content.
// This can happen when model outputs "\n\n" before thinking, user cancels mid-stream.
const filteredMessages = filterWhitespaceOnlyAssistantMessages(
filteredThinking,
thinkingStripped,
) as NormalizedMessage[]
const internalState = detectTurnInterruption(filteredMessages)


@@ -28,6 +28,31 @@ type SupportedPlatform = 'darwin' | 'linux' | 'win32'
// Threshold in characters for when to consider text a "large paste"
export const PASTE_THRESHOLD = 800
export const LINUX_CLIPBOARD_IMAGE_MIME_TYPES = [
'image/png',
'image/jpeg',
'image/jpg',
'image/gif',
'image/webp',
'image/bmp',
]
export function buildLinuxClipboardCheckCommand(): string {
const mimePattern = LINUX_CLIPBOARD_IMAGE_MIME_TYPES.map(mimeType =>
mimeType.replace('/', '\\/'),
).join('|')
return `xclip -selection clipboard -t TARGETS -o 2>/dev/null | grep -E "${mimePattern}" || wl-paste -l 2>/dev/null | grep -E "${mimePattern}"`
}
export function buildLinuxClipboardSaveCommand(screenshotPath: string): string {
return LINUX_CLIPBOARD_IMAGE_MIME_TYPES.flatMap(mimeType => [
`xclip -selection clipboard -t ${mimeType} -o > "${screenshotPath}" 2>/dev/null`,
`wl-paste --type ${mimeType} > "${screenshotPath}" 2>/dev/null`,
]).join(' || ')
}
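For reference, the builders expand to one xclip/wl-paste fallback pair per MIME type (expansion abbreviated):

```ts
buildLinuxClipboardCheckCommand()
// -> 'xclip -selection clipboard -t TARGETS -o 2>/dev/null
//       | grep -E "image\/png|image\/jpeg|..."
//     || wl-paste -l 2>/dev/null | grep -E "image\/png|..."'
buildLinuxClipboardSaveCommand('/tmp/shot.png')
// -> 'xclip -selection clipboard -t image/png -o > "/tmp/shot.png" 2>/dev/null
//     || wl-paste --type image/png > "/tmp/shot.png" 2>/dev/null
//     || ...one xclip/wl-paste pair per remaining MIME type'
```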
function getClipboardCommands() {
const platform = process.platform as SupportedPlatform
@@ -62,9 +87,8 @@ function getClipboardCommands() {
deleteFile: `rm -f "${screenshotPath}"`,
},
linux: {
checkImage:
'xclip -selection clipboard -t TARGETS -o 2>/dev/null | grep -E "image/(png|jpeg|jpg|gif|webp|bmp)" || wl-paste -l 2>/dev/null | grep -E "image/(png|jpeg|jpg|gif|webp|bmp)"',
saveImage: `xclip -selection clipboard -t image/png -o > "${screenshotPath}" 2>/dev/null || wl-paste --type image/png > "${screenshotPath}" 2>/dev/null || xclip -selection clipboard -t image/bmp -o > "${screenshotPath}" 2>/dev/null || wl-paste --type image/bmp > "${screenshotPath}"`,
checkImage: buildLinuxClipboardCheckCommand(),
saveImage: buildLinuxClipboardSaveCommand(screenshotPath),
getPath:
'xclip -selection clipboard -t text/plain -o 2>/dev/null || wl-paste 2>/dev/null',
deleteFile: `rm -f "${screenshotPath}"`,


@@ -159,7 +159,7 @@ export function logError(error: unknown): void {
const err = toError(error)
if (feature('HARD_FAIL') && isHardFailMode()) {
// biome-ignore lint/suspicious/noConsole:: intentional crash output
console.error('[HARD FAIL] logError called with:', err.stack || err.message)
console.error('[HARD FAIL] logError called:', err.name || 'Error')
// eslint-disable-next-line custom-rules/no-process-exit
process.exit(1)
}


@@ -50,9 +50,11 @@ const OPENAI_CONTEXT_WINDOWS: Record<string, number> = {
'gemini-2.5-flash': 1_048_576,
// Ollama local models
'llama3.3:70b': 8_192,
'llama3.1:8b': 8_192,
'llama3.2:3b': 8_192,
// Llama 3.1+ models support 128k context natively (Meta official specs).
// Ollama defaults to num_ctx=8192 but users can configure higher values.
'llama3.3:70b': 128_000,
'llama3.1:8b': 128_000,
'llama3.2:3b': 128_000,
'qwen2.5-coder:32b': 32_768,
'qwen2.5-coder:7b': 32_768,
'deepseek-coder-v2:16b': 163_840,
@@ -122,7 +124,11 @@ const OPENAI_MAX_OUTPUT_TOKENS: Record<string, number> = {
function lookupByModel<T>(table: Record<string, T>, model: string): T | undefined {
if (table[model] !== undefined) return table[model]
for (const key of Object.keys(table)) {
// Sort keys by length descending so the most specific prefix wins.
// Without this, 'gpt-4-turbo-preview' could match 'gpt-4' (8k) instead
// of 'gpt-4-turbo' (128k) depending on V8's key iteration order.
const sortedKeys = Object.keys(table).sort((a, b) => b.length - a.length)
for (const key of sortedKeys) {
if (model.startsWith(key)) return table[key]
}
return undefined
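Spot-checks for the longest-prefix rule (lookupByModel is module-private; table values here are illustrative):

```ts
const table = { 'gpt-4': 8_192, 'gpt-4-turbo': 128_000 }
lookupByModel(table, 'gpt-4-turbo')         // 128_000 — exact match, no scan
lookupByModel(table, 'gpt-4-turbo-preview') // 128_000 — longest prefix wins
lookupByModel(table, 'gpt-4-0613')          // 8_192
lookupByModel(table, 'claude-3-haiku')      // undefined — no prefix matches
```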


@@ -0,0 +1,77 @@
import { afterEach, expect, test } from 'bun:test'
import { getEmptyToolPermissionContext } from '../Tool.js'
import { BashTool } from '../tools/BashTool/BashTool.js'
import { executeShellCommandsInPrompt } from './promptShellExecution.js'
const originalCall = BashTool.call
const originalMapToolResultToToolResultBlockParam =
BashTool.mapToolResultToToolResultBlockParam
afterEach(() => {
BashTool.call = originalCall
BashTool.mapToolResultToToolResultBlockParam =
originalMapToolResultToToolResultBlockParam
})
test('executeShellCommandsInPrompt normalizes null shell output', async () => {
let normalizedResult:
| { stdout: string; stderr: string; interrupted: boolean }
| undefined
BashTool.call = (async () => ({
data: {
stdout: null,
stderr: null,
interrupted: false,
},
})) as unknown as typeof BashTool.call
BashTool.mapToolResultToToolResultBlockParam = (result, toolUseID) => {
normalizedResult = result as {
stdout: string
stderr: string
interrupted: boolean
}
return originalMapToolResultToToolResultBlockParam(result, toolUseID)
}
await executeShellCommandsInPrompt(
'```!\ngit status\n```',
{
abortController: new AbortController(),
options: {
commands: [],
debug: false,
mainLoopModel: 'sonnet',
tools: new Map(),
verbose: false,
thinkingConfig: { type: 'disabled' },
mcpClients: [],
mcpResources: {},
isNonInteractiveSession: false,
agentDefinitions: {
systemDefinitions: [],
projectDefinitions: [],
userDefinitions: [],
},
},
readFileState: new Map(),
getAppState() {
return {
toolPermissionContext: {
...getEmptyToolPermissionContext(),
alwaysAllowRules: { command: ['Bash(*)'] },
},
}
},
setAppState() {},
} as never,
'security-review',
)
expect(normalizedResult).toEqual({
stdout: '',
stderr: '',
interrupted: false,
})
})


@@ -16,7 +16,11 @@ import { processToolResultBlock } from './toolResultStorage.js'
// _simulatedSedEdit) that PowerShellTool's does not.
// NOTE: call() is invoked directly here, bypassing validateInput — any
// load-bearing check must live in call() itself (see PR #23311).
type ShellOut = { stdout: string; stderr: string; interrupted: boolean }
type ShellOut = {
stdout: string | null | undefined
stderr: string | null | undefined
interrupted: boolean
}
type PromptShellTool = Tool & {
call(
input: { command: string },
@@ -113,17 +117,25 @@ export async function executeShellCommandsInPrompt(
}
const { data } = await shellTool.call({ command }, context)
const normalizedData = {
...data,
stdout: typeof data.stdout === 'string' ? data.stdout : '',
stderr: typeof data.stderr === 'string' ? data.stderr : '',
}
// Reuse the same persistence flow as regular Bash tool calls
const toolResultBlock = await processToolResultBlock(
shellTool,
data,
normalizedData,
randomUUID(),
)
// Extract the string content from the block
const output =
typeof toolResultBlock.content === 'string'
? toolResultBlock.content
: formatBashOutput(data.stdout, data.stderr)
: formatBashOutput(
normalizedData.stdout,
normalizedData.stderr,
)
// Function replacer — String.replace interprets $$, $&, $`, $' in
// the replacement string even with a string search pattern. Shell
// output (especially PowerShell: $env:PATH, $$, $PSVersionTable)
@@ -143,21 +155,23 @@ export async function executeShellCommandsInPrompt(
}
function formatBashOutput(
stdout: string,
stderr: string,
stdout: string | null | undefined,
stderr: string | null | undefined,
inline = false,
): string {
const normalizedStdout = typeof stdout === 'string' ? stdout : ''
const normalizedStderr = typeof stderr === 'string' ? stderr : ''
const parts: string[] = []
if (stdout.trim()) {
parts.push(stdout.trim())
if (normalizedStdout.trim()) {
parts.push(normalizedStdout.trim())
}
if (stderr.trim()) {
if (normalizedStderr.trim()) {
if (inline) {
parts.push(`[stderr: ${stderr.trim()}]`)
parts.push(`[stderr: ${normalizedStderr.trim()}]`)
} else {
parts.push(`[stderr]\n${stderr.trim()}`)
parts.push(`[stderr]\n${normalizedStderr.trim()}`)
}
}
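Hedged spot-checks of the normalized formatter (formatBashOutput is module-private, and the elided tail of the function is assumed to join `parts` as before):

```ts
formatBashOutput(null, undefined)    // '' — no TypeError on null shell output
formatBashOutput('ok\n', null)       // 'ok'
formatBashOutput(null, 'boom', true) // '[stderr: boom]'
```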


@@ -0,0 +1,139 @@
import { describe, expect, test, afterEach } from 'bun:test'
import { parseProviderFlag, applyProviderFlag, VALID_PROVIDERS } from './providerFlag.js'
const originalEnv = { ...process.env }
afterEach(() => {
for (const key of [
'CLAUDE_CODE_USE_OPENAI',
'CLAUDE_CODE_USE_GEMINI',
'CLAUDE_CODE_USE_GITHUB',
'CLAUDE_CODE_USE_BEDROCK',
'CLAUDE_CODE_USE_VERTEX',
'OPENAI_BASE_URL',
'OPENAI_API_KEY',
'OPENAI_MODEL',
'GEMINI_MODEL',
]) {
if (originalEnv[key] === undefined) delete process.env[key]
else process.env[key] = originalEnv[key]
}
})
// --- parseProviderFlag ---
describe('parseProviderFlag', () => {
test('returns provider name when --provider flag present', () => {
expect(parseProviderFlag(['--provider', 'openai'])).toBe('openai')
})
test('returns provider name with --model alongside', () => {
expect(parseProviderFlag(['--provider', 'gemini', '--model', 'gemini-2.0-flash'])).toBe('gemini')
})
test('returns null when --provider flag absent', () => {
expect(parseProviderFlag(['--model', 'gpt-4o'])).toBeNull()
})
test('returns null for empty args', () => {
expect(parseProviderFlag([])).toBeNull()
})
test('returns null when --provider has no value', () => {
expect(parseProviderFlag(['--provider'])).toBeNull()
})
test('returns null when --provider value starts with --', () => {
expect(parseProviderFlag(['--provider', '--model'])).toBeNull()
})
})
// --- applyProviderFlag ---
describe('applyProviderFlag - anthropic', () => {
test('sets no env vars for anthropic (default)', () => {
const result = applyProviderFlag('anthropic', [])
expect(result.error).toBeUndefined()
expect(process.env.CLAUDE_CODE_USE_OPENAI).toBeUndefined()
expect(process.env.CLAUDE_CODE_USE_GEMINI).toBeUndefined()
})
})
describe('applyProviderFlag - openai', () => {
test('sets CLAUDE_CODE_USE_OPENAI=1', () => {
const result = applyProviderFlag('openai', [])
expect(result.error).toBeUndefined()
expect(process.env.CLAUDE_CODE_USE_OPENAI).toBe('1')
})
test('sets OPENAI_MODEL when --model is provided', () => {
applyProviderFlag('openai', ['--model', 'gpt-4o'])
expect(process.env.OPENAI_MODEL).toBe('gpt-4o')
})
})
describe('applyProviderFlag - gemini', () => {
test('sets CLAUDE_CODE_USE_GEMINI=1', () => {
const result = applyProviderFlag('gemini', [])
expect(result.error).toBeUndefined()
expect(process.env.CLAUDE_CODE_USE_GEMINI).toBe('1')
})
test('sets GEMINI_MODEL when --model is provided', () => {
applyProviderFlag('gemini', ['--model', 'gemini-2.0-flash'])
expect(process.env.GEMINI_MODEL).toBe('gemini-2.0-flash')
})
})
describe('applyProviderFlag - github', () => {
test('sets CLAUDE_CODE_USE_GITHUB=1', () => {
const result = applyProviderFlag('github', [])
expect(result.error).toBeUndefined()
expect(process.env.CLAUDE_CODE_USE_GITHUB).toBe('1')
})
})
describe('applyProviderFlag - bedrock', () => {
test('sets CLAUDE_CODE_USE_BEDROCK=1', () => {
const result = applyProviderFlag('bedrock', [])
expect(result.error).toBeUndefined()
expect(process.env.CLAUDE_CODE_USE_BEDROCK).toBe('1')
})
})
describe('applyProviderFlag - vertex', () => {
test('sets CLAUDE_CODE_USE_VERTEX=1', () => {
const result = applyProviderFlag('vertex', [])
expect(result.error).toBeUndefined()
expect(process.env.CLAUDE_CODE_USE_VERTEX).toBe('1')
})
})
describe('applyProviderFlag - ollama', () => {
test('sets CLAUDE_CODE_USE_OPENAI=1 with Ollama base URL', () => {
const result = applyProviderFlag('ollama', [])
expect(result.error).toBeUndefined()
expect(process.env.CLAUDE_CODE_USE_OPENAI).toBe('1')
expect(process.env.OPENAI_BASE_URL).toBe('http://localhost:11434/v1')
expect(process.env.OPENAI_API_KEY).toBe('ollama')
})
test('sets OPENAI_MODEL when --model is provided', () => {
applyProviderFlag('ollama', ['--model', 'llama3.2'])
expect(process.env.OPENAI_MODEL).toBe('llama3.2')
})
test('does not override existing OPENAI_BASE_URL when user set a custom one', () => {
process.env.OPENAI_BASE_URL = 'http://my-ollama:11434/v1'
applyProviderFlag('ollama', [])
expect(process.env.OPENAI_BASE_URL).toBe('http://my-ollama:11434/v1')
})
})
describe('applyProviderFlag - invalid provider', () => {
test('returns error for unknown provider', () => {
const result = applyProviderFlag('unknown-provider', [])
expect(result.error).toContain('unknown-provider')
expect(result.error).toContain(VALID_PROVIDERS.join(', '))
})
})

src/utils/providerFlag.ts

@@ -0,0 +1,107 @@
/**
* --provider CLI flag support.
*
* Maps the user-friendly provider name to the environment variables
* that the rest of the codebase uses for provider detection.
*
* Usage:
* openclaude --provider openai --model gpt-4o
* openclaude --provider gemini --model gemini-2.0-flash
* openclaude --provider ollama --model llama3.2
* openclaude --provider anthropic (default, no-op)
*/
export const VALID_PROVIDERS = [
'anthropic',
'openai',
'gemini',
'github',
'bedrock',
'vertex',
'ollama',
] as const
export type ProviderFlagName = (typeof VALID_PROVIDERS)[number]
/**
* Extract the value of --provider from argv.
* Returns null if the flag is absent or has no value.
*/
export function parseProviderFlag(args: string[]): string | null {
const idx = args.indexOf('--provider')
if (idx === -1) return null
const value = args[idx + 1]
if (!value || value.startsWith('--')) return null
return value
}
/**
* Extract the value of --model from argv.
* Returns null if absent.
*/
function parseModelFlag(args: string[]): string | null {
const idx = args.indexOf('--model')
if (idx === -1) return null
const value = args[idx + 1]
if (!value || value.startsWith('--')) return null
return value
}
/**
* Apply a provider name to process.env.
* Sets the required CLAUDE_CODE_USE_* flag and any provider-specific
* defaults (Ollama base URL, model routing). Does NOT overwrite values
* that are already set — explicit env vars always win.
*
* Returns { error } if the provider name is not recognized.
*/
export function applyProviderFlag(
provider: string,
args: string[],
): { error?: string } {
if (!(VALID_PROVIDERS as readonly string[]).includes(provider)) {
return {
error: `Unknown provider "${provider}". Valid providers: ${VALID_PROVIDERS.join(', ')}`,
}
}
const model = parseModelFlag(args)
switch (provider as ProviderFlagName) {
case 'anthropic':
// Default — no env vars needed
break
case 'openai':
process.env.CLAUDE_CODE_USE_OPENAI = '1'
if (model) process.env.OPENAI_MODEL ??= model
break
case 'gemini':
process.env.CLAUDE_CODE_USE_GEMINI = '1'
if (model) process.env.GEMINI_MODEL ??= model
break
case 'github':
process.env.CLAUDE_CODE_USE_GITHUB = '1'
if (model) process.env.OPENAI_MODEL ??= model
break
case 'bedrock':
process.env.CLAUDE_CODE_USE_BEDROCK = '1'
break
case 'vertex':
process.env.CLAUDE_CODE_USE_VERTEX = '1'
break
case 'ollama':
process.env.CLAUDE_CODE_USE_OPENAI = '1'
process.env.OPENAI_BASE_URL ??= 'http://localhost:11434/v1'
process.env.OPENAI_API_KEY ??= 'ollama'
if (model) process.env.OPENAI_MODEL ??= model
break
}
return {}
}
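A minimal wiring sketch, assuming the CLI entry point forwards process.argv (the exact call site is not shown in this diff):

```ts
const args = process.argv.slice(2)
const provider = parseProviderFlag(args)
if (provider !== null) {
  const { error } = applyProviderFlag(provider, args)
  if (error) console.error(error) // 'Unknown provider "foo". Valid providers: ...'
}
// `openclaude --provider ollama --model llama3.2` then behaves as if the user
// had exported CLAUDE_CODE_USE_OPENAI=1, OPENAI_BASE_URL=http://localhost:11434/v1,
// OPENAI_API_KEY=ollama, and OPENAI_MODEL=llama3.2 — without clobbering any of
// those variables that were already set.
```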


@@ -0,0 +1,246 @@
const OPENAI_INCOMPATIBLE_SCHEMA_KEYWORDS = new Set([
'$comment',
'$schema',
'default',
'else',
'examples',
'format',
'if',
'maxLength',
'maximum',
'minLength',
'minimum',
'multipleOf',
'pattern',
'patternProperties',
'propertyNames',
'then',
'unevaluatedProperties',
])
function isSchemaRecord(value: unknown): value is Record<string, unknown> {
return value !== null && typeof value === 'object' && !Array.isArray(value)
}
function stripSchemaKeywords(schema: unknown, keywords: Set<string>): unknown {
if (Array.isArray(schema)) {
return schema.map(item => stripSchemaKeywords(item, keywords))
}
if (!isSchemaRecord(schema)) {
return schema
}
const result: Record<string, unknown> = {}
for (const [key, value] of Object.entries(schema)) {
if (keywords.has(key)) {
continue
}
result[key] = stripSchemaKeywords(value, keywords)
}
return result
}
function deepEqualJsonValue(a: unknown, b: unknown): boolean {
if (Object.is(a, b)) return true
if (typeof a !== typeof b) return false
if (Array.isArray(a) && Array.isArray(b)) {
return (
a.length === b.length &&
a.every((value, index) => deepEqualJsonValue(value, b[index]))
)
}
if (isSchemaRecord(a) && isSchemaRecord(b)) {
const aKeys = Object.keys(a)
const bKeys = Object.keys(b)
return (
aKeys.length === bKeys.length &&
aKeys.every(key => key in b && deepEqualJsonValue(a[key], b[key]))
)
}
return false
}
function matchesJsonSchemaType(type: string, value: unknown): boolean {
switch (type) {
case 'string':
return typeof value === 'string'
case 'number':
return typeof value === 'number' && Number.isFinite(value)
case 'integer':
return typeof value === 'number' && Number.isInteger(value)
case 'boolean':
return typeof value === 'boolean'
case 'object':
return value !== null && typeof value === 'object' && !Array.isArray(value)
case 'array':
return Array.isArray(value)
case 'null':
return value === null
default:
return true
}
}
function getJsonSchemaTypes(record: Record<string, unknown>): string[] {
const raw = record.type
if (typeof raw === 'string') {
return [raw]
}
if (Array.isArray(raw)) {
return raw.filter((value): value is string => typeof value === 'string')
}
return []
}
function schemaAllowsValue(schema: Record<string, unknown>, value: unknown): boolean {
if (Array.isArray(schema.anyOf)) {
return schema.anyOf.some(item =>
schemaAllowsValue(sanitizeSchemaForOpenAICompat(item), value),
)
}
if (Array.isArray(schema.oneOf)) {
return (
schema.oneOf.filter(item =>
schemaAllowsValue(sanitizeSchemaForOpenAICompat(item), value),
).length === 1
)
}
if (Array.isArray(schema.allOf)) {
return schema.allOf.every(item =>
schemaAllowsValue(sanitizeSchemaForOpenAICompat(item), value),
)
}
if ('const' in schema && !deepEqualJsonValue(schema.const, value)) {
return false
}
if (Array.isArray(schema.enum)) {
if (!schema.enum.some(item => deepEqualJsonValue(item, value))) {
return false
}
}
const types = getJsonSchemaTypes(schema)
if (types.length > 0 && !types.some(type => matchesJsonSchemaType(type, value))) {
return false
}
return true
}
function sanitizeTypeField(record: Record<string, unknown>): void {
const allowed = new Set([
'string',
'number',
'integer',
'boolean',
'object',
'array',
'null',
])
const raw = record.type
if (typeof raw === 'string') {
if (!allowed.has(raw)) delete record.type
return
}
if (!Array.isArray(raw)) return
const filtered = raw.filter(
(value, index): value is string =>
typeof value === 'string' &&
allowed.has(value) &&
raw.indexOf(value) === index,
)
if (filtered.length === 0) {
delete record.type
} else if (filtered.length === 1) {
record.type = filtered[0]
} else {
record.type = filtered
}
}
/**
* Sanitize JSON Schema into a shape OpenAI-compatible providers and Codex
* strict-mode tooling are more likely to accept. This strips provider-rejected
* keywords while keeping enum/const cleanup defensive for imperfect MCP schemas.
*/
export function sanitizeSchemaForOpenAICompat(
schema: unknown,
): Record<string, unknown> {
const stripped = stripSchemaKeywords(schema, OPENAI_INCOMPATIBLE_SCHEMA_KEYWORDS)
if (!isSchemaRecord(stripped)) {
return {}
}
const record = { ...stripped }
sanitizeTypeField(record)
if (isSchemaRecord(record.properties)) {
const sanitizedProps: Record<string, unknown> = {}
for (const [key, value] of Object.entries(record.properties)) {
sanitizedProps[key] = sanitizeSchemaForOpenAICompat(value)
}
record.properties = sanitizedProps
}
if ('items' in record) {
if (Array.isArray(record.items)) {
record.items = record.items.map(item =>
sanitizeSchemaForOpenAICompat(item),
)
} else {
record.items = sanitizeSchemaForOpenAICompat(record.items)
}
}
for (const key of ['anyOf', 'oneOf', 'allOf'] as const) {
if (Array.isArray(record[key])) {
record[key] = record[key].map(item =>
sanitizeSchemaForOpenAICompat(item),
)
}
}
if (Array.isArray(record.required) && isSchemaRecord(record.properties)) {
record.required = record.required.filter(
(value): value is string =>
typeof value === 'string' && value in record.properties,
)
}
const schemaWithoutEnum = { ...record }
delete schemaWithoutEnum.enum
if (Array.isArray(record.enum)) {
const filteredEnum = record.enum.filter(value =>
schemaAllowsValue(schemaWithoutEnum, value),
)
if (filteredEnum.length > 0) {
record.enum = filteredEnum
} else {
delete record.enum
}
}
const schemaWithoutConst = { ...record }
delete schemaWithoutConst.const
if ('const' in record && !schemaAllowsValue(schemaWithoutConst, record.const)) {
delete record.const
}
return record
}
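A hedged before/after sketch of the sanitizer on a schema that mixes stripped keywords with a defensive enum:

```ts
sanitizeSchemaForOpenAICompat({
  type: 'string',
  format: 'uri',                // stripped: in the incompatible-keyword set
  pattern: '^https?://',        // stripped
  enum: ['https://a.test', 42], // 42 dropped: fails the remaining type check
})
// -> { type: 'string', enum: ['https://a.test'] }
```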


@@ -3,7 +3,7 @@
* Inspired by https://github.com/nas5w/random-word-slugs
* with Claude-flavored words
*/
import { randomBytes } from 'crypto'
import { randomInt as cryptoRandomInt } from 'crypto'
// Adjectives for slug generation - whimsical and delightful
const ADJECTIVES = [
@@ -765,10 +765,7 @@ const VERBS = [
* Generate a cryptographically random integer in the range [0, max)
*/
function randomInt(max: number): number {
// Use crypto.randomBytes for better randomness than Math.random
const bytes = randomBytes(4)
const value = bytes.readUInt32BE(0)
return value % max
return cryptoRandomInt(max)
}
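Why the swap matters: mapping a 32-bit read through `% max` is biased whenever 2**32 is not a multiple of max, while crypto.randomInt rejection-samples and is exactly uniform.

```ts
// Illustrative: 2**32 % 3 === 1, so under the old mapping one residue class
// occurred once more often than the other two across all 2**32 inputs.
// cryptoRandomInt(3) redraws instead of folding, so 0, 1, and 2 are equally likely.
const roll = cryptoRandomInt(3)
```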
/**


@@ -38,6 +38,26 @@ def test_converts_image_block_to_placeholder():
assert "[image]" in result[0]["content"]
assert "Describe this" in result[0]["content"]
def test_converts_base64_image_block_to_ollama_images():
messages = [{
"role": "user",
"content": [
{
"type": "image",
"source": {
"type": "base64",
"media_type": "image/png",
"data": "YWJjMTIz",
},
},
{"type": "text", "text": "Describe this"},
],
}]
result = anthropic_to_ollama_messages(messages)
assert result[0]["images"] == ["YWJjMTIz"]
assert "Describe this" in result[0]["content"]
def test_converts_multi_turn():
messages = [
{"role": "user", "content": "Hi"},
@@ -118,3 +138,43 @@ async def test_ollama_chat_prepends_system():
)
assert captured["messages"][0]["role"] == "system"
assert "helpful" in captured["messages"][0]["content"]
@pytest.mark.asyncio
async def test_ollama_chat_includes_base64_images_in_payload():
captured = {}
async def mock_post(url, json=None, **kwargs):
captured.update(json or {})
m = MagicMock()
m.raise_for_status = MagicMock()
m.json.return_value = {
"message": {"content": "ok"},
"created_at": "",
"prompt_eval_count": 1,
"eval_count": 1,
}
return m
with patch("ollama_provider.httpx.AsyncClient") as MockClient:
MockClient.return_value.__aenter__.return_value.post = mock_post
await ollama_chat(
model="llama3:8b",
messages=[{
"role": "user",
"content": [
{
"type": "image",
"source": {
"type": "base64",
"media_type": "image/jpeg",
"data": "ZHVtbXk=",
},
},
{"type": "text", "text": "What is in this image?"},
],
}],
)
assert captured["messages"][0]["images"] == ["ZHVtbXk="]
assert "What is in this image?" in captured["messages"][0]["content"]


@@ -13,6 +13,11 @@ from smart_router import SmartRouter, Provider
# ── Fixtures ──────────────────────────────────────────────────────────────────
@pytest.fixture(autouse=True)
def fake_api_key(monkeypatch):
monkeypatch.setenv("FAKE_KEY", "test-key")
def make_provider(name, healthy=True, configured=True,
latency=100.0, cost=0.002, errors=0, requests=0):
p = Provider(
@@ -122,6 +127,13 @@ def test_get_model_large_request():
assert model == "openai-big"
def test_get_model_large_message_overrides_claude_label():
p = make_provider("openai")
r = make_router()
model = r.get_model_for_provider(p, "claude-haiku", is_large_request=True)
assert model == "openai-big"
def test_get_model_small_request():
p = make_provider("openai")
r = make_router()
@@ -140,6 +152,16 @@ async def test_route_returns_best_provider():
assert result["provider"] == "cheap"
@pytest.mark.asyncio
async def test_route_uses_big_model_for_large_message_bodies():
p = make_provider("openai")
r = make_router(providers=[p])
result = await r.route([
{"role": "user", "content": "x" * 3001},
], "claude-haiku")
assert result["model"] == "openai-big"
@pytest.mark.asyncio
async def test_route_raises_when_no_providers():
p = make_provider("a", healthy=False)


@@ -0,0 +1,13 @@
{
"version": "0.2.0",
"configurations": [
{
"name": "Extension",
"type": "extensionHost",
"request": "launch",
"args": ["--extensionDevelopmentPath=${workspaceFolder}"],
"outFiles": ["${workspaceFolder}/out/**/*.js"],
"preLaunchTask": "${defaultBuildTask}"
}
]
}


@@ -0,0 +1,44 @@
# OpenClaude VS Code Extension
A sleek VS Code companion for OpenClaude with a visual **Control Center** and terminal-first workflows.
## Features
- **Control Center sidebar UI** in the Activity Bar:
- Launch OpenClaude
- Open repository/docs
- Open VS Code theme picker
- **Terminal launch command**: `OpenClaude: Launch in Terminal`
- **Built-in dark theme**: `OpenClaude Terminal Black` (terminal-inspired, low-glare, neon accents)
## Requirements
- VS Code `1.95+`
- `openclaude` available in your terminal PATH (`npm install -g @gitlawb/openclaude`)
## Commands
- `OpenClaude: Open Control Center`
- `OpenClaude: Launch in Terminal`
- `OpenClaude: Open Repository`
## Settings
- `openclaude.launchCommand` (default: `openclaude`)
- `openclaude.terminalName` (default: `OpenClaude`)
- `openclaude.useOpenAIShim` (default: `false`)
## Development
From this folder:
```bash
npm run lint
```
To package (optional):
```bash
npm run package
```


@@ -0,0 +1,6 @@
<svg width="128" height="128" viewBox="0 0 128 128" fill="none" xmlns="http://www.w3.org/2000/svg">
<rect width="128" height="128" rx="20" fill="#0B0F18"/>
<rect x="16" y="20" width="96" height="88" rx="10" fill="#090B10" stroke="#2A3350"/>
<path d="M32 48L46 60L32 72" stroke="#66D9EF" stroke-width="8" stroke-linecap="round" stroke-linejoin="round"/>
<rect x="56" y="68" width="38" height="8" rx="4" fill="#89DD7C"/>
</svg>



@@ -0,0 +1,102 @@
{
"name": "openclaude-vscode",
"displayName": "OpenClaude",
"description": "Sleek VS Code extension for OpenClaude with a visual Control Center and terminal-aligned theme.",
"version": "0.1.1",
"publisher": "devnull-bootloader",
"engines": {
"vscode": "^1.95.0"
},
"categories": [
"Themes",
"Other"
],
"activationEvents": [
"onStartupFinished",
"onCommand:openclaude.start",
"onCommand:openclaude.openDocs",
"onCommand:openclaude.openControlCenter",
"onView:openclaude.controlCenter"
],
"main": "./src/extension.js",
"contributes": {
"commands": [
{
"command": "openclaude.start",
"title": "OpenClaude: Launch in Terminal",
"category": "OpenClaude"
},
{
"command": "openclaude.openDocs",
"title": "OpenClaude: Open Repository",
"category": "OpenClaude"
},
{
"command": "openclaude.openControlCenter",
"title": "OpenClaude: Open Control Center",
"category": "OpenClaude"
}
],
"viewsContainers": {
"activitybar": [
{
"id": "openclaude",
"title": "OpenClaude",
"icon": "media/openclaude.svg"
}
]
},
"views": {
"openclaude": [
{
"id": "openclaude.controlCenter",
"name": "Control Center",
"type": "webview"
}
]
},
"configuration": {
"title": "OpenClaude",
"properties": {
"openclaude.launchCommand": {
"type": "string",
"default": "openclaude",
"description": "Command run in the integrated terminal when launching OpenClaude."
},
"openclaude.terminalName": {
"type": "string",
"default": "OpenClaude",
"description": "Integrated terminal tab name for OpenClaude sessions."
},
"openclaude.useOpenAIShim": {
"type": "boolean",
"default": false,
"description": "Optionally set CLAUDE_CODE_USE_OPENAI=1 in launched OpenClaude terminals."
}
}
},
"themes": [
{
"label": "OpenClaude Terminal Black",
"uiTheme": "vs-dark",
"path": "./themes/OpenClaude-Terminal-Black.json"
}
]
},
"scripts": {
"lint": "node --check ./src/extension.js",
"package": "npx @vscode/vsce package --no-dependencies"
},
"keywords": [
"openclaude",
"terminal",
"theme",
"cli",
"llm"
],
"repository": {
"type": "git",
"url": "https://github.com/Gitlawb/openclaude"
},
"license": "MIT"
}


@@ -0,0 +1,335 @@
const vscode = require('vscode');
const crypto = require('crypto');
const { exec } = require('child_process');
const { promisify } = require('util');
const execAsync = promisify(exec);
const OPENCLAUDE_REPO_URL = 'https://github.com/Gitlawb/openclaude';
async function isCommandAvailable(command) {
  try {
    if (!command) {
      return false;
    }
    // `command` comes from user settings and is interpolated into a shell
    // string below; refuse anything containing shell metacharacters rather
    // than risking injection via a crafted launchCommand value.
    if (!/^[\w.\/\\:-]+$/.test(command)) {
      return false;
    }
    if (process.platform === 'win32') {
      await execAsync(`where ${command}`);
    } else {
      await execAsync(`command -v ${command}`);
    }
    return true;
  } catch {
    return false;
  }
}
function getExecutableFromCommand(command) {
return command.trim().split(/\s+/)[0];
}
async function launchOpenClaude() {
const configured = vscode.workspace.getConfiguration('openclaude');
const launchCommand = configured.get('launchCommand', 'openclaude');
const terminalName = configured.get('terminalName', 'OpenClaude');
const shimEnabled = configured.get('useOpenAIShim', false);
const executable = getExecutableFromCommand(launchCommand);
const installed = await isCommandAvailable(executable);
if (!installed) {
const action = await vscode.window.showErrorMessage(
`OpenClaude command not found: ${executable}. Install it with: npm install -g @gitlawb/openclaude`,
'Open Repository'
);
if (action === 'Open Repository') {
await vscode.env.openExternal(vscode.Uri.parse(OPENCLAUDE_REPO_URL));
}
return;
}
const env = {};
if (shimEnabled) {
env.CLAUDE_CODE_USE_OPENAI = '1';
}
const terminal = vscode.window.createTerminal({
name: terminalName,
env,
});
terminal.show(true);
terminal.sendText(launchCommand, true);
}
class OpenClaudeControlCenterProvider {
async resolveWebviewView(webviewView) {
webviewView.webview.options = { enableScripts: true };
const configured = vscode.workspace.getConfiguration('openclaude');
const launchCommand = configured.get('launchCommand', 'openclaude');
const executable = getExecutableFromCommand(launchCommand);
const installed = await isCommandAvailable(executable);
const shimEnabled = configured.get('useOpenAIShim', false);
const shortcut = process.platform === 'darwin' ? 'Cmd+Shift+P' : 'Ctrl+Shift+P';
webviewView.webview.html = this.getHtml(webviewView.webview, {
installed,
shimEnabled,
shortcut,
executable,
});
webviewView.webview.onDidReceiveMessage(async (message) => {
if (message?.type === 'launch') {
await launchOpenClaude();
return;
}
if (message?.type === 'docs') {
await vscode.env.openExternal(vscode.Uri.parse(OPENCLAUDE_REPO_URL));
return;
}
if (message?.type === 'commands') {
await vscode.commands.executeCommand('workbench.action.showCommands');
}
});
}
getHtml(webview, status) {
const nonce = crypto.randomBytes(16).toString('base64');
const runtimeLabel = status.installed ? 'available' : 'missing';
const shimLabel = status.shimEnabled ? 'enabled (CLAUDE_CODE_USE_OPENAI=1)' : 'disabled';
return `<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="UTF-8" />
<meta http-equiv="Content-Security-Policy" content="default-src 'none'; style-src 'unsafe-inline'; script-src 'nonce-${nonce}';" />
<meta name="viewport" content="width=device-width, initial-scale=1.0" />
<style>
:root {
--oc-bg-1: #081018;
--oc-bg-2: #0e1b29;
--oc-line: #2f4d63;
--oc-accent: #7fffd4;
--oc-accent-dim: #4db89a;
--oc-text-dim: #94a7b5;
}
* {
margin: 0;
padding: 0;
box-sizing: border-box;
}
body {
font-family: "Cascadia Code", "JetBrains Mono", "Fira Code", Consolas, "Liberation Mono", Menlo, monospace;
color: var(--vscode-foreground);
background:
radial-gradient(circle at 85% -10%, color-mix(in srgb, var(--oc-accent) 16%, transparent), transparent 45%),
linear-gradient(165deg, var(--oc-bg-1), var(--oc-bg-2));
padding: 14px;
min-height: 100vh;
line-height: 1.45;
letter-spacing: 0.15px;
overflow-x: hidden;
}
.panel {
border: 1px solid color-mix(in srgb, var(--oc-line) 80%, var(--vscode-editorWidget-border));
border-radius: 10px;
background: color-mix(in srgb, var(--oc-bg-1) 78%, var(--vscode-sideBar-background));
box-shadow: 0 0 0 1px rgba(127, 255, 212, 0.08), 0 10px 24px rgba(0, 0, 0, 0.35);
overflow: hidden;
animation: boot 360ms ease-out;
}
.topbar {
padding: 8px 10px;
font-size: 10px;
text-transform: uppercase;
color: var(--oc-text-dim);
border-bottom: 1px solid var(--oc-line);
background: color-mix(in srgb, var(--oc-bg-2) 74%, black);
display: flex;
justify-content: space-between;
gap: 8px;
}
.boot-dot {
color: var(--oc-accent);
animation: blink 1.2s steps(1, end) infinite;
}
.content {
padding: 12px;
display: grid;
gap: 14px;
}
.title {
color: var(--oc-accent);
font-size: 14px;
font-weight: 700;
margin-bottom: 4px;
}
.sub {
color: var(--oc-text-dim);
font-size: 11px;
}
.terminal-box {
border: 1px dashed color-mix(in srgb, var(--oc-line) 78%, white);
border-radius: 8px;
padding: 10px;
background: color-mix(in srgb, var(--oc-bg-2) 78%, black);
font-size: 11px;
display: grid;
gap: 6px;
}
.terminal-row {
color: var(--oc-text-dim);
white-space: nowrap;
overflow: hidden;
text-overflow: ellipsis;
}
.prompt {
color: var(--oc-accent);
}
.cursor::after {
content: "_";
animation: blink 1s steps(1, end) infinite;
margin-left: 1px;
}
.actions {
display: grid;
gap: 8px;
}
.btn {
width: 100%;
border: 1px solid var(--oc-line);
border-radius: 7px;
padding: 10px;
cursor: pointer;
text-align: left;
font-family: inherit;
font-size: 11px;
letter-spacing: 0.3px;
text-transform: uppercase;
transition: transform 140ms ease, border-color 140ms ease, background 140ms ease;
background: color-mix(in srgb, var(--oc-bg-2) 82%, black);
color: var(--vscode-foreground);
position: relative;
overflow: hidden;
}
.btn::before {
content: ">";
color: var(--oc-accent-dim);
margin-right: 8px;
display: inline-block;
width: 10px;
}
.btn:hover {
border-color: var(--oc-accent-dim);
transform: translateX(2px);
background: color-mix(in srgb, var(--oc-bg-2) 68%, #113642);
}
.btn.primary {
border-color: color-mix(in srgb, var(--oc-accent) 50%, var(--oc-line));
box-shadow: inset 0 0 0 1px rgba(127, 255, 212, 0.12);
}
.hint {
font-size: 10px;
color: var(--oc-text-dim);
border-top: 1px solid var(--oc-line);
padding-top: 10px;
}
.hint code {
font-family: inherit;
color: var(--oc-accent);
background: rgba(0, 0, 0, 0.26);
padding: 2px 5px;
border-radius: 4px;
border: 1px solid rgba(127, 255, 212, 0.14);
}
@keyframes blink {
50% {
opacity: 0;
}
}
@keyframes boot {
from {
transform: translateY(6px);
opacity: 0;
}
to {
transform: translateY(0);
opacity: 1;
}
}
</style>
</head>
<body>
<div class="panel">
<div class="topbar">
<span>openclaude control center</span>
<span class="boot-dot">online</span>
</div>
<div class="content">
<div>
<div class="title">READY FOR INPUT</div>
<div class="sub">Terminal-oriented workflow with direct command access.</div>
</div>
<div class="terminal-box">
<div class="terminal-row"><span class="prompt">$</span> openclaude --status</div>
<div class="terminal-row">runtime: ${runtimeLabel}</div>
<div class="terminal-row">shim: ${shimLabel}</div>
<div class="terminal-row">command: ${status.executable}</div>
<div class="terminal-row"><span class="prompt">$</span> <span class="cursor">awaiting command</span></div>
</div>
<div class="actions">
<button class="btn primary" id="launch">Launch OpenClaude</button>
<button class="btn" id="docs">Open Repository</button>
<button class="btn" id="commands">Open Command Palette</button>
</div>
<div class="hint">
Quick trigger: use <code>${status.shortcut}</code> and run OpenClaude commands from anywhere.
</div>
</div>
</div>
<script nonce="${nonce}">
const vscode = acquireVsCodeApi();
document.getElementById('launch').addEventListener('click', () => vscode.postMessage({ type: 'launch' }));
document.getElementById('docs').addEventListener('click', () => vscode.postMessage({ type: 'docs' }));
document.getElementById('commands').addEventListener('click', () => vscode.postMessage({ type: 'commands' }));
</script>
</body>
</html>`;
}
}
/**
* @param {vscode.ExtensionContext} context
*/
function activate(context) {
const startCommand = vscode.commands.registerCommand('openclaude.start', async () => {
await launchOpenClaude();
});
const openDocsCommand = vscode.commands.registerCommand('openclaude.openDocs', async () => {
await vscode.env.openExternal(vscode.Uri.parse(OPENCLAUDE_REPO_URL));
});
const openUiCommand = vscode.commands.registerCommand('openclaude.openControlCenter', async () => {
await vscode.commands.executeCommand('workbench.view.extension.openclaude');
});
const provider = new OpenClaudeControlCenterProvider();
const providerDisposable = vscode.window.registerWebviewViewProvider('openclaude.controlCenter', provider);
context.subscriptions.push(startCommand, openDocsCommand, openUiCommand, providerDisposable);
}
function deactivate() {}
module.exports = {
activate,
deactivate,
};


@@ -0,0 +1,78 @@
{
"$schema": "vscode://schemas/color-theme",
"name": "OpenClaude Terminal Black",
"type": "dark",
"colors": {
"editor.background": "#090B10",
"editor.foreground": "#D6E2FF",
"editorCursor.foreground": "#66D9EF",
"editorLineNumber.foreground": "#3D4458",
"editorLineNumber.activeForeground": "#7F8AA3",
"editor.selectionBackground": "#1C2333",
"editor.inactiveSelectionBackground": "#141A27",
"editor.wordHighlightBackground": "#1C233344",
"editor.wordHighlightStrongBackground": "#24304B66",
"editorIndentGuide.background1": "#131825",
"editorIndentGuide.activeBackground1": "#2A3350",
"editorBracketMatch.background": "#25304B66",
"editorBracketMatch.border": "#66D9EF",
"terminal.background": "#090B10",
"terminal.foreground": "#D6E2FF",
"terminalCursor.background": "#66D9EF",
"terminalCursor.foreground": "#66D9EF",
"terminal.ansiBlack": "#090B10",
"terminal.ansiRed": "#FF6B6B",
"terminal.ansiGreen": "#89DD7C",
"terminal.ansiYellow": "#F2C14E",
"terminal.ansiBlue": "#5CA9FF",
"terminal.ansiMagenta": "#C792EA",
"terminal.ansiCyan": "#66D9EF",
"terminal.ansiWhite": "#D6E2FF",
"terminal.ansiBrightBlack": "#4A5165",
"terminal.ansiBrightRed": "#FF8787",
"terminal.ansiBrightGreen": "#A4EFA0",
"terminal.ansiBrightYellow": "#FFD479",
"terminal.ansiBrightBlue": "#86C1FF",
"terminal.ansiBrightMagenta": "#D8B0F5",
"terminal.ansiBrightCyan": "#9DE9FF",
"terminal.ansiBrightWhite": "#E8F0FF",
"statusBar.background": "#0F1420",
"statusBar.foreground": "#D6E2FF",
"activityBar.background": "#0D111B",
"activityBar.foreground": "#D6E2FF",
"sideBar.background": "#0B0F18",
"sideBar.foreground": "#B3BDD4",
"titleBar.activeBackground": "#0B0F18",
"titleBar.activeForeground": "#D6E2FF"
},
"tokenColors": [
{
"scope": ["comment", "punctuation.definition.comment"],
"settings": { "foreground": "#5A637B", "fontStyle": "italic" }
},
{
"scope": ["keyword", "storage.type", "storage.modifier"],
"settings": { "foreground": "#C792EA" }
},
{
"scope": ["string", "constant.other.symbol"],
"settings": { "foreground": "#89DD7C" }
},
{
"scope": ["constant.numeric", "constant.language"],
"settings": { "foreground": "#F2C14E" }
},
{
"scope": ["entity.name.function", "support.function"],
"settings": { "foreground": "#5CA9FF" }
},
{
"scope": ["variable", "entity.name.variable"],
"settings": { "foreground": "#D6E2FF" }
},
{
"scope": ["entity.name.type", "support.type", "entity.name.class"],
"settings": { "foreground": "#66D9EF" }
}
]
}