* fix: disable experimental API betas by default to prevent 500 errors
Tool search (defer_loading), global cache scope, and context management
betas require internal Anthropic server-side support. External accounts
receive 500 Internal Server Error when these are sent.
Set CLAUDE_CODE_DISABLE_EXPERIMENTAL_BETAS=true by default in the CLI
entrypoint. Users with internal access can opt back in with =false.
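A minimal sketch of the opt-out guard; the specific beta strings below are illustrative, not the actual list:

```typescript
// Experimental betas that require Anthropic-internal server support.
// These names are placeholders for illustration.
const EXPERIMENTAL_BETAS = ["tool-search-2025-01-01", "context-management-2025-01-01"];

function resolveBetas(
  baseBetas: string[],
  env: Record<string, string | undefined>,
): string[] {
  // Disabled by default; only an explicit "false" opts back in.
  const disabled = env.CLAUDE_CODE_DISABLE_EXPERIMENTAL_BETAS !== "false";
  return disabled
    ? baseBetas.filter((b) => !EXPERIMENTAL_BETAS.includes(b))
    : baseBetas;
}
```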
Also includes: cache key stability fixes (Sonnet 1M latch, system-before-
messages key ordering, resume fingerprint isMeta skip), sideQuery default
cleanup, and /dream command.
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
* refactor: standardize API headers to Headers type and enable tengu feature flags by default
* fix: address PR review — dream lock, MCP betas guard, redundant Partial
- Call recordConsolidation() programmatically in /dream instead of
delegating to model prompt (unreliable)
- Add CLAUDE_CODE_DISABLE_EXPERIMENTAL_BETAS guard to MCP entrypoint
(was only in CLI entrypoint, causing 500s in MCP server mode)
- Remove redundant ? markers from SecretValueSource Partial<{}> type
---------
Co-authored-by: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Prevents collision with existing Claude Code installations that already
use ~/.claude for their own config, settings, and project data.
Migration compatibility: if ~/.openclaude does not yet exist but ~/.claude
does, the legacy path is kept automatically so existing openclaude users
don't lose their data on upgrade. New installs go straight to ~/.openclaude.
Users who need an explicit path can set CLAUDE_CONFIG_DIR.
Fixes #184
Co-authored-by: Claude Sonnet 4.6 <noreply@anthropic.com>
* feat: add agentModels and agentRouting to SettingsSchema
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
* feat: add agentRouting module for per-agent provider resolution
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
* feat: thread providerOverride through OpenAI shim for per-agent routing
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
* feat: getAnthropicClient accepts providerOverride for agent routing
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
* feat: thread providerOverride through Options and queryModel calls
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
* feat: thread providerOverride through query loop and ToolUseContext
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
* feat: resolve agent routing in runAgent and inject providerOverride
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
* docs: add Agent Routing configuration guide to README
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
* test: add unit tests for resolveAgentProvider + plaintext api_key note
- 15 tests covering priority chain (name > subagentType > default > null)
- normalize() case-insensitive and hyphen/underscore equivalence
- Edge cases: null settings, missing config sections, non-existent model
- README note about api_key stored in plaintext
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
* security: address code review — SSRF, credential leak, key collision
- base_url schema now uses z.string().url() for SSRF mitigation
- Strip auth headers (Authorization, x-api-key, api-key) from
defaultHeaders when providerOverride is active, preventing
Anthropic credentials from leaking to third-party endpoints
- Warn on duplicate normalized routing keys to prevent silent shadowing
- providerOverride.apiKey is never logged (verified via grep)
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
---------
Co-authored-by: 冯俊辉 <fengjunhui@shiyanjia.com>
Co-authored-by: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Add flickerFreeMode to GlobalConfig so external users can enable
fullscreen alt-screen mode via /config instead of having to set
the CLAUDE_CODE_NO_FLICKER=1 env var manually.
Priority order in isFullscreenEnvEnabled():
CLAUDE_CODE_NO_FLICKER=0 → always off (env wins)
CLAUDE_CODE_NO_FLICKER=1 → always on (env wins)
tmux -CC detected → off (terminal safety guard)
config flickerFreeMode → user preference (new)
USER_TYPE=ant → internal default
The env var still takes full precedence so existing scripts and
automation are unaffected. The new setting only activates when
flickerFreeMode is explicitly set in config.
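The precedence chain above can be sketched as follows; the config shape and the isTmuxCC parameter are assumptions about the surrounding code:

```typescript
type Config = { flickerFreeMode?: boolean };

function isFullscreenEnabled(
  env: Record<string, string | undefined>,
  config: Config,
  isTmuxCC: boolean,
): boolean {
  if (env.CLAUDE_CODE_NO_FLICKER === "0") return false; // env always wins
  if (env.CLAUDE_CODE_NO_FLICKER === "1") return true;  // env always wins
  if (isTmuxCC) return false;                           // terminal safety guard
  if (config.flickerFreeMode !== undefined) return config.flickerFreeMode;
  return env.USER_TYPE === "ant";                       // internal default
}
```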
Normalize shell command stdout and stderr before the prompt-shell path and the shared tool-result mappers apply string operations. This prevents /security-review from crashing when a shell tool returns null output fields, and adds regression coverage for both direct mapper calls and prompt generation.
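A minimal sketch of the normalization, assuming a result shape with nullable output fields:

```typescript
interface ShellResult {
  stdout: string | null;
  stderr: string | null;
}

// Coalesce null output fields to empty strings before any string
// operations (split, trim, slice) run on them downstream.
function normalizeShellOutput(r: ShellResult): { stdout: string; stderr: string } {
  return { stdout: r.stdout ?? "", stderr: r.stderr ?? "" };
}
```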
Fixes #165
Co-authored-by: Claude <noreply@anthropic.com>
* feat: add --provider CLI flag for multi-provider support
Adds a --provider flag that maps friendly provider names to the
environment variables the codebase uses for provider detection.
No more manual env-var configuration — users can now simply run:
openclaude --provider openai --model gpt-4o
openclaude --provider gemini --model gemini-2.0-flash
openclaude --provider ollama --model llama3.2
openclaude --provider bedrock
openclaude --provider vertex
Implementation details:
- providerFlag.ts: core logic — maps provider names to env vars,
uses ??= so explicit env vars always win over the flag defaults
- providerFlag.test.ts: 18 tests covering all 7 providers,
error messages, model passthrough, and env-var precedence
- cli.tsx: early fast-path (mirrors --bare pattern) — sets env
vars before Commander option-building and module constants run
- main.tsx: adds --provider to Commander option chain for --help
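The ??= precedence described above can be sketched like this; the per-provider env var mapping shown is partly an assumption (only the Gemini and OpenAI toggles are named elsewhere in this log):

```typescript
const PROVIDER_ENV: Record<string, Record<string, string>> = {
  openai: { CLAUDE_CODE_USE_OPENAI: "1" },
  gemini: { CLAUDE_CODE_USE_GEMINI: "1" },
};

function applyProviderFlag(
  provider: string,
  env: Record<string, string | undefined>,
): void {
  const vars = PROVIDER_ENV[provider];
  if (!vars) throw new Error(`Unknown provider: ${provider}`);
  for (const [key, value] of Object.entries(vars)) {
    env[key] ??= value; // explicit env vars always win over flag defaults
  }
}
```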
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
* fix: custom OPENAI_BASE_URL always wins over Codex model alias detection
When OPENAI_MODEL=gpt-5.4 (or gpt-5.4-mini) and a custom OPENAI_BASE_URL
is set (Azure, OpenRouter, etc.), the transport was incorrectly forced to
codex_responses because gpt-5.4 is in CODEX_ALIAS_MODELS. This caused
requests to be sent with Codex auth instead of the user's API key,
resulting in 401 Unauthorized errors.
Fix: only use codex_responses when the base URL is explicitly the Codex
endpoint, OR when no custom base URL is set and the model is a Codex
alias. An explicit OPENAI_BASE_URL always takes priority over model-name
based Codex detection.
Verified locally: gpt-5.4 via OpenRouter now correctly shows
Provider=OpenRouter, Endpoint=https://openrouter.ai/api/v1 instead of
routing to chatgpt.com/backend-api/codex.
Fixes #200, #203
---------
Co-authored-by: Claude Sonnet 4.6 <noreply@anthropic.com>
React 19 requires `supportsMicrotasks: true` in the reconciler host
config so it can flush state updates from passive effects via
queueMicrotask. Without this, state updates triggered inside
useMcpConnectivityStatus were silently dropped, corrupting React's
internal executionContext and causing all keyboard input to freeze
after the "N MCP server(s) need auth" notification appeared.
Root cause (three-part fix):
1. reconciler.ts: declare supportsMicrotasks + scheduleMicrotask so
React 19 schedules passive-effect flushes correctly.
2. useMcpConnectivityStatus.tsx: wrap the MCP auth notification effect
in try/catch so any unexpected throw does not propagate into
flushPassiveEffects and permanently corrupt executionContext.
3. notifications.tsx: wrap addNotification, removeNotification, and
processQueue in try/catch for the same reason — these are called
from 12+ notification hooks across passive effects.
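Part 1 amounts to an illustrative host-config fragment like the following (the rest of the host config object is assumed):

```typescript
// React 19 reconciler host config fields for microtask scheduling.
// Fall back to a Promise-based microtask when queueMicrotask is absent.
const hostConfigMicrotasks = {
  supportsMicrotasks: true,
  scheduleMicrotask:
    typeof queueMicrotask === "function"
      ? queueMicrotask
      : (cb: () => void) => { Promise.resolve().then(cb); },
};
```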
Also fixes a pre-existing test isolation bug in context.test.ts where
assigning `undefined` to process.env produced the string "undefined",
polluting the env for subsequent test files.
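The pitfall, demonstrated: on process.env, assigning undefined stores the literal string "undefined", so the safe reset is deletion. The helper name here is hypothetical:

```typescript
// Deleting the key is the only correct way to unset a process.env entry;
// `env[key] = undefined` coerces the value to the string "undefined".
function resetEnvKey(env: Record<string, string | undefined>, key: string): void {
  delete env[key];
}
```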
Resolves: #169, #205, #77
Clipboard images could fail to become pasted image attachments in OpenClaude. User-facing symptom: paste detected that an image existed, but nothing appeared in the prompt, and bundled builds could also fail while converting BMP clipboard images into a format OpenClaude can send to the model.
Linux clipboard image paste had drifted between detection and extraction. checkImage accepted png/jpeg/jpg/gif/webp/bmp, but saveImage only tried image/png and image/bmp. When the clipboard advertised a JPEG, GIF, or WebP image, OpenClaude concluded that an image was present and then failed to write the temp screenshot file, so the paste path returned null and nothing was inserted into the prompt.
Bundled OpenClaude builds had a second failure mode. The build replaces image-processor-napi and sharp with explicit stub modules in bundled mode. getImageProcessor() treated those stubs as real processors, so BMP clipboard images reached sharp(imageBuffer).png() and then failed before they could be converted into a pasteable PNG for OpenClaude.
Fix: generate the Linux clipboard commands from one shared MIME type list, and reject __stub-marked image processors up front instead of failing in the middle of an image paste.
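A sketch of the single-list approach; the wl-paste command string is illustrative of one Linux clipboard backend, not the full set of commands generated:

```typescript
// One shared list drives both detection (checkImage) and extraction
// (saveImage), so the two paths cannot drift apart again.
const CLIPBOARD_IMAGE_MIMES: string[] = [
  "image/png", "image/jpeg", "image/gif", "image/webp", "image/bmp",
];

function buildSaveCommand(mime: string, outPath: string): string {
  if (!CLIPBOARD_IMAGE_MIMES.includes(mime)) {
    throw new Error(`Unsupported clipboard image type: ${mime}`);
  }
  return `wl-paste --type ${mime} > ${outPath}`;
}
```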
Finding 1 [CRITICAL] — sessionRunner leaks full process.env to child
Extract buildChildEnv() with an explicit allowlist of safe OS/runtime vars.
Child process no longer inherits ANTHROPIC_API_KEY, OPENAI_API_KEY, DB
credentials, or any other secret present in the parent shell environment.
Only CLAUDE_CODE_* bridge vars, PATH, HOME, and standard OS env are passed.
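A sketch of the allowlist approach; the exact set of safe keys beyond those named above is an assumption:

```typescript
const SAFE_ENV_KEYS = new Set(["PATH", "HOME", "LANG", "TERM", "SHELL", "TMPDIR"]);

// Build the child environment from an explicit allowlist instead of
// inheriting the full parent env, so secrets never cross the boundary.
function buildChildEnv(
  parent: Record<string, string | undefined>,
): Record<string, string> {
  const child: Record<string, string> = {};
  for (const [key, value] of Object.entries(parent)) {
    if (value === undefined) continue;
    if (SAFE_ENV_KEYS.has(key) || key.startsWith("CLAUDE_CODE_")) {
      child[key] = value;
    }
  }
  return child;
}
```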
Finding 2 [HIGH] — USER_TYPE=ant activatable by external users
Add isAntEmployee() -> false constant in src/utils/buildConfig.ts.
Replace all three direct process.env.USER_TYPE === 'ant' checks in
setup.ts and onChangeAppState.ts so no external user can activate
Anthropic-internal code paths (commit attribution, system prompt clearing,
dangerously-skip-permissions bypass) by setting USER_TYPE in their shell.
Finding 3 [HIGH] — memoryScan.ts unlimited directory walk
Add MAX_DEPTH=3 guard on readdir({ recursive: true }) results.
Deep or symlink-looped memory directories no longer cause an unbounded
blocking walk before the MAX_MEMORY_FILES cap takes effect.
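A sketch of the depth cap; MAX_DEPTH=3 is from this commit, while the path handling is an assumption about how the recursive readdir results are filtered:

```typescript
const MAX_DEPTH = 3;

// readdir({ recursive: true }) yields paths relative to the scan root;
// counting separators gives the nesting depth of each entry.
function withinDepth(relativePath: string): boolean {
  const depth = relativePath.split(/[\\/]/).length - 1;
  return depth <= MAX_DEPTH;
}
```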
Finding 5 [HIGH] — buildSdkUrl uses string.includes for protocol detection
Replace apiBaseUrl.includes('localhost') with new URL(apiBaseUrl).hostname
comparison so a remote URL containing 'localhost' in its path no longer
incorrectly gets ws:// (unencrypted) instead of wss://.
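The hostname-based check can be sketched as; the loopback-host list is an assumption beyond the localhost comparison named above:

```typescript
// Parse the URL and compare the hostname, rather than substring-matching
// the whole string, so a path segment cannot downgrade the scheme.
function buildSdkScheme(apiBaseUrl: string): "ws" | "wss" {
  const { hostname } = new URL(apiBaseUrl);
  return hostname === "localhost" || hostname === "127.0.0.1" ? "ws" : "wss";
}
```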
Finding 6 [HIGH] — upstream proxy writes unvalidated CA cert to disk
Add isValidPemContent() validation before writeFile in the CA cert download
path. A compromised proxy sending non-PEM data (HTML, JSON, scripts) is now
rejected before it can be appended to the system CA bundle.
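A sketch of a PEM shape check; the real isValidPemContent() may be stricter than this single-regex version:

```typescript
// Accept only data that looks like a PEM certificate block; HTML, JSON,
// or script payloads from a compromised proxy fail this test.
function isValidPemContent(data: string): boolean {
  const pem =
    /-----BEGIN CERTIFICATE-----[A-Za-z0-9+/=\s]+-----END CERTIFICATE-----/;
  return pem.test(data.trim());
}
```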
Each fix is covered by new unit tests (25 tests across 5 new test files).
All 52 tests pass. Build verified clean on v0.1.7.
Fixes #42
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
Two fixes in openaiContextWindows.ts:
1. Sort lookup keys by length descending in lookupByModel() so the most
specific prefix always wins. Without this, 'gpt-4-turbo-preview'
could match 'gpt-4' (8k) instead of 'gpt-4-turbo' (128k) depending
on V8's object key iteration order.
2. Update Llama 3.1/3.2/3.3 context windows from 8,192 to 128,000.
These models support 128k context natively (Meta official specs).
The previous 8k value was Ollama's default num_ctx, not the model's
actual capability, causing premature auto-compact warnings.
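Fix 1 can be sketched as a longest-prefix match (table entries below are the two from this commit):

```typescript
const CONTEXT_WINDOWS: Record<string, number> = {
  "gpt-4": 8_192,
  "gpt-4-turbo": 128_000,
};

// Sort keys longest-first so the most specific prefix wins regardless
// of object key iteration order.
function lookupByModel(model: string): number | undefined {
  const keys = Object.keys(CONTEXT_WINDOWS).sort((a, b) => b.length - a.length);
  const match = keys.find((k) => model.startsWith(k));
  return match !== undefined ? CONTEXT_WINDOWS[match] : undefined;
}
```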
Two fixes for issue #133 where setting ANTHROPIC_API_KEY=dummy alongside
CLAUDE_CODE_USE_GEMINI=1 causes "Invalid API key" errors:
1. auth.ts: In the CI branch of getAnthropicApiKeyWithSource(), the
ANTHROPIC_API_KEY value was returned without checking isUsing3PServices().
A dummy key leaked into the Anthropic key resolution pipeline even when
Gemini was the active provider. Now guards with isUsing3PServices().
2. errors.ts: The x-api-key error handler surfaced "Invalid API key" for
any provider. Added getAPIProvider() === 'firstParty' guard so 3P users
see the real underlying error instead of a misleading auth message.
Note: The cli.tsx Gemini validation fix (originally part of this PR) was
independently implemented in PR #121 and is already on main.
Replace raw === '1' || === 'true' comparisons with isEnvTruthy() in
context.ts for consistency with getAPIProvider() in providers.ts.
This also covers the newly added CLAUDE_CODE_USE_GITHUB provider.
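A sketch of the shared helper; case-insensitivity here is an assumption beyond the raw '1'/'true' comparisons it replaces:

```typescript
// Shared truthiness check for provider toggle env vars.
function isEnvTruthy(value: string | undefined): boolean {
  if (!value) return false;
  const v = value.trim().toLowerCase();
  return v === "1" || v === "true";
}
```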
Add native Gemini model entries (without google/ prefix) to both
context window and max output token tables. Corrects gemini-2.5-pro
and gemini-2.5-flash max output tokens to 65,536 (was 8,192/32,768).
The version kill-switch calls Anthropic's GrowthBook endpoint to
enforce a minimum version. This is currently safe for 3P users only
because isAnalyticsDisabled() returns true (disabling GrowthBook).
Adding an explicit provider guard makes this safety independent of the
analytics stub, preventing 3P users from being blocked by Anthropic's
version requirements in case of future upstream merges.
- Introduced a new provider profile for Atomic Chat, allowing it to be used alongside existing providers.
- Updated `package.json` to include a new development script for launching Atomic Chat.
- Modified `smart_router.py` to recognize Atomic Chat as a local provider that does not require an API key.
- Enhanced provider discovery and launch scripts to handle Atomic Chat, including model listing and connection checks.
- Added tests to ensure proper environment setup and behavior for Atomic Chat profiles.
This update expands the functionality of the application to support local LLMs via Atomic Chat, improving versatility for users.
- Introduced environment variable CLAUDE_CODE_USE_GITHUB to enable GitHub Models.
- Added checks for GITHUB_TOKEN or GH_TOKEN for authentication.
- Updated base URL handling to include GitHub Models default.
- Enhanced provider detection and error handling for GitHub Models.
- Updated relevant functions and components to accommodate the new provider.
When using non-Anthropic providers (Ollama, Gemini, Codex), the
underlying call to queryHaiku for session title generation fails.
Previously, this caused the catch block to return null, leaving the
terminal tab permanently stuck on 'Claude Code'.
Now, when the API call fails, we gracefully derive a title locally from
the user's first message (first 7 words, sentence-cased), ensuring
users still see a meaningful session title in their terminal tab.
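The local fallback can be sketched as follows; the tokenization details and the empty-message placeholder are assumptions:

```typescript
// Derive a title from the first user message: first 7 words,
// sentence-cased (first letter upper, rest lower).
function deriveLocalTitle(firstMessage: string): string {
  const words = firstMessage.trim().split(/\s+/).filter(Boolean).slice(0, 7).join(" ");
  if (!words) return "New session"; // hypothetical placeholder
  return words.charAt(0).toUpperCase() + words.slice(1).toLowerCase();
}
```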