Finding 1 [CRITICAL] — sessionRunner leaks full process.env to child
Extract buildChildEnv() with an explicit allowlist of safe OS/runtime vars.
Child process no longer inherits ANTHROPIC_API_KEY, OPENAI_API_KEY, DB
credentials, or any other secret present in the parent shell environment.
Only CLAUDE_CODE_* bridge vars, PATH, HOME, and standard OS env are passed.
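A minimal sketch of the allowlist approach; the exact safe-variable list
beyond PATH and HOME is illustrative:

```ts
// Sketch only: variable names beyond PATH/HOME are assumptions.
const SAFE_ENV_VARS = ['PATH', 'HOME', 'SHELL', 'LANG', 'TERM', 'TMPDIR'];

function buildChildEnv(bridgeEnv: Record<string, string>): NodeJS.ProcessEnv {
  const env: NodeJS.ProcessEnv = {};
  for (const key of SAFE_ENV_VARS) {
    if (process.env[key] !== undefined) env[key] = process.env[key];
  }
  // Bridge vars are copied explicitly; nothing else is inherited, so
  // ANTHROPIC_API_KEY, OPENAI_API_KEY, etc. never reach the child.
  for (const [key, value] of Object.entries(bridgeEnv)) {
    if (key.startsWith('CLAUDE_CODE_')) env[key] = value;
  }
  return env;
}
```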
Finding 2 [HIGH] — USER_TYPE=ant activatable by external users
Add an isAntEmployee() helper in src/utils/buildConfig.ts that returns a
constant false.
Replace all three direct process.env.USER_TYPE === 'ant' checks in
setup.ts and onChangeAppState.ts so no external user can activate
Anthropic-internal code paths (commit attribution, system prompt clearing,
dangerously-skip-permissions bypass) by setting USER_TYPE in their shell.
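A sketch of the helper; the real src/utils/buildConfig.ts may carry
additional build-config exports:

```ts
// Hard-coded false in external builds: no runtime input (USER_TYPE or
// otherwise) can flip it.
export function isAntEmployee(): boolean {
  return false;
}
```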
Finding 3 [HIGH] — memoryScan.ts unlimited directory walk
Add MAX_DEPTH=3 guard on readdir({ recursive: true }) results.
Deep or symlink-looped memory directories no longer cause an unbounded
blocking walk before the MAX_MEMORY_FILES cap takes effect.
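A sketch of the depth guard, assuming depth is measured in path components
of the relative paths that readdir({ recursive: true }) returns:

```ts
import { readdir } from 'node:fs/promises';
import { sep } from 'node:path';

const MAX_DEPTH = 3;

async function listMemoryFiles(root: string): Promise<string[]> {
  const entries = await readdir(root, { recursive: true });
  // Drop anything nested deeper than MAX_DEPTH path components.
  return entries.filter((relPath) => relPath.split(sep).length <= MAX_DEPTH);
}
```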
Finding 5 [HIGH] — buildSdkUrl uses string.includes for protocol detection
Replace apiBaseUrl.includes('localhost') with new URL(apiBaseUrl).hostname
comparison so a remote URL containing 'localhost' in its path no longer
incorrectly gets ws:// (unencrypted) instead of wss://.
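A sketch of the parsed-hostname check; treating 127.0.0.1 as local
alongside 'localhost' is an assumption:

```ts
function wsScheme(apiBaseUrl: string): 'ws' | 'wss' {
  const { hostname } = new URL(apiBaseUrl);
  const isLocal = hostname === 'localhost' || hostname === '127.0.0.1';
  return isLocal ? 'ws' : 'wss';
}

// 'https://evil.example/localhost' now yields wss://, because 'localhost'
// in the path no longer matches.
```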
Finding 6 [HIGH] — upstream proxy writes unvalidated CA cert to disk
Add isValidPemContent() validation before writeFile in the CA cert download
path. A compromised proxy sending non-PEM data (HTML, JSON, scripts) is now
rejected before it can be appended to the system CA bundle.
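A sketch of the shape check; the real isValidPemContent() may validate more
strictly (e.g., decoding the base64 body):

```ts
function isValidPemContent(data: string): boolean {
  // Require at least one complete CERTIFICATE block; HTML/JSON error pages
  // from a compromised proxy will not match.
  const pemBlock =
    /-----BEGIN CERTIFICATE-----[\s\S]+?-----END CERTIFICATE-----/;
  return pemBlock.test(data.trim());
}
```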
Each fix is covered by new unit tests (25 tests across 5 new test files).
All 52 tests pass. Build verified clean on v0.1.7.
Fixes #42
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
WebSearch is currently disabled for all non-Anthropic providers (OpenAI
shim, DeepSeek, Ollama, etc.) because those providers have no native
search backend. This adds Firecrawl as a fallback that activates when
FIRECRAWL_API_KEY is set, unlocking web search for every model
openclaude supports.
WebFetch uses basic HTTP + Turndown for HTML-to-markdown conversion,
which fails silently on JS-rendered SPAs and bot-protected pages.
Firecrawl scrape replaces the fetch layer when FIRECRAWL_API_KEY is set,
returning clean markdown that handles dynamic content correctly.
Changes:
- WebSearchTool: add runFirecrawlSearch() using @mendable/firecrawl-js; it
respects allowed_domains (post-filtering results) and blocked_domains (via
-site: operators) and includes result snippets alongside links.
shouldUseFirecrawl() ensures firstParty/Vertex/Foundry/Codex providers keep
their native backends (see the sketch after this list).
- WebFetchTool: add scrapeWithFirecrawl(), drops into the existing
applyPromptToMarkdown() pipeline so prompt processing is unchanged.
- Remove "Web search is only available in the US" restriction from
prompt when Firecrawl is active (it works globally).
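A sketch of the gating logic referenced above; the provider identifiers are
illustrative:

```ts
function shouldUseFirecrawl(provider: string): boolean {
  // Providers with a native search backend keep it.
  const hasNativeSearch = ['firstParty', 'vertex', 'foundry', 'codex'];
  if (hasNativeSearch.includes(provider)) return false;
  // Everyone else gets Firecrawl, but only when a key is configured.
  return Boolean(process.env.FIRECRAWL_API_KEY);
}
```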
1. errors.ts: Add getCustomOffSwitchMessage() that returns a
provider-neutral message for 3P users instead of the hardcoded
"Opus is experiencing high load, please use /model to switch to
Sonnet", which is misleading for OpenAI/Gemini/Ollama users.
The original constant is preserved for backward-compatible string
matching in error handlers.
2. Onboarding.tsx: Skip the "approve API key" step when a 3P provider
is active. Previously, having ANTHROPIC_API_KEY in the environment
(e.g., from a previous Anthropic setup) triggered an irrelevant
Anthropic key approval UI even when using Gemini or OpenAI.
1. cli/update.ts: Block the update command for third-party providers.
The update mechanism downloads from Anthropic's GCS bucket, which
would silently replace the OpenClaude build (with the OpenAI shim)
with the upstream Claude Code binary (without it). Now shows an
actionable message directing users to rebuild from source.
2. codexShim.ts: Filter thinking blocks from assistant history, matching
the openaiShim behavior. Without this, thinking blocks were included
as plain text in assistant messages for the Codex transport but
excluded for the OpenAI transport — causing inconsistent history
when switching providers mid-session.
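A sketch of the filter, assuming assistant content blocks carry a type
field as in the Anthropic message format:

```ts
type ContentBlock = { type: string; [key: string]: unknown };

// Drop thinking blocks before replaying assistant history to the provider,
// so the Codex and OpenAI transports see identical histories.
function stripThinkingBlocks(content: ContentBlock[]): ContentBlock[] {
  return content.filter((block) => block.type !== 'thinking');
}
```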
Three targeted fixes:
1. Replace Math.random() with crypto.randomUUID() for message and tool
call IDs in both openaiShim.ts and codexShim.ts. Math.random() is
not cryptographically secure and is predictable in seeded environments.
2. Anchor Azure endpoint detection to the parsed hostname instead of a raw
URL regex (see the sketch after this list). Adds support for Azure AI
Foundry (services.ai.azure.com) alongside the existing cognitiveservices
and openai Azure endpoints. Prevents SSRF-style bypass via path segments.
3. Surface content safety filter blocks to the user. When Gemini or
Azure returns finish_reason 'content_filter' or 'safety', emit a
visible text block '[Content blocked by provider safety filter]'
instead of silently returning empty/truncated content with
stop_reason 'end_turn'. Applied to both streaming and non-streaming.
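A sketch of the anchored check from fix 2; the exact hostname suffixes
mirror the endpoints named above:

```ts
function isAzureEndpoint(baseUrl: string): boolean {
  const { hostname } = new URL(baseUrl);
  return (
    hostname.endsWith('.openai.azure.com') ||
    hostname.endsWith('.cognitiveservices.azure.com') ||
    hostname.endsWith('.services.ai.azure.com') // Azure AI Foundry
  );
}

// 'https://example.com/foo.openai.azure.com' no longer matches: only the
// parsed hostname is inspected, never path segments.
```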
Two fixes in openaiContextWindows.ts:
1. Sort lookup keys by length descending in lookupByModel() so the most
specific prefix always wins (see the sketch after this list). Without
this, 'gpt-4-turbo-preview' could match 'gpt-4' (8k) instead of
'gpt-4-turbo' (128k), depending on V8's object key iteration order.
2. Update Llama 3.1/3.2/3.3 context windows from 8,192 to 128,000.
These models support 128k context natively (Meta official specs).
The previous 8k value was Ollama's default num_ctx, not the model's
actual capability, causing premature auto-compact warnings.
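A sketch of the longest-prefix-wins lookup from fix 1; the table contents
are illustrative:

```ts
const CONTEXT_WINDOWS: Record<string, number> = {
  'gpt-4': 8_192,
  'gpt-4-turbo': 128_000,
};

function lookupByModel(model: string): number | undefined {
  const keys = Object.keys(CONTEXT_WINDOWS).sort(
    (a, b) => b.length - a.length, // most specific prefix first
  );
  const match = keys.find((key) => model.startsWith(key));
  return match !== undefined ? CONTEXT_WINDOWS[match] : undefined;
}

// lookupByModel('gpt-4-turbo-preview') -> 128_000, regardless of key
// iteration order.
```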
Two fixes for issue #133 where setting ANTHROPIC_API_KEY=dummy alongside
CLAUDE_CODE_USE_GEMINI=1 causes "Invalid API key" errors:
1. auth.ts: In the CI branch of getAnthropicApiKeyWithSource(), the
ANTHROPIC_API_KEY value was returned without checking isUsing3PServices(),
so a dummy key leaked into the Anthropic key resolution pipeline even when
Gemini was the active provider. Now guarded with isUsing3PServices() (see
the sketch after this list).
2. errors.ts: The x-api-key error handler surfaced "Invalid API key" for
any provider. Added getAPIProvider() === 'firstParty' guard so 3P users
see the real underlying error instead of a misleading auth message.
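A sketch of the guarded CI branch from fix 1; the return shape and source
label are assumptions:

```ts
declare function isUsing3PServices(): boolean;

function resolveCiKey(): { key: string; source: string } | null {
  const envKey = process.env.ANTHROPIC_API_KEY;
  if (envKey && !isUsing3PServices()) {
    return { key: envKey, source: 'ANTHROPIC_API_KEY' };
  }
  // A dummy key set alongside CLAUDE_CODE_USE_GEMINI=1 no longer leaks
  // into the Anthropic key resolution pipeline.
  return null;
}
```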
Note: The cli.tsx Gemini validation fix (originally part of this PR) was
independently implemented in PR #121 and is already on main.
OpenAI returns cached token counts in usage.prompt_tokens_details.cached_tokens
but the shim hardcoded cache_read_input_tokens to 0. This made prompt
caching invisible to the cost tracker and session summary even when
OpenAI's automatic caching was actively reducing costs.
Changes:
- Extend OpenAIStreamChunk usage interface with prompt_tokens_details
- Map cached_tokens to cache_read_input_tokens in convertChunkUsage()
- Same fix in _convertNonStreamingResponse() for non-streaming path
- cache_creation_input_tokens remains 0 (OpenAI auto-caching has no
creation cost — it is free and automatic)
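A sketch of the mapping; the OpenAI-side field names follow the usage shape
described above:

```ts
interface OpenAIUsage {
  prompt_tokens: number;
  completion_tokens: number;
  prompt_tokens_details?: { cached_tokens?: number };
}

function convertChunkUsage(usage: OpenAIUsage) {
  return {
    input_tokens: usage.prompt_tokens,
    output_tokens: usage.completion_tokens,
    cache_read_input_tokens: usage.prompt_tokens_details?.cached_tokens ?? 0,
    cache_creation_input_tokens: 0, // OpenAI auto-caching is free
  };
}
```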
Replace raw === '1' || === 'true' comparisons with isEnvTruthy() in
context.ts for consistency with getAPIProvider() in providers.ts.
This also covers the newly added CLAUDE_CODE_USE_GITHUB provider.
Add native Gemini model entries (without google/ prefix) to both
context window and max output token tables. Corrects gemini-2.5-pro
and gemini-2.5-flash max output tokens to 65,536 (was 8,192/32,768).
Addresses the most critical remaining issues in the provider shim layer,
building on top of #124 (recursive schema normalization + try/finally).
openaiShim.ts:
- Throw APIError via the SDK factory instead of a plain Error, enabling
retry on 429/503 (previously completely broken: zero retries for all 3P
providers); see the sketch after this list
- Guard stop_reason !== null before emitting usage-only message_delta
(Azure/Groq send usage before finish_reason)
- Fix assistant content: join text parts instead of invalid as-string cast
(Mistral rejects array content on assistant role)
- Expose real HTTP Response in withResponse() for header inspection
- Skip stream_options for local providers (Ollama < 0.5 compatibility)
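A sketch of the retryable error path referenced in the first bullet,
assuming the SDK exports an APIError with a generate() factory (the exact
import and signature may differ):

```ts
import { APIError } from '@anthropic-ai/sdk';

async function raiseForStatus(response: Response): Promise<void> {
  if (response.ok) return;
  const body = await response.json().catch(() => undefined);
  // APIError carries .status, so the SDK's retry machinery can recognize
  // 429/503; a plain Error was never retried.
  throw APIError.generate(
    response.status,
    body,
    `Provider request failed with status ${response.status}`,
    Object.fromEntries(response.headers.entries()),
  );
}
```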
codexShim.ts:
- Throw APIError at all 4 throw sites (HTTP + 3 streaming errors)
- Add tool_choice 'none' mapping (was silently ignored)
- Forward is_error flag with Error: prefix (matching openaiShim)
- Set competing provider flags to undefined in updateSettingsForSource to ensure clean GitHub boot
- Fix resolveProviderRequest to default to github:copilot when OPENAI_MODEL is unset
- Hydrate secure tokens and managed settings in system-check.ts to prevent false negatives
- Add models:read scope to GitHub device flow
Wait for failed MCP transport cleanup before command exit so targeted live checks do not crash on Windows.
Co-Authored-By: Claude <noreply@anthropic.com>
Add the MCP doctor subcommand with text and JSON output, config-only mode, and scope filtering so users can diagnose MCP issues from the CLI.
Co-Authored-By: Claude <noreply@anthropic.com>
Add the diagnostics core and report model for MCP health, scope, and config analysis. This creates the structured report used by both text and JSON doctor output.
Co-Authored-By: Claude <noreply@anthropic.com>
The version kill-switch calls Anthropic's GrowthBook endpoint to
enforce a minimum version. This is currently safe for 3P users only
because isAnalyticsDisabled() returns true (disabling GrowthBook).
Adding an explicit provider guard makes this safety independent of the
analytics stub, so a future upstream merge cannot leave 3P users blocked
by Anthropic's version requirements.
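A sketch of the explicit guard; the enclosing function name is
hypothetical, and getAPIProvider() is the provider check used elsewhere in
this series:

```ts
declare function getAPIProvider(): string;

async function enforceMinimumVersion(): Promise<void> {
  // 3P users are exempt regardless of whether analytics (and thus
  // GrowthBook) happens to be stubbed out.
  if (getAPIProvider() !== 'firstParty') return;
  // ... existing GrowthBook kill-switch lookup for Anthropic users
}
```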
Add provider guard to migrateSonnet1mToSonnet45() so it only runs for
firstParty (Anthropic) users. Without this, a 3P user with
model='sonnet[1m]' would have it rewritten to an Anthropic-specific
alias that is invalid for OpenAI/Gemini/Ollama providers.
- Make the getProviderLabel() switch exhaustive, with explicit openai/gemini
arms instead of falling through to env-var checks in the default case
(see the sketch after this list)
- Add clarifying comment on additionalProperties override in schema
normalization
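A sketch of the exhaustive switch referenced in the first bullet; the
provider union and labels are illustrative:

```ts
type Provider = 'firstParty' | 'openai' | 'gemini' | 'ollama';

function getProviderLabel(provider: Provider): string {
  switch (provider) {
    case 'firstParty':
      return 'Anthropic API';
    case 'openai':
      return 'OpenAI API';
    case 'gemini':
      return 'Gemini API';
    case 'ollama':
      return 'Ollama';
    default: {
      // Compile-time exhaustiveness: a new Provider arm without a label is
      // a type error instead of a silent env-var fallthrough.
      const exhausted: never = provider;
      return exhausted;
    }
  }
}
```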
Partially addresses #112. The streaming reader in openaiStreamToAnthropic
had no error handling: if an error occurred during streaming, the reader
lock was never released. Wrapped the while loop in try/finally to ensure
reader.releaseLock() is always called.
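A sketch of the try/finally shape; chunk decoding and event conversion are
elided:

```ts
async function drainStream(stream: ReadableStream<Uint8Array>): Promise<void> {
  const reader = stream.getReader();
  try {
    while (true) {
      const { done, value } = await reader.read();
      if (done) break;
      void value; // ... decode and convert the chunk to Anthropic events
    }
  } finally {
    reader.releaseLock(); // released even when read() or conversion throws
  }
}
```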
Partially addresses #39. The cost threshold dialog hardcoded
'Anthropic API' in the title, which is misleading for users on
OpenAI, Gemini, Ollama, or other providers. Now detects the active
provider via getAPIProvider() and shows the correct label.