CLAUDE_CODE_USE_GEMINI was missing from the is3P check in
isAnthropicAuthEnabled(), causing Gemini users to see the
Anthropic login screen at startup even with GEMINI_API_KEY set.
isAnthropicAuthEnabled() returns true when is3P is false, which
triggers the OAuth/login flow. Since CLAUDE_CODE_USE_GEMINI was
not included, Gemini was not treated as a 3P provider here,
showing the gcloud/Anthropic login prompt unexpectedly.
Fix: add CLAUDE_CODE_USE_GEMINI to the is3P check, consistent
with how CLAUDE_CODE_USE_OPENAI is handled in the same block.
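A minimal sketch of the corrected check (the real function lives in the auth module; the strict `=== '1'` comparison and the two-flag shape are assumptions here, not the exact source):

```typescript
type Env = Record<string, string | undefined>;

// `is3P` gates the Anthropic OAuth/login flow: any third-party provider
// flag must make it true, or the user lands on the Anthropic login screen.
function isAnthropicAuthEnabled(env: Env): boolean {
  const is3P =
    env.CLAUDE_CODE_USE_OPENAI === '1' ||
    env.CLAUDE_CODE_USE_GEMINI === '1'; // previously missing from this check
  return !is3P;
}
```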
Fixes #43.
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
The proxy bypass logic assigned port 80 to any non-https protocol,
including wss:// whose default port is 443. A NO_PROXY entry like
example.com:443 would not match wss://example.com because the port
was incorrectly resolved to 80.
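A sketch of the corrected default-port resolution (the helper name is illustrative; the real logic sits inside the NO_PROXY matcher):

```typescript
// Default port used when a URL carries no explicit port. The old code
// returned 80 for everything that was not https:, so wss:// URLs never
// matched NO_PROXY entries pinned to port 443.
function defaultPortFor(protocol: string): number {
  return protocol === 'https:' || protocol === 'wss:' ? 443 : 80;
}
```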
Relates to #40
Co-Authored-By: Juan Camilo <juancamilo.auriti@gmail.com>
The /status panel showed 'undefined' for the API provider label when
using OpenAI or Gemini providers, and did not display the base URL or
model name. Added provider labels and property sections for both.
Relates to #39
Co-Authored-By: Juan Camilo <juancamilo.auriti@gmail.com>
DeepSeek V3 documentation specifies 128k context window for both
deepseek-chat and deepseek-reasoner. The previous 64k value caused
premature compaction and underutilization of available context.
Relates to #39
Co-Authored-By: Juan Camilo <juancamilo.auriti@gmail.com>
Some providers send an empty string as the first delta to signal
streaming start. The falsy check `if (delta.content)` treated "" as
absent, skipping content_block_start. The next delta with actual
content was emitted without it, violating the Anthropic protocol.
Changed to `delta.content != null` to distinguish between absent field
and empty string.
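The distinction in isolation (types and helper name are illustrative, not the stream handler's actual shape):

```typescript
interface Delta {
  content?: string | null;
}

// content_block_start must fire for "" too: `!= null` rejects only
// undefined and null, unlike the old truthy check `if (delta.content)`.
function shouldOpenContentBlock(delta: Delta, blockOpen: boolean): boolean {
  return !blockOpen && delta.content != null;
}
```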
Relates to #42
Co-Authored-By: Juan Camilo <juancamilo.auriti@gmail.com>
OpenAI and Codex enforce strict JSON Schema validation — every key in
`properties` must also appear in `required`. Anthropic schemas often
mark fields as optional (omitted from `required`), which causes 400
errors on OpenAI/Codex endpoints.
Example: the Agent tool has `subagent_type` in `properties` but not
in `required`, producing:
"Invalid schema for function 'Agent': Missing 'subagent_type'
in required array"
Fix: add `normalizeSchemaForOpenAI()` in `convertTools()` that ensures
`required` is a superset of all `properties` keys before the schema is
sent to the API. Existing `required` entries are preserved; missing
ones are appended. Schemas without `properties` pass through unchanged.
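A minimal top-level sketch of the helper (the real `normalizeSchemaForOpenAI()` in `convertTools()` may also recurse into nested object schemas; that is omitted here):

```typescript
interface JsonSchema {
  type?: string;
  properties?: Record<string, unknown>;
  required?: string[];
  [key: string]: unknown;
}

function normalizeSchemaForOpenAI(schema: JsonSchema): JsonSchema {
  if (!schema.properties) return schema; // no properties: pass through unchanged
  const existing = schema.required ?? [];
  const missing = Object.keys(schema.properties).filter(
    (key) => !existing.includes(key),
  );
  // Preserve the original `required` order; append only the missing keys.
  return missing.length
    ? { ...schema, required: [...existing, ...missing] }
    : schema;
}
```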
Fixes #46.
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
In CI mode, auth.ts throws if ANTHROPIC_API_KEY or
CLAUDE_CODE_OAUTH_TOKEN are missing — even when using
CLAUDE_CODE_USE_OPENAI=1 or CLAUDE_CODE_USE_GEMINI=1.
This crashes any OpenAI/Gemini/Ollama CI pipeline immediately.
Fix: guard the throw with !isUsing3PServices() so non-Anthropic
providers skip the check entirely.
Also added CLAUDE_CODE_USE_GEMINI to isUsing3PServices() which
was missing — Gemini users were excluded from the 3P detection
used elsewhere in the same function.
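The guarded shape, sketched (`assertAnthropicCredentials` is a hypothetical name standing in for the throw site in auth.ts; the `=== '1'` comparison is assumed):

```typescript
type Env = Record<string, string | undefined>;

function isUsing3PServices(env: Env): boolean {
  return (
    env.CLAUDE_CODE_USE_OPENAI === '1' ||
    env.CLAUDE_CODE_USE_GEMINI === '1' // previously missing
  );
}

function assertAnthropicCredentials(env: Env): void {
  if (isUsing3PServices(env)) return; // non-Anthropic providers skip the check
  if (!env.ANTHROPIC_API_KEY && !env.CLAUDE_CODE_OAUTH_TOKEN) {
    throw new Error('Missing ANTHROPIC_API_KEY or CLAUDE_CODE_OAUTH_TOKEN');
  }
}
```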
Fixes #40.
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
These macros are used in ~10 files (autoUpdater, localInstaller,
nativeInstaller, update CLI) but were not defined in the build script's
`define` block. At runtime, they resolve to `undefined`, causing
commands like `npm install undefined` and `npm view undefined` to fail
silently during auto-update checks.
Sets MACRO.PACKAGE_URL to the published npm package name and
MACRO.NATIVE_PACKAGE_URL to undefined (no native binary distribution).
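Roughly what the added `define` entries look like, assuming an esbuild-style build script (the package name below is a placeholder, not the real published name):

```typescript
// esbuild replaces each key with the given string at build time, so values
// must be JS expressions: a JSON string for a real value, the literal
// 'undefined' to deliberately leave a macro undefined.
const define: Record<string, string> = {
  'MACRO.PACKAGE_URL': JSON.stringify('@your-scope/claude-code-fork'),
  // No native binary distribution, so this macro stays undefined on purpose.
  'MACRO.NATIVE_PACKAGE_URL': 'undefined',
};
```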
Relates to #29
Co-Authored-By: Juan Camilo <juancamilo.auriti@gmail.com>
The profile file may contain API keys (OPENAI_API_KEY, CODEX_API_KEY,
GEMINI_API_KEY) in plain text. Without explicit permissions, writeFileSync
uses the process umask — on systems with permissive umask (0022), the file
is world-readable (644), exposing credentials to other users.
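A sketch of the hardened write using Node's `fs` API (path and payload are demo values; with a typical umask such as 0022, the explicit `mode` yields owner-only 0600):

```typescript
import { statSync, writeFileSync } from 'node:fs';
import { tmpdir } from 'node:os';
import { join } from 'node:path';

const profilePath = join(tmpdir(), 'profile-perms-demo.json');

// mode 0o600: owner read/write only, so other local users cannot read
// the API keys stored in plain text.
writeFileSync(profilePath, JSON.stringify({ OPENAI_API_KEY: 'demo-key' }), {
  mode: 0o600,
});
```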
Relates to #24
Co-Authored-By: Juan Camilo <juancamilo.auriti@gmail.com>
The Anthropic-to-OpenAI tool_choice mapping handled 'auto', 'any', and
'tool' but not 'none'. When 'none' was passed, the request was sent
without tool_choice, defaulting to 'auto' — the opposite of the
intended behavior (disable tool use).
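The completed mapping, sketched (types are simplified; mapping Anthropic's `any` to OpenAI's `required` follows the usual convention and is assumed here, not quoted from the diff):

```typescript
type AnthropicToolChoice =
  | { type: 'auto' }
  | { type: 'any' }
  | { type: 'tool'; name: string }
  | { type: 'none' };

type OpenAIToolChoice =
  | 'auto'
  | 'required'
  | 'none'
  | { type: 'function'; function: { name: string } };

function mapToolChoice(choice: AnthropicToolChoice): OpenAIToolChoice {
  switch (choice.type) {
    case 'auto':
      return 'auto';
    case 'any':
      return 'required';
    case 'tool':
      return { type: 'function', function: { name: choice.name } };
    case 'none':
      return 'none'; // previously fell through, so the API defaulted to 'auto'
  }
}
```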
Relates to #30
Co-Authored-By: Juan Camilo <juancamilo.auriti@gmail.com>
The import `src//types/message.js` contains a double slash that may cause
unpredictable module resolution depending on OS and bundler behavior.
Relates to #29
Co-Authored-By: Juan Camilo <juancamilo.auriti@gmail.com>
When certain OpenAI-compatible APIs (LM Studio, some proxies) send
multiple stream chunks with finish_reason set, the finish block ran
multiple times — emitting content_block_stop and message_delta for
each one. Each content_block_stop caused claude.ts to create and yield
a new assistant message, making every response appear twice in the UI.
Fix: add hasProcessedFinishReason flag (same pattern as the existing
hasEmittedFinalUsage flag) so the finish block only executes once per
response regardless of how many chunks contain finish_reason.
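The guard pattern in isolation (the real handler also tracks usage and content blocks; this closure is an illustrative reduction):

```typescript
// Returns a per-response chunk handler: only the FIRST chunk carrying a
// finish_reason reports true, mirroring the hasEmittedFinalUsage pattern.
function makeFinishGuard() {
  let hasProcessedFinishReason = false;
  return function onChunk(finishReason: string | null): boolean {
    if (finishReason == null || hasProcessedFinishReason) return false;
    hasProcessedFinishReason = true;
    return true; // caller emits content_block_stop / message_delta once
  };
}
```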
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
Without this fix, getContextWindowForModel() returns 200k for all OpenAI
models (the Claude default), causing two problems:
1. Auto-compact/warnings trigger at wrong thresholds (200k instead of 128k)
2. getModelMaxOutputTokens() returns 32k causing 400 errors from APIs that
cap output tokens lower (gpt-4o supports max 16384)
Fix:
- Add openaiContextWindows.ts with known context window sizes and max output
token limits for 30+ OpenAI-compatible models (OpenAI, DeepSeek, Groq,
Mistral, Ollama, LM Studio)
- Hook into getContextWindowForModel() so correct input limits are used
- Hook into getModelMaxOutputTokens() so correct output limits are sent,
preventing 400 "max_tokens is too large" errors
All existing warning, blocking, and auto-compact infrastructure works
automatically once the correct limits are returned.
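The lookup shape, trimmed to two illustrative entries (the real `openaiContextWindows.ts` covers 30+ models; the deepseek-chat output limit below is an assumption, the gpt-4o numbers come from this message):

```typescript
interface ModelLimits {
  contextWindow: number;
  maxOutputTokens: number;
}

const openaiContextWindows: Record<string, ModelLimits> = {
  'gpt-4o': { contextWindow: 128_000, maxOutputTokens: 16_384 },
  'deepseek-chat': { contextWindow: 128_000, maxOutputTokens: 8_192 },
};

// Unknown models keep the Claude defaults the callers used before.
const CLAUDE_DEFAULT_CONTEXT = 200_000;
const CLAUDE_DEFAULT_MAX_OUTPUT = 32_000;

function getContextWindowForModel(model: string): number {
  return openaiContextWindows[model]?.contextWindow ?? CLAUDE_DEFAULT_CONTEXT;
}

function getModelMaxOutputTokens(model: string): number {
  return openaiContextWindows[model]?.maxOutputTokens ?? CLAUDE_DEFAULT_MAX_OUTPUT;
}
```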
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
Adds Google Gemini as a first-class provider using Gemini's OpenAI-compatible
endpoint, supporting gemini-2.0-flash, gemini-2.5-pro, and gemini-2.0-flash-lite
across all three model tiers (opus/sonnet/haiku).
- Add 'gemini' to APIProvider type with CLAUDE_CODE_USE_GEMINI env detection
- Map all 11 model configs to appropriate Gemini models per tier
- Route Gemini through existing OpenAI shim (generativelanguage.googleapis.com)
- Support GEMINI_API_KEY and GOOGLE_API_KEY for authentication
- Fix model display name to show actual Gemini model instead of Claude fallback
- Add Gemini support to provider-launch, provider-bootstrap, system-check scripts
- Add dev:gemini npm script for local development
Bootstrap: bun run profile:init -- --provider gemini --api-key <key>
Launch: bun run dev:gemini
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
Fixes #7. The modifiers-napi package is an Anthropic-internal native
addon, but a package with the same name exists on npm and could be a
supply chain attack vector. The build script already stubs it, but
the source code had live require() calls that would execute when
running without the bundler (e.g. bun dev, ts-node).
Replaced both functions with safe no-ops since modifier key detection
is not needed in the open-source build. Build verified passing.
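Roughly what the no-op replacement looks like (the function name and state shape are illustrative, since the addon's actual exports are not shown in this message):

```typescript
interface ModifierState {
  shift: boolean;
  ctrl: boolean;
  alt: boolean;
  meta: boolean;
}

// No native require(): modifier key detection is disabled in the OSS
// build, so callers always see an empty modifier state.
function getModifierState(): ModifierState {
  return { shift: false, ctrl: false, alt: false, meta: false };
}
```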
When using OpenAI provider, getPublicModelDisplayName() was incorrectly
returning "Opus 4.6" because CLAUDE_OPUS_4_6_CONFIG.openai maps to 'gpt-4o',
causing a false match in the switch statement. Now returns null for OpenAI
provider so the raw model name (e.g. 'gpt-4o') is displayed directly.
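The fix in sketch form (provider detection and the model id in the switch are simplified placeholders, not the real config names):

```typescript
// Returning null for non-Anthropic providers makes the caller fall back
// to the raw model name (e.g. 'gpt-4o') instead of a false Claude label.
function getPublicModelDisplayName(
  provider: 'anthropic' | 'openai' | 'gemini',
  model: string,
): string | null {
  if (provider !== 'anthropic') return null;
  switch (model) {
    case 'claude-opus-4-6': // placeholder id for illustration
      return 'Opus 4.6';
    default:
      return null;
  }
}
```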
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>