Load nested SKILL.md files from .claude/skills and namespace them with colons so category-based skill layouts work in Claude Code clients.
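A minimal sketch of the colon-namespacing this commit describes; the helper name and path handling are illustrative, and the real loader in this repo may differ.

```typescript
import { posix as path } from "path";

// Derive a namespaced skill name from a nested SKILL.md location, e.g.
// .claude/skills/web/scraping/SKILL.md -> "web:scraping".
function skillName(skillsRoot: string, skillFile: string): string {
  const rel = path.relative(skillsRoot, path.dirname(skillFile));
  return rel.split("/").join(":");
}
```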
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
The /login platform_setup screen only listed Amazon Bedrock,
Microsoft Foundry, and Vertex AI — OpenAI-compatible providers
and Gemini were completely absent, leaving users with no guidance
on how to use OpenClaude's main feature.
Changes:
- Selector label: "Amazon Bedrock, Microsoft Foundry, or Vertex AI"
→ "OpenAI, Gemini, Bedrock, Ollama, and more"
- Description updated to mention OpenAI-compatible providers and Gemini
- Added OpenAI and Gemini env var instructions to the docs list
Fixes #43 (login screen confusion for Gemini users).
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
CLAUDE_CODE_USE_GEMINI was missing from the is3P check in
isAnthropicAuthEnabled(), causing Gemini users to see the
Anthropic login screen at startup even with GEMINI_API_KEY set.
isAnthropicAuthEnabled() returns true when is3P is false, which
triggers the OAuth/login flow. Since CLAUDE_CODE_USE_GEMINI was
not included, Gemini was not treated as a 3P provider here,
showing the gcloud/Anthropic login prompt unexpectedly.
Fix: add CLAUDE_CODE_USE_GEMINI to the is3P check, consistent
with how CLAUDE_CODE_USE_OPENAI is handled in the same block.
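A sketch of the check after the fix, assuming a simplified shape; the Bedrock/Vertex variable names are assumptions, and the real auth.ts logic has more branches.

```typescript
type Env = Record<string, string | undefined>;

function is3P(env: Env): boolean {
  return (
    env.CLAUDE_CODE_USE_BEDROCK === "1" || // assumed variable name
    env.CLAUDE_CODE_USE_VERTEX === "1" ||  // assumed variable name
    env.CLAUDE_CODE_USE_OPENAI === "1" ||
    env.CLAUDE_CODE_USE_GEMINI === "1"     // added by this fix
  );
}

function isAnthropicAuthEnabled(env: Env): boolean {
  // The Anthropic OAuth/login flow only applies when no
  // third-party provider is active.
  return !is3P(env);
}
```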
Fixes #43.
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
OpenAI and Codex enforce strict JSON Schema validation — every key in
`properties` must also appear in `required`. Anthropic schemas often
mark fields as optional (omitted from `required`), which causes 400
errors on OpenAI/Codex endpoints.
Example: the Agent tool has `subagent_type` in `properties` but not
in `required`, producing:
"Invalid schema for function 'Agent': Missing 'subagent_type'
in required array"
Fix: add `normalizeSchemaForOpenAI()` in `convertTools()` that ensures
`required` is a superset of all `properties` keys before the schema is
sent to the API. Existing `required` entries are preserved; missing
ones are appended. Schemas without `properties` pass through unchanged.
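The normalization can be sketched as follows; the function name comes from this commit, but the schema type and surrounding convertTools() wiring are simplified here.

```typescript
type JsonSchema = {
  type?: string;
  properties?: Record<string, unknown>;
  required?: string[];
  [key: string]: unknown;
};

function normalizeSchemaForOpenAI(schema: JsonSchema): JsonSchema {
  // Schemas without `properties` pass through unchanged.
  if (!schema.properties) return schema;
  const existing = schema.required ?? [];
  const seen = new Set(existing);
  // Preserve existing `required` entries; append any property key
  // that is missing, so `required` covers every property.
  const appended = Object.keys(schema.properties).filter((k) => !seen.has(k));
  return { ...schema, required: [...existing, ...appended] };
}
```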
Fixes #46.
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
In CI mode, auth.ts throws when neither ANTHROPIC_API_KEY nor
CLAUDE_CODE_OAUTH_TOKEN is set, even when
CLAUDE_CODE_USE_OPENAI=1 or CLAUDE_CODE_USE_GEMINI=1 is in use.
This crashes any OpenAI/Gemini/Ollama CI pipeline immediately.
Fix: guard the throw with !isUsing3PServices() so non-Anthropic
providers skip the check entirely.
Also added CLAUDE_CODE_USE_GEMINI to isUsing3PServices() which
was missing — Gemini users were excluded from the 3P detection
used elsewhere in the same function.
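A sketch of the guarded check after both changes, assuming this simplified shape; the real auth.ts check involves more state.

```typescript
type Env = Record<string, string | undefined>;

function isUsing3PServices(env: Env): boolean {
  return (
    env.CLAUDE_CODE_USE_OPENAI === "1" ||
    env.CLAUDE_CODE_USE_GEMINI === "1" // previously missing
  );
}

function assertCiCredentials(env: Env): void {
  // Non-Anthropic providers skip the credential check entirely.
  if (
    !isUsing3PServices(env) &&
    !env.ANTHROPIC_API_KEY &&
    !env.CLAUDE_CODE_OAUTH_TOKEN
  ) {
    throw new Error("No Anthropic credentials found in CI mode");
  }
}
```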
Fixes #40.
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
The import `src//types/message.js` contains a double slash that may cause
unpredictable module resolution depending on OS and bundler behavior.
Relates to #29
Co-Authored-By: Juan Camilo <juancamilo.auriti@gmail.com>
When certain OpenAI-compatible APIs (LM Studio, some proxies) send
multiple stream chunks with finish_reason set, the finish block ran
multiple times — emitting content_block_stop and message_delta for
each one. Each content_block_stop caused claude.ts to create and yield
a new assistant message, making every response appear twice in the UI.
Fix: add hasProcessedFinishReason flag (same pattern as the existing
hasEmittedFinalUsage flag) so the finish block only executes once per
response regardless of how many chunks contain finish_reason.
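The once-only guard can be sketched like this; the real stream handler processes SSE chunks with much richer state, so this is only the dedup pattern in isolation.

```typescript
type Chunk = { content?: string; finish_reason?: string | null };

function* handleStream(chunks: Chunk[]): Generator<string> {
  let hasProcessedFinishReason = false;
  for (const chunk of chunks) {
    if (chunk.content) yield "content_block_delta";
    if (chunk.finish_reason && !hasProcessedFinishReason) {
      // Ignore any further chunks carrying finish_reason, so the
      // stop/delta pair is emitted exactly once per response.
      hasProcessedFinishReason = true;
      yield "content_block_stop";
      yield "message_delta";
    }
  }
}
```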
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
Without this fix, getContextWindowForModel() returns 200k for all OpenAI
models (the Claude default), causing two problems:
1. Auto-compact/warnings trigger at wrong thresholds (200k instead of 128k)
2. getModelMaxOutputTokens() returns 32k causing 400 errors from APIs that
cap output tokens lower (gpt-4o supports max 16384)
Fix:
- Add openaiContextWindows.ts with known context window sizes and max output
token limits for 30+ OpenAI-compatible models (OpenAI, DeepSeek, Groq,
Mistral, Ollama, LM Studio)
- Hook into getContextWindowForModel() so correct input limits are used
- Hook into getModelMaxOutputTokens() so correct output limits are sent,
preventing 400 "max_tokens is too large" errors
All existing warning, blocking, and auto-compact infrastructure works
automatically once the correct limits are returned.
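An illustrative shape for the lookup; the gpt-4o figures come from this commit, but the table here is a single example entry, not the full 30+ model list in openaiContextWindows.ts.

```typescript
const openaiContextWindows: Record<
  string,
  { context: number; maxOutput: number }
> = {
  "gpt-4o": { context: 128_000, maxOutput: 16_384 },
  // ...entries for DeepSeek, Groq, Mistral, Ollama, LM Studio models
};

// Fall back to the Claude defaults when a model is unknown, matching
// the pre-fix behavior.
function getContextWindowForModel(model: string): number {
  return openaiContextWindows[model]?.context ?? 200_000;
}

function getModelMaxOutputTokens(model: string): number {
  return openaiContextWindows[model]?.maxOutput ?? 32_000;
}
```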
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
Adds Google Gemini as a first-class provider using Gemini's OpenAI-compatible
endpoint, supporting gemini-2.0-flash, gemini-2.5-pro, and gemini-2.0-flash-lite
across all three model tiers (opus/sonnet/haiku).
- Add 'gemini' to APIProvider type with CLAUDE_CODE_USE_GEMINI env detection
- Map all 11 model configs to appropriate Gemini models per tier
- Route Gemini through existing OpenAI shim (generativelanguage.googleapis.com)
- Support GEMINI_API_KEY and GOOGLE_API_KEY for authentication
- Fix model display name to show actual Gemini model instead of Claude fallback
- Add Gemini support to provider-launch, provider-bootstrap, system-check scripts
- Add dev:gemini npm script for local development
Bootstrap: bun run profile:init -- --provider gemini --api-key <key>
Launch: bun run dev:gemini
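The env-var detection and key fallback can be sketched as below; the precedence of GEMINI_API_KEY over GOOGLE_API_KEY is an assumption, and the real APIProvider plumbing is more involved.

```typescript
type Env = Record<string, string | undefined>;
type APIProvider = "anthropic" | "openai" | "gemini";

function detectProvider(env: Env): APIProvider {
  if (env.CLAUDE_CODE_USE_GEMINI === "1") return "gemini";
  if (env.CLAUDE_CODE_USE_OPENAI === "1") return "openai";
  return "anthropic";
}

function geminiApiKey(env: Env): string | undefined {
  // Assumed precedence: GEMINI_API_KEY first, GOOGLE_API_KEY as fallback.
  return env.GEMINI_API_KEY ?? env.GOOGLE_API_KEY;
}
```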
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
Fixes #7. The modifiers-napi package is an Anthropic-internal native
addon, but a package with the same name exists on npm and could be a
supply chain attack vector. The build script already stubs it, but
the source code had live require() calls that would execute when
running without the bundler (e.g. bun dev, ts-node).
Replaced both functions with safe no-ops since modifier key detection
is not needed in the open-source build. Build verified passing.
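A sketch of the no-op replacement; the two function names here are hypothetical, since the commit states only that both became safe no-ops.

```typescript
// Hypothetical stand-ins for the two stubbed modifiers-napi functions.
function getModifierKeyState(): { shift: boolean; ctrl: boolean } {
  // Modifier key detection is not needed in the open-source build.
  return { shift: false, ctrl: false };
}

function watchModifierKeys(): () => void {
  // Return a no-op unsubscribe so callers need no changes.
  return () => {};
}
```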
When using OpenAI provider, getPublicModelDisplayName() was incorrectly
returning "Opus 4.6" because CLAUDE_OPUS_4_6_CONFIG.openai maps to 'gpt-4o',
causing a false match in the switch statement. Now returns null for OpenAI
provider so the raw model name (e.g. 'gpt-4o') is displayed directly.
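A sketch of the corrected behavior under a simplified signature; the Claude model ID in the switch is a placeholder, and the real function handles more cases.

```typescript
function getPublicModelDisplayName(
  model: string,
  provider: string,
): string | null {
  // For the OpenAI provider, return null so the raw model name
  // (e.g. 'gpt-4o') is displayed, avoiding false matches from the
  // CLAUDE_OPUS_4_6_CONFIG.openai -> 'gpt-4o' mapping.
  if (provider === "openai") return null;
  switch (model) {
    case "claude-opus-4-6": // placeholder model ID
      return "Opus 4.6";
    default:
      return null;
  }
}
```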
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
The showSetupScreens() early return for CLAUDE_CODE_USE_OPENAI skipped
all trust state initialization (setSessionTrustAccepted, GrowthBook,
getSystemContext), causing downstream config lookups to fail silently.
This prevented the REPL component tree from mounting correctly —
useInput never fired, stdin stayed in cooked mode, and the terminal
appeared frozen.
Fix: skip only the UI dialogs (onboarding, trust, MCP approval) for
OpenAI provider while still running the critical state initialization
that the REPL depends on.
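The reordered flow can be sketched as an ordered list of steps; dialog and init names follow the commit text, but the signatures are placeholders.

```typescript
function setupSteps(useOpenAI: boolean): string[] {
  const steps: string[] = [];
  // Critical state initialization now runs for every provider; skipping
  // it left the REPL unable to mount (useInput never fired, stdin stayed
  // in cooked mode).
  steps.push("setSessionTrustAccepted", "growthBook", "getSystemContext");
  if (!useOpenAI) {
    // Only the UI dialogs are skipped for the OpenAI provider.
    steps.push("onboarding", "trustDialog", "mcpApproval");
  }
  return steps;
}
```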
Closes #3
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>