The login picker previously sent third-party users to a dead-end info screen
that only mentioned env vars. This change reuses the existing provider wizard
from the login flow so first-run setup can continue without requiring slash
command access first.
Constraint: The existing provider setup logic must remain the single source of truth
Rejected: Build a separate third-party auth wizard in ConsoleOAuthFlow | would duplicate provider setup behavior and drift over time
Confidence: high
Scope-risk: moderate
Reversibility: clean
Directive: Keep third-party onboarding routed through ProviderWizard unless the provider command flow is intentionally redesigned
Tested: bun test src/components/ConsoleOAuthFlow.test.tsx src/commands/provider/provider.test.tsx
Tested: tsc --noEmit via project diagnostics
Not-tested: Live gh-authenticated push and PR creation path
Co-authored-by: anandh8x <test@example.com>
Add flickerFreeMode to GlobalConfig so external users can enable
fullscreen alt-screen mode via /config instead of having to set
the CLAUDE_CODE_NO_FLICKER=1 env var manually.
Priority order in isFullscreenEnvEnabled():
1. CLAUDE_CODE_NO_FLICKER=0 → always off (env wins)
2. CLAUDE_CODE_NO_FLICKER=1 → always on (env wins)
3. tmux -CC detected → off (terminal safety guard)
4. config flickerFreeMode → user preference (new)
5. USER_TYPE=ant → internal default
The env var still takes full precedence so existing scripts and
automation are unaffected. The new setting only activates when
flickerFreeMode is explicitly set in config.
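A minimal sketch of the resulting precedence, with the config lookup and tmux detection passed in as illustrative parameters (the real helpers live elsewhere in the codebase):

```typescript
// Sketch only: GlobalConfig here is a stand-in for the real config type.
interface GlobalConfig {
  flickerFreeMode?: boolean
}

function isFullscreenEnvEnabled(
  config: GlobalConfig,
  env: NodeJS.ProcessEnv = process.env,
  inTmuxCC = false,
): boolean {
  if (env.CLAUDE_CODE_NO_FLICKER === '0') return false // env wins: always off
  if (env.CLAUDE_CODE_NO_FLICKER === '1') return true  // env wins: always on
  if (inTmuxCC) return false                           // tmux -CC terminal safety guard
  if (config.flickerFreeMode !== undefined) {
    return config.flickerFreeMode                      // user preference (new)
  }
  return env.USER_TYPE === 'ant'                       // internal default
}
```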
Normalize shell command stdout and stderr before the prompt-shell path and shared tool-result mappers use string operations. This prevents /security-review from crashing when a shell tool returns null output fields and adds regression coverage for both direct mapper calls and prompt generation.
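A minimal sketch of the normalization, assuming a shell-result shape with possibly-null output fields (field names are illustrative):

```typescript
interface ShellOutput {
  stdout: string | null | undefined
  stderr: string | null | undefined
}

// Coerce null/undefined output to empty strings before any .trim()/.includes()
// calls in the prompt-shell path or the shared tool-result mappers run.
function normalizeShellOutput(result: ShellOutput): { stdout: string; stderr: string } {
  return {
    stdout: result.stdout ?? '',
    stderr: result.stderr ?? '',
  }
}
```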
Fixes #165
Co-authored-by: Claude <noreply@anthropic.com>
* ## PR: Add LM Studio Provider Support
### Summary
Adds comprehensive LM Studio integration to openclaude, following the same pattern as the existing Ollama provider. LM Studio is a popular local LLM inference tool that exposes an OpenAI-compatible API.
### Changes (4 files, 672 insertions)
**New Files:**
- `lmstudio_provider.py` (377 lines) - Full provider implementation with:
- Health check functions (`check_lmstudio_running`)
- Model listing (`list_lmstudio_models`)
- Chat completion (`lmstudio_chat`)
- Streaming support (`lmstudio_chat_stream`)
- Comprehensive docstring with setup instructions, troubleshooting, and model recommendations
- `test_lmstudio_provider.py` (227 lines) - Complete test suite with 12 passing tests covering:
- API URL construction
- Server health checks
- Model listing
- Chat completion functionality
**Modified Files:**
- `docs/quick-start-mac-linux.md` (+34 lines) - Added Option D: LM Studio with setup instructions and troubleshooting
- `docs/quick-start-windows.md` (+34 lines) - Added Option D: LM Studio with PowerShell syntax and troubleshooting
### Key Features
- No API key required (local inference)
- Default port: 1234 (LM Studio's standard)
- OpenAI-compatible API integration
- Consistent with existing provider patterns (Ollama, Atomic Chat)
- All tests passing (12/12)
### Usage
```bash
export CLAUDE_CODE_USE_OPENAI=1
export OPENAI_BASE_URL=http://localhost:1234/v1
export OPENAI_MODEL=your-model-name
openclaude
```
* Made this a docs-only PR for LM Studio
* Fixed the docs for recent LM Studio UI changes
* Added instructions to the env example so openclaude can be used system-wide
* Added suggested .env.example changes
Applied the .env.example changes suggested earlier in the PR thread.
* feat: add --provider CLI flag for multi-provider support
Adds a --provider flag that maps friendly provider names to the
environment variables the codebase uses for provider detection.
No more manual env-var configuration — users can now simply run:
openclaude --provider openai --model gpt-4o
openclaude --provider gemini --model gemini-2.0-flash
openclaude --provider ollama --model llama3.2
openclaude --provider bedrock
openclaude --provider vertex
Implementation details:
- providerFlag.ts: core logic — maps provider names to env vars,
uses ??= so explicit env vars always win over the flag defaults
- providerFlag.test.ts: 18 tests covering all 7 providers,
error messages, model passthrough, and env-var precedence
- cli.tsx: early fast-path (mirrors --bare pattern) — sets env
vars before Commander option-building and module constants run
- main.tsx: adds --provider to Commander option chain for --help
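Sketch of the mapping approach, not the literal contents of providerFlag.ts; the provider table below is illustrative and only covers a few of the supported names:

```typescript
// Illustrative provider → env-var defaults; the real table covers all 7 providers.
const PROVIDER_ENV_DEFAULTS: Record<string, Record<string, string>> = {
  openai: { CLAUDE_CODE_USE_OPENAI: '1' },
  gemini: { CLAUDE_CODE_USE_GEMINI: '1' },
  github: { CLAUDE_CODE_USE_GITHUB: '1' },
}

export function applyProviderFlag(provider: string): void {
  const defaults = PROVIDER_ENV_DEFAULTS[provider]
  if (!defaults) {
    throw new Error(`Unknown provider: ${provider}`)
  }
  for (const [name, value] of Object.entries(defaults)) {
    process.env[name] ??= value // ??= means an explicitly set env var always wins
  }
}
```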
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
* fix: custom OPENAI_BASE_URL always wins over Codex model alias detection
When OPENAI_MODEL=gpt-5.4 (or gpt-5.4-mini) and a custom OPENAI_BASE_URL
is set (Azure, OpenRouter, etc), the transport was incorrectly forced to
codex_responses because gpt-5.4 is in CODEX_ALIAS_MODELS. This caused
requests to be sent with Codex auth instead of the user's API key,
resulting in 401 Unauthorized errors.
Fix: only use codex_responses when the base URL is explicitly the Codex
endpoint, OR when no custom base URL is set and the model is a Codex
alias. An explicit OPENAI_BASE_URL always takes priority over model-name
based Codex detection.
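A sketch of the resulting precedence; CODEX_ALIAS_MODELS and the endpoint constant are stand-ins for the real values in client configuration:

```typescript
const CODEX_ALIAS_MODELS = new Set(['gpt-5.4', 'gpt-5.4-mini'])
const CODEX_BASE_URL = 'https://chatgpt.com/backend-api/codex'

function shouldUseCodexResponses(model: string, baseUrl?: string): boolean {
  if (baseUrl && baseUrl !== CODEX_BASE_URL) return false // explicit custom base URL always wins
  if (baseUrl === CODEX_BASE_URL) return true             // explicitly targeting the Codex endpoint
  return CODEX_ALIAS_MODELS.has(model)                    // no custom base URL: alias detection applies
}
```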
Verified locally: gpt-5.4 via OpenRouter now correctly shows
Provider=OpenRouter, Endpoint=https://openrouter.ai/api/v1 instead of
routing to chatgpt.com/backend-api/codex.
Fixes #200, #203
---------
Co-authored-by: Claude Sonnet 4.6 <noreply@anthropic.com>
React 19 requires `supportsMicrotasks: true` in the reconciler host
config so it can flush state updates from passive effects via
queueMicrotask. Without this, state updates triggered inside
useMcpConnectivityStatus were silently dropped, corrupting React's
internal executionContext and causing all keyboard input to freeze
after the "N MCP server(s) need auth" notification appeared.
Root cause (three-part fix):
1. reconciler.ts: declare supportsMicrotasks + scheduleMicrotask so
React 19 schedules passive-effect flushes correctly.
2. useMcpConnectivityStatus.tsx: wrap the MCP auth notification effect
in try/catch so any unexpected throw does not propagate into
flushPassiveEffects and permanently corrupt executionContext.
3. notifications.tsx: wrap addNotification, removeNotification, and
processQueue in try/catch for the same reason — these are called
from 12+ notification hooks across passive effects.
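The host-config side of part 1, sketched with only the new fields shown (the rest of the Ink reconciler host config is unchanged):

```typescript
const hostConfig = {
  // ...existing Ink reconciler host-config fields...
  supportsMicrotasks: true,
  scheduleMicrotask:
    typeof queueMicrotask === 'function'
      ? queueMicrotask
      : (callback: () => void) => void Promise.resolve().then(callback),
}
```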
Also fixes a pre-existing test isolation bug in context.test.ts where
assigning `undefined` to process.env produced the string "undefined",
polluting the env for subsequent test files.
Resolves: #169, #205, #77
Images in the clipboard could fail to become pasted image attachments in OpenClaude. User-facing symptom: paste would detect that an image existed, but nothing would appear in the prompt, and bundled builds could also fail while converting BMP clipboard images into a format OpenClaude can send to the model.
Linux clipboard image paste had drifted between detection and extraction. checkImage accepted png/jpeg/jpg/gif/webp/bmp, but saveImage only tried image/png and image/bmp. When the clipboard advertised a JPEG, GIF, or WebP image, OpenClaude concluded that an image was present and then failed to write the temp screenshot file, so the paste path returned null and nothing was inserted into the prompt.
Bundled OpenClaude builds had a second failure mode. The build replaces image-processor-napi and sharp with explicit stub modules in bundled mode. getImageProcessor() treated those stubs as real processors, so BMP clipboard images reached sharp(imageBuffer).png() and then failed before they could be converted into a pasteable PNG for OpenClaude.
Keep the Linux clipboard commands generated from one MIME type list and reject __stub-marked image processors up front instead of failing in the middle of image paste.
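Illustrative sketch of both changes; the names below are assumptions rather than the real module's exports:

```typescript
// One MIME list drives both clipboard detection (checkImage) and extraction
// (saveImage), so the two can no longer drift apart.
export const CLIPBOARD_IMAGE_MIME_TYPES = [
  'image/png',
  'image/jpeg',
  'image/gif',
  'image/webp',
  'image/bmp',
] as const

type ImageProcessorCandidate = { __stub?: boolean } | null | undefined

// Bundled builds replace image-processor-napi and sharp with stub modules;
// treat a __stub-marked module as "no processor" instead of failing mid-paste.
export function getUsableImageProcessor(
  candidate: ImageProcessorCandidate,
): { __stub?: boolean } | null {
  if (!candidate || candidate.__stub === true) return null
  return candidate
}
```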
- point all repository links to Gitlawb/openclaude
- make shim opt-in by default to preserve Anthropic-first flow
- add command availability check with first-run install guidance
- render runtime and shim state dynamically in control center
- make command palette shortcut hint platform-aware
The Gemini provider uses Google's OpenAI-compatible endpoint
(generativelanguage.googleapis.com/v1beta/openai) but the client
routing condition in client.ts only checked CLAUDE_CODE_USE_OPENAI
and CLAUDE_CODE_USE_GITHUB — CLAUDE_CODE_USE_GEMINI was missing.
This caused every Gemini request to fall through to the Anthropic
client path. Since ANTHROPIC_API_KEY is not set when using Gemini,
the Anthropic SDK threw:
"Could not resolve authentication method. Expected either apiKey
or authToken to be set."
Fix: add CLAUDE_CODE_USE_GEMINI to the OpenAI shim routing condition
so Gemini requests correctly reach createOpenAIShimClient(), which
maps GEMINI_API_KEY → OPENAI_API_KEY and sets OPENAI_BASE_URL to
the Google endpoint.
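A sketch of the routing condition after the fix; variable names follow the description above, and the exact truthiness checks may differ in client.ts:

```typescript
const useOpenAIShim =
  process.env.CLAUDE_CODE_USE_OPENAI === '1' ||
  process.env.CLAUDE_CODE_USE_GITHUB === '1' ||
  process.env.CLAUDE_CODE_USE_GEMINI === '1' // previously missing, so Gemini fell through to the Anthropic client

// if (useOpenAIShim) return createOpenAIShimClient(...)
```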
Closes #176
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
New contributors had to hunt through README and source files to find
required environment variables. This adds a single reference file at
repo root covering all supported providers with placeholder values,
inline comments, and sensible defaults.
Providers covered:
- Anthropic (default)
- OpenAI
- Google Gemini
- GitHub Models
- Ollama (local)
- AWS Bedrock
- Google Vertex AI
Also includes optional tuning vars: CLAUDE_CODE_MAX_RETRIES,
CLAUDE_CODE_UNATTENDED_RETRY, OPENCLAUDE_ENABLE_EXTENDED_KEYS,
OPENCLAUDE_DISABLE_CO_AUTHORED_BY, API_TIMEOUT_MS, CLAUDE_DEBUG.
Updated .gitignore to add !.env.example exception so the template
is not suppressed by the existing .env.* rule.
Closes #175
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
Previously getRateLimitResetDelayMs only read the Anthropic-specific
'anthropic-ratelimit-unified-reset' header (Unix timestamp), returning
null for every other provider. This meant OpenAI, GitHub, and Codex
users in persistent retry mode (CLAUDE_CODE_UNATTENDED_RETRY=1) always
fell back to dumb exponential backoff even when the server included an
exact reset time in the response headers.
This change makes the function provider-aware:
- firstParty (Anthropic): existing behaviour preserved — reads
'anthropic-ratelimit-unified-reset' Unix timestamp
- openai / codex / github: reads 'x-ratelimit-reset-requests' and
'x-ratelimit-reset-tokens' (OpenAI relative duration strings like
"1s", "6m0s", "1h30m0s"), picks the larger of the two so retries
don't fire before both token and request limits have reset
- bedrock / vertex / foundry / gemini: returns null (no standard
reset header for these providers)
Adds parseOpenAIDuration() as an exported helper to convert OpenAI's
duration format into milliseconds.
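A minimal sketch of the duration parsing; the shipped helper may handle more formats:

```typescript
// Converts OpenAI reset strings like "1s", "6m0s", "1h30m0s" into milliseconds.
export function parseOpenAIDuration(duration: string): number | null {
  const match = duration.match(/^(?:(\d+)h)?(?:(\d+)m)?(?:(\d+(?:\.\d+)?)s)?$/)
  if (!match || (!match[1] && !match[2] && !match[3])) return null
  const hours = Number(match[1] ?? 0)
  const minutes = Number(match[2] ?? 0)
  const seconds = Number(match[3] ?? 0)
  return ((hours * 60 + minutes) * 60 + seconds) * 1000
}

// parseOpenAIDuration('6m0s')    === 360_000
// parseOpenAIDuration('1h30m0s') === 5_400_000
```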
16 new tests covering all provider paths and edge cases.
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>