fix: custom OPENAI_BASE_URL always wins over Codex model alias detection (#222)
* feat: add --provider CLI flag for multi-provider support

Adds a --provider flag that maps friendly provider names to the environment variables the codebase uses for provider detection. No more manual env-var configuration — users can now simply run:

    openclaude --provider openai --model gpt-4o
    openclaude --provider gemini --model gemini-2.0-flash
    openclaude --provider ollama --model llama3.2
    openclaude --provider bedrock
    openclaude --provider vertex

Implementation details:

- providerFlag.ts: core logic — maps provider names to env vars, uses ??= so explicit env vars always win over the flag defaults
- providerFlag.test.ts: 18 tests covering all 7 providers, error messages, model passthrough, and env-var precedence
- cli.tsx: early fast-path (mirrors --bare pattern) — sets env vars before Commander option-building and module constants run
- main.tsx: adds --provider to Commander option chain for --help

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>

* fix: custom OPENAI_BASE_URL always wins over Codex model alias detection

When OPENAI_MODEL=gpt-5.4 (or gpt-5.4-mini) and a custom OPENAI_BASE_URL is set (Azure, OpenRouter, etc.), the transport was incorrectly forced to codex_responses because gpt-5.4 is in CODEX_ALIAS_MODELS. This caused requests to be sent with Codex auth instead of the user's API key, resulting in 401 Unauthorized errors.

Fix: only use codex_responses when the base URL is explicitly the Codex endpoint, OR when no custom base URL is set and the model is a Codex alias. An explicit OPENAI_BASE_URL always takes priority over model-name-based Codex detection.

Verified locally: gpt-5.4 via OpenRouter now correctly shows Provider=OpenRouter, Endpoint=https://openrouter.ai/api/v1 instead of routing to chatgpt.com/backend-api/codex.

Fixes #200, #203

---------

Co-authored-by: Claude Sonnet 4.6 <noreply@anthropic.com>
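The env-var precedence the commit describes for providerFlag.ts (explicit environment variables always win over the flag's defaults, via `??=`) can be sketched roughly as follows. The seven provider names come from the commit message; the mapping table, the placeholder variable `EXAMPLE_PROVIDER_VAR`, and the `applyProviderFlag` helper are hypothetical illustrations, not the actual providerFlag.ts code:

```typescript
type Env = Record<string, string | undefined>;

// Hypothetical defaults table: each provider name maps to the env vars the
// flag would set. EXAMPLE_PROVIDER_VAR is a placeholder, not a real variable.
const PROVIDER_DEFAULTS: Record<string, Env> = {
  anthropic: { EXAMPLE_PROVIDER_VAR: 'anthropic' },
  openai:    { EXAMPLE_PROVIDER_VAR: 'openai' },
  gemini:    { EXAMPLE_PROVIDER_VAR: 'gemini' },
  github:    { EXAMPLE_PROVIDER_VAR: 'github' },
  bedrock:   { EXAMPLE_PROVIDER_VAR: 'bedrock' },
  vertex:    { EXAMPLE_PROVIDER_VAR: 'vertex' },
  ollama:    { EXAMPLE_PROVIDER_VAR: 'ollama' },
};

function applyProviderFlag(provider: string, env: Env): void {
  const defaults = PROVIDER_DEFAULTS[provider];
  if (!defaults) {
    const known = Object.keys(PROVIDER_DEFAULTS).join(', ');
    throw new Error(`Unknown provider '${provider}'. Expected one of: ${known}`);
  }
  for (const [key, value] of Object.entries(defaults)) {
    // ??= assigns only when env[key] is currently null/undefined, so an
    // explicitly set environment variable always wins over the flag default.
    env[key] ??= value;
  }
}
```

Running this early (before Commander builds its options, mirroring the --bare fast-path the commit mentions) is what lets module-level constants observe the injected variables.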
@@ -984,7 +984,7 @@ async function run(): Promise<CommanderCommand> {
 return Number.isFinite(n) ? n : undefined;
 }).hideHelp()).option('--from-pr [value]', 'Resume a session linked to a PR by PR number/URL, or open interactive picker with optional search term', value => value || true).option('--no-session-persistence', 'Disable session persistence - sessions will not be saved to disk and cannot be resumed (only works with --print)').addOption(new Option('--resume-session-at <message id>', 'When resuming, only messages up to and including the assistant message with <message.id> (use with --resume in print mode)').argParser(String).hideHelp()).addOption(new Option('--rewind-files <user-message-id>', 'Restore files to state at the specified user message and exit (requires --resume)').hideHelp())
 // @[MODEL LAUNCH]: Update the example model ID in the --model help text.
-.option('--model <model>', `Model for the current session. Provide an alias for the latest model (e.g. 'sonnet' or 'opus') or a model's full name (e.g. 'claude-sonnet-4-6').`).addOption(new Option('--effort <level>', `Effort level for the current session (low, medium, high, max)`).argParser((rawValue: string) => {
+.option('--model <model>', `Model for the current session. Provide an alias for the latest model (e.g. 'sonnet' or 'opus') or a model's full name (e.g. 'claude-sonnet-4-6').`).option('--provider <provider>', `AI provider to use (anthropic, openai, gemini, github, bedrock, vertex, ollama). Reads API keys from environment variables.`).addOption(new Option('--effort <level>', `Effort level for the current session (low, medium, high, max)`).argParser((rawValue: string) => {
 const value = rawValue.toLowerCase();
 const allowed = ['low', 'medium', 'high', 'max'];
 if (!allowed.includes(value)) {
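The base-URL precedence rule from the commit's fix can be sketched as a small decision function. A minimal sketch only: `codex_responses`, `CODEX_ALIAS_MODELS`, the gpt-5.4/gpt-5.4-mini aliases, and the chatgpt.com/backend-api/codex endpoint are named in the commit message; the `selectTransport` helper, the `'standard'` label for the non-Codex transport, and the exact URL constant are hypothetical:

```typescript
// Models the commit says are in CODEX_ALIAS_MODELS.
const CODEX_ALIAS_MODELS = new Set(['gpt-5.4', 'gpt-5.4-mini']);
// Illustrative constant; the commit names the chatgpt.com/backend-api/codex endpoint.
const CODEX_BASE_URL = 'https://chatgpt.com/backend-api/codex';

// 'standard' is a placeholder name for whatever non-Codex transport the code uses.
function selectTransport(
  model: string,
  baseUrl: string | undefined,
): 'codex_responses' | 'standard' {
  // An explicit OPENAI_BASE_URL always takes priority over model-name detection,
  // so Azure/OpenRouter/etc. requests keep the user's API key (no more 401s).
  if (baseUrl !== undefined && baseUrl !== CODEX_BASE_URL) {
    return 'standard';
  }
  // The Codex endpoint itself is explicitly requested.
  if (baseUrl === CODEX_BASE_URL) {
    return 'codex_responses';
  }
  // No custom base URL set: fall back to Codex-alias model detection.
  return CODEX_ALIAS_MODELS.has(model) ? 'codex_responses' : 'standard';
}
```

Under this rule, gpt-5.4 with OPENAI_BASE_URL=https://openrouter.ai/api/v1 routes to the standard transport, while gpt-5.4 with no base URL set still resolves to codex_responses, matching the behavior the commit says was verified locally.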