fix: prevent cross-provider model env var leaks and sync Codex detection (#243)

Fixes two provider routing bugs that caused silent wrong-model failures:

1. model.ts: getUserSpecifiedModelSetting() read ANTHROPIC_MODEL ||
   GEMINI_MODEL || OPENAI_MODEL with no provider check. A user
   switching from Anthropic to OpenAI with ANTHROPIC_MODEL still set
   would silently send the Anthropic model name to the OpenAI API.
   Now gates each env var behind the active provider returned by
   getAPIProvider().

2. providers.ts: isCodexModel() maintained a hardcoded list of 8 model
   names that was missing gpt-5.4-mini and gpt-5.2 from the canonical
   CODEX_ALIAS_MODELS table in providerConfig.ts. This caused a
   split-brain: getAPIProvider() returned 'openai' while
   resolveProviderRequest() selected 'codex_responses' transport.
   Now delegates to the exported isCodexAlias() to keep both detection
   systems in sync.
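
   The providers.ts half of the fix is not shown in the hunk below. A
   minimal sketch of the delegation, assuming hypothetical shapes for
   CODEX_ALIAS_MODELS and isCodexAlias (only the names come from this
   commit; the Set representation and signatures are illustrative):

   ```typescript
   // providerConfig.ts (sketch): the single canonical alias table.
   const CODEX_ALIAS_MODELS: ReadonlySet<string> = new Set([
     'gpt-5.2',
     'gpt-5.4-mini',
     // ...the other Codex alias model names
   ])

   function isCodexAlias(model: string): boolean {
     return CODEX_ALIAS_MODELS.has(model)
   }

   // providers.ts (sketch): delegate instead of keeping a second
   // hardcoded list, so getAPIProvider() and resolveProviderRequest()
   // can no longer disagree about what counts as a Codex model.
   function isCodexModel(model: string): boolean {
     return isCodexAlias(model)
   }
   ```
   
   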
Juan Camilo Auriti
2026-04-04 11:38:47 +02:00
committed by GitHub
parent ea335aeddc
commit fbf3385395
3 changed files with 16 additions and 13 deletions


@@ -75,7 +75,15 @@ export function getUserSpecifiedModelSetting(): ModelSetting | undefined {
     specifiedModel = modelOverride
   } else {
     const settings = getSettings_DEPRECATED() || {}
-    specifiedModel = process.env.ANTHROPIC_MODEL || process.env.GEMINI_MODEL || process.env.OPENAI_MODEL || settings.model || undefined
+    // Read the model env var that matches the active provider to prevent
+    // cross-provider leaks (e.g. ANTHROPIC_MODEL sent to the OpenAI API).
+    const provider = getAPIProvider()
+    specifiedModel =
+      (provider === 'gemini' ? process.env.GEMINI_MODEL : undefined) ||
+      (provider === 'openai' || provider === 'gemini' ? process.env.OPENAI_MODEL : undefined) ||
+      (provider === 'firstParty' ? process.env.ANTHROPIC_MODEL : undefined) ||
+      settings.model ||
+      undefined
   }
   // Ignore the user-specified model if it's not in the availableModels allowlist.
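
The gating in the hunk above can be exercised with a small standalone sketch. The fallback chain mirrors the diff exactly; the provider is passed in as a parameter and the env object is stubbed here, since getAPIProvider() and process.env are not reproduced in this commit:

```typescript
type Provider = 'firstParty' | 'openai' | 'gemini'

// Standalone reimplementation of the provider-gated env-var selection
// from the hunk above (settings fallback omitted for brevity).
function pickModelEnv(
  provider: Provider,
  env: Record<string, string | undefined>,
): string | undefined {
  return (
    (provider === 'gemini' ? env.GEMINI_MODEL : undefined) ||
    (provider === 'openai' || provider === 'gemini' ? env.OPENAI_MODEL : undefined) ||
    (provider === 'firstParty' ? env.ANTHROPIC_MODEL : undefined) ||
    undefined
  )
}

// A stale ANTHROPIC_MODEL no longer leaks into an OpenAI request:
const env = { ANTHROPIC_MODEL: 'claude-opus-4', OPENAI_MODEL: 'gpt-5.2' }
console.log(pickModelEnv('openai', env))     // 'gpt-5.2'
console.log(pickModelEnv('firstParty', env)) // 'claude-opus-4'
```

Note the `gemini` branch still falls through to OPENAI_MODEL, matching the diff: GEMINI_MODEL wins when set, OPENAI_MODEL acts as a secondary override for that provider.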