fix(model): codex/nvidia-nim/minimax now read OPENAI_MODEL env (#815)

getUserSpecifiedModelSetting() decides which env var to consult based on
the active provider. The check included openai and github but omitted
codex, nvidia-nim, and minimax — even though all three use the OpenAI
shim transport and get their model routing via CLAUDE_CODE_USE_OPENAI=1
+ OPENAI_MODEL (set by applyProviderProfileToProcessEnv).
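
For context, a minimal sketch of that env wiring, assuming a simplified
ProviderProfile shape (the field names and OPENAI_BASE_URL below are
illustrative, not taken from the actual implementation):

  // Hypothetical sketch of applyProviderProfileToProcessEnv for shim providers.
  interface ProviderProfile {
    provider: string   // e.g. 'codex', 'nvidia-nim', 'minimax'
    model: string      // e.g. 'codexplan'
    baseUrl?: string   // e.g. 'https://chatgpt.com/backend-api/codex'
  }

  const OPENAI_SHIM_PROVIDERS = ['openai', 'codex', 'github', 'nvidia-nim', 'minimax']

  function applyProviderProfileToProcessEnv(profile: ProviderProfile): void {
    if (OPENAI_SHIM_PROVIDERS.includes(profile.provider)) {
      process.env.CLAUDE_CODE_USE_OPENAI = '1'
      process.env.OPENAI_MODEL = profile.model
      if (profile.baseUrl) process.env.OPENAI_BASE_URL = profile.baseUrl // assumed var name
    }
  }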

Concrete failure: user switches from Moonshot profile (which persisted
settings.model='kimi-k2.6') to the Codex profile. The new profile
correctly writes OPENAI_MODEL=codexplan + base URL to
chatgpt.com/backend-api/codex. Startup banner reflects Codex / gpt-5.4
correctly. But at request time getUserSpecifiedModelSetting() skips the
OPENAI_MODEL lookup for provider='codex' (not in the env-consult list),
falls back to the stale settings.model='kimi-k2.6', and the Codex API
rejects the request:

  API Error 400: "The 'kimi-k2.6' model is not supported when using
  Codex with a ChatGPT account."
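
Traced against the pre-fix lookup chain (names follow the diff below; the
trace itself is illustrative):

  // Pre-fix resolution for provider === 'codex':
  //   gemini branch   -> undefined ('codex' !== 'gemini')
  //   mistral branch  -> undefined ('codex' !== 'mistral')
  //   OPENAI_MODEL    -> undefined ('codex' not in openai|gemini|mistral|github)
  //   firstParty      -> undefined ('codex' !== 'firstParty')
  //   setting         -> 'kimi-k2.6'   // stale settings.model wins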

Fix: extract an isOpenAIShimProvider flag covering openai|codex|github|
nvidia-nim|minimax — all providers that set OPENAI_MODEL as their model
env var. The Gemini and Mistral branches stay as-is (they use
GEMINI_MODEL / MISTRAL_MODEL).

Five regression tests pin the behavior, one per OpenAI-shim provider:
codex, nvidia-nim, and minimax cover the fix, plus guard tests for openai
and github, which already worked.
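
A sketch of what one such regression test might look like; the test
framework (vitest-style), the import paths, and the getAPIProvider mock
are assumptions, not taken from the changed test file:

  import { describe, expect, it, vi } from 'vitest'
  import { getUserSpecifiedModelSetting } from './model'

  // Pretend the active provider is codex (mocked module path is hypothetical).
  vi.mock('./providers', () => ({ getAPIProvider: () => 'codex' }))

  describe('getUserSpecifiedModelSetting', () => {
    it('prefers OPENAI_MODEL over a stale settings.model for codex', () => {
      process.env.OPENAI_MODEL = 'codexplan'
      // settings.model may still hold 'kimi-k2.6' persisted by a previous profile;
      // assumes the function resolves to the raw model string here.
      expect(getUserSpecifiedModelSetting()).toBe('codexplan')
      delete process.env.OPENAI_MODEL
    })
  })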

Co-authored-by: OpenClaude <openclaude@gitlawb.com>

@@ -91,11 +91,24 @@ export function getUserSpecifiedModelSetting(): ModelSetting | undefined {
   const setting = normalizeModelSetting(settings.model)
   // Read the model env var that matches the active provider to prevent
   // cross-provider leaks (e.g. ANTHROPIC_MODEL sent to the OpenAI API).
+  //
+  // All OpenAI-shim providers (openai, codex, github, nvidia-nim, minimax)
+  // set CLAUDE_CODE_USE_OPENAI=1 + OPENAI_MODEL via
+  // applyProviderProfileToProcessEnv. Earlier this check only included
+  // openai/github — codex/nvidia-nim/minimax fell through to the stale
+  // settings.model, so switching from (say) Moonshot to Codex kept firing
+  // `kimi-k2.6` at the Codex endpoint and getting 400s.
   const provider = getAPIProvider()
+  const isOpenAIShimProvider =
+    provider === 'openai' ||
+    provider === 'codex' ||
+    provider === 'github' ||
+    provider === 'nvidia-nim' ||
+    provider === 'minimax'
   specifiedModel =
     (provider === 'gemini' ? process.env.GEMINI_MODEL : undefined) ||
     (provider === 'mistral' ? process.env.MISTRAL_MODEL : undefined) ||
-    (provider === 'openai' || provider === 'gemini' || provider === 'mistral' || provider === 'github' ? process.env.OPENAI_MODEL : undefined) ||
+    (isOpenAIShimProvider ? process.env.OPENAI_MODEL : undefined) ||
     (provider === 'firstParty' ? process.env.ANTHROPIC_MODEL : undefined) ||
     setting ||
     undefined