orcs-code/src/utils/providerProfile.ts
Kevin Codex 13de4e85df fix(provider): saved profile ignored when stale CLAUDE_CODE_USE_* in shell (#807)
* fix(provider): saved profile ignored when stale CLAUDE_CODE_USE_* in shell

Users reported "my saved /provider profile isn't picked up at startup —
the banner shows gpt-4o / api.openai.com even though I saved Moonshot".

Root cause: applyActiveProviderProfileFromConfig() bailed out whenever
hasProviderSelectionFlags(processEnv) was true — i.e. whenever ANY
CLAUDE_CODE_USE_* flag was present. But a bare `CLAUDE_CODE_USE_OPENAI=1`
with no paired OPENAI_BASE_URL / OPENAI_MODEL is almost always a stale
shell export left over from a prior manual setup, not genuine startup
intent. Respecting it skipped the saved profile and let StartupScreen.ts
fall through to the hardcoded `gpt-4o` / `https://api.openai.com/v1`
defaults — the exact symptom users see.

Fix: narrow the guard from "any flag set" to "flag set AND at least one
concrete config value (BASE_URL, MODEL, or API_KEY)". A bare stale flag
no longer blocks the saved profile. A real shell selection (flag + URL
or flag + model) still wins, preserving the "explicit startup intent
overrides saved profile" contract.

New helper: hasCompleteProviderSelection(env). Per-provider check for a
paired concrete value. Bedrock/Vertex/Foundry keep the flag-alone
semantic since they rely on ambient AWS/GCP credentials rather than env
config.

Three new tests cover the bug and the two counter-cases:
  - bare USE flag → profile applies (fixes the bug)
  - USE flag + BASE_URL → profile blocked (preserves explicit intent)
  - USE flag + MODEL → profile blocked (preserves explicit intent)
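The narrowed guard and the three test cases can be sketched roughly as follows. This is an illustrative model, not the shipped code: the helper name matches the commit message, but the truthiness rule and the exact per-provider key lists are assumptions.

```typescript
type Env = Record<string, string | undefined>

// Assumed truthiness rule; the real check lives in envUtils.isEnvTruthy.
const truthy = (v?: string): boolean =>
  v != null && v !== '' && v !== '0' && v.toLowerCase() !== 'false'

function hasCompleteProviderSelection(env: Env): boolean {
  // Bedrock/Vertex/Foundry rely on ambient AWS/GCP credentials rather than
  // env config, so the flag alone still counts as explicit startup intent.
  if (
    truthy(env.CLAUDE_CODE_USE_BEDROCK) ||
    truthy(env.CLAUDE_CODE_USE_VERTEX) ||
    truthy(env.CLAUDE_CODE_USE_FOUNDRY)
  ) {
    return true
  }
  // Env-configured providers need the flag AND at least one paired
  // concrete value (BASE_URL, MODEL, or API_KEY). A bare stale flag
  // no longer blocks the saved profile.
  if (truthy(env.CLAUDE_CODE_USE_OPENAI)) {
    return Boolean(env.OPENAI_BASE_URL || env.OPENAI_MODEL || env.OPENAI_API_KEY)
  }
  if (truthy(env.CLAUDE_CODE_USE_GEMINI)) {
    return Boolean(env.GEMINI_BASE_URL || env.GEMINI_MODEL || env.GEMINI_API_KEY)
  }
  if (truthy(env.CLAUDE_CODE_USE_MISTRAL)) {
    return Boolean(env.MISTRAL_BASE_URL || env.MISTRAL_MODEL || env.MISTRAL_API_KEY)
  }
  return false
}
```

Under this sketch, `{ CLAUDE_CODE_USE_OPENAI: '1' }` alone returns `false` (profile applies), while adding `OPENAI_BASE_URL` or `OPENAI_MODEL` flips it to `true` (explicit intent wins).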

Co-Authored-By: OpenClaude <openclaude@gitlawb.com>

* fix(provider): don't overlay stale legacy profile on plural-managed env

Second half of the "saved profile not picked up in banner" bug. The prior
commit fixed the guard that prevented applyActiveProviderProfileFromConfig()
from firing when a stale CLAUDE_CODE_USE_* flag was in the shell. But even
when the plural system applies correctly, buildStartupEnvFromProfile() was
then loading the legacy .openclaude-profile.json AND overwriting the
plural-managed env with whatever that file contained.

addProviderProfile() (the call path the /provider preset picker uses) does
NOT sync the legacy file, so a user who went:

  manual setup: CLAUDE_CODE_USE_OPENAI=1 + OPENAI_MODEL=gpt-4o
              → writes .openclaude-profile.json as { openai, gpt-4o, ... }
  /provider:   add Moonshot preset, mark active
              → writes plural config; legacy file UNCHANGED

would see startup correctly apply the Moonshot env first, then get it clobbered
by the stale legacy file. Banner shows gpt-4o / api.openai.com while
runtime ends up with the correct env via a different code path — exactly
the user-reported symptom.

Fix: in buildStartupEnvFromProfile, when the plural system has already
set env (CLAUDE_CODE_PROVIDER_PROFILE_ENV_APPLIED === '1'), skip the
legacy-file overlay entirely and return processEnv unchanged. Legacy is
now strictly a first-run / fallback path for users who haven't adopted
the plural system.

Also removes the stripped-then-rebuilt env construction that was part of
the old overlay path — no longer needed.

Test updates:
  - Replaced "lets saved startup profile override profile-managed env"
    (encoded the old broken behavior) with a regression test that pins
    the new semantic: plural env survives when legacy is stale.
  - Added "falls back to legacy when plural hasn't applied" to pin the
    first-run path still works.
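The layering fix and both test semantics can be modeled in a few lines. This is a toy sketch, not the real buildStartupEnvFromProfile: the sentinel key comes from the commit message, but the helper name and env shapes are illustrative.

```typescript
type Env = Record<string, string | undefined>

function resolveStartupEnv(processEnv: Env, legacyProfileEnv: Env | null): Env {
  // The plural system already applied a profile: trust it and skip the
  // legacy overlay entirely, so a stale legacy file cannot clobber it.
  if (processEnv.CLAUDE_CODE_PROVIDER_PROFILE_ENV_APPLIED === '1') {
    return processEnv
  }
  // First-run / fallback path: no plural profile applied, legacy still wins.
  return legacyProfileEnv ? { ...processEnv, ...legacyProfileEnv } : processEnv
}
```

The pre-fix behavior was the unconditional overlay (the second return taken in both cases), which is exactly how a stale `{ OPENAI_MODEL: 'gpt-4o' }` legacy file overwrote the plural-managed env.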

Co-Authored-By: OpenClaude <openclaude@gitlawb.com>

2026-04-22 00:59:32 +08:00


import { existsSync, readFileSync, rmSync, writeFileSync } from 'node:fs'
import { resolve } from 'node:path'
import {
DEFAULT_CODEX_BASE_URL,
DEFAULT_OPENAI_BASE_URL,
isCodexBaseUrl,
resolveCodexApiCredentials,
resolveProviderRequest,
} from '../services/api/providerConfig.js'
import { parseChatgptAccountId } from '../services/api/codexOAuthShared.js'
import {
getGoalDefaultOpenAIModel,
normalizeRecommendationGoal,
type RecommendationGoal,
} from './providerRecommendation.js'
import { readGeminiAccessToken } from './geminiCredentials.js'
import { getOllamaChatBaseUrl } from './providerDiscovery.js'
import { getProviderValidationError } from './providerValidation.js'
import {
maskSecretForDisplay,
redactSecretValueForDisplay,
sanitizeApiKey,
sanitizeProviderConfigValue,
} from './providerSecrets.js'
export {
maskSecretForDisplay,
redactSecretValueForDisplay,
sanitizeApiKey,
sanitizeProviderConfigValue,
} from './providerSecrets.js'
import { isEnvTruthy } from './envUtils.js'
import { PROVIDERS } from './configConstants.js'
export const PROFILE_FILE_NAME = '.openclaude-profile.json'
export const DEFAULT_GEMINI_BASE_URL =
'https://generativelanguage.googleapis.com/v1beta/openai'
export const DEFAULT_GEMINI_MODEL = 'gemini-2.0-flash'
export const DEFAULT_MISTRAL_BASE_URL = 'https://api.mistral.ai/v1'
export const DEFAULT_MISTRAL_MODEL = 'devstral-latest'
const PROFILE_ENV_KEYS = [
'CLAUDE_CODE_USE_OPENAI',
'CLAUDE_CODE_USE_GEMINI',
'CLAUDE_CODE_USE_MISTRAL',
'CLAUDE_CODE_USE_BEDROCK',
'CLAUDE_CODE_USE_VERTEX',
'CLAUDE_CODE_USE_FOUNDRY',
'OPENAI_BASE_URL',
'OPENAI_MODEL',
'OPENAI_API_KEY',
'CODEX_API_KEY',
'CODEX_CREDENTIAL_SOURCE',
'CHATGPT_ACCOUNT_ID',
'CODEX_ACCOUNT_ID',
'GEMINI_API_KEY',
'GEMINI_AUTH_MODE',
'GEMINI_ACCESS_TOKEN',
'GEMINI_MODEL',
'GEMINI_BASE_URL',
'GOOGLE_API_KEY',
'NVIDIA_NIM',
'NVIDIA_API_KEY',
'NVIDIA_MODEL',
'MINIMAX_API_KEY',
'MINIMAX_BASE_URL',
'MINIMAX_MODEL',
'MISTRAL_BASE_URL',
'MISTRAL_API_KEY',
'MISTRAL_MODEL',
] as const
const SECRET_ENV_KEYS = [
'OPENAI_API_KEY',
'CODEX_API_KEY',
'GEMINI_API_KEY',
'GOOGLE_API_KEY',
'NVIDIA_API_KEY',
'MINIMAX_API_KEY',
'MISTRAL_API_KEY',
] as const
export type ProviderProfile =
| 'openai'
| 'ollama'
| 'codex'
| 'gemini'
| 'atomic-chat'
| 'nvidia-nim'
| 'minimax'
| 'mistral'
export type ProfileEnv = {
OPENAI_BASE_URL?: string
OPENAI_MODEL?: string
OPENAI_API_KEY?: string
CODEX_API_KEY?: string
CODEX_CREDENTIAL_SOURCE?: 'oauth' | 'existing'
CHATGPT_ACCOUNT_ID?: string
CODEX_ACCOUNT_ID?: string
GEMINI_API_KEY?: string
GEMINI_AUTH_MODE?: 'api-key' | 'access-token' | 'adc'
GEMINI_MODEL?: string
GEMINI_BASE_URL?: string
GOOGLE_API_KEY?: string
NVIDIA_NIM?: string
NVIDIA_API_KEY?: string
MINIMAX_API_KEY?: string
MINIMAX_BASE_URL?: string
MINIMAX_MODEL?: string
MISTRAL_BASE_URL?: string
MISTRAL_API_KEY?: string
MISTRAL_MODEL?: string
}
export type ProfileFile = {
profile: ProviderProfile
env: ProfileEnv
createdAt: string
}
type SecretValueSource = Partial<
Record<
| 'OPENAI_API_KEY'
| 'CODEX_API_KEY'
| 'GEMINI_API_KEY'
| 'GOOGLE_API_KEY'
| 'NVIDIA_API_KEY'
| 'MINIMAX_API_KEY'
| 'MISTRAL_API_KEY',
string | undefined
>
>
type ProfileFileLocation = {
cwd?: string
filePath?: string
}
function resolveProfileFilePath(options?: ProfileFileLocation): string {
if (options?.filePath) {
return options.filePath
}
return resolve(options?.cwd ?? process.cwd(), PROFILE_FILE_NAME)
}
export function isProviderProfile(value: unknown): value is ProviderProfile {
return (
value === 'openai' ||
value === 'ollama' ||
value === 'codex' ||
value === 'gemini' ||
value === 'atomic-chat' ||
value === 'nvidia-nim' ||
value === 'minimax' ||
value === 'mistral'
)
}
export function buildOllamaProfileEnv(
model: string,
options: {
baseUrl?: string | null
getOllamaChatBaseUrl: (baseUrl?: string) => string
},
): ProfileEnv {
return {
OPENAI_BASE_URL: options.getOllamaChatBaseUrl(options.baseUrl ?? undefined),
OPENAI_MODEL: model,
}
}
export function buildAtomicChatProfileEnv(
model: string,
options: {
baseUrl?: string | null
getAtomicChatChatBaseUrl: (baseUrl?: string) => string
},
): ProfileEnv {
return {
OPENAI_BASE_URL: options.getAtomicChatChatBaseUrl(options.baseUrl ?? undefined),
OPENAI_MODEL: model,
}
}
export function buildNvidiaNimProfileEnv(options: {
model?: string | null
baseUrl?: string | null
apiKey?: string | null
processEnv?: NodeJS.ProcessEnv
}): ProfileEnv | null {
const processEnv = options.processEnv ?? process.env
const key = sanitizeApiKey(options.apiKey ?? processEnv.NVIDIA_API_KEY)
if (!key) {
return null
}
const defaultBaseUrl = 'https://integrate.api.nvidia.com/v1'
const secretSource: SecretValueSource = { OPENAI_API_KEY: key }
return {
OPENAI_BASE_URL:
sanitizeProviderConfigValue(options.baseUrl, secretSource) ||
sanitizeProviderConfigValue(processEnv.OPENAI_BASE_URL, secretSource) ||
defaultBaseUrl,
OPENAI_MODEL:
sanitizeProviderConfigValue(options.model, secretSource) ||
sanitizeProviderConfigValue(processEnv.OPENAI_MODEL, secretSource) ||
'nvidia/llama-3.1-nemotron-70b-instruct',
OPENAI_API_KEY: key,
NVIDIA_NIM: '1',
}
}
export function buildMiniMaxProfileEnv(options: {
model?: string | null
baseUrl?: string | null
apiKey?: string | null
processEnv?: NodeJS.ProcessEnv
}): ProfileEnv | null {
const processEnv = options.processEnv ?? process.env
const key = sanitizeApiKey(options.apiKey ?? processEnv.MINIMAX_API_KEY)
if (!key) {
return null
}
const defaultBaseUrl = 'https://api.minimax.io/v1'
const defaultModel = 'MiniMax-M2.5'
const secretSource: SecretValueSource = { OPENAI_API_KEY: key }
return {
OPENAI_BASE_URL:
sanitizeProviderConfigValue(options.baseUrl, secretSource) ||
sanitizeProviderConfigValue(processEnv.OPENAI_BASE_URL, secretSource) ||
defaultBaseUrl,
OPENAI_MODEL:
sanitizeProviderConfigValue(options.model, secretSource) ||
sanitizeProviderConfigValue(processEnv.OPENAI_MODEL, secretSource) ||
defaultModel,
OPENAI_API_KEY: key,
MINIMAX_API_KEY: key,
MINIMAX_BASE_URL: defaultBaseUrl,
MINIMAX_MODEL: defaultModel,
}
}
export function buildGeminiProfileEnv(options: {
model?: string | null
baseUrl?: string | null
apiKey?: string | null
authMode?: 'api-key' | 'access-token' | 'adc'
processEnv?: NodeJS.ProcessEnv
}): ProfileEnv | null {
const processEnv = options.processEnv ?? process.env
const authMode = options.authMode ?? 'api-key'
const key = sanitizeApiKey(
options.apiKey ??
processEnv.GEMINI_API_KEY ??
processEnv.GOOGLE_API_KEY,
)
if (authMode === 'api-key' && !key) {
return null
}
const secretSource: SecretValueSource = key ? { GEMINI_API_KEY: key } : {}
const env: ProfileEnv = {
GEMINI_AUTH_MODE: authMode,
GEMINI_MODEL:
sanitizeProviderConfigValue(options.model, secretSource) ||
sanitizeProviderConfigValue(processEnv.GEMINI_MODEL, secretSource) ||
DEFAULT_GEMINI_MODEL,
}
if (authMode === 'api-key' && key) {
env.GEMINI_API_KEY = key
}
const baseUrl =
sanitizeProviderConfigValue(options.baseUrl, secretSource) ||
sanitizeProviderConfigValue(processEnv.GEMINI_BASE_URL, secretSource)
if (baseUrl) {
env.GEMINI_BASE_URL = baseUrl
}
return env
}
export function buildOpenAIProfileEnv(options: {
goal: RecommendationGoal
model?: string | null
baseUrl?: string | null
apiKey?: string | null
processEnv?: NodeJS.ProcessEnv
}): ProfileEnv | null {
const processEnv = options.processEnv ?? process.env
const key = sanitizeApiKey(options.apiKey ?? processEnv.OPENAI_API_KEY)
if (!key) {
return null
}
const defaultModel = getGoalDefaultOpenAIModel(options.goal)
const secretSource: SecretValueSource = { OPENAI_API_KEY: key }
const shellOpenAIModel = sanitizeProviderConfigValue(
processEnv.OPENAI_MODEL,
secretSource,
)
const shellOpenAIBaseUrl = sanitizeProviderConfigValue(
processEnv.OPENAI_BASE_URL,
secretSource,
)
const shellOpenAIRequest = resolveProviderRequest({
model: shellOpenAIModel,
baseUrl: shellOpenAIBaseUrl,
fallbackModel: defaultModel,
})
const useShellOpenAIConfig = shellOpenAIRequest.transport === 'chat_completions'
return {
OPENAI_BASE_URL:
sanitizeProviderConfigValue(options.baseUrl, secretSource) ||
(useShellOpenAIConfig ? shellOpenAIBaseUrl : undefined) ||
DEFAULT_OPENAI_BASE_URL,
OPENAI_MODEL:
sanitizeProviderConfigValue(options.model, secretSource) ||
(useShellOpenAIConfig ? shellOpenAIModel : undefined) ||
defaultModel,
OPENAI_API_KEY: key,
}
}
export function buildCodexProfileEnv(options: {
model?: string | null
baseUrl?: string | null
apiKey?: string | null
credentialSource?: 'oauth' | 'existing'
processEnv?: NodeJS.ProcessEnv
}): ProfileEnv | null {
const processEnv = options.processEnv ?? process.env
const key = sanitizeApiKey(options.apiKey ?? processEnv.CODEX_API_KEY)
const credentialEnv = key
? ({ ...processEnv, CODEX_API_KEY: key } as NodeJS.ProcessEnv)
: processEnv
const credentials = resolveCodexApiCredentials(credentialEnv)
if (!credentials.apiKey || !credentials.accountId) {
return null
}
const credentialSource =
options.credentialSource ??
(credentials.source === 'secure-storage' ? 'oauth' : 'existing')
const env: ProfileEnv = {
OPENAI_BASE_URL: options.baseUrl || DEFAULT_CODEX_BASE_URL,
OPENAI_MODEL: options.model || 'codexplan',
CODEX_CREDENTIAL_SOURCE: credentialSource,
}
if (key) {
env.CODEX_API_KEY = key
}
env.CHATGPT_ACCOUNT_ID = credentials.accountId
return env
}
export function buildMistralProfileEnv(options: {
model?: string | null
baseUrl?: string | null
apiKey?: string | null
processEnv?: NodeJS.ProcessEnv
}): ProfileEnv | null {
const processEnv = options.processEnv ?? process.env
const key = sanitizeApiKey(options.apiKey ?? processEnv.MISTRAL_API_KEY)
if (!key) {
return null
}
const env: ProfileEnv = {
MISTRAL_API_KEY: key,
MISTRAL_MODEL:
sanitizeProviderConfigValue(options.model, { MISTRAL_API_KEY: key }) ||
sanitizeProviderConfigValue(
processEnv.MISTRAL_MODEL,
{ MISTRAL_API_KEY: key },
) ||
DEFAULT_MISTRAL_MODEL,
}
const baseUrl =
sanitizeProviderConfigValue(options.baseUrl, { MISTRAL_API_KEY: key }) ||
sanitizeProviderConfigValue(
processEnv.MISTRAL_BASE_URL,
{ MISTRAL_API_KEY: key },
)
if (baseUrl) {
env.MISTRAL_BASE_URL = baseUrl
}
return env
}
export function buildCodexOAuthProfileEnv(
tokens: {
accessToken: string
idToken?: string
accountId?: string
},
): ProfileEnv | null {
const accountId =
tokens.accountId ??
parseChatgptAccountId(tokens.idToken) ??
parseChatgptAccountId(tokens.accessToken)
if (!accountId) {
return null
}
return {
OPENAI_BASE_URL: DEFAULT_CODEX_BASE_URL,
OPENAI_MODEL: 'codexplan',
CHATGPT_ACCOUNT_ID: accountId,
CODEX_CREDENTIAL_SOURCE: 'oauth',
}
}
export function createProfileFile(
profile: ProviderProfile,
env: ProfileEnv,
): ProfileFile {
return {
profile,
env,
createdAt: new Date().toISOString(),
}
}
export function isPersistedCodexOAuthProfile(
persisted: ProfileFile | null,
): boolean {
return (
persisted?.profile === 'codex' &&
persisted.env.CODEX_CREDENTIAL_SOURCE === 'oauth'
)
}
export function clearPersistedCodexOAuthProfile(
options?: ProfileFileLocation,
): string | null {
const persisted = loadProfileFile(options)
if (!isPersistedCodexOAuthProfile(persisted)) {
return null
}
return deleteProfileFile(options)
}
export function loadProfileFile(options?: ProfileFileLocation): ProfileFile | null {
const filePath = resolveProfileFilePath(options)
if (!existsSync(filePath)) {
return null
}
try {
const parsed = JSON.parse(readFileSync(filePath, 'utf8')) as Partial<ProfileFile>
if (!isProviderProfile(parsed.profile) || !parsed.env || typeof parsed.env !== 'object') {
return null
}
return {
profile: parsed.profile,
env: parsed.env,
createdAt:
typeof parsed.createdAt === 'string'
? parsed.createdAt
: new Date().toISOString(),
}
} catch {
return null
}
}
export function saveProfileFile(
profileFile: ProfileFile,
options?: ProfileFileLocation,
): string {
const filePath = resolveProfileFilePath(options)
writeFileSync(filePath, JSON.stringify(profileFile, null, 2), {
encoding: 'utf8',
mode: 0o600,
})
return filePath
}
export function deleteProfileFile(options?: ProfileFileLocation): string {
const filePath = resolveProfileFilePath(options)
rmSync(filePath, { force: true })
return filePath
}
export function hasExplicitProviderSelection(
processEnv: NodeJS.ProcessEnv = process.env,
): boolean {
// If env was already applied from a provider profile, preserve it.
if (processEnv.CLAUDE_CODE_PROVIDER_PROFILE_ENV_APPLIED === '1') {
return true
}
return (
isEnvTruthy(processEnv.CLAUDE_CODE_USE_OPENAI) ||
isEnvTruthy(processEnv.CLAUDE_CODE_USE_GITHUB) ||
isEnvTruthy(processEnv.CLAUDE_CODE_USE_GEMINI) ||
isEnvTruthy(processEnv.CLAUDE_CODE_USE_MISTRAL) ||
isEnvTruthy(processEnv.CLAUDE_CODE_USE_BEDROCK) ||
isEnvTruthy(processEnv.CLAUDE_CODE_USE_VERTEX) ||
isEnvTruthy(processEnv.CLAUDE_CODE_USE_FOUNDRY)
)
}
export function selectAutoProfile(
recommendedOllamaModel: string | null,
): ProviderProfile {
return recommendedOllamaModel ? 'ollama' : 'openai'
}
export async function buildLaunchEnv(options: {
profile: ProviderProfile
persisted: ProfileFile | null
goal: RecommendationGoal
processEnv?: NodeJS.ProcessEnv
getOllamaChatBaseUrl?: (baseUrl?: string) => string
resolveOllamaDefaultModel?: (goal: RecommendationGoal) => Promise<string>
getAtomicChatChatBaseUrl?: (baseUrl?: string) => string
resolveAtomicChatDefaultModel?: () => Promise<string | null>
readGeminiAccessToken?: () => string | undefined
}): Promise<NodeJS.ProcessEnv> {
const processEnv = options.processEnv ?? process.env
const persistedEnv =
options.persisted?.profile === options.profile
? options.persisted.env ?? {}
: {}
const persistedOpenAIModel = sanitizeProviderConfigValue(
persistedEnv.OPENAI_MODEL,
persistedEnv,
)
const persistedOpenAIBaseUrl = sanitizeProviderConfigValue(
persistedEnv.OPENAI_BASE_URL,
persistedEnv,
)
const shellOpenAIModel = sanitizeProviderConfigValue(
processEnv.OPENAI_MODEL,
processEnv as SecretValueSource,
)
const shellOpenAIBaseUrl = sanitizeProviderConfigValue(
processEnv.OPENAI_BASE_URL,
processEnv as SecretValueSource,
)
const persistedGeminiModel = sanitizeProviderConfigValue(
persistedEnv.GEMINI_MODEL,
persistedEnv,
)
const persistedGeminiBaseUrl = sanitizeProviderConfigValue(
persistedEnv.GEMINI_BASE_URL,
persistedEnv,
)
const shellGeminiModel = sanitizeProviderConfigValue(
processEnv.GEMINI_MODEL,
processEnv as SecretValueSource,
)
const shellGeminiBaseUrl = sanitizeProviderConfigValue(
processEnv.GEMINI_BASE_URL,
processEnv as SecretValueSource,
)
const shellGeminiAccessToken =
processEnv.GEMINI_ACCESS_TOKEN?.trim() || undefined
const storedGeminiAccessToken =
options.readGeminiAccessToken?.() ?? readGeminiAccessToken()
const shellGeminiKey = sanitizeApiKey(
processEnv.GEMINI_API_KEY ?? processEnv.GOOGLE_API_KEY,
)
const persistedGeminiKey = sanitizeApiKey(persistedEnv.GEMINI_API_KEY)
const persistedGeminiAuthMode = persistedEnv.GEMINI_AUTH_MODE
if (hasExplicitProviderSelection(processEnv)) {
for (const provider of PROVIDERS) {
if (provider === 'anthropic') {
continue
}
const envKeyName = `CLAUDE_CODE_USE_${provider.toUpperCase()}`
if (envKeyName in processEnv && isEnvTruthy(processEnv[envKeyName])) {
options.profile = provider
}
}
}
if (options.profile === 'gemini') {
const env: NodeJS.ProcessEnv = {
...processEnv,
CLAUDE_CODE_USE_GEMINI: '1',
}
delete env.CLAUDE_CODE_USE_OPENAI
delete env.CLAUDE_CODE_USE_GITHUB
delete env.CODEX_CREDENTIAL_SOURCE
env.GEMINI_MODEL =
shellGeminiModel ||
persistedGeminiModel ||
DEFAULT_GEMINI_MODEL
env.GEMINI_BASE_URL =
shellGeminiBaseUrl ||
persistedGeminiBaseUrl ||
DEFAULT_GEMINI_BASE_URL
const geminiAuthMode =
persistedGeminiAuthMode === 'access-token' ||
persistedGeminiAuthMode === 'adc'
? persistedGeminiAuthMode
: 'api-key'
const geminiKey = shellGeminiKey || persistedGeminiKey
if (geminiAuthMode === 'api-key' && geminiKey) {
env.GEMINI_API_KEY = geminiKey
} else {
delete env.GEMINI_API_KEY
}
env.GEMINI_AUTH_MODE = geminiAuthMode
if (geminiAuthMode === 'access-token') {
const geminiAccessToken =
shellGeminiAccessToken || storedGeminiAccessToken
if (geminiAccessToken) {
env.GEMINI_ACCESS_TOKEN = geminiAccessToken
} else {
delete env.GEMINI_ACCESS_TOKEN
}
} else {
delete env.GEMINI_ACCESS_TOKEN
}
delete env.GOOGLE_API_KEY
delete env.OPENAI_BASE_URL
delete env.OPENAI_MODEL
delete env.OPENAI_API_KEY
delete env.CODEX_API_KEY
delete env.CHATGPT_ACCOUNT_ID
delete env.CODEX_ACCOUNT_ID
return env
}
if (options.profile === 'mistral') {
const env: NodeJS.ProcessEnv = {
...processEnv,
CLAUDE_CODE_USE_MISTRAL: '1',
}
delete env.CLAUDE_CODE_USE_OPENAI
delete env.CLAUDE_CODE_USE_GITHUB
delete env.CLAUDE_CODE_USE_GEMINI
delete env.CLAUDE_CODE_USE_BEDROCK
delete env.CLAUDE_CODE_USE_VERTEX
delete env.CLAUDE_CODE_USE_FOUNDRY
const shellMistralModel = sanitizeProviderConfigValue(
processEnv.MISTRAL_MODEL,
)
const persistedMistralModel = sanitizeProviderConfigValue(
persistedEnv.MISTRAL_MODEL,
)
const shellMistralBaseUrl = sanitizeProviderConfigValue(
processEnv.MISTRAL_BASE_URL,
)
const persistedMistralBaseUrl = sanitizeProviderConfigValue(
persistedEnv.MISTRAL_BASE_URL,
)
env.MISTRAL_MODEL =
shellMistralModel || persistedMistralModel || DEFAULT_MISTRAL_MODEL
const shellMistralKey = sanitizeApiKey(
processEnv.MISTRAL_API_KEY,
)
const persistedMistralKey = sanitizeApiKey(persistedEnv.MISTRAL_API_KEY)
const mistralKey = shellMistralKey || persistedMistralKey
if (mistralKey) {
env.MISTRAL_API_KEY = mistralKey
} else {
delete env.MISTRAL_API_KEY
}
if (shellMistralBaseUrl || persistedMistralBaseUrl) {
env.MISTRAL_BASE_URL = shellMistralBaseUrl || persistedMistralBaseUrl
} else {
delete env.MISTRAL_BASE_URL
}
delete env.GEMINI_API_KEY
delete env.GEMINI_AUTH_MODE
delete env.GEMINI_ACCESS_TOKEN
delete env.GEMINI_MODEL
delete env.GEMINI_BASE_URL
delete env.GOOGLE_API_KEY
delete env.OPENAI_BASE_URL
delete env.OPENAI_MODEL
delete env.OPENAI_API_KEY
delete env.CODEX_API_KEY
delete env.CHATGPT_ACCOUNT_ID
delete env.CODEX_ACCOUNT_ID
return env
}
const env: NodeJS.ProcessEnv = {
...processEnv,
CLAUDE_CODE_USE_OPENAI: '1',
}
delete env.CLAUDE_CODE_USE_MISTRAL
delete env.CLAUDE_CODE_USE_BEDROCK
delete env.CLAUDE_CODE_USE_VERTEX
delete env.CLAUDE_CODE_USE_FOUNDRY
delete env.CLAUDE_CODE_USE_GEMINI
delete env.CLAUDE_CODE_USE_GITHUB
delete env.CODEX_CREDENTIAL_SOURCE
delete env.GEMINI_API_KEY
delete env.GEMINI_AUTH_MODE
delete env.GEMINI_ACCESS_TOKEN
delete env.GEMINI_MODEL
delete env.GEMINI_BASE_URL
delete env.GOOGLE_API_KEY
if (options.profile === 'ollama') {
const getOllamaBaseUrl =
options.getOllamaChatBaseUrl ?? (() => 'http://localhost:11434/v1')
const resolveOllamaModel =
options.resolveOllamaDefaultModel ?? (async () => 'llama3.1:8b')
env.OPENAI_BASE_URL = persistedOpenAIBaseUrl || getOllamaBaseUrl()
env.OPENAI_MODEL =
persistedOpenAIModel ||
(await resolveOllamaModel(options.goal))
delete env.OPENAI_API_KEY
delete env.CODEX_API_KEY
delete env.CHATGPT_ACCOUNT_ID
delete env.CODEX_ACCOUNT_ID
return env
}
if (options.profile === 'atomic-chat') {
const getAtomicChatBaseUrl =
options.getAtomicChatChatBaseUrl ?? (() => 'http://127.0.0.1:1337/v1')
const resolveModel =
options.resolveAtomicChatDefaultModel ?? (async () => null as string | null)
env.OPENAI_BASE_URL = persistedEnv.OPENAI_BASE_URL || getAtomicChatBaseUrl()
env.OPENAI_MODEL =
persistedEnv.OPENAI_MODEL ||
(await resolveModel()) ||
''
delete env.OPENAI_API_KEY
delete env.CODEX_API_KEY
delete env.CHATGPT_ACCOUNT_ID
delete env.CODEX_ACCOUNT_ID
return env
}
if (options.profile === 'codex') {
env.OPENAI_BASE_URL =
persistedOpenAIBaseUrl && isCodexBaseUrl(persistedOpenAIBaseUrl)
? persistedOpenAIBaseUrl
: DEFAULT_CODEX_BASE_URL
env.OPENAI_MODEL = persistedOpenAIModel || 'codexplan'
delete env.OPENAI_API_KEY
const codexKey =
sanitizeApiKey(processEnv.CODEX_API_KEY) ||
sanitizeApiKey(persistedEnv.CODEX_API_KEY)
const liveCodexCredentials = resolveCodexApiCredentials(processEnv)
const codexAccountId =
processEnv.CHATGPT_ACCOUNT_ID ||
processEnv.CODEX_ACCOUNT_ID ||
liveCodexCredentials.accountId ||
persistedEnv.CHATGPT_ACCOUNT_ID ||
persistedEnv.CODEX_ACCOUNT_ID
if (codexKey) {
env.CODEX_API_KEY = codexKey
} else {
delete env.CODEX_API_KEY
}
if (codexAccountId) {
env.CHATGPT_ACCOUNT_ID = codexAccountId
} else {
delete env.CHATGPT_ACCOUNT_ID
}
delete env.CODEX_ACCOUNT_ID
return env
}
const defaultOpenAIModel = getGoalDefaultOpenAIModel(options.goal)
const shellOpenAIRequest = resolveProviderRequest({
model: shellOpenAIModel,
baseUrl: shellOpenAIBaseUrl,
fallbackModel: defaultOpenAIModel,
})
const persistedOpenAIRequest = resolveProviderRequest({
model: persistedOpenAIModel,
baseUrl: persistedOpenAIBaseUrl,
fallbackModel: defaultOpenAIModel,
})
const useShellOpenAIConfig = shellOpenAIRequest.transport === 'chat_completions'
const usePersistedOpenAIConfig =
(!persistedOpenAIModel && !persistedOpenAIBaseUrl) ||
persistedOpenAIRequest.transport === 'chat_completions'
env.OPENAI_BASE_URL =
(useShellOpenAIConfig ? shellOpenAIBaseUrl : undefined) ||
(usePersistedOpenAIConfig ? persistedOpenAIBaseUrl : undefined) ||
DEFAULT_OPENAI_BASE_URL
env.OPENAI_MODEL =
(useShellOpenAIConfig ? shellOpenAIModel : undefined) ||
(usePersistedOpenAIConfig ? persistedOpenAIModel : undefined) ||
defaultOpenAIModel
env.OPENAI_API_KEY = processEnv.OPENAI_API_KEY || persistedEnv.OPENAI_API_KEY
delete env.CODEX_API_KEY
delete env.CHATGPT_ACCOUNT_ID
delete env.CODEX_ACCOUNT_ID
return env
}
export async function buildStartupEnvFromProfile(options?: {
persisted?: ProfileFile | null
goal?: RecommendationGoal
processEnv?: NodeJS.ProcessEnv
getOllamaChatBaseUrl?: (baseUrl?: string) => string
resolveOllamaDefaultModel?: (goal: RecommendationGoal) => Promise<string>
readGeminiAccessToken?: () => string | undefined
}): Promise<NodeJS.ProcessEnv> {
const processEnv = options?.processEnv ?? process.env
const persisted = options?.persisted ?? loadProfileFile()
const profileManagedEnv = processEnv.CLAUDE_CODE_PROVIDER_PROFILE_ENV_APPLIED === '1'
// The legacy single-profile file (~/.openclaude-profile.json) is a
// first-run / fallback mechanism. The newer plural provider-profile
// system (`/provider` presets + activeProviderProfileId in config) is
// applied earlier in the bootstrap via applyActiveProviderProfileFromConfig
// and signals completion with CLAUDE_CODE_PROVIDER_PROFILE_ENV_APPLIED=1.
//
// If the plural system has already set env, trust it — do NOT overlay the
// legacy file. addProviderProfile() does not sync the legacy file, so a
// stale legacy file (e.g. OpenAI defaults from an earlier manual setup)
// would otherwise overwrite the correct plural env and surface as the
// "banner shows gpt-4o / api.openai.com even though my saved profile is
// Moonshot" bug.
if (profileManagedEnv) {
return processEnv
}
if (!persisted) {
return processEnv
}
return buildLaunchEnv({
profile: persisted.profile,
persisted,
goal:
options?.goal ??
normalizeRecommendationGoal(processEnv.OPENCLAUDE_PROFILE_GOAL),
processEnv,
getOllamaChatBaseUrl:
options?.getOllamaChatBaseUrl ?? getOllamaChatBaseUrl,
resolveOllamaDefaultModel: options?.resolveOllamaDefaultModel,
readGeminiAccessToken: options?.readGeminiAccessToken,
})
}
export function applyProfileEnvToProcessEnv(
targetEnv: NodeJS.ProcessEnv,
nextEnv: NodeJS.ProcessEnv,
): void {
for (const key of PROFILE_ENV_KEYS) {
delete targetEnv[key]
}
Object.assign(targetEnv, nextEnv)
}
export async function applySavedProfileToCurrentSession(options: {
profileFile: ProfileFile
processEnv?: NodeJS.ProcessEnv
}): Promise<string | null> {
const processEnv = options.processEnv ?? process.env
const baseEnv = { ...processEnv }
const isCodexOAuthProfile =
options.profileFile.profile === 'codex' &&
options.profileFile.env.CODEX_CREDENTIAL_SOURCE === 'oauth'
delete baseEnv.CLAUDE_CODE_PROVIDER_PROFILE_ENV_APPLIED
delete baseEnv.CLAUDE_CODE_PROVIDER_PROFILE_ENV_APPLIED_ID
if (isCodexOAuthProfile) {
delete baseEnv.CODEX_API_KEY
delete baseEnv.CODEX_ACCOUNT_ID
delete baseEnv.CHATGPT_ACCOUNT_ID
}
const nextEnv = await buildLaunchEnv({
profile: options.profileFile.profile,
persisted: options.profileFile,
goal: normalizeRecommendationGoal(processEnv.OPENCLAUDE_PROFILE_GOAL),
processEnv: baseEnv,
getOllamaChatBaseUrl,
readGeminiAccessToken,
})
const validationError = await getProviderValidationError(nextEnv)
if (validationError) {
return validationError
}
delete processEnv.CLAUDE_CODE_PROVIDER_PROFILE_ENV_APPLIED
delete processEnv.CLAUDE_CODE_PROVIDER_PROFILE_ENV_APPLIED_ID
applyProfileEnvToProcessEnv(processEnv, nextEnv)
return null
}